Compare commits


36 commits

Author SHA1 Message Date
9d1c915c42 [#1251] pilorama: Allow traversing multiple branches in parallel
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-07-18 14:18:06 +03:00
4ef441d4a3 [#1255] go.mod: Update api-go
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-07-17 14:44:05 +03:00
306927faf2 [#1255] go.mod: Update grpc version
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-07-17 14:44:05 +03:00
3d514c9418 [#1250] *: Reformat proto files with clang-format
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-07-17 14:44:05 +03:00
c09c870df4 [#1234] pilorama: Fix GetByPath() on duplicate directories
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-07-17 14:44:05 +03:00
034061dbfc [#1234] pilorama: Add test for duplicate directory behaviour
When AddByPath() is called concurrently on 2 different nodes,
internal path components may be created twice. This violates some
of our assumptions in GetByPath() and, indirectly, in S3 handling of
GetSubTree() results.

Add a test for the correct behaviour, fixes will follow.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-07-17 14:44:05 +03:00
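To make the race described in this commit message concrete, here is a minimal, self-contained Go sketch. It does not use the real pilorama API; the tree, node IDs, and AddByPath-like helper are hypothetical stand-ins that only mirror the lookup-then-insert window in which two concurrent writers can both create the same path component.

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

// node is a hypothetical tree node; the real pilorama structures differ.
type node struct {
    id       uint64
    children map[string]*node // filename -> child
}

var nextID atomic.Uint64

// addByPath resolves path components one by one, creating missing ones.
// The lookup and the insert are deliberately not atomic, mirroring the
// window in which two concurrent writers can both decide that a component
// is missing.
func addByPath(mu *sync.Mutex, root *node, path []string) *node {
    cur := root
    for _, name := range path {
        mu.Lock()
        child := cur.children[name]
        mu.Unlock()

        if child == nil {
            child = &node{id: nextID.Add(1), children: map[string]*node{}}
            mu.Lock()
            // A second writer may overwrite (or duplicate) the component here.
            cur.children[name] = child
            mu.Unlock()
        }
        cur = child
    }
    return cur
}

func main() {
    root := &node{children: map[string]*node{}}
    var mu sync.Mutex
    var wg sync.WaitGroup

    ids := make([]uint64, 2)
    for i := range ids {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            leaf := addByPath(&mu, root, []string{"dir", "sub"})
            ids[i] = leaf.id
        }(i)
    }
    wg.Wait()

    // With an unlucky interleaving the two leaves have different IDs,
    // i.e. the intermediate directory was effectively created twice.
    fmt.Println("leaf IDs:", ids[0], ids[1])
}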
6bec1b9d89 [#1181] shard: Set Disabled as default mode for components
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-06-18 11:00:26 +03:00
6067e644d6 [#1175] shard: Update metric mode_info on Init
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-06-13 20:09:16 +03:00
ba374c7907 Reapply "[#446] engine: Move to read-only on blobstor errors"
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-06-11 18:25:31 +03:00
67a6da470f [#1171] objectsvc: Fix linter warning
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-06-11 18:25:18 +03:00
dff4dd545e [#1095] cli: Support user/group target for local overrides
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-05-17 17:06:29 +03:00
5a2a877cca [#1110] node: Use single handler for new epoch event
Bootstrap logic depends on the netmap status, which in turn depends on
the node info. Updating them in a single thread makes things more
predictable.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-05-02 11:42:09 +00:00
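A tiny illustration of the idea (not the actual node code, whose handler signatures differ): when the two steps run as one handler, their order is fixed, so the bootstrap step always observes the node info written by the preceding step.

package main

import "fmt"

// event is a stand-in for the node's new-epoch notification; the real
// handler signatures in frostfs-node differ.
type event struct{ epoch uint64 }

type handler func(event)

// Before: two independent async handlers whose relative order is unspecified.
// After: one handler that updates the contract node info first and only then
// runs the bootstrap logic, so the latter always sees fresh state.
func mergedHandler(updateNodeInfo, bootstrap handler) handler {
    return func(e event) {
        updateNodeInfo(e) // must happen first: bootstrap depends on it
        bootstrap(e)
    }
}

func main() {
    h := mergedHandler(
        func(e event) { fmt.Println("update node info for epoch", e.epoch) },
        func(e event) { fmt.Println("bootstrap for epoch", e.epoch) },
    )
    h(event{epoch: 42})
}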
62b5175f60 [#1110] node: Log maintenance stop only if it was enabled
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-05-02 11:42:09 +00:00
e7cfa4f1e7 [#1110] node: Rename handleLocalNodeInfo()
It is used in the "handler" only once; what we really do there is set the
variable. We also have another "local" node info in `cfgNodeInfo`, which is
not really local (node info) but rather (local node) info, so use
setContractNodeInfo to distinguish it from the local view of the node
info.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-05-02 11:42:09 +00:00
96c784cbe5 [#1110] node: Fix comment about nodeInfo type
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-05-02 11:42:09 +00:00
348c400544 [#1086] engine: Do not use metabase if shard looks bad
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-05-02 11:33:22 +00:00
175e9c902f [#1086] engine: Change mode in case of errors async
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-05-02 11:33:22 +00:00
89a68ca836 [#1108] ape: Update policy-engine version for listing by iteration
* Update go.mod with a new version of the policy-engine package.
* Adapt SwitchRPCGuardedActor to ContractStorage interface.
* Fix `frostfs-adm` util.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-05-02 10:32:53 +03:00
498f9955ea [#1089] control: Add USER and GROUP targets for local override storage
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-04-12 17:35:50 +03:00
f8973f9b05 [#1089] control: Format proto files with clang-format
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-04-12 17:35:50 +03:00
50ec4febcc [#1089] ape: Provide request actor as an additional target
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-04-12 17:35:50 +03:00
59d7a6940d [#1090] tree: Make workaround for APE checks
* Make the `verifyClient` method perform an APE check if a container
  was created with a zero-filled basic ACL.
* Object verbs are used in APE until tree verbs are introduced.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-04-12 12:02:28 +03:00
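An illustrative sketch of the workaround described above, with simplified stand-in types rather than the real tree-service and SDK ones: a zero-filled basic ACL routes the request through the APE check (using an object verb as a placeholder), anything else keeps the legacy eACL path.

package main

import "fmt"

// request and the two checkers below are simplified stand-ins; the real
// verifyClient logic in the tree service works with SDK and policy-engine types.
type request struct {
    basicACL uint32 // zero means the container was created for APE rules only
    verb     string // object verb used as a substitute until tree verbs exist
}

func checkWithEACL(r request) error { return fmt.Errorf("eACL: %s denied", r.verb) }
func checkWithAPE(r request) error  { fmt.Println("APE allowed", r.verb); return nil }

// verifyRequest mirrors the workaround: a zero-filled basic ACL switches the
// tree service to the APE path, everything else keeps the old eACL path.
func verifyRequest(r request) error {
    if r.basicACL == 0 {
        return checkWithAPE(r)
    }
    return checkWithEACL(r)
}

func main() {
    _ = verifyRequest(request{basicACL: 0, verb: "GetObject"})                  // APE path
    fmt.Println(verifyRequest(request{basicACL: 0x0FBFBFFF, verb: "GetObject"})) // any non-zero ACL: eACL path
}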
2ecd427df4 [#1090] ape: Move ape request and resource implementations to common package
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-04-12 12:02:23 +03:00
573ca6d0d5 [#1090] go.mod: Update policy-engine version
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-04-12 12:02:18 +03:00
1eb47ab2ce [#1080] metabase: Add StorageID metric
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-04-10 10:00:58 +03:00
954881f1ef [#1080] metabase: Open bucket for container counter once
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-04-10 10:00:58 +03:00
7809928b64 [#1080] ape: Use value for APE request
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-04-09 18:55:27 +03:00
4b902be81e [#1080] ape: Do not read object headers before Head/Get
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-04-09 18:55:10 +03:00
161d33c2b7 [#1062] object: Fix buffer allocation for PayloadRange
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-04-09 13:10:41 +03:00
1a7c3db67f [#1077] objectsvc: Fix possible panic in GetRange()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-04-05 16:13:09 +03:00
c1d90f018b [#1074] pilorama: Allow empty filenames in SortedByFilename()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-04-04 10:27:45 +00:00
064e18b277 [#1074] pilorama: Remove debug print in tests
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-04-04 10:27:45 +00:00
21caa904f4 [#1072] Fix issue from govulncheck
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-04-04 11:37:53 +03:00
f74d058c2e [#1072] node, ir, morph: Set scope None when in upgrade mode
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-04-04 11:24:45 +03:00
c8ce6e9fe4 [#1072] node, ir: Add new config option kludge_compatibility_mode
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-04-04 11:24:37 +03:00
748da78dc7 [#1072] Fix gofumpt issues
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-04-04 11:24:27 +03:00
70 changed files with 1855 additions and 1052 deletions

View file

@@ -94,6 +94,16 @@ func parseChainName(cmd *cobra.Command) apechain.Name {
 	return apeChainName
 }
 
+// invokerAdapter adapats invoker.Invoker to ContractStorageInvoker interface.
+type invokerAdapter struct {
+	*invoker.Invoker
+	rpcActor invoker.RPCInvoke
+}
+
+func (n *invokerAdapter) GetRPCInvoker() invoker.RPCInvoke {
+	return n.rpcActor
+}
+
 func newPolicyContractReaderInterface(cmd *cobra.Command) (*morph.ContractStorageReader, *invoker.Invoker) {
 	c, err := helper.GetN3Client(viper.GetViper())
 	commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
@@ -107,7 +117,12 @@ func newPolicyContractReaderInterface(cmd *cobra.Command) (*morph.ContractStorag
 	ch, err = helper.NNSResolveHash(inv, nnsCs.Hash, helper.DomainOf(constants.PolicyContract))
 	commonCmd.ExitOnErr(cmd, "unable to resolve policy contract hash: %w", err)
 
-	return morph.NewContractStorageReader(inv, ch), inv
+	invokerAdapter := &invokerAdapter{
+		Invoker:  inv,
+		rpcActor: c,
+	}
+
+	return morph.NewContractStorageReader(invokerAdapter, ch), inv
 }
 
 func newPolicyContractInterface(cmd *cobra.Command) (*morph.ContractStorage, *helper.LocalActor) {
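The invokerAdapter above is a small example of Go's embed-and-extend adapter pattern: the existing invoker is embedded so that all of its methods are promoted, and only the one method required by the new interface is added. Below is a standalone sketch of the same shape, with hypothetical stand-ins for invoker.Invoker, invoker.RPCInvoke, and the ContractStorageInvoker-style consumer.

package main

import "fmt"

// Inner plays the role of invoker.Invoker; RPC plays the role of
// invoker.RPCInvoke. Both are hypothetical stand-ins for this sketch.
type Inner struct{}

func (Inner) Call(method string) string { return "called " + method }

type RPC interface{ Endpoint() string }

type fakeRPC struct{}

func (fakeRPC) Endpoint() string { return "http://localhost:30333" }

// consumer is a stand-in for the ContractStorageInvoker-style interface that
// the policy-engine update started to require.
type consumer interface {
    Call(method string) string
    GetRPCInvoker() RPC
}

// adapter embeds the existing type and adds only the missing method, which is
// the same shape as invokerAdapter in the diff above.
type adapter struct {
    Inner
    rpcActor RPC
}

func (a adapter) GetRPCInvoker() RPC { return a.rpcActor }

func main() {
    var c consumer = adapter{Inner: Inner{}, rpcActor: fakeRPC{}}
    fmt.Println(c.Call("policy.list"), "via", c.GetRPCInvoker().Endpoint())
}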

View file

@@ -23,9 +23,10 @@ import (
 
 // LocalActor is a kludge, do not use it outside of the morph commands.
 type LocalActor struct {
 	neoActor *actor.Actor
 	accounts []*wallet.Account
 	Invoker  *invoker.Invoker
+	rpcInvoker invoker.RPCInvoke
 }
 
 // NewLocalActor create LocalActor with accounts form provided wallets.
@@ -68,9 +69,10 @@ func NewLocalActor(cmd *cobra.Command, c actor.RPCActor) (*LocalActor, error) {
 		}
 	}
 	return &LocalActor{
 		neoActor: act,
 		accounts: accounts,
 		Invoker:  &act.Invoker,
+		rpcInvoker: c,
 	}, nil
 }
 
@@ -167,3 +169,7 @@ func (a *LocalActor) MakeUnsignedRun(_ []byte, _ []transaction.Attribute) (*tran
 func (a *LocalActor) MakeCall(_ util.Uint160, _ string, _ ...any) (*transaction.Transaction, error) {
 	panic("unimplemented")
 }
+
+func (a *LocalActor) GetRPCInvoker() invoker.RPCInvoke {
+	return a.rpcInvoker
+}

View file

@@ -27,6 +27,8 @@ const (
 	defaultNamespace = "root"
 	namespaceTarget  = "namespace"
 	containerTarget  = "container"
+	userTarget       = "user"
+	groupTarget      = "group"
 )
 
 const (
@@ -66,6 +68,16 @@ func parseTarget(cmd *cobra.Command) *control.ChainTarget {
 			Name: name,
 			Type: control.ChainTarget_CONTAINER,
 		}
+	case userTarget:
+		return &control.ChainTarget{
+			Name: name,
+			Type: control.ChainTarget_USER,
+		}
+	case groupTarget:
+		return &control.ChainTarget{
+			Name: name,
+			Type: control.ChainTarget_GROUP,
+		}
 	default:
 		commonCmd.ExitOnErr(cmd, "read target type error: %w", errUnknownTargetType)
 	}

View file

@@ -66,7 +66,7 @@ func move(cmd *cobra.Command, _ []string) {
 		Body: &tree.GetSubTreeRequest_Body{
 			ContainerId: rawCID,
 			TreeId:      tid,
-			RootId:      nid,
+			RootId:      []uint64{nid},
 			Depth:       1,
 			BearerToken: bt,
 		},

View file

@@ -68,7 +68,7 @@ func getSubTree(cmd *cobra.Command, _ []string) {
 		Body: &tree.GetSubTreeRequest_Body{
 			ContainerId: rawCID,
 			TreeId:      tid,
-			RootId:      rid,
+			RootId:      []uint64{rid},
 			Depth:       depth,
 			BearerToken: bt,
 		},
@@ -83,10 +83,15 @@ func getSubTree(cmd *cobra.Command, _ []string) {
 	for ; err == nil; subtreeResp, err = resp.Recv() {
 		b := subtreeResp.GetBody()
 
-		cmd.Printf("Node ID: %d\n", b.GetNodeId())
-		cmd.Println("\tParent ID: ", b.GetParentId())
-		cmd.Println("\tTimestamp: ", b.GetTimestamp())
+		if len(b.GetNodeId()) == 1 {
+			cmd.Printf("Node ID: %d\n", b.GetNodeId())
+			cmd.Println("\tParent ID: ", b.GetParentId())
+			cmd.Println("\tTimestamp: ", b.GetTimestamp())
+		} else {
+			cmd.Printf("Node IDs: %v\n", b.GetNodeId())
+			cmd.Println("\tParent IDs: ", b.GetParentId())
+			cmd.Println("\tTimestamps: ", b.GetTimestamp())
+		}
 
 		if meta := b.GetMeta(); len(meta) > 0 {
 			cmd.Println("\tMeta pairs: ")

View file

@@ -34,6 +34,7 @@ func reloadConfig() error {
 	if err != nil {
 		return err
 	}
+	cmode.Store(cfg.GetBool("node.kludge_compatibility_mode"))
 	err = logPrm.SetLevelString(cfg.GetString("logger.level"))
 	if err != nil {
 		return err

View file

@@ -43,6 +43,8 @@ func defaultConfiguration(cfg *viper.Viper) {
 	setControlDefaults(cfg)
 
 	cfg.SetDefault("governance.disable", false)
+
+	cfg.SetDefault("node.kludge_compatibility_mode", false)
 }
 
 func setControlDefaults(cfg *viper.Viper) {

View file

@@ -6,6 +6,7 @@ import (
 	"fmt"
 	"os"
 	"sync"
+	"sync/atomic"
 
 	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/misc"
@@ -37,6 +38,7 @@ var (
 	cfg        *viper.Viper
 	configFile *string
 	configDir  *string
+	cmode      = &atomic.Bool{}
 )
 
 func exitErr(err error) {
@@ -62,6 +64,8 @@ func main() {
 	cfg, err = newConfig()
 	exitErr(err)
 
+	cmode.Store(cfg.GetBool("node.kludge_compatibility_mode"))
+
 	metrics := irMetrics.NewInnerRingMetrics()
 
 	err = logPrm.SetLevelString(
@@ -84,7 +88,7 @@ func main() {
 	metricsCmp = newMetricsComponent()
 	metricsCmp.init()
 
-	innerRing, err = innerring.New(ctx, log, cfg, intErr, metrics)
+	innerRing, err = innerring.New(ctx, log, cfg, intErr, metrics, cmode)
 	exitErr(err)
 
 	pprofCmp.start()

View file

@@ -109,6 +109,9 @@ type applicationConfiguration struct {
 		lowMem         bool
 		rebuildWorkers uint32
 	}
+
+	// if need to run node in compatibility with other versions mode
+	cmode *atomic.Bool
 }
 
 type shardCfg struct {
@@ -204,10 +207,13 @@ func (a *applicationConfiguration) readConfig(c *config.Config) error {
 		}
 
 		// clear if it is rereading
+		cmode := a.cmode
 		*a = applicationConfiguration{}
+		a.cmode = cmode
 	}
 
 	a._read = true
+	a.cmode.Store(nodeconfig.CompatibilityMode(c))
 
 	// Logger
 
@@ -375,8 +381,9 @@ func (c *cfg) startMaintenance() {
 
 // stops node's maintenance.
 func (c *internals) stopMaintenance() {
-	c.isMaintenance.Store(false)
-	c.log.Info(logs.FrostFSNodeStoppedLocalNodesMaintenance)
+	if c.isMaintenance.CompareAndSwap(true, false) {
+		c.log.Info(logs.FrostFSNodeStoppedLocalNodesMaintenance)
+	}
 }
 
 // IsMaintenance checks if storage node is under maintenance.
@@ -648,7 +655,11 @@ type cfgControlService struct {
 var persistateSideChainLastBlockKey = []byte("side_chain_last_processed_block")
 
 func initCfg(appCfg *config.Config) *cfg {
-	c := &cfg{}
+	c := &cfg{
+		applicationConfiguration: applicationConfiguration{
+			cmode: &atomic.Bool{},
+		},
+	}
 
 	err := c.readConfig(appCfg)
 	if err != nil {
@@ -1135,13 +1146,25 @@ func (c *cfg) LocalNodeInfo() (*netmapV2.NodeInfo, error) {
 	return &res, nil
 }
 
-// handleLocalNodeInfo rewrites local node info from the FrostFS network map.
+// setContractNodeInfo rewrites local node info from the FrostFS network map.
 // Called with nil when storage node is outside the FrostFS network map
 // (before entering the network and after leaving it).
-func (c *cfg) handleLocalNodeInfo(ni *netmap.NodeInfo) {
+func (c *cfg) setContractNodeInfo(ni *netmap.NodeInfo) {
 	c.cfgNetmap.state.setNodeInfo(ni)
 }
 
+func (c *cfg) updateContractNodeInfo(epoch uint64) {
+	ni, err := c.netmapLocalNodeState(epoch)
+	if err != nil {
+		c.log.Error(logs.FrostFSNodeCouldNotUpdateNodeStateOnNewEpoch,
+			zap.Uint64("epoch", epoch),
+			zap.String("error", err.Error()))
+		return
+	}
+	c.setContractNodeInfo(ni)
+}
+
 // bootstrapWithState calls "addPeer" method of the Sidechain Netmap contract
 // with the binary-encoded information from the current node's configuration.
 // The state is set using the provided setter which MUST NOT be nil.

View file

@@ -292,3 +292,8 @@ func (l PersistentPolicyRulesConfig) Perm() fs.FileMode {
 func (l PersistentPolicyRulesConfig) NoSync() bool {
 	return config.BoolSafe((*config.Config)(l.cfg), "no_sync")
 }
+
+// CompatibilityMode returns true if need to run node in compatibility with previous versions mode.
+func CompatibilityMode(c *config.Config) bool {
+	return config.BoolSafe(c.Sub(subsection), "kludge_compatibility_mode")
+}

View file

@@ -48,6 +48,7 @@ func initMorphComponents(ctx context.Context, c *cfg) {
 		}),
 		client.WithSwitchInterval(morphconfig.SwitchInterval(c.appCfg)),
 		client.WithMorphCacheMetrics(c.metricsCollector.MorphCacheMetrics()),
+		client.WithCompatibilityMode(c.cmode),
 	)
 	if err != nil {
 		c.log.Info(logs.FrostFSNodeFailedToCreateNeoRPCClient,

View file

@@ -31,7 +31,7 @@ type networkState struct {
 
 	controlNetStatus atomic.Int32 // control.NetmapStatus
 
-	nodeInfo atomic.Value // *netmapSDK.NodeInfo
+	nodeInfo atomic.Value // netmapSDK.NodeInfo
 
 	metrics *metrics.NodeMetrics
 }
@@ -176,7 +176,11 @@ func addNewEpochNotificationHandlers(c *cfg) {
 		c.cfgNetmap.state.setCurrentEpoch(ev.(netmapEvent.NewEpoch).EpochNumber())
 	})
 
-	addNewEpochAsyncNotificationHandler(c, func(_ event.Event) {
+	addNewEpochAsyncNotificationHandler(c, func(ev event.Event) {
+		e := ev.(netmapEvent.NewEpoch).EpochNumber()
+
+		c.updateContractNodeInfo(e)
+
 		if !c.needBootstrap() || c.cfgNetmap.reBoostrapTurnedOff.Load() { // fixes #470
 			return
 		}
@@ -186,22 +190,6 @@ func addNewEpochNotificationHandlers(c *cfg) {
 		}
 	})
 
-	addNewEpochAsyncNotificationHandler(c, func(ev event.Event) {
-		e := ev.(netmapEvent.NewEpoch).EpochNumber()
-
-		ni, err := c.netmapLocalNodeState(e)
-		if err != nil {
-			c.log.Error(logs.FrostFSNodeCouldNotUpdateNodeStateOnNewEpoch,
-				zap.Uint64("epoch", e),
-				zap.String("error", err.Error()),
-			)
-			return
-		}
-
-		c.handleLocalNodeInfo(ni)
-	})
-
 	if c.cfgMorph.notaryEnabled {
 		addNewEpochAsyncNotificationHandler(c, func(_ event.Event) {
 			_, err := makeNotaryDeposit(c)
@@ -270,7 +258,7 @@ func initNetmapState(c *cfg) {
 	c.cfgNetmap.state.setCurrentEpoch(epoch)
 	c.cfgNetmap.startEpoch = epoch
 
-	c.handleLocalNodeInfo(ni)
+	c.setContractNodeInfo(ni)
 }
 
 func nodeState(ni *netmapSDK.NodeInfo) string {

View file

@@ -63,7 +63,9 @@ func initTreeService(c *cfg) {
 		tree.WithReplicationChannelCapacity(treeConfig.ReplicationChannelCapacity()),
 		tree.WithReplicationWorkerCount(treeConfig.ReplicationWorkerCount()),
 		tree.WithAuthorizedKeys(treeConfig.AuthorizedKeys()),
-		tree.WithMetrics(c.metricsCollector.TreeService()))
+		tree.WithMetrics(c.metricsCollector.TreeService()),
+		tree.WithAPERouter(c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine),
+	)
 
 	c.cfgGRPC.performAndSave(func(_ string, _ net.Listener, s *grpc.Server) {
 		tree.RegisterTreeServiceServer(s, c.treeService)

go.mod (32 lines changed)
View file

@@ -4,17 +4,16 @@ go 1.20
 require (
 	code.gitea.io/sdk/gitea v0.17.1
-	git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20240215124401-634e24aba715
-	git.frostfs.info/TrueCloudLab/frostfs-contract v0.19.0
+	git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20240717110908-4e13f713f156
+	git.frostfs.info/TrueCloudLab/frostfs-contract v0.19.3-0.20240409111539-e7a05a49ff45
 	git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20231101111734-b3ad3335ff65
 	git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20240301150205-6fe4e2541d0b
 	git.frostfs.info/TrueCloudLab/hrw v1.2.1
-	git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240307151106-2ec958cbfdfd
+	git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240426062043-c5397286410f
 	git.frostfs.info/TrueCloudLab/tzhash v1.8.0
 	git.frostfs.info/TrueCloudLab/zapjournald v0.0.0-20240124114243-cb2e66427d02
 	github.com/cheggaaa/pb v1.0.29
 	github.com/chzyer/readline v1.5.1
-	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
 	github.com/flynn-archive/go-shlex v0.0.0-20150515145356-3f9db97f8568
 	github.com/google/uuid v1.6.0
 	github.com/hashicorp/golang-lru/v2 v2.0.7
@@ -39,11 +38,11 @@ require (
 	go.opentelemetry.io/otel/trace v1.22.0
 	go.uber.org/zap v1.26.0
 	golang.org/x/exp v0.0.0-20240119083558-1b970713d09a
-	golang.org/x/sync v0.6.0
-	golang.org/x/sys v0.16.0
-	golang.org/x/term v0.16.0
-	google.golang.org/grpc v1.61.0
-	google.golang.org/protobuf v1.33.0
+	golang.org/x/sync v0.7.0
+	golang.org/x/sys v0.22.0
+	golang.org/x/term v0.22.0
+	google.golang.org/grpc v1.61.2
+	google.golang.org/protobuf v1.34.2
 	gopkg.in/yaml.v3 v3.0.1
 )
@@ -61,17 +60,18 @@ require (
 	github.com/beorn7/perks v1.0.1 // indirect
 	github.com/bits-and-blooms/bitset v1.13.0 // indirect
 	github.com/cenkalti/backoff/v4 v4.2.1 // indirect
-	github.com/cespare/xxhash/v2 v2.2.0 // indirect
+	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/consensys/bavard v0.1.13 // indirect
 	github.com/consensys/gnark-crypto v0.12.2-0.20231222162921-eb75782795d2 // indirect
 	github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
+	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
 	github.com/davidmz/go-pageant v1.0.2 // indirect
 	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.2.0 // indirect
 	github.com/fsnotify/fsnotify v1.7.0 // indirect
 	github.com/go-fed/httpsig v1.1.0 // indirect
 	github.com/go-logr/logr v1.4.1 // indirect
 	github.com/go-logr/stdr v1.2.2 // indirect
-	github.com/golang/protobuf v1.5.3 // indirect
+	github.com/golang/protobuf v1.5.4 // indirect
 	github.com/golang/snappy v0.0.4 // indirect
 	github.com/gorilla/websocket v1.5.1 // indirect
 	github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.0 // indirect
@@ -121,11 +121,11 @@ require (
 	go.opentelemetry.io/otel/sdk v1.22.0 // indirect
 	go.opentelemetry.io/proto/otlp v1.1.0 // indirect
 	go.uber.org/multierr v1.11.0 // indirect
-	golang.org/x/crypto v0.18.0 // indirect
-	golang.org/x/net v0.20.0 // indirect
-	golang.org/x/text v0.14.0 // indirect
-	google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 // indirect
+	golang.org/x/crypto v0.25.0 // indirect
+	golang.org/x/net v0.27.0 // indirect
+	golang.org/x/text v0.16.0 // indirect
+	google.golang.org/genproto/googleapis/api v0.0.0-20240102182953-50ed04b92917 // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20240711142825-46eb208f015d // indirect
 	gopkg.in/ini.v1 v1.67.0 // indirect
 	lukechampine.com/blake3 v1.2.1 // indirect
 	rsc.io/tmplfunc v0.0.3 // indirect

go.sum: diff not shown

View file

@ -0,0 +1,44 @@
package converter
import (
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
)
func SchemaRoleFromACLRole(role acl.Role) (string, error) {
switch role {
case acl.RoleOwner:
return nativeschema.PropertyValueContainerRoleOwner, nil
case acl.RoleContainer:
return nativeschema.PropertyValueContainerRoleContainer, nil
case acl.RoleInnerRing:
return nativeschema.PropertyValueContainerRoleIR, nil
case acl.RoleOthers:
return nativeschema.PropertyValueContainerRoleOthers, nil
default:
return "", fmt.Errorf("failed to convert %s", role.String())
}
}
func SchemaMethodFromACLOperation(op acl.Op) (string, error) {
switch op {
case acl.OpObjectGet:
return nativeschema.MethodGetObject, nil
case acl.OpObjectHead:
return nativeschema.MethodHeadObject, nil
case acl.OpObjectPut:
return nativeschema.MethodPutObject, nil
case acl.OpObjectDelete:
return nativeschema.MethodDeleteObject, nil
case acl.OpObjectSearch:
return nativeschema.MethodSearchObject, nil
case acl.OpObjectRange:
return nativeschema.MethodRangeObject, nil
case acl.OpObjectHash:
return nativeschema.MethodHashObject, nil
default:
return "", fmt.Errorf("operation cannot be converted: %d", op)
}
}
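A short usage sketch for the helpers above, written as if it were another file in the same converter package so that no import path has to be assumed (the package's location in the repository is not visible in this diff).

package converter

import (
    "fmt"

    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
)

// demoConversion shows how a classic ACL (role, operation) pair is translated
// into the APE vocabulary used by the policy engine.
func demoConversion() {
    role, err := SchemaRoleFromACLRole(acl.RoleOthers)
    if err != nil {
        fmt.Println("role conversion failed:", err)
        return
    }
    method, err := SchemaMethodFromACLOperation(acl.OpObjectGet)
    if err != nil {
        fmt.Println("operation conversion failed:", err)
        return
    }
    fmt.Println(role, "may be matched against", method)
}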

View file

@ -0,0 +1,55 @@
package ape
import (
aperesource "git.frostfs.info/TrueCloudLab/policy-engine/pkg/resource"
)
type Request struct {
operation string
resource Resource
properties map[string]string
}
func NewRequest(operation string, resource Resource, properties map[string]string) Request {
return Request{
operation: operation,
resource: resource,
properties: properties,
}
}
var _ aperesource.Request = Request{}
func (r Request) Operation() string {
return r.operation
}
func (r Request) Property(key string) string {
return r.properties[key]
}
func (r Request) Resource() aperesource.Resource {
return r.resource
}
type Resource struct {
name string
properties map[string]string
}
var _ aperesource.Resource = Resource{}
func NewResource(name string, properties map[string]string) Resource {
return Resource{
name: name,
properties: properties,
}
}
func (r Resource) Name() string {
return r.name
}
func (r Resource) Property(key string) string {
return r.properties[key]
}
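And a corresponding sketch for the common Request/Resource pair, again written as if it lived in the same ape package. Only nativeschema.MethodGetObject is taken from this diff; the resource name format and the property keys below are illustrative placeholders, not constants from the schema package.

package ape

import (
    "fmt"

    nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
)

// demoRequest builds the value that would later be handed to the policy
// engine's chain router for evaluation.
func demoRequest(containerID string) Request {
    res := NewResource(
        fmt.Sprintf("native:object/%s/*", containerID), // hypothetical name format
        map[string]string{"attr": "value"},             // hypothetical resource property
    )
    return NewRequest(
        nativeschema.MethodGetObject,
        res,
        map[string]string{"actor": "example"}, // hypothetical request property
    )
}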

View file

@@ -462,6 +462,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
 		name:             morphPrefix,
 		from:             fromSideChainBlock,
 		morphCacheMetric: s.irMetrics.MorphCacheMetrics(),
+		cmode:            s.cmode,
 	}
 
 	// create morph client

View file

@@ -103,6 +103,8 @@ type (
 		// should report start errors
 		// to the application.
 		runners []func(chan<- error) error
+
+		cmode *atomic.Bool
 	}
 
 	chainParams struct {
@@ -113,6 +115,7 @@ type (
 		sgn              *transaction.Signer
 		from             uint32 // block height
 		morphCacheMetric metrics.MorphCacheMetrics
+		cmode            *atomic.Bool
 	}
 )
 
@@ -330,12 +333,13 @@ func (s *Server) registerStarter(f func() error) {
 // New creates instance of inner ring sever structure.
 func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan<- error,
-	metrics *metrics.InnerRingServiceMetrics,
+	metrics *metrics.InnerRingServiceMetrics, cmode *atomic.Bool,
 ) (*Server, error) {
 	var err error
 
 	server := &Server{
 		log:       log,
 		irMetrics: metrics,
+		cmode:     cmode,
 	}
 
 	server.sdNotify, err = server.initSdNotify(cfg)
@@ -485,6 +489,7 @@ func createClient(ctx context.Context, p *chainParams, errChan chan<- error) (*c
 		}),
 		client.WithSwitchInterval(p.cfg.GetDuration(p.name+".switch_interval")),
 		client.WithMorphCacheMetrics(p.morphCacheMetric),
+		client.WithCompatibilityMode(p.cmode),
 	)
 }

View file

@@ -54,6 +54,7 @@ func initConfig(c *cfg) {
 // New creates, initializes and returns new BlobStor instance.
 func New(opts ...Option) *BlobStor {
 	bs := new(BlobStor)
+	bs.mode = mode.Disabled
 	initConfig(&bs.cfg)
 
 	for i := range opts {

View file

@@ -8,6 +8,7 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/internal/metaerr"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/util/logicerr"
@@ -49,6 +50,7 @@ type shardWrapper struct {
 type setModeRequest struct {
 	sh         *shard.Shard
+	isMeta     bool
 	errorCount uint32
 }
@@ -74,7 +76,7 @@ func (e *StorageEngine) setModeLoop() {
 			if !ok {
 				inProgress[sid] = struct{}{}
 				go func() {
-					e.moveToDegraded(r.sh, r.errorCount)
+					e.moveToDegraded(r.sh, r.errorCount, r.isMeta)
 
 					mtx.Lock()
 					delete(inProgress, sid)
@@ -86,7 +88,7 @@
 	}
 }
 
-func (e *StorageEngine) moveToDegraded(sh *shard.Shard, errCount uint32) {
+func (e *StorageEngine) moveToDegraded(sh *shard.Shard, errCount uint32, isMeta bool) {
 	sid := sh.ID()
 	log := e.log.With(
 		zap.Stringer("shard_id", sid),
@@ -95,21 +97,23 @@ func (e *StorageEngine) moveToDegraded(sh *shard.Shard, errCount uint32) {
 	e.mtx.RLock()
 	defer e.mtx.RUnlock()
 
-	err := sh.SetMode(mode.DegradedReadOnly)
-	if err != nil {
-		log.Error(logs.EngineFailedToMoveShardInDegradedreadonlyModeMovingToReadonly,
-			zap.Error(err))
-
-		err = sh.SetMode(mode.ReadOnly)
-		if err != nil {
-			log.Error(logs.EngineFailedToMoveShardInReadonlyMode,
-				zap.Error(err))
-		} else {
-			log.Info(logs.EngineShardIsMovedInReadonlyModeDueToErrorThreshold)
-		}
-	} else {
-		log.Info(logs.EngineShardIsMovedInDegradedModeDueToErrorThreshold)
+	if isMeta {
+		err := sh.SetMode(mode.DegradedReadOnly)
+		if err == nil {
+			log.Info(logs.EngineShardIsMovedInDegradedModeDueToErrorThreshold)
+			return
+		}
+		log.Error(logs.EngineFailedToMoveShardInDegradedreadonlyModeMovingToReadonly,
+			zap.Error(err))
 	}
+
+	err := sh.SetMode(mode.ReadOnly)
+	if err != nil {
+		log.Error(logs.EngineFailedToMoveShardInReadonlyMode, zap.Error(err))
+		return
+	}
+	log.Info(logs.EngineShardIsMovedInReadonlyModeDueToErrorThreshold)
 }
 
 // reportShardErrorBackground increases shard error counter and logs an error.
@@ -133,7 +137,7 @@ func (e *StorageEngine) reportShardErrorBackground(id string, msg string, err er
 	errCount := sh.errorCount.Add(1)
 	sh.Shard.IncErrorCounter()
-	e.reportShardErrorWithFlags(sh.Shard, errCount, false, msg, err)
+	e.reportShardErrorWithFlags(sh.Shard, errCount, msg, err)
 }
 
 // reportShardError checks that the amount of errors doesn't exceed the configured threshold.
@@ -153,13 +157,12 @@ func (e *StorageEngine) reportShardError(
 	errCount := sh.errorCount.Add(1)
 	sh.Shard.IncErrorCounter()
-	e.reportShardErrorWithFlags(sh.Shard, errCount, true, msg, err, fields...)
+	e.reportShardErrorWithFlags(sh.Shard, errCount, msg, err, fields...)
 }
 
 func (e *StorageEngine) reportShardErrorWithFlags(
 	sh *shard.Shard,
 	errCount uint32,
-	block bool,
 	msg string,
 	err error,
 	fields ...zap.Field,
@@ -175,23 +178,20 @@ func (e *StorageEngine) reportShardErrorWithFlags(
 		return
 	}
 
-	if block {
-		e.moveToDegraded(sh, errCount)
-	} else {
-		req := setModeRequest{
-			errorCount: errCount,
-			sh:         sh,
-		}
+	req := setModeRequest{
+		errorCount: errCount,
+		sh:         sh,
+		isMeta:     errors.As(err, new(metaerr.Error)),
+	}
 
-		select {
-		case e.setModeCh <- req:
-		default:
-			// For background workers we can have a lot of such errors,
-			// thus logging is done with DEBUG level.
-			e.log.Debug(logs.EngineModeChangeIsInProgressIgnoringSetmodeRequest,
-				zap.Stringer("shard_id", sid),
-				zap.Uint32("error_count", errCount))
-		}
+	select {
+	case e.setModeCh <- req:
+	default:
+		// For background workers we can have a lot of such errors,
+		// thus logging is done with DEBUG level.
+		e.log.Debug(logs.EngineModeChangeIsInProgressIgnoringSetmodeRequest,
+			zap.Stringer("shard_id", sid),
+			zap.Uint32("error_count", errCount))
 	}
 }
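In short, the hunk above makes the target mode depend on where the error came from: only metabase errors (wrapped in metaerr.Error) still push a shard into degraded-read-only, while blobstor and other errors now only force read-only, since the metabase itself remains usable. A compact sketch of that decision rule, with simplified stand-in types rather than the engine's real ones:

package main

import (
    "errors"
    "fmt"
)

// Mode mirrors the shard modes involved in this change; names are simplified.
type Mode string

const (
    ReadOnly         Mode = "read-only"
    DegradedReadOnly Mode = "degraded-read-only"
)

// metaError marks errors originating in the metabase, similar in spirit to the
// metaerr.Error wrapper used by the engine (assumed shape, not the real type).
type metaError struct{ err error }

func (e metaError) Error() string { return "metabase: " + e.err.Error() }

// targetMode picks the mode a misbehaving shard should be switched to: only
// metabase failures justify degraded mode, while blobstor failures keep the
// metabase usable and therefore only force read-only.
func targetMode(err error) Mode {
    if errors.As(err, new(metaError)) {
        return DegradedReadOnly
    }
    return ReadOnly
}

func main() {
    fmt.Println(targetMode(metaError{errors.New("bucket missing")})) // degraded-read-only
    fmt.Println(targetMode(errors.New("blobstor: EOF")))             // read-only
}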

View file

@@ -7,6 +7,7 @@ import (
 	"path/filepath"
 	"strconv"
 	"testing"
+	"time"
 
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
@@ -153,7 +154,7 @@ func TestErrorReporting(t *testing.T) {
 	for i := uint32(0); i < 2; i++ {
 		_, err = te.ng.Get(context.Background(), GetPrm{addr: object.AddressOf(obj)})
 		require.Error(t, err)
-		checkShardState(t, te.ng, te.shards[0].id, errThreshold+i, mode.DegradedReadOnly)
+		checkShardState(t, te.ng, te.shards[0].id, errThreshold+i, mode.ReadOnly)
 		checkShardState(t, te.ng, te.shards[1].id, 0, mode.ReadWrite)
 	}
 
@@ -229,6 +230,8 @@ func checkShardState(t *testing.T, e *StorageEngine, id *shard.ID, errCount uint
 	sh := e.shards[id.String()]
 	e.mtx.RUnlock()
 
-	require.Equal(t, errCount, sh.errorCount.Load())
-	require.Equal(t, mode, sh.GetMode())
+	require.Eventually(t, func() bool {
+		return errCount == sh.errorCount.Load() &&
+			mode == sh.GetMode()
+	}, 10*time.Second, 10*time.Millisecond, "shard mode doesn't changed to expected state in 10 seconds")
 }

View file

@@ -85,6 +85,7 @@ func (e *StorageEngine) head(ctx context.Context, prm HeadPrm) (HeadRes, error)
 	shPrm.SetRaw(prm.raw)
 
 	e.iterateOverSortedShards(prm.addr, func(_ int, sh hashedShard) (stop bool) {
+		shPrm.ShardLooksBad = sh.errorCount.Load() >= e.errorsThreshold
 		res, err := sh.Head(ctx, shPrm)
 		if err != nil {
 			switch {

View file

@@ -210,19 +210,18 @@ func (e *StorageEngine) TreeGetChildren(ctx context.Context, cid cidSDK.ID, tree
 }
 
 // TreeSortedByFilename implements the pilorama.Forest interface.
-func (e *StorageEngine) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeID pilorama.Node, last string, count int) ([]pilorama.NodeInfo, string, error) {
+func (e *StorageEngine) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeID pilorama.MultiNode, last *string, count int) ([]pilorama.MultiNodeInfo, *string, error) {
 	ctx, span := tracing.StartSpanFromContext(ctx, "StorageEngine.TreeSortedByFilename",
 		trace.WithAttributes(
 			attribute.String("container_id", cid.EncodeToString()),
 			attribute.String("tree_id", treeID),
-			attribute.String("node_id", strconv.FormatUint(nodeID, 10)),
 		),
 	)
 	defer span.End()
 
 	var err error
-	var nodes []pilorama.NodeInfo
-	var cursor string
+	var nodes []pilorama.MultiNodeInfo
+	var cursor *string
 	for _, sh := range e.sortShards(cid) {
 		nodes, cursor, err = sh.TreeSortedByFilename(ctx, cid, treeID, nodeID, last, count)
 		if err != nil {

View file

@@ -232,14 +232,19 @@ func (db *DB) ContainerCount(ctx context.Context, id cid.ID) (ObjectCounters, er
 }
 
 func (db *DB) incCounters(tx *bbolt.Tx, cnrID cid.ID, isUserObject bool) error {
-	if err := db.updateShardObjectCounter(tx, phy, 1, true); err != nil {
+	b := tx.Bucket(shardInfoBucket)
+	if b == nil {
+		return db.incContainerObjectCounter(tx, cnrID, isUserObject)
+	}
+
+	if err := db.updateShardObjectCounterBucket(b, phy, 1, true); err != nil {
 		return fmt.Errorf("could not increase phy object counter: %w", err)
 	}
-	if err := db.updateShardObjectCounter(tx, logical, 1, true); err != nil {
+	if err := db.updateShardObjectCounterBucket(b, logical, 1, true); err != nil {
 		return fmt.Errorf("could not increase logical object counter: %w", err)
 	}
 	if isUserObject {
-		if err := db.updateShardObjectCounter(tx, user, 1, true); err != nil {
+		if err := db.updateShardObjectCounterBucket(b, user, 1, true); err != nil {
 			return fmt.Errorf("could not increase user object counter: %w", err)
 		}
 	}
@@ -252,6 +257,10 @@ func (db *DB) updateShardObjectCounter(tx *bbolt.Tx, typ objectType, delta uint6
 		return nil
 	}
 
+	return db.updateShardObjectCounterBucket(b, typ, delta, inc)
+}
+
+func (*DB) updateShardObjectCounterBucket(b *bbolt.Bucket, typ objectType, delta uint64, inc bool) error {
 	var counter uint64
 	var counterKey []byte

View file

@@ -107,6 +107,7 @@ func New(opts ...Option) *DB {
 				matchBucket: stringCommonPrefixMatcherBucket,
 			},
 		},
+		mode: mode.Disabled,
 	}
 }

View file

@@ -36,6 +36,14 @@ func (r StorageIDRes) StorageID() []byte {
 // StorageID returns storage descriptor for objects from the blobstor.
 // It is put together with the object can makes get/delete operation faster.
 func (db *DB) StorageID(ctx context.Context, prm StorageIDPrm) (res StorageIDRes, err error) {
+	var (
+		startedAt = time.Now()
+		success   = false
+	)
+	defer func() {
+		db.metrics.AddMethodDuration("StorageID", time.Since(startedAt), success)
+	}()
+
 	_, span := tracing.StartSpanFromContext(ctx, "metabase.StorageID",
 		trace.WithAttributes(
 			attribute.String("address", prm.addr.EncodeToString()),
@@ -54,7 +62,7 @@ func (db *DB) StorageID(ctx context.Context, prm StorageIDPrm) (res StorageIDRes
 		return err
 	})
 
+	success = err == nil
 	return res, metaerr.Wrap(err)
 }

View file

@ -81,6 +81,7 @@ func NewBoltForest(opts ...Option) ForestStorage {
openFile: os.OpenFile, openFile: os.OpenFile,
metrics: &noopMetrics{}, metrics: &noopMetrics{},
}, },
mode: mode.Disabled,
} }
for i := range opts { for i := range opts {
@ -905,7 +906,7 @@ func (t *boltForest) TreeGetByPath(ctx context.Context, cid cidSDK.ID, treeID st
b := treeRoot.Bucket(dataBucket) b := treeRoot.Bucket(dataBucket)
i, curNode, err := t.getPathPrefix(b, attr, path[:len(path)-1]) i, curNodes, err := t.getPathPrefixMultiTraversal(b, attr, path[:len(path)-1])
if err != nil { if err != nil {
return err return err
} }
@ -917,21 +918,23 @@ func (t *boltForest) TreeGetByPath(ctx context.Context, cid cidSDK.ID, treeID st
c := b.Cursor() c := b.Cursor()
attrKey := internalKey(nil, attr, path[len(path)-1], curNode, 0) for i := range curNodes {
attrKey = attrKey[:len(attrKey)-8] attrKey := internalKey(nil, attr, path[len(path)-1], curNodes[i], 0)
childKey, _ := c.Seek(attrKey) attrKey = attrKey[:len(attrKey)-8]
for len(childKey) == len(attrKey)+8 && bytes.Equal(attrKey, childKey[:len(childKey)-8]) { childKey, _ := c.Seek(attrKey)
child := binary.LittleEndian.Uint64(childKey[len(childKey)-8:]) for len(childKey) == len(attrKey)+8 && bytes.Equal(attrKey, childKey[:len(childKey)-8]) {
if latest { child := binary.LittleEndian.Uint64(childKey[len(childKey)-8:])
_, ts, _, _ := t.getState(b, stateKey(make([]byte, 9), child)) if latest {
if ts >= maxTimestamp { _, ts, _, _ := t.getState(b, stateKey(make([]byte, 9), child))
nodes = append(nodes[:0], child) if ts >= maxTimestamp {
maxTimestamp = ts nodes = append(nodes[:0], child)
maxTimestamp = ts
}
} else {
nodes = append(nodes, child)
} }
} else { childKey, _ = c.Next()
nodes = append(nodes, child)
} }
childKey, _ = c.Next()
} }
return nil return nil
})) }))
@ -987,23 +990,26 @@ func (t *boltForest) TreeGetMeta(ctx context.Context, cid cidSDK.ID, treeID stri
return m, parentID, metaerr.Wrap(err) return m, parentID, metaerr.Wrap(err)
} }
func (t *boltForest) hasFewChildren(b *bbolt.Bucket, nodeID Node, threshold int) bool { func (t *boltForest) hasFewChildren(b *bbolt.Bucket, nodeIDs MultiNode, threshold int) bool {
key := make([]byte, 9) key := make([]byte, 9)
key[0] = 'c' key[0] = 'c'
binary.LittleEndian.PutUint64(key[1:], nodeID)
count := 0 count := 0
c := b.Cursor() for _, nodeID := range nodeIDs {
for k, _ := c.Seek(key); len(k) == childrenKeySize && binary.LittleEndian.Uint64(k[1:]) == nodeID; k, _ = c.Next() { binary.LittleEndian.PutUint64(key[1:], nodeID)
if count++; count > threshold {
return false c := b.Cursor()
for k, _ := c.Seek(key); len(k) == childrenKeySize && binary.LittleEndian.Uint64(k[1:]) == nodeID; k, _ = c.Next() {
if count++; count > threshold {
return false
}
} }
} }
return true return true
} }
// TreeSortedByFilename implements the Forest interface.
func (t *boltForest) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeIDs MultiNode, last *string, count int) ([]MultiNodeInfo, *string, error) {
	var (
		startedAt = time.Now()
		success   = false
@@ -1016,7 +1022,6 @@ func (t *boltForest) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, tr
		trace.WithAttributes(
			attribute.String("container_id", cid.EncodeToString()),
			attribute.String("tree_id", treeID),
		),
	)
	defer span.End()
@@ -1025,7 +1030,10 @@ func (t *boltForest) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, tr
	defer t.modeMtx.RUnlock()

	if t.mode.NoMetabase() {
		return nil, last, ErrDegradedMode
	}
	if len(nodeIDs) == 0 {
		return nil, last, errors.New("empty node list")
	}

	h := newHeap(last, count)
@@ -1045,20 +1053,22 @@ func (t *boltForest) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, tr
		// If the node is a leaf, we could scan all filenames in the tree.
		// To prevent this we first count the number of children: if it is less than
		// the number of nodes we need to return, fallback to TreeGetChildren() implementation.
		if fewChildren = t.hasFewChildren(b, nodeIDs, count); fewChildren {
			var err error
			result, err = t.getChildren(b, nodeIDs)
			return err
		}

		t.fillSortedChildren(b, nodeIDs, h)

		for info, ok := h.pop(); ok; info, ok = h.pop() {
			for _, id := range info.id {
				childInfo, err := t.getChildInfo(b, key, id)
				if err != nil {
					return err
				}
				result = append(result, childInfo)
			}
		}
		return nil
	})
@@ -1069,20 +1079,29 @@ func (t *boltForest) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, tr
	}

	if fewChildren {
		result = sortAndCut(result, last)
	}
	res := mergeNodeInfos(result)
	if len(res) > count {
		res = res[:count]
	}
	if len(res) != 0 {
		s := string(findAttr(res[len(res)-1].Meta, AttributeFilename))
		last = &s
	}
	return res, last, metaerr.Wrap(err)
}

func sortAndCut(result []NodeInfo, last *string) []NodeInfo {
	var lastBytes []byte
	if last != nil {
		lastBytes = []byte(*last)
	}
	sort.Slice(result, func(i, j int) bool {
		return bytes.Compare(result[i].Meta.GetAttr(AttributeFilename), result[j].Meta.GetAttr(AttributeFilename)) == -1
	})
	for i := range result {
		if lastBytes == nil || bytes.Compare(lastBytes, result[i].Meta.GetAttr(AttributeFilename)) == -1 {
			return result[i:]
		}
	}
@@ -1101,37 +1120,64 @@ func (t *boltForest) getChildInfo(b *bbolt.Bucket, key []byte, childID Node) (No
	return childInfo, nil
}

func (t *boltForest) fillSortedChildren(b *bbolt.Bucket, nodeIDs MultiNode, h *fixedHeap) {
	c := b.Cursor()
	prefix := internalKeyPrefix(nil, AttributeFilename)

	length := uint16(0)
	count := 0

	var nodes []uint64
	var lastFilename *string
	for k, _ := c.Seek(prefix); len(k) > 0 && k[0] == 'i'; k, _ = c.Next() {
		if len(k) < len(prefix)+2+16 {
			continue
		}

		parentID := binary.LittleEndian.Uint64(k[len(k)-16:])
		var contains bool
		for i := range nodeIDs {
			if parentID == nodeIDs[i] {
				contains = true
				break
			}
		}
		if !contains {
			continue
		}

		actualLength := binary.LittleEndian.Uint16(k[len(prefix):])
		childID := binary.LittleEndian.Uint64(k[len(k)-8:])
		filename := string(k[len(prefix)+2 : len(k)-16])

		if lastFilename == nil {
			lastFilename = &filename
			nodes = append(nodes, childID)
		} else if *lastFilename == filename {
			nodes = append(nodes, childID)
		} else {
			processed := h.push(nodes, *lastFilename)
			nodes = MultiNode{childID}
			lastFilename = &filename
			if actualLength != length {
				length = actualLength
				count = 1
			} else if processed {
				if count++; count > h.count {
					lastFilename = nil
					nodes = nil
					length = actualLength + 1
					c.Seek(append(prefix, byte(length), byte(length>>8)))
					c.Prev() // c.Next() will be performed by for loop
				}
			}
		}
	}

	if len(nodes) != 0 && lastFilename != nil {
		h.push(nodes, *lastFilename)
	}
}
// TreeGetChildren implements the Forest interface.
@@ -1171,28 +1217,30 @@ func (t *boltForest) TreeGetChildren(ctx context.Context, cid cidSDK.ID, treeID
		b := treeRoot.Bucket(dataBucket)

		var err error
		result, err = t.getChildren(b, []Node{nodeID})
		return err
	})
	success = err == nil
	return result, metaerr.Wrap(err)
}

func (t *boltForest) getChildren(b *bbolt.Bucket, nodeIDs MultiNode) ([]NodeInfo, error) {
	var result []NodeInfo

	key := make([]byte, 9)
	for _, nodeID := range nodeIDs {
		key[0] = 'c'
		binary.LittleEndian.PutUint64(key[1:], nodeID)

		c := b.Cursor()
		for k, _ := c.Seek(key); len(k) == childrenKeySize && binary.LittleEndian.Uint64(k[1:]) == nodeID; k, _ = c.Next() {
			childID := binary.LittleEndian.Uint64(k[9:])
			childInfo, err := t.getChildInfo(b, key, childID)
			if err != nil {
				return nil, err
			}
			result = append(result, childInfo)
		}
	}
	return result, nil
}
@@ -1406,6 +1454,36 @@ func (t *boltForest) TreeListTrees(ctx context.Context, prm TreeListTreesPrm) (*
	return &res, nil
}
func (t *boltForest) getPathPrefixMultiTraversal(bTree *bbolt.Bucket, attr string, path []string) (int, []Node, error) {
c := bTree.Cursor()
var curNodes []Node
nextNodes := []Node{RootID}
var attrKey []byte
for i := range path {
curNodes, nextNodes = nextNodes, curNodes[:0]
for j := range curNodes {
attrKey = internalKey(attrKey, attr, path[i], curNodes[j], 0)
attrKey = attrKey[:len(attrKey)-8]
childKey, value := c.Seek(attrKey)
for len(childKey) == len(attrKey)+8 && bytes.Equal(attrKey, childKey[:len(childKey)-8]) {
if len(value) == 1 && value[0] == 1 {
nextNodes = append(nextNodes, binary.LittleEndian.Uint64(childKey[len(childKey)-8:]))
}
childKey, value = c.Next()
}
}
if len(nextNodes) == 0 {
return i, curNodes, nil
}
}
return len(path), nextNodes, nil
}
func (t *boltForest) getPathPrefix(bTree *bbolt.Bucket, attr string, path []string) (int, Node, error) {
	c := bTree.Cursor()


@@ -156,45 +156,59 @@ func (f *memoryForest) TreeGetMeta(_ context.Context, cid cid.ID, treeID string,
}

// TreeSortedByFilename implements the Forest interface.
func (f *memoryForest) TreeSortedByFilename(_ context.Context, cid cid.ID, treeID string, nodeIDs MultiNode, start *string, count int) ([]MultiNodeInfo, *string, error) {
	fullID := cid.String() + "/" + treeID
	s, ok := f.treeMap[fullID]
	if !ok {
		return nil, start, ErrTreeNotFound
	}
	if count == 0 {
		return nil, start, nil
	}

	var res []NodeInfo

	for _, nodeID := range nodeIDs {
		children := s.tree.getChildren(nodeID)
		for _, childID := range children {
			var found bool
			for _, kv := range s.infoMap[childID].Meta.Items {
				if kv.Key == AttributeFilename {
					found = true
					break
				}
			}
			if !found {
				continue
			}
			res = append(res, NodeInfo{
				ID:       childID,
				Meta:     s.infoMap[childID].Meta,
				ParentID: s.infoMap[childID].Parent,
			})
		}
	}
	if len(res) == 0 {
		return nil, start, nil
	}

	sort.Slice(res, func(i, j int) bool {
		return bytes.Compare(res[i].Meta.GetAttr(AttributeFilename), res[j].Meta.GetAttr(AttributeFilename)) == -1
	})

	r := mergeNodeInfos(res)
	for i := range r {
		if start == nil || string(findAttr(r[i].Meta, AttributeFilename)) > *start {
			finish := i + count
			if len(res) < finish {
				finish = len(res)
			}
			last := string(findAttr(r[finish-1].Meta, AttributeFilename))
			return r[i:finish], &last, nil
		}
	}
	last := string(res[len(res)-1].Meta.GetAttr(AttributeFilename))
	return nil, &last, nil
}
// TreeGetChildren implements the Forest interface. // TreeGetChildren implements the Forest interface.


@@ -16,7 +16,6 @@ import (
	cidSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
	cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
	objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
	"github.com/google/uuid"
	"github.com/stretchr/testify/require"
	"golang.org/x/sync/errgroup"
@@ -216,7 +215,7 @@ func BenchmarkForestSortedIteration(b *testing.B) {
		b.Run(providers[i].name+",root", func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				res, _, err := f.TreeSortedByFilename(context.Background(), cnr, treeID, MultiNode{RootID}, nil, 100)
				if err != nil || len(res) != 100 {
					b.Fatalf("err %v, count %d", err, len(res))
				}
@@ -224,7 +223,7 @@ func BenchmarkForestSortedIteration(b *testing.B) {
		})
		b.Run(providers[i].name+",leaf", func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				res, _, err := f.TreeSortedByFilename(context.Background(), cnr, treeID, MultiNode{1}, nil, 100)
				if err != nil || len(res) != 0 {
					b.FailNow()
				}
@@ -247,14 +246,14 @@ func testForestTreeSortedIteration(t *testing.T, s ForestStorage) {
	cid := cidtest.ID()
	d := CIDDescriptor{cid, 0, 1}
	treeID := "version"
	treeAdd := func(t *testing.T, ts int, filename string) {
		_, err := s.TreeMove(context.Background(), d, treeID, &Move{
			Child:  RootID + uint64(ts),
			Parent: RootID,
			Meta: Meta{
				Time: Timestamp(ts),
				Items: []KeyValue{
					{Key: AttributeFilename, Value: []byte(filename)},
				},
			},
		})
@@ -262,20 +261,20 @@ func testForestTreeSortedIteration(t *testing.T, s ForestStorage) {
	}

	const count = 9
	treeAdd(t, 1, "")
	for i := 1; i < count; i++ {
		treeAdd(t, i+1, strconv.Itoa(i+1))
	}

	var result []MultiNodeInfo
	treeAppend := func(t *testing.T, last *string, count int) *string {
		res, cursor, err := s.TreeSortedByFilename(context.Background(), d.CID, treeID, MultiNode{RootID}, last, count)
		require.NoError(t, err)
		result = append(result, res...)
		return cursor
	}

	last := treeAppend(t, nil, 2)
	last = treeAppend(t, last, 3)
	last = treeAppend(t, last, 0)
	last = treeAppend(t, last, 1)
@@ -283,8 +282,12 @@ func testForestTreeSortedIteration(t *testing.T, s ForestStorage) {
	require.Len(t, result, count)
	for i := range result {
		require.Equal(t, MultiNode{RootID + uint64(i+1)}, result[i].Children)
		if i == 0 {
			require.Equal(t, "", string(findAttr(result[i].Meta, AttributeFilename)))
		} else {
			require.Equal(t, strconv.Itoa(RootID+i+1), string(findAttr(result[i].Meta, AttributeFilename)))
		}
	}
}
@@ -315,12 +318,12 @@ func testForestTreeSortedByFilename(t *testing.T, s ForestStorage) {
		require.NoError(t, err)
	}

	expectAttributes := func(t *testing.T, attr string, expected []string, res []MultiNodeInfo) {
		require.Equal(t, len(expected), len(res))

		actual := make([]string, len(res))
		for i := range actual {
			actual[i] = string(findAttr(res[i].Meta, attr))
		}
		require.Equal(t, expected, actual)
	}
@@ -342,40 +345,40 @@ func testForestTreeSortedByFilename(t *testing.T, s ForestStorage) {
		treeAddByPath(t, items[i])
	}

	getChildren := func(t *testing.T, id MultiNode) []MultiNodeInfo {
		res, _, err := s.TreeSortedByFilename(context.Background(), d.CID, treeID, id, nil, len(items))
		require.NoError(t, err)
		return res
	}

	res := getChildren(t, MultiNode{RootID})
	expectAttributes(t, AttributeFilename, []string{"a", "b", "c"}, res)
	expectAttributes(t, controlAttr, []string{"", "", "c"}, res)

	{
		ra := getChildren(t, res[0].Children)
		expectAttributes(t, AttributeFilename, []string{"bbb"}, ra)
		expectAttributes(t, controlAttr, []string{""}, ra)

		rabbb := getChildren(t, ra[0].Children)
		expectAttributes(t, AttributeFilename, []string{"ccc", "xxx", "z"}, rabbb)
		expectAttributes(t, controlAttr, []string{"a/bbb/ccc", "a/bbb/xxx", "a/bbb/z"}, rabbb)
	}
	{
		rb := getChildren(t, res[1].Children)
		expectAttributes(t, AttributeFilename, []string{"bbb", "xxx"}, rb)
		expectAttributes(t, controlAttr, []string{"", ""}, rb)

		rbbbb := getChildren(t, rb[0].Children)
		expectAttributes(t, AttributeFilename, []string{"ccc"}, rbbbb)
		expectAttributes(t, controlAttr, []string{"b/bbb/ccc"}, rbbbb)

		rbxxx := getChildren(t, rb[1].Children)
		expectAttributes(t, AttributeFilename, []string{"z"}, rbxxx)
		expectAttributes(t, controlAttr, []string{"b/xxx/z"}, rbxxx)
	}
	{
		rc := getChildren(t, res[2].Children)
		require.Len(t, rc, 0)
	}
}


@@ -5,7 +5,7 @@ import (
)

type heapInfo struct {
	id       MultiNode
	filename string
}
@@ -17,6 +17,7 @@ func (h filenameHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] }
func (h *filenameHeap) Push(x any) {
	*h = append(*h, x.(heapInfo))
}

func (h *filenameHeap) Pop() any {
	old := *h
	n := len(old)
@@ -27,13 +28,13 @@ func (h *filenameHeap) Pop() any {
// fixedHeap maintains a fixed number of smallest elements started at some point.
type fixedHeap struct {
	start *string
	max   string
	count int
	h     *filenameHeap
}

func newHeap(start *string, count int) *fixedHeap {
	h := new(filenameHeap)
	heap.Init(h)
@@ -45,8 +46,8 @@ func newHeap(start string, count int) *fixedHeap {
	}
}

func (h *fixedHeap) push(id MultiNode, filename string) bool {
	if h.start != nil && filename <= *h.start {
		return false
	}

	heap.Push(h.h, heapInfo{id: id, filename: filename})
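The bounded heap above drives the sorted listing: push() skips filenames at or before the requested cursor, and the heap keeps only the count smallest entries seen so far. A minimal sketch of how such a heap could be exercised, assuming the newHeap/push/pop behaviour shown in this diff (collectSmallest and its arguments are illustrative, not part of the code base):

func collectSmallest(entries map[string]MultiNode, start *string, count int) []string {
	// Sketch only: feed filename groups into the bounded heap and drain it in ascending order.
	h := newHeap(start, count)
	for name, ids := range entries {
		h.push(ids, name) // names at or before *start are ignored
	}
	var out []string
	for info, ok := h.pop(); ok; info, ok = h.pop() {
		out = append(out, info.filename)
	}
	return out
}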


@@ -35,7 +35,7 @@ type Forest interface {
	TreeGetChildren(ctx context.Context, cid cidSDK.ID, treeID string, nodeID Node) ([]NodeInfo, error)
	// TreeSortedByFilename returns children of the node with the specified ID. The nodes are sorted by the filename attribute.
	// Should return ErrTreeNotFound if the tree is not found, and empty result if the node is not in the tree.
	TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeID MultiNode, last *string, count int) ([]MultiNodeInfo, *string, error)
	// TreeGetOpLog returns first log operation stored at or above the height.
	// In case no such operation is found, empty Move and nil error should be returned.
	TreeGetOpLog(ctx context.Context, cid cidSDK.ID, treeID string, height uint64) (Move, error)
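With the new signature the listing is paged by an opaque *string cursor: nil starts from the beginning, and the cursor returned by one call is passed to the next. A hedged sketch of draining a tree in filename order under that contract (listAllSorted and the batch size of 100 are illustrative):

func listAllSorted(ctx context.Context, f Forest, cnr cidSDK.ID, treeID string) ([]MultiNodeInfo, error) {
	var (
		all    []MultiNodeInfo
		cursor *string // nil means "start from the first filename"
	)
	for {
		batch, next, err := f.TreeSortedByFilename(ctx, cnr, treeID, MultiNode{RootID}, cursor, 100)
		if err != nil {
			return nil, err
		}
		if len(batch) == 0 {
			return all, nil // an empty batch means the listing is exhausted
		}
		all = append(all, batch...)
		cursor = next
	}
}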


@@ -21,7 +21,11 @@ func (x Meta) Bytes() []byte {
}

func (x Meta) GetAttr(name string) []byte {
	return findAttr(x.Items, name)
}

func findAttr(ms []KeyValue, name string) []byte {
	for _, kv := range ms {
		if kv.Key == name {
			return kv.Value
		}


@ -0,0 +1,49 @@
package pilorama
import "bytes"
// MultiNode represents a group of internal nodes accessible by the same path, but having different id.
type MultiNode []Node
// MultiNodeInfo represents a group of internal nodes accessible by the same path, but having different id.
type MultiNodeInfo struct {
Children MultiNode
Parents MultiNode
Timestamps []uint64
Meta []KeyValue
}
func (r *MultiNodeInfo) Add(info NodeInfo) bool {
if !isInternal(info.Meta.Items) || !isInternal(r.Meta) ||
!bytes.Equal(r.Meta[0].Value, info.Meta.Items[0].Value) {
return false
}
r.Children = append(r.Children, info.ID)
r.Parents = append(r.Parents, info.ParentID)
r.Timestamps = append(r.Timestamps, info.Meta.Time)
return true
}
func (n NodeInfo) ToMultiNode() MultiNodeInfo {
return MultiNodeInfo{
Children: MultiNode{n.ID},
Parents: MultiNode{n.ParentID},
Timestamps: []uint64{n.Meta.Time},
Meta: n.Meta.Items,
}
}
func isInternal(m []KeyValue) bool {
return len(m) == 1 && m[0].Key == AttributeFilename
}
func mergeNodeInfos(ns []NodeInfo) []MultiNodeInfo {
var r []MultiNodeInfo
for _, info := range ns {
if len(r) == 0 || !r[len(r)-1].Add(info) {
r = append(r, info.ToMultiNode())
}
}
return r
}
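mergeNodeInfos relies on its input being sorted by filename: consecutive internal nodes that carry the same single FileName attribute are folded into one MultiNodeInfo, and anything else starts a new group. A small sketch with made-up IDs and timestamps:

// Sketch only: two nodes with the same filename collapse into one MultiNodeInfo,
// a node with a different filename starts a new group. Values are made up.
func exampleMerge() []MultiNodeInfo {
	dirA1 := NodeInfo{ID: 1, ParentID: RootID, Meta: Meta{Time: 10, Items: []KeyValue{{Key: AttributeFilename, Value: []byte("a")}}}}
	dirA2 := NodeInfo{ID: 5, ParentID: RootID, Meta: Meta{Time: 20, Items: []KeyValue{{Key: AttributeFilename, Value: []byte("a")}}}}
	dirB := NodeInfo{ID: 9, ParentID: RootID, Meta: Meta{Time: 30, Items: []KeyValue{{Key: AttributeFilename, Value: []byte("b")}}}}

	// Input must already be sorted by filename, as in TreeSortedByFilename.
	return mergeNodeInfos([]NodeInfo{dirA1, dirA2, dirB})
	// Result: one group with Children {1, 5} for "a" and one with Children {9} for "b".
}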


@ -0,0 +1,155 @@
package pilorama
import (
"context"
"strings"
"testing"
cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
"github.com/stretchr/testify/require"
)
func TestDuplicateDirectory(t *testing.T) {
for i := range providers {
if providers[i].name == "inmemory" {
continue
}
t.Run(providers[i].name, func(t *testing.T) {
testDuplicateDirectory(t, providers[i].construct(t))
})
}
}
func testDuplicateDirectory(t *testing.T, f Forest) {
ctx := context.Background()
d := CIDDescriptor{CID: cidtest.ID(), Size: 1}
treeID := "sometree"
treeApply := func(t *testing.T, parent, child uint64, filename string, internal bool) {
// Nothing magic here, we add items in order and children are unique.
// This simplifies function interface a bit.
ts := child
kv := []KeyValue{{Key: AttributeFilename, Value: []byte(filename)}}
if !internal {
kv = append(kv, KeyValue{Key: "uniqueAttr", Value: []byte{byte(child)}})
}
err := f.TreeApply(ctx, d.CID, treeID, &Move{
Parent: parent,
Child: child,
Meta: Meta{
Time: ts,
Items: kv,
},
}, true)
require.NoError(t, err)
}
// The following tree is constructed:
// 0
// [1] |-- dir1 (internal)
// [2] |-- value1
// [3] |-- dir3 (internal)
// [4] |-- value3
// [5] |-- dir1 (internal)
// [6] |-- value2
// [7] |-- dir3 (internal)
// [8] |-- value4
// [9] |-- dir2 (internal)
// [10] |-- value0
treeApply(t, RootID, 1, "dir1", true)
treeApply(t, 1, 2, "value1", false)
treeApply(t, 1, 3, "dir3", true)
treeApply(t, 3, 4, "value3", false)
treeApply(t, RootID, 5, "dir1", true)
treeApply(t, 5, 6, "value2", false)
treeApply(t, 5, 7, "dir3", true)
treeApply(t, 7, 8, "value4", false)
treeApply(t, RootID, 9, "dir2", true)
treeApply(t, RootID, 10, "value0", false)
// The compacted view:
// 0
// [1,5] |-- dir1 (internal)
// [2] |-- value1
// [3,7] |-- dir3 (internal)
// [4] |-- value3
// [8] |-- value4
// [6] |-- value2
// [9] |-- dir2 (internal)
// [10] |-- value0
testGetByPath := func(t *testing.T, p string) []byte {
pp := strings.Split(p, "/")
nodes, err := f.TreeGetByPath(context.Background(), d.CID, treeID, AttributeFilename, pp, false)
require.NoError(t, err)
require.Equal(t, 1, len(nodes))
meta, _, err := f.TreeGetMeta(ctx, d.CID, treeID, nodes[0])
require.NoError(t, err)
require.Equal(t, []byte(pp[len(pp)-1]), meta.GetAttr(AttributeFilename))
return meta.GetAttr("uniqueAttr")
}
require.Equal(t, []byte{2}, testGetByPath(t, "dir1/value1"))
require.Equal(t, []byte{4}, testGetByPath(t, "dir1/dir3/value3"))
require.Equal(t, []byte{8}, testGetByPath(t, "dir1/dir3/value4"))
require.Equal(t, []byte{10}, testGetByPath(t, "value0"))
testSortedByFilename := func(t *testing.T, root MultiNode, last *string, batchSize int) ([]MultiNodeInfo, *string) {
res, last, err := f.TreeSortedByFilename(context.Background(), d.CID, treeID, root, last, batchSize)
require.NoError(t, err)
return res, last
}
t.Run("test sorted listing, full children branch", func(t *testing.T) {
t.Run("big batch size", func(t *testing.T) {
res, _ := testSortedByFilename(t, MultiNode{RootID}, nil, 10)
require.Equal(t, 3, len(res))
require.Equal(t, MultiNode{1, 5}, res[0].Children)
require.Equal(t, MultiNode{9}, res[1].Children)
require.Equal(t, MultiNode{10}, res[2].Children)
t.Run("multi-root", func(t *testing.T) {
res, _ := testSortedByFilename(t, MultiNode{1, 5}, nil, 10)
require.Equal(t, 3, len(res))
require.Equal(t, MultiNode{3, 7}, res[0].Children)
require.Equal(t, MultiNode{2}, res[1].Children)
require.Equal(t, MultiNode{6}, res[2].Children)
})
})
t.Run("small batch size", func(t *testing.T) {
res, last := testSortedByFilename(t, MultiNode{RootID}, nil, 1)
require.Equal(t, 1, len(res))
require.Equal(t, MultiNode{1, 5}, res[0].Children)
res, last = testSortedByFilename(t, MultiNode{RootID}, last, 1)
require.Equal(t, 1, len(res))
require.Equal(t, MultiNode{9}, res[0].Children)
res, last = testSortedByFilename(t, MultiNode{RootID}, last, 1)
require.Equal(t, 1, len(res))
require.Equal(t, MultiNode{10}, res[0].Children)
res, _ = testSortedByFilename(t, MultiNode{RootID}, last, 1)
require.Equal(t, 0, len(res))
t.Run("multi-root", func(t *testing.T) {
res, last := testSortedByFilename(t, MultiNode{1, 5}, nil, 1)
require.Equal(t, 1, len(res))
require.Equal(t, MultiNode{3, 7}, res[0].Children)
res, last = testSortedByFilename(t, MultiNode{1, 5}, last, 1)
require.Equal(t, 1, len(res))
require.Equal(t, MultiNode{2}, res[0].Children)
res, last = testSortedByFilename(t, MultiNode{1, 5}, last, 1)
require.Equal(t, 1, len(res))
require.Equal(t, MultiNode{6}, res[0].Children)
res, _ = testSortedByFilename(t, MultiNode{RootID}, last, 1)
require.Equal(t, 0, len(res))
})
})
})
}


@@ -13,8 +13,9 @@ import (
// HeadPrm groups the parameters of Head operation.
type HeadPrm struct {
	addr oid.Address
	raw  bool

	ShardLooksBad bool
}

// HeadRes groups the resulting values of Head operation.
@@ -59,7 +60,8 @@ func (s *Shard) Head(ctx context.Context, prm HeadPrm) (HeadRes, error) {
	var obj *objectSDK.Object
	var err error
	mode := s.GetMode()
	if mode.NoMetabase() || (mode.ReadOnly() && prm.ShardLooksBad) {
		var getPrm GetPrm
		getPrm.SetAddress(prm.addr)
		getPrm.SetIgnoreMeta(true)

@@ -469,6 +469,7 @@ func (s *Shard) updateMetrics(ctx context.Context) {
		s.setContainerObjectsCount(contID.EncodeToString(), logical, count.Logic)
		s.setContainerObjectsCount(contID.EncodeToString(), user, count.User)
	}
	s.cfg.metricsWriter.SetMode(s.info.Mode)
}
// incObjectCounter increment both physical and logical object // incObjectCounter increment both physical and logical object


@@ -184,26 +184,25 @@ func (s *Shard) TreeGetChildren(ctx context.Context, cid cidSDK.ID, treeID strin
}

// TreeSortedByFilename implements the pilorama.Forest interface.
func (s *Shard) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeID pilorama.MultiNode, last *string, count int) ([]pilorama.MultiNodeInfo, *string, error) {
	ctx, span := tracing.StartSpanFromContext(ctx, "Shard.TreeSortedByFilename",
		trace.WithAttributes(
			attribute.String("shard_id", s.ID().String()),
			attribute.String("container_id", cid.EncodeToString()),
			attribute.String("tree_id", treeID),
		),
	)
	defer span.End()

	if s.pilorama == nil {
		return nil, last, ErrPiloramaDisabled
	}

	s.m.RLock()
	defer s.m.RUnlock()

	if s.info.Mode.NoMetabase() {
		return nil, last, ErrDegradedMode
	}

	return s.pilorama.TreeSortedByFilename(ctx, cid, treeID, nodeID, last, count)
}


@@ -60,7 +60,7 @@ var defaultBucket = []byte{0}
func New(opts ...Option) Cache {
	c := &cache{
		flushCh:       make(chan objectInfo),
		mode:          mode.Disabled,
		compressFlags: make(map[string]struct{}),
		options: options{


@@ -6,12 +6,14 @@ import (
	"github.com/nspcc-dev/neo-go/pkg/core/transaction"
	"github.com/nspcc-dev/neo-go/pkg/neorpc/result"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
	"github.com/nspcc-dev/neo-go/pkg/util"
	"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
)

type actorProvider interface {
	GetActor() *actor.Actor
	GetRPCActor() actor.RPCActor
}

// Client switches an established connection with neo-go if it is broken.
@@ -132,3 +134,11 @@ func (a *SwitchRPCGuardedActor) TerminateSession(sessionID uuid.UUID) error {
func (a *SwitchRPCGuardedActor) TraverseIterator(sessionID uuid.UUID, iterator *result.Iterator, num int) ([]stackitem.Item, error) {
	return a.actorProvider.GetActor().TraverseIterator(sessionID, iterator, num)
}

func (a *SwitchRPCGuardedActor) GetRPCActor() actor.RPCActor {
	return a.actorProvider.GetRPCActor()
}

func (a *SwitchRPCGuardedActor) GetRPCInvoker() invoker.RPCInvoke {
	return a.actorProvider.GetRPCActor()
}


@@ -579,3 +579,10 @@ func (c *Client) GetActor() *actor.Actor {
	return c.rpcActor
}

func (c *Client) GetRPCActor() actor.RPCActor {
	c.switchLock.RLock()
	defer c.switchLock.RUnlock()

	return c.client
}


@@ -4,6 +4,7 @@ import (
	"context"
	"errors"
	"fmt"
	"sync/atomic"
	"time"

	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@@ -48,6 +49,8 @@ type cfg struct {
	switchInterval time.Duration

	morphCacheMetrics metrics.MorphCacheMetrics

	cmode *atomic.Bool
}

const (
@@ -311,3 +314,11 @@ func WithMorphCacheMetrics(morphCacheMetrics metrics.MorphCacheMetrics) Option {
		c.morphCacheMetrics = morphCacheMetrics
	}
}

// WithCompatibilityMode indicates that the Client is working in compatibility mode:
// in this mode we need to keep backward compatibility with services of the previous version.
func WithCompatibilityMode(cmode *atomic.Bool) Option {
	return func(c *cfg) {
		c.cmode = cmode
	}
}
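The option only stores a pointer to a shared atomic flag, so the compatibility behaviour can be toggled at runtime without recreating the client. A hedged sketch of wiring it up during an upgrade window; the constructor name and its argument list are placeholders rather than the exact client API:

// Sketch only: New, its arguments and the return types are assumptions for illustration.
func newClientForUpgrade(ctx context.Context, key *keys.PrivateKey) (*Client, *atomic.Bool, error) {
	compatMode := new(atomic.Bool)
	compatMode.Store(true) // services of the previous version are still running

	cli, err := New(ctx, key, WithCompatibilityMode(compatMode))
	if err != nil {
		return nil, nil, err
	}
	// Once every service is upgraded, compatMode.Store(false) restores the Global notary scope.
	return cli, compatMode, nil
}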


@@ -566,14 +566,19 @@ func (c *Client) notaryCosigners(invokedByAlpha bool, ir []*keys.PublicKey, comm
	}
	s := make([]actor.SignerAccount, 2, 3)
	// Proxy contract that will pay for the execution.
	// Do not change this:
	// We must be able to call NNS contract indirectly from the Container contract.
	// Thus, CalledByEntry is not sufficient.
	// In future we may restrict this to all the usecases we have.
	scopes := transaction.Global
	if c.cfg.cmode != nil && c.cfg.cmode.Load() {
		// Set it to None to keep ability to send notary requests during upgrade
		scopes = transaction.None
	}
	s[0] = actor.SignerAccount{
		Signer: transaction.Signer{
			Account: c.notary.proxy,
			Scopes:  scopes,
		},
		Account: notary.FakeContractAccount(c.notary.proxy),
	}


@ -15,6 +15,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs" "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
session "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/session" session "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/session"
"git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client" "git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client"
aperequest "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/ape/request"
containercore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container" containercore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing" "git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
@ -26,7 +27,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain" apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
policyengine "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine" policyengine "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/resource"
nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native" nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys" "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/util" "github.com/nspcc-dev/neo-go/pkg/util"
@ -148,18 +148,21 @@ func (ac *apeChecker) List(ctx context.Context, req *container.ListRequest) (*co
return nil, err return nil, err
} }
request := &apeRequest{ request := aperequest.NewRequest(
resource: &apeResource{ nativeschema.MethodListContainers,
name: resourceName(namespace, ""), aperequest.NewResource(
props: make(map[string]string), resourceName(namespace, ""),
}, make(map[string]string),
op: nativeschema.MethodListContainers, ),
props: reqProps, reqProps,
} )
s, found, err := ac.router.IsAllowed(apechain.Ingress, rt := policyengine.NewRequestTargetWithNamespace(namespace)
policyengine.NewRequestTargetWithNamespace(namespace), rt.User = &policyengine.Target{
request) Type: policyengine.User,
Name: fmt.Sprintf("%s:%s", namespace, pk.Address()),
}
s, found, err := ac.router.IsAllowed(apechain.Ingress, rt, request)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -193,18 +196,21 @@ func (ac *apeChecker) Put(ctx context.Context, req *container.PutRequest) (*cont
return nil, err return nil, err
} }
request := &apeRequest{ request := aperequest.NewRequest(
resource: &apeResource{ nativeschema.MethodPutContainer,
name: resourceName(namespace, ""), aperequest.NewResource(
props: make(map[string]string), resourceName(namespace, ""),
}, make(map[string]string),
op: nativeschema.MethodPutContainer, ),
props: reqProps, reqProps,
} )
s, found, err := ac.router.IsAllowed(apechain.Ingress, rt := policyengine.NewRequestTargetWithNamespace(namespace)
policyengine.NewRequestTargetWithNamespace(namespace), rt.User = &policyengine.Target{
request) Type: policyengine.User,
Name: fmt.Sprintf("%s:%s", namespace, pk.Address()),
}
s, found, err := ac.router.IsAllowed(apechain.Ingress, rt, request)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -277,7 +283,7 @@ func (ac *apeChecker) validateContainerBoundedOperation(containerID *refs.Contai
return err return err
} }
reqProps, err := ac.getRequestProps(mh, vh, cont, id) reqProps, pk, err := ac.getRequestProps(mh, vh, cont, id)
if err != nil { if err != nil {
return err return err
} }
@ -288,17 +294,17 @@ func (ac *apeChecker) validateContainerBoundedOperation(containerID *refs.Contai
namespace = cntNamespace namespace = cntNamespace
} }
request := &apeRequest{ request := aperequest.NewRequest(
resource: &apeResource{ op,
name: resourceName(namespace, id.EncodeToString()), aperequest.NewResource(
props: ac.getContainerProps(cont), resourceName(namespace, id.EncodeToString()),
}, ac.getContainerProps(cont),
op: op, ),
props: reqProps, reqProps,
} )
s, found, err := ac.router.IsAllowed(apechain.Ingress, s, found, err := ac.router.IsAllowed(apechain.Ingress,
policyengine.NewRequestTarget(namespace, id.EncodeToString()), policyengine.NewRequestTargetExtended(namespace, id.EncodeToString(), fmt.Sprintf("%s:%s", namespace, pk.Address()), nil),
request) request)
if err != nil { if err != nil {
return err return err
@ -329,40 +335,6 @@ func getContainerID(reqContID *refs.ContainerID) (cid.ID, error) {
return id, nil return id, nil
} }
type apeRequest struct {
resource *apeResource
op string
props map[string]string
}
// Operation implements resource.Request.
func (r *apeRequest) Operation() string {
return r.op
}
// Property implements resource.Request.
func (r *apeRequest) Property(key string) string {
return r.props[key]
}
// Resource implements resource.Request.
func (r *apeRequest) Resource() resource.Resource {
return r.resource
}
type apeResource struct {
name string
props map[string]string
}
func (r *apeResource) Name() string {
return r.name
}
func (r *apeResource) Property(key string) string {
return r.props[key]
}
func resourceName(namespace string, container string) string { func resourceName(namespace string, container string) string {
if namespace == "" && container == "" { if namespace == "" && container == "" {
return nativeschema.ResourceFormatRootContainers return nativeschema.ResourceFormatRootContainers
@ -384,19 +356,19 @@ func (ac *apeChecker) getContainerProps(c *containercore.Container) map[string]s
func (ac *apeChecker) getRequestProps(mh *session.RequestMetaHeader, vh *session.RequestVerificationHeader, func (ac *apeChecker) getRequestProps(mh *session.RequestMetaHeader, vh *session.RequestVerificationHeader,
cont *containercore.Container, cnrID cid.ID, cont *containercore.Container, cnrID cid.ID,
) (map[string]string, error) { ) (map[string]string, *keys.PublicKey, error) {
actor, pk, err := ac.getActorAndPublicKey(mh, vh, cnrID) actor, pk, err := ac.getActorAndPublicKey(mh, vh, cnrID)
if err != nil { if err != nil {
return nil, err return nil, nil, err
} }
role, err := ac.getRole(actor, pk, cont, cnrID) role, err := ac.getRole(actor, pk, cont, cnrID)
if err != nil { if err != nil {
return nil, err return nil, nil, err
} }
return map[string]string{ return map[string]string{
nativeschema.PropertyKeyActorPublicKey: hex.EncodeToString(pk.Bytes()), nativeschema.PropertyKeyActorPublicKey: hex.EncodeToString(pk.Bytes()),
nativeschema.PropertyKeyActorRole: role, nativeschema.PropertyKeyActorRole: role,
}, nil }, pk, nil
} }
func (ac *apeChecker) getRole(actor *user.ID, pk *keys.PublicKey, cont *containercore.Container, cnrID cid.ID) (string, error) { func (ac *apeChecker) getRole(actor *user.ID, pk *keys.PublicKey, cont *containercore.Container, cnrID cid.ID) (string, error) {


@ -6,106 +6,108 @@ import "pkg/services/control/ir/types.proto";
option go_package = "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/ir/control"; option go_package = "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/ir/control";
// `ControlService` provides an interface for internal work with the Inner Ring node. // `ControlService` provides an interface for internal work with the Inner Ring
// node.
service ControlService { service ControlService {
// Performs health check of the IR node. // Performs health check of the IR node.
rpc HealthCheck (HealthCheckRequest) returns (HealthCheckResponse); rpc HealthCheck(HealthCheckRequest) returns (HealthCheckResponse);
// Forces a new epoch to be signaled by the IR node with high probability. // Forces a new epoch to be signaled by the IR node with high probability.
rpc TickEpoch (TickEpochRequest) returns (TickEpochResponse); rpc TickEpoch(TickEpochRequest) returns (TickEpochResponse);
// Forces a node removal to be signaled by the IR node with high probability. // Forces a node removal to be signaled by the IR node with high probability.
rpc RemoveNode (RemoveNodeRequest) returns (RemoveNodeResponse); rpc RemoveNode(RemoveNodeRequest) returns (RemoveNodeResponse);
// Forces a container removal to be signaled by the IR node with high probability. // Forces a container removal to be signaled by the IR node with high
rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse); // probability.
rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse);
} }
// Health check request. // Health check request.
message HealthCheckRequest { message HealthCheckRequest {
// Health check request body. // Health check request body.
message Body {} message Body {}
// Body of health check request message. // Body of health check request message.
Body body = 1; Body body = 1;
// Body signature. // Body signature.
// Should be signed by node key or one of // Should be signed by node key or one of
// the keys configured by the node. // the keys configured by the node.
Signature signature = 2; Signature signature = 2;
} }
// Health check response. // Health check response.
message HealthCheckResponse { message HealthCheckResponse {
// Health check response body // Health check response body
message Body { message Body {
// Health status of IR node application. // Health status of IR node application.
HealthStatus health_status = 1; HealthStatus health_status = 1;
} }
// Body of health check response message. // Body of health check response message.
Body body = 1; Body body = 1;
// Body signature. // Body signature.
Signature signature = 2; Signature signature = 2;
} }
message TickEpochRequest { message TickEpochRequest {
message Body{ message Body {
// Valid until block value override. // Valid until block value override.
uint32 vub = 1; uint32 vub = 1;
} }
Body body = 1; Body body = 1;
Signature signature = 2; Signature signature = 2;
} }
message TickEpochResponse { message TickEpochResponse {
message Body{ message Body {
// Valid until block value for transaction. // Valid until block value for transaction.
uint32 vub = 1; uint32 vub = 1;
} }
Body body = 1; Body body = 1;
Signature signature = 2; Signature signature = 2;
} }
message RemoveNodeRequest { message RemoveNodeRequest {
message Body{ message Body {
bytes key = 1; bytes key = 1;
// Valid until block value override. // Valid until block value override.
uint32 vub = 2; uint32 vub = 2;
} }
Body body = 1; Body body = 1;
Signature signature = 2; Signature signature = 2;
} }
message RemoveNodeResponse { message RemoveNodeResponse {
message Body{ message Body {
// Valid until block value for transaction. // Valid until block value for transaction.
uint32 vub = 1; uint32 vub = 1;
} }
Body body = 1; Body body = 1;
Signature signature = 2; Signature signature = 2;
} }
message RemoveContainerRequest { message RemoveContainerRequest {
message Body{ message Body {
bytes container_id = 1; bytes container_id = 1;
bytes owner = 2; bytes owner = 2;
// Valid until block value override. // Valid until block value override.
uint32 vub = 3; uint32 vub = 3;
} }
Body body = 1; Body body = 1;
Signature signature = 2; Signature signature = 2;
} }
message RemoveContainerResponse { message RemoveContainerResponse {
message Body{ message Body {
// Valid until block value for transaction. // Valid until block value for transaction.
uint32 vub = 1; uint32 vub = 1;
} }
Body body = 1; Body body = 1;
Signature signature = 2; Signature signature = 2;
} }


@@ -6,24 +6,24 @@ option go_package = "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/ir/
// Signature of some message.
message Signature {
  // Public key used for signing.
  bytes key = 1 [ json_name = "key" ];
  // Binary signature.
  bytes sign = 2 [ json_name = "signature" ];
}

// Health status of the IR application.
enum HealthStatus {
  // Undefined status, default value.
  HEALTH_STATUS_UNDEFINED = 0;
  // IR application is starting.
  STARTING = 1;
  // IR application is started and serves all services.
  READY = 2;
  // IR application is shutting down.
  SHUTTING_DOWN = 3;
}


@@ -19,6 +19,10 @@ func apeTarget(chainTarget *control.ChainTarget) (engine.Target, error) {
		return engine.ContainerTarget(chainTarget.GetName()), nil
	case control.ChainTarget_NAMESPACE:
		return engine.NamespaceTarget(chainTarget.GetName()), nil
	case control.ChainTarget_USER:
		return engine.UserTarget(chainTarget.GetName()), nil
	case control.ChainTarget_GROUP:
		return engine.GroupTarget(chainTarget.GetName()), nil
	default:
	}
	return engine.Target{}, status.Error(codes.InvalidArgument,
@@ -42,6 +46,16 @@ func controlTarget(chainTarget *engine.Target) (control.ChainTarget, error) {
			Name: nm,
			Type: control.ChainTarget_NAMESPACE,
		}, nil
	case engine.User:
		return control.ChainTarget{
			Name: chainTarget.Name,
			Type: control.ChainTarget_USER,
		}, nil
	case engine.Group:
		return control.ChainTarget{
			Name: chainTarget.Name,
			Type: control.ChainTarget_GROUP,
		}, nil
	default:
	}
	return control.ChainTarget{}, status.Error(codes.InvalidArgument,
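With the two new cases, local overrides can now be targeted at users and groups as well as namespaces and containers. A short sketch of feeding a USER chain target through apeTarget; the name value is illustrative only:

// Sketch only: a USER chain target maps onto a policy-engine user target.
func exampleUserTarget() (engine.Target, error) {
	ct := &control.ChainTarget{
		Type: control.ChainTarget_USER,
		Name: "ns:NiRqSe4jVKCzJK6GUVQbLuvoAM8DdXu5hr", // illustrative "<namespace>:<address>" name
	}
	return apeTarget(ct) // expected to yield engine.UserTarget(ct.Name)
}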


@ -6,183 +6,186 @@ option go_package = "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/con
// Signature of some message. // Signature of some message.
message Signature { message Signature {
// Public key used for signing. // Public key used for signing.
bytes key = 1 [json_name = "key"]; bytes key = 1 [ json_name = "key" ];
// Binary signature. // Binary signature.
bytes sign = 2 [json_name = "signature"]; bytes sign = 2 [ json_name = "signature" ];
} }
// Status of the storage node in the FrostFS network map. // Status of the storage node in the FrostFS network map.
enum NetmapStatus { enum NetmapStatus {
// Undefined status, default value. // Undefined status, default value.
STATUS_UNDEFINED = 0; STATUS_UNDEFINED = 0;
// Node is online. // Node is online.
ONLINE = 1; ONLINE = 1;
// Node is offline. // Node is offline.
OFFLINE = 2; OFFLINE = 2;
// Node is maintained by the owner. // Node is maintained by the owner.
MAINTENANCE = 3; MAINTENANCE = 3;
} }
// FrostFS node description. // FrostFS node description.
message NodeInfo { message NodeInfo {
// Public key of the FrostFS node in a binary format. // Public key of the FrostFS node in a binary format.
bytes public_key = 1 [json_name = "publicKey"]; bytes public_key = 1 [ json_name = "publicKey" ];
// Ways to connect to a node. // Ways to connect to a node.
repeated string addresses = 2 [json_name = "addresses"]; repeated string addresses = 2 [ json_name = "addresses" ];
// Administrator-defined Attributes of the FrostFS Storage Node. // Administrator-defined Attributes of the FrostFS Storage Node.
// //
// `Attribute` is a Key-Value metadata pair. Key name must be a valid UTF-8 // `Attribute` is a Key-Value metadata pair. Key name must be a valid UTF-8
// string. Value can't be empty. // string. Value can't be empty.
// //
// Node's attributes are mostly used during Storage Policy evaluation to // Node's attributes are mostly used during Storage Policy evaluation to
// calculate object's placement and find a set of nodes satisfying policy // calculate object's placement and find a set of nodes satisfying policy
// requirements. There are some "well-known" node attributes common to all the // requirements. There are some "well-known" node attributes common to all the
// Storage Nodes in the network and used implicitly with default values if not // Storage Nodes in the network and used implicitly with default values if not
// explicitly set: // explicitly set:
// //
// * Capacity \ // * Capacity \
// Total available disk space in Gigabytes. // Total available disk space in Gigabytes.
// * Price \ // * Price \
// Price in GAS tokens for storing one GB of data during one Epoch. In node // Price in GAS tokens for storing one GB of data during one Epoch. In node
// attributes it's a string presenting floating point number with comma or // attributes it's a string presenting floating point number with comma or
// point delimiter for decimal part. In the Network Map it will be saved as // point delimiter for decimal part. In the Network Map it will be saved as
// 64-bit unsigned integer representing number of minimal token fractions. // 64-bit unsigned integer representing number of minimal token fractions.
// * Locode \ // * Locode \
// Node's geographic location in // Node's geographic location in
// [UN/LOCODE](https://www.unece.org/cefact/codesfortrade/codes_index.html) // [UN/LOCODE](https://www.unece.org/cefact/codesfortrade/codes_index.html)
// format approximated to the nearest point defined in standard. // format approximated to the nearest point defined in standard.
// * Country \ // * Country \
// Country code in // Country code in
// [ISO 3166-1_alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) // [ISO 3166-1_alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)
// format. Calculated automatically from `Locode` attribute // format. Calculated automatically from `Locode` attribute
// * Region \ // * Region \
// Country's administative subdivision where node is located. Calculated // Country's administative subdivision where node is located. Calculated
// automatically from `Locode` attribute based on `SubDiv` field. Presented // automatically from `Locode` attribute based on `SubDiv` field. Presented
// in [ISO 3166-2](https://en.wikipedia.org/wiki/ISO_3166-2) format. // in [ISO 3166-2](https://en.wikipedia.org/wiki/ISO_3166-2) format.
// * City \ // * City \
// City, town, village or rural area name where node is located written // City, town, village or rural area name where node is located written
// without diacritics . Calculated automatically from `Locode` attribute. // without diacritics . Calculated automatically from `Locode` attribute.
// //
// For detailed description of each well-known attribute please see the // For detailed description of each well-known attribute please see the
// corresponding section in FrostFS Technical specification. // corresponding section in FrostFS Technical specification.
message Attribute { message Attribute {
// Key of the node attribute. // Key of the node attribute.
string key = 1 [json_name = "key"]; string key = 1 [ json_name = "key" ];
// Value of the node attribute. // Value of the node attribute.
string value = 2 [json_name = "value"]; string value = 2 [ json_name = "value" ];
// Parent keys, if any. For example for `City` it could be `Region` and // Parent keys, if any. For example for `City` it could be `Region` and
// `Country`. // `Country`.
repeated string parents = 3 [json_name = "parents"]; repeated string parents = 3 [ json_name = "parents" ];
} }
// Carries list of the FrostFS node attributes in a key-value form. Key name // Carries list of the FrostFS node attributes in a key-value form. Key name
// must be a node-unique valid UTF-8 string. Value can't be empty. NodeInfo // must be a node-unique valid UTF-8 string. Value can't be empty. NodeInfo
// structures with duplicated attribute names or attributes with empty values // structures with duplicated attribute names or attributes with empty values
// will be considered invalid. // will be considered invalid.
repeated Attribute attributes = 3 [json_name = "attributes"]; repeated Attribute attributes = 3 [ json_name = "attributes" ];
// Carries state of the FrostFS node. // Carries state of the FrostFS node.
NetmapStatus state = 4 [json_name = "state"]; NetmapStatus state = 4 [ json_name = "state" ];
} }
// Network map structure. // Network map structure.
message Netmap { message Netmap {
// Network map revision number. // Network map revision number.
uint64 epoch = 1 [json_name = "epoch"]; uint64 epoch = 1 [ json_name = "epoch" ];
// Nodes presented in network. // Nodes presented in network.
repeated NodeInfo nodes = 2 [json_name = "nodes"]; repeated NodeInfo nodes = 2 [ json_name = "nodes" ];
} }
// Health status of the storage node application. // Health status of the storage node application.
enum HealthStatus { enum HealthStatus {
// Undefined status, default value. // Undefined status, default value.
HEALTH_STATUS_UNDEFINED = 0; HEALTH_STATUS_UNDEFINED = 0;
// Storage node application is starting. // Storage node application is starting.
STARTING = 1; STARTING = 1;
// Storage node application is started and serves all services. // Storage node application is started and serves all services.
READY = 2; READY = 2;
// Storage node application is shutting down. // Storage node application is shutting down.
SHUTTING_DOWN = 3; SHUTTING_DOWN = 3;
// Storage node application is reconfiguring. // Storage node application is reconfiguring.
RECONFIGURING = 4; RECONFIGURING = 4;
} }
// Shard description. // Shard description.
message ShardInfo { message ShardInfo {
// ID of the shard. // ID of the shard.
bytes shard_ID = 1 [json_name = "shardID"]; bytes shard_ID = 1 [ json_name = "shardID" ];
// Path to shard's metabase. // Path to shard's metabase.
string metabase_path = 2 [json_name = "metabasePath"]; string metabase_path = 2 [ json_name = "metabasePath" ];
// Shard's blobstor info. // Shard's blobstor info.
repeated BlobstorInfo blobstor = 3 [json_name = "blobstor"]; repeated BlobstorInfo blobstor = 3 [ json_name = "blobstor" ];
// Path to shard's write-cache, empty if disabled. // Path to shard's write-cache, empty if disabled.
string writecache_path = 4 [json_name = "writecachePath"]; string writecache_path = 4 [ json_name = "writecachePath" ];
// Work mode of the shard. // Work mode of the shard.
ShardMode mode = 5; ShardMode mode = 5;
// Amount of errors occured. // Amount of errors occured.
uint32 errorCount = 6; uint32 errorCount = 6;
// Path to shard's pilorama storage. // Path to shard's pilorama storage.
string pilorama_path = 7 [json_name = "piloramaPath"]; string pilorama_path = 7 [ json_name = "piloramaPath" ];
} }
// Blobstor component description. // Blobstor component description.
message BlobstorInfo { message BlobstorInfo {
// Path to the root. // Path to the root.
string path = 1 [json_name = "path"]; string path = 1 [ json_name = "path" ];
// Component type. // Component type.
string type = 2 [json_name = "type"]; string type = 2 [ json_name = "type" ];
} }
// Work mode of the shard. // Work mode of the shard.
enum ShardMode { enum ShardMode {
// Undefined mode, default value. // Undefined mode, default value.
SHARD_MODE_UNDEFINED = 0; SHARD_MODE_UNDEFINED = 0;
// Read-write. // Read-write.
READ_WRITE = 1; READ_WRITE = 1;
// Read-only. // Read-only.
READ_ONLY = 2; READ_ONLY = 2;
// Degraded. // Degraded.
DEGRADED = 3; DEGRADED = 3;
// DegradedReadOnly. // DegradedReadOnly.
DEGRADED_READ_ONLY = 4; DEGRADED_READ_ONLY = 4;
} }
// ChainTarget is an object to which local overrides // ChainTarget is an object to which local overrides
// are applied. // are applied.
message ChainTarget { message ChainTarget {
enum TargetType { enum TargetType {
UNDEFINED = 0; UNDEFINED = 0;
NAMESPACE = 1; NAMESPACE = 1;
CONTAINER = 2; CONTAINER = 2;
}
TargetType type = 1; USER = 3;
string Name = 2; GROUP = 4;
}
TargetType type = 1;
string Name = 2;
} }


@@ -12,6 +12,7 @@ import (
     apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
     policyengine "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
     nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
+    "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
 )

 type checkerImpl struct {
@@ -54,6 +55,9 @@ type Prm struct {
     // If SoftAPECheck is set to true, then NoRuleFound is interpreted as allow.
     SoftAPECheck bool
+
+    // If true, object headers will not be retrieved from the storage engine.
+    WithoutHeaderRequest bool
 }

 var errMissingOID = errors.New("object ID is not set")
@@ -81,8 +85,13 @@ func (c *checkerImpl) CheckAPE(ctx context.Context, prm Prm) error {
         return fmt.Errorf("failed to create ape request: %w", err)
     }

-    status, ruleFound, err := c.chainRouter.IsAllowed(apechain.Ingress,
-        policyengine.NewRequestTarget(prm.Namespace, prm.Container.EncodeToString()), r)
+    pub, err := keys.NewPublicKeyFromString(prm.SenderKey)
+    if err != nil {
+        return err
+    }
+
+    rt := policyengine.NewRequestTargetExtended(prm.Namespace, prm.Container.EncodeToString(), fmt.Sprintf("%s:%s", prm.Namespace, pub.Address()), nil)
+    status, ruleFound, err := c.chainRouter.IsAllowed(apechain.Ingress, rt, r)
     if err != nil {
         return err
     }
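With this change the rule lookup is no longer scoped to the container alone: the checker parses Prm.SenderKey (which therefore must be a valid hex-encoded public key) and adds a namespace-qualified user target of the form "<namespace>:<address>". A small self-contained sketch of that derivation, built only from calls that appear in the diff above (keys.NewPublicKeyFromString, PublicKey.Address, PublicKey.Bytes):

    package main

    import (
        "encoding/hex"
        "fmt"

        "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
    )

    // userTargetName reproduces the third argument the checker now passes to
    // policyengine.NewRequestTargetExtended: "<namespace>:<sender address>".
    func userTargetName(namespace, senderKeyHex string) (string, error) {
        pub, err := keys.NewPublicKeyFromString(senderKeyHex)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("%s:%s", namespace, pub.Address()), nil
    }

    func main() {
        // Throwaway key pair, mirroring how the updated test derives senderKey.
        pk, _ := keys.NewPrivateKey()
        senderKey := hex.EncodeToString(pk.PublicKey().Bytes())

        name, _ := userTargetName("namespace1", senderKey)
        fmt.Println(name)
    }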


@@ -16,6 +16,7 @@ import (
     policyengine "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
     "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine/inmemory"
     nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
+    "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
     "github.com/stretchr/testify/require"
 )

@@ -147,7 +148,9 @@ var (
     role = "Container"

-    senderKey = hex.EncodeToString([]byte{1, 0, 0, 1})
+    senderPrivateKey, _ = keys.NewPrivateKey()
+
+    senderKey = hex.EncodeToString(senderPrivateKey.PublicKey().Bytes())
 )

 func TestAPECheck(t *testing.T) {


@@ -6,49 +6,16 @@ import (
     "strconv"

     objectV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
+    aperequest "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/ape/request"
     "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
     cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
     objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
     oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
     "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
-    aperesource "git.frostfs.info/TrueCloudLab/policy-engine/pkg/resource"
     nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
 )

-type request struct {
-    operation  string
-    resource   *resource
-    properties map[string]string
-}
-
-var _ aperesource.Request = (*request)(nil)
-
-type resource struct {
-    name       string
-    properties map[string]string
-}
-
-var _ aperesource.Resource = (*resource)(nil)
-
-func (r *resource) Name() string {
-    return r.name
-}
-
-func (r *resource) Property(key string) string {
-    return r.properties[key]
-}
-
-func (r *request) Operation() string {
-    return r.operation
-}
-
-func (r *request) Property(key string) string {
-    return r.properties[key]
-}
-
-func (r *request) Resource() aperesource.Resource {
-    return r.resource
-}
+var defaultRequest = aperequest.Request{}

 func nativeSchemaRole(role acl.Role) string {
     switch role {
@@ -123,7 +90,7 @@ func objectProperties(cnr cid.ID, oid *oid.ID, cnrOwner user.ID, header *objectV
 // newAPERequest creates an APE request to be passed to a chain router. It collects resource properties from
 // header provided by headerProvider. If it cannot be found in headerProvider, then properties are
 // initialized from header given in prm (if it is set). Otherwise, just CID and OID are set to properties.
-func (c *checkerImpl) newAPERequest(ctx context.Context, prm Prm) (*request, error) {
+func (c *checkerImpl) newAPERequest(ctx context.Context, prm Prm) (aperequest.Request, error) {
     switch prm.Method {
     case nativeschema.MethodGetObject,
         nativeschema.MethodHeadObject,
@@ -131,32 +98,32 @@ func (c *checkerImpl) newAPERequest(ctx context.Context, prm Prm) (*request, err
         nativeschema.MethodHashObject,
         nativeschema.MethodDeleteObject:
         if prm.Object == nil {
-            return nil, fmt.Errorf("method %s: %w", prm.Method, errMissingOID)
+            return defaultRequest, fmt.Errorf("method %s: %w", prm.Method, errMissingOID)
         }
     case nativeschema.MethodSearchObject, nativeschema.MethodPutObject:
     default:
-        return nil, fmt.Errorf("unknown method: %s", prm.Method)
+        return defaultRequest, fmt.Errorf("unknown method: %s", prm.Method)
     }

     var header *objectV2.Header
     if prm.Header != nil {
         header = prm.Header
-    } else if prm.Object != nil {
+    } else if prm.Object != nil && !prm.WithoutHeaderRequest {
         headerObjSDK, err := c.headerProvider.GetHeader(ctx, prm.Container, *prm.Object)
         if err == nil {
             header = headerObjSDK.ToV2().GetHeader()
         }
     }

-    return &request{
-        operation: prm.Method,
-        resource: &resource{
-            name:       resourceName(prm.Container, prm.Object, prm.Namespace),
-            properties: objectProperties(prm.Container, prm.Object, prm.ContainerOwner, header),
-        },
-        properties: map[string]string{
-            nativeschema.PropertyKeyActorPublicKey: prm.SenderKey,
-            nativeschema.PropertyKeyActorRole:      prm.Role,
-        },
-    }, nil
+    return aperequest.NewRequest(
+        prm.Method,
+        aperequest.NewResource(
+            resourceName(prm.Container, prm.Object, prm.Namespace),
+            objectProperties(prm.Container, prm.Object, prm.ContainerOwner, header),
+        ),
+        map[string]string{
+            nativeschema.PropertyKeyActorPublicKey: prm.SenderKey,
+            nativeschema.PropertyKeyActorRole:      prm.Role,
+        },
+    ), nil
 }
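Since the checker now returns the shared aperequest.Request type instead of a package-local struct, the request shape is worth spelling out once: an operation name, a resource (name plus properties), and actor properties. A hedged, self-contained sketch, assuming aperequest.Request keeps the accessors (Operation, Resource, Property) that the removed local type implemented; the literal resource name and property values are placeholders, not the exact strings resourceName() and objectProperties() produce:

    package main

    import (
        "fmt"

        aperequest "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/ape/request"
        nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
    )

    func main() {
        req := aperequest.NewRequest(
            nativeschema.MethodHeadObject, // operation
            aperequest.NewResource(
                "<resource name for (container, object)>", // placeholder
                map[string]string{"<object attribute>": "<value>"},
            ),
            map[string]string{
                nativeschema.PropertyKeyActorPublicKey: "<hex-encoded sender key>",
                nativeschema.PropertyKeyActorRole:      "<role>",
            },
        )
        fmt.Println(req.Operation(), req.Resource().Name())
    }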


@@ -6,6 +6,7 @@ import (
     "testing"

     objectV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
+    aperequest "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/ape/request"
     checksumtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/checksum/test"
     objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
     "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
@@ -256,24 +257,23 @@ func TestNewAPERequest(t *testing.T) {
                 return
             }

-            expectedRequest := request{
-                operation: method,
-                resource: &resource{
-                    name: resourceName(cnr, obj, prm.Namespace),
-                    properties: objectProperties(cnr, obj, testCnrOwner, func() *objectV2.Header {
-                        if headerObjSDK != nil {
-                            return headerObjSDK.ToV2().GetHeader()
-                        }
-                        return prm.Header
-                    }()),
-                },
-                properties: map[string]string{
-                    nativeschema.PropertyKeyActorPublicKey: prm.SenderKey,
-                    nativeschema.PropertyKeyActorRole:      prm.Role,
-                },
-            }
+            expectedRequest := aperequest.NewRequest(
+                method,
+                aperequest.NewResource(
+                    resourceName(cnr, obj, prm.Namespace),
+                    objectProperties(cnr, obj, testCnrOwner, func() *objectV2.Header {
+                        if headerObjSDK != nil {
+                            return headerObjSDK.ToV2().GetHeader()
+                        }
+                        return prm.Header
+                    }())),
+                map[string]string{
+                    nativeschema.PropertyKeyActorPublicKey: prm.SenderKey,
+                    nativeschema.PropertyKeyActorRole:      prm.Role,
+                },
+            )

-            require.Equal(t, expectedRequest, *r)
+            require.Equal(t, expectedRequest, r)
         })
     }
 })


@@ -125,14 +125,15 @@ func (c *Service) Get(request *objectV2.GetRequest, stream objectSvc.GetObjectSt
     }

     err = c.apeChecker.CheckAPE(stream.Context(), Prm{
         Namespace:            reqCtx.Namespace,
         Container:            cnrID,
         Object:               objID,
         Method:               nativeschema.MethodGetObject,
         Role:                 nativeSchemaRole(reqCtx.Role),
         SenderKey:            hex.EncodeToString(reqCtx.SenderKey),
         ContainerOwner:       reqCtx.ContainerOwner,
         SoftAPECheck:         reqCtx.SoftAPECheck,
+        WithoutHeaderRequest: true,
     })
     if err != nil {
         return toStatusErr(err)
@@ -211,14 +212,15 @@ func (c *Service) Head(ctx context.Context, request *objectV2.HeadRequest) (*obj
     }

     err = c.apeChecker.CheckAPE(ctx, Prm{
         Namespace:            reqCtx.Namespace,
         Container:            cnrID,
         Object:               objID,
         Method:               nativeschema.MethodHeadObject,
         Role:                 nativeSchemaRole(reqCtx.Role),
         SenderKey:            hex.EncodeToString(reqCtx.SenderKey),
         ContainerOwner:       reqCtx.ContainerOwner,
         SoftAPECheck:         reqCtx.SoftAPECheck,
+        WithoutHeaderRequest: true,
     })
     if err != nil {
         return nil, toStatusErr(err)


@@ -114,7 +114,7 @@ func (a *assembler) initializeFromSourceObjectID(ctx context.Context, id oid.ID)
     }

     to := uint64(0)
-    if seekOff+seekLen > a.currentOffset+from {
+    if seekOff+seekLen >= a.currentOffset+from {
         to = seekOff + seekLen - a.currentOffset
     }
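The only change here is > becoming >=, which matters when the requested range ends exactly at the position the assembler has reached inside the current child object. The snippet below isolates that computation with made-up numbers; the meaning of the variables is my reading of the partially shown function, so treat it as an illustration rather than a statement about the assembler's internals:

    package main

    import "fmt"

    // rangeEnd mirrors the fixed computation: given the absolute end of the
    // requested range (seekOff+seekLen) and the absolute position of "from"
    // inside the current child (currentOffset+from), it returns the in-child
    // end offset "to".
    func rangeEnd(seekOff, seekLen, currentOffset, from uint64) uint64 {
        to := uint64(0)
        if seekOff+seekLen >= currentOffset+from { // was ">" before the fix
            to = seekOff + seekLen - currentOffset
        }
        return to
    }

    func main() {
        // Boundary case: the range ends exactly at currentOffset+from (15 == 15).
        // With ">" the result stayed 0, an inconsistent pair (from=5, to=0);
        // with ">=" it is 5, i.e. a well-formed empty interval [5, 5).
        fmt.Println(rangeEnd(0, 15, 10, 5))
    }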


@@ -1,6 +1,7 @@
 package getsvc

 import (
+    "bytes"
     "context"
     "crypto/ecdsa"
     "crypto/rand"
@@ -25,6 +26,9 @@ import (
     objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
     oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
     oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test"
+    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/transformer"
+    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/version"
+    "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
     "github.com/stretchr/testify/require"
 )

@@ -62,6 +66,10 @@ func (e testEpochReceiver) Epoch() (uint64, error) {
     return uint64(e), nil
 }

+func (e testEpochReceiver) CurrentEpoch() uint64 {
+    return uint64(e)
+}
+
 func newTestStorage() *testStorage {
     return &testStorage{
         inhumed: make(map[string]struct{}),
@@ -555,21 +563,6 @@ func TestGetRemoteSmall(t *testing.T) {
         return p
     }

-    newRngPrm := func(raw bool, w ChunkWriter, off, ln uint64) RangePrm {
-        p := RangePrm{}
-        p.SetChunkWriter(w)
-        p.WithRawFlag(raw)
-        p.common = new(util.CommonPrm).WithLocalOnly(false)
-
-        r := objectSDK.NewRange()
-        r.SetOffset(off)
-        r.SetLength(ln)
-
-        p.SetRange(r)
-        return p
-    }
-
     newHeadPrm := func(raw bool, w ObjectWriter) HeadPrm {
         p := HeadPrm{}
         p.SetHeaderWriter(w)
@@ -1628,6 +1621,203 @@ func TestGetRemoteSmall(t *testing.T) {
     })
 }
type testTarget struct {
objects []*objectSDK.Object
}
func (tt *testTarget) WriteObject(_ context.Context, obj *objectSDK.Object) error {
tt.objects = append(tt.objects, obj)
return nil
}
func objectChain(t *testing.T, cnr cid.ID, singleSize, totalSize uint64) (oid.ID, []*objectSDK.Object, *objectSDK.Object, []byte) {
pk, err := keys.NewPrivateKey()
require.NoError(t, err)
tt := new(testTarget)
p := transformer.NewPayloadSizeLimiter(transformer.Params{
Key: &pk.PrivateKey,
NextTargetInit: func() transformer.ObjectWriter { return tt },
NetworkState: testEpochReceiver(1),
MaxSize: singleSize,
})
payload := make([]byte, totalSize)
_, err = rand.Read(payload)
require.NoError(t, err)
ver := version.Current()
hdr := objectSDK.New()
hdr.SetContainerID(cnr)
hdr.SetType(objectSDK.TypeRegular)
hdr.SetVersion(&ver)
ctx := context.Background()
require.NoError(t, p.WriteHeader(ctx, hdr))
_, err = p.Write(ctx, payload)
require.NoError(t, err)
res, err := p.Close(ctx)
require.NoError(t, err)
if totalSize <= singleSize {
// Small object, no linking.
require.Len(t, tt.objects, 1)
return res.SelfID, tt.objects, nil, payload
}
return *res.ParentID, tt.objects[:len(tt.objects)-1], tt.objects[len(tt.objects)-1], bytes.Clone(payload)
}
func newRngPrm(raw bool, w ChunkWriter, off, ln uint64) RangePrm {
p := RangePrm{}
p.SetChunkWriter(w)
p.WithRawFlag(raw)
p.common = new(util.CommonPrm)
r := objectSDK.NewRange()
r.SetOffset(off)
r.SetLength(ln)
p.SetRange(r)
return p
}
func TestGetRange(t *testing.T) {
var cnr container.Container
cnr.SetPlacementPolicy(netmaptest.PlacementPolicy())
var idCnr cid.ID
container.CalculateID(&idCnr, cnr)
ns, as := testNodeMatrix(t, []int{2})
testGetRange := func(t *testing.T, svc *Service, addr oid.Address, from, to uint64, payload []byte) {
w := NewSimpleObjectWriter()
rngPrm := newRngPrm(false, w, from, to-from)
rngPrm.WithAddress(addr)
err := svc.GetRange(context.Background(), rngPrm)
require.NoError(t, err)
if from == to {
require.Nil(t, w.Object().Payload())
} else {
require.Equal(t, payload[from:to], w.Object().Payload())
}
}
newSvc := func(b *testPlacementBuilder, c *testClientCache) *Service {
const curEpoch = 13
return &Service{
log: test.NewLogger(t),
localStorage: newTestStorage(),
traverserGenerator: &testTraverserGenerator{
c: cnr,
b: map[uint64]placement.Builder{
curEpoch: b,
},
},
epochSource: testEpochReceiver(curEpoch),
remoteStorageConstructor: c,
keyStore: &testKeyStorage{},
}
}
t.Run("small", func(t *testing.T) {
const totalSize = 5
_, objs, _, payload := objectChain(t, idCnr, totalSize, totalSize)
require.Len(t, objs, 1)
require.Len(t, payload, totalSize)
obj := objs[0]
addr := object.AddressOf(obj)
builder := &testPlacementBuilder{vectors: map[string][][]netmap.NodeInfo{addr.EncodeToString(): ns}}
c1 := newTestClient()
c1.addResult(addr, obj, nil)
svc := newSvc(builder, &testClientCache{
clients: map[string]*testClient{
as[0][0]: c1,
as[0][1]: c1,
},
})
for from := 0; from < totalSize-1; from++ {
for to := from; to < totalSize; to++ {
t.Run(fmt.Sprintf("from=%d,to=%d", from, to), func(t *testing.T) {
testGetRange(t, svc, addr, uint64(from), uint64(to), payload)
})
}
}
})
t.Run("big", func(t *testing.T) {
const totalSize = 9
id, objs, link, payload := objectChain(t, idCnr, 3, totalSize) // 3 parts
require.Equal(t, totalSize, len(payload))
builder := &testPlacementBuilder{vectors: map[string][][]netmap.NodeInfo{}}
builder.vectors[idCnr.EncodeToString()+"/"+id.EncodeToString()] = ns
builder.vectors[object.AddressOf(link).EncodeToString()] = ns
for i := range objs {
builder.vectors[object.AddressOf(objs[i]).EncodeToString()] = ns
}
var addr oid.Address
addr.SetContainer(idCnr)
addr.SetObject(id)
const (
linkingLast = "splitinfo=last"
linkingChildren = "splitinfo=children"
linkingBoth = "splitinfo=both"
)
lastID, _ := objs[len(objs)-1].ID()
linkID, _ := link.ID()
for _, kind := range []string{linkingLast, linkingChildren, linkingBoth} {
t.Run(kind, func(t *testing.T) {
c1 := newTestClient()
for i := range objs {
c1.addResult(object.AddressOf(objs[i]), objs[i], nil)
}
c1.addResult(object.AddressOf(link), link, nil)
si := objectSDK.NewSplitInfo()
switch kind {
case linkingLast:
si.SetLastPart(lastID)
case linkingChildren:
si.SetLink(linkID)
case linkingBoth:
si.SetLastPart(lastID)
si.SetLink(linkID)
}
c1.addResult(addr, nil, objectSDK.NewSplitInfoError(si))
svc := newSvc(builder, &testClientCache{
clients: map[string]*testClient{
as[0][0]: c1,
as[0][1]: c1,
},
})
for from := 0; from < totalSize-1; from++ {
for to := from; to < totalSize; to++ {
t.Run(fmt.Sprintf("from=%d,to=%d", from, to), func(t *testing.T) {
testGetRange(t, svc, addr, uint64(from), uint64(to), payload)
})
}
}
})
}
})
}
 func TestGetFromPastEpoch(t *testing.T) {
     ctx := context.Background()


@@ -348,7 +348,7 @@ func PayloadRange(ctx context.Context, prm PayloadRangePrm) (*PayloadRangeRes, e
         ln = maxInitialBufferSize
     }

-    w := bytes.NewBuffer(make([]byte, ln))
+    w := bytes.NewBuffer(make([]byte, 0, ln))
     _, err = io.CopyN(w, rdr, int64(prm.ln))
     if err != nil {
         return nil, fmt.Errorf("read payload: %w", err)
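This one-line fix matters because bytes.NewBuffer treats the supplied slice as existing contents: with make([]byte, ln) the buffer starts with ln zero bytes and the copied payload is appended after them, whereas make([]byte, 0, ln) only pre-allocates capacity. A self-contained illustration:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "strings"
    )

    func main() {
        const ln = 4

        // Old code: the payload ends up after ln leading zero bytes.
        oldBuf := bytes.NewBuffer(make([]byte, ln))
        io.CopyN(oldBuf, strings.NewReader("data"), 4)
        fmt.Printf("%q\n", oldBuf.Bytes()) // "\x00\x00\x00\x00data"

        // Fixed code: only the payload is in the buffer.
        newBuf := bytes.NewBuffer(make([]byte, 0, ln))
        io.CopyN(newBuf, strings.NewReader("data"), 4)
        fmt.Printf("%q\n", newBuf.Bytes()) // "data"
    }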

pkg/services/tree/ape.go (new file, 70 lines)

@@ -0,0 +1,70 @@
package tree
import (
"encoding/hex"
"fmt"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/ape/converter"
aperequest "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/ape/request"
core "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cnrSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
)
func (s *Service) checkAPE(container *core.Container, cid cid.ID, operation acl.Op, role acl.Role, publicKey *keys.PublicKey) error {
namespace := ""
cntNamespace, hasNamespace := strings.CutSuffix(cnrSDK.ReadDomain(container.Value).Zone(), ".ns")
if hasNamespace {
namespace = cntNamespace
}
schemaMethod, err := converter.SchemaMethodFromACLOperation(operation)
if err != nil {
return apeErr(err)
}
schemaRole, err := converter.SchemaRoleFromACLRole(role)
if err != nil {
return apeErr(err)
}
reqProps := map[string]string{
nativeschema.PropertyKeyActorPublicKey: hex.EncodeToString(publicKey.Bytes()),
nativeschema.PropertyKeyActorRole: schemaRole,
}
var resourceName string
if namespace == "root" || namespace == "" {
resourceName = fmt.Sprintf(nativeschema.ResourceFormatRootContainerObjects, cid.EncodeToString())
} else {
resourceName = fmt.Sprintf(nativeschema.ResourceFormatNamespaceContainerObjects, namespace, cid.EncodeToString())
}
request := aperequest.NewRequest(
schemaMethod,
aperequest.NewResource(resourceName, make(map[string]string)),
reqProps,
)
rt := engine.NewRequestTargetExtended(namespace, cid.EncodeToString(), fmt.Sprintf("%s:%s", namespace, publicKey.Address()), nil)
status, found, err := s.router.IsAllowed(apechain.Ingress, rt, request)
if err != nil {
return apeErr(err)
}
if found && status == apechain.Allow {
return nil
}
err = fmt.Errorf("access to operation %s is denied by access policy engine: %s", schemaMethod, status.String())
return apeErr(err)
}
func apeErr(err error) error {
errAccessDenied := &apistatus.ObjectAccessDenied{}
errAccessDenied.WriteReason(err.Error())
return errAccessDenied
}


@@ -48,7 +48,7 @@ func TestGetSubTree(t *testing.T) {
         acc := subTreeAcc{errIndex: errIndex}
         err := getSubTree(context.Background(), &acc, d.CID, &GetSubTreeRequest_Body{
             TreeId: treeID,
-            RootId: rootID,
+            RootId: []uint64{rootID},
             Depth:  depth,
         }, p)
         if errIndex == -1 {
@@ -58,12 +58,12 @@ func TestGetSubTree(t *testing.T) {
         }

         // GetSubTree must return child only after it has returned the parent.
-        require.Equal(t, rootID, acc.seen[0].Body.NodeId)
+        require.Equal(t, rootID, acc.seen[0].Body.NodeId[0])
    loop:
         for i := 1; i < len(acc.seen); i++ {
             parent := acc.seen[i].Body.ParentId
             for j := 0; j < i; j++ {
-                if acc.seen[j].Body.NodeId == parent {
+                if acc.seen[j].Body.NodeId[0] == parent[0] {
                     continue loop
                 }
             }
@@ -73,16 +73,16 @@ func TestGetSubTree(t *testing.T) {
         // GetSubTree must return valid meta.
         for i := range acc.seen {
             b := acc.seen[i].Body
-            meta, node, err := p.TreeGetMeta(context.Background(), d.CID, treeID, b.NodeId)
+            meta, node, err := p.TreeGetMeta(context.Background(), d.CID, treeID, b.NodeId[0])
             require.NoError(t, err)
-            require.Equal(t, node, b.ParentId)
-            require.Equal(t, meta.Time, b.Timestamp)
+            require.Equal(t, node, b.ParentId[0])
+            require.Equal(t, meta.Time, b.Timestamp[0])
             require.Equal(t, metaToProto(meta.Items), b.Meta)
         }

         ordered := make([]uint64, len(acc.seen))
         for i := range acc.seen {
-            ordered[i] = acc.seen[i].Body.NodeId
+            ordered[i] = acc.seen[i].Body.NodeId[0]
         }
         return ordered
     }
@@ -130,7 +130,7 @@ func TestGetSubTreeOrderAsc(t *testing.T) {
     t.Run("boltdb forest", func(t *testing.T) {
         p := pilorama.NewBoltForest(pilorama.WithPath(filepath.Join(t.TempDir(), "pilorama")))
-        require.NoError(t, p.Open(context.Background(), 0644))
+        require.NoError(t, p.Open(context.Background(), 0o644))
         require.NoError(t, p.Init())
         testGetSubTreeOrderAsc(t, p)
     })
@@ -184,7 +184,7 @@ func testGetSubTreeOrderAsc(t *testing.T, p pilorama.ForestStorage) {
         }
         found := false
         for j := range tree {
-            if acc.seen[i].Body.NodeId == tree[j].id {
+            if acc.seen[i].Body.NodeId[0] == tree[j].id {
                 found = true
                 paths = append(paths, path.Join(tree[j].path...))
             }
@@ -207,7 +207,7 @@ func testGetSubTreeOrderAsc(t *testing.T, p pilorama.ForestStorage) {
         }, p)
         require.NoError(t, err)
         require.Len(t, acc.seen, 1)
-        require.Equal(t, uint64(0), acc.seen[0].Body.NodeId)
+        require.Equal(t, uint64(0), acc.seen[0].Body.NodeId[0])
     })

     t.Run("depth=2", func(t *testing.T) {
         acc := subTreeAcc{errIndex: -1}
@@ -220,15 +220,16 @@ func testGetSubTreeOrderAsc(t *testing.T, p pilorama.ForestStorage) {
         }, p)
         require.NoError(t, err)
         require.Len(t, acc.seen, 3)
-        require.Equal(t, uint64(0), acc.seen[0].Body.NodeId)
-        require.Equal(t, uint64(0), acc.seen[1].GetBody().GetParentId())
-        require.Equal(t, uint64(0), acc.seen[2].GetBody().GetParentId())
+        require.Equal(t, uint64(0), acc.seen[0].Body.NodeId[0])
+        require.Equal(t, uint64(0), acc.seen[1].GetBody().GetParentId()[0])
+        require.Equal(t, uint64(0), acc.seen[2].GetBody().GetParentId()[0])
     })
 }

 var (
     errSubTreeSend           = errors.New("send finished with error")
     errSubTreeSendAfterError = errors.New("send was invoked after an error occurred")
+    errInvalidResponse       = errors.New("send got invalid response")
 )

 type subTreeAcc struct {
@@ -241,6 +242,16 @@ type subTreeAcc struct {
 var _ TreeService_GetSubTreeServer = &subTreeAcc{}

 func (s *subTreeAcc) Send(r *GetSubTreeResponse) error {
+    b := r.GetBody()
+    if len(b.GetNodeId()) > 1 {
+        return errInvalidResponse
+    }
+    if len(b.GetParentId()) > 1 {
+        return errInvalidResponse
+    }
+    if len(b.GetTimestamp()) > 1 {
+        return errInvalidResponse
+    }
     s.seen = append(s.seen, r)
     if s.errIndex >= 0 {
         if len(s.seen) == s.errIndex+1 {


@@ -9,6 +9,7 @@ import (
     "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama"
     "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
     cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
+    policyengine "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
     "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
 )

@@ -38,6 +39,8 @@ type cfg struct {
     containerCacheSize int
     authorizedKeys     [][]byte

+    router policyengine.ChainRouter
+
     metrics MetricsRegister
 }

@@ -139,3 +142,9 @@ func WithAuthorizedKeys(keys keys.PublicKeys) Option {
         }
     }
 }
+
+func WithAPERouter(router policyengine.ChainRouter) Option {
+    return func(c *cfg) {
+        c.router = router
+    }
+}
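Enabling the tree-service APE check is then a matter of supplying a policyengine.ChainRouter at construction time through the new option. A minimal sketch that stays inside this package; the concrete router (morph-backed on a real node, in-memory in tests) and the rest of the option set are outside this diff and only assumed here:

    // apeEnabledOptions appends the APE router to whatever options the node
    // already builds for the tree service.
    func apeEnabledOptions(base []Option, router policyengine.ChainRouter) []Option {
        return append(base, WithAPERouter(router))
    }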


@@ -16,6 +16,8 @@ import (
     netmapSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
     "github.com/panjf2000/ants/v2"
     "go.uber.org/zap"
+    "google.golang.org/grpc/codes"
+    "google.golang.org/grpc/status"
 )

 // Service represents tree-service capable of working with multiple
@@ -440,29 +442,50 @@ func (s *Service) GetSubTree(req *GetSubTreeRequest, srv TreeService_GetSubTreeS
     return getSubTree(srv.Context(), srv, cid, b, s.forest)
 }

+type stackItem struct {
+    values []pilorama.MultiNodeInfo
+    parent pilorama.MultiNode
+    last   *string
+}
+
 func getSortedSubTree(ctx context.Context, srv TreeService_GetSubTreeServer, cid cidSDK.ID, b *GetSubTreeRequest_Body, forest pilorama.Forest) error {
     const batchSize = 1000

-    type stackItem struct {
-        values []pilorama.NodeInfo
-        parent pilorama.Node
-        last   string
+    // For backward compatibility.
+    rootIDs := b.GetRootId()
+    if len(rootIDs) == 0 {
+        rootIDs = []uint64{0}
     }

     // Traverse the tree in a DFS manner. Because we need to support arbitrary depth,
     // recursive implementation is not suitable here, so we maintain explicit stack.
-    m, p, err := forest.TreeGetMeta(ctx, cid, b.GetTreeId(), b.GetRootId())
-    if err != nil {
-        return err
+    var ms []pilorama.KeyValue
+    var ps []uint64
+    var ts []uint64
+    for _, rootID := range rootIDs {
+        m, p, err := forest.TreeGetMeta(ctx, cid, b.GetTreeId(), rootID)
+        if err != nil {
+            return err
+        }
+        if ms == nil {
+            ms = m.Items
+        } else {
+            if len(m.Items) != 1 {
+                return status.Error(codes.InvalidArgument, "multiple non-internal nodes provided")
+            }
+        }
+        ts = append(ts, m.Time)
+        ps = append(ps, p)
     }

     stack := []stackItem{{
-        values: []pilorama.NodeInfo{{
-            ID:       b.GetRootId(),
-            Meta:     m,
-            ParentID: p,
+        values: []pilorama.MultiNodeInfo{{
+            Children:   rootIDs,
+            Timestamps: ts,
+            Meta:       ms,
+            Parents:    ps,
         }},
-        parent: p,
+        parent: ps,
     }}

     for {
@@ -486,30 +509,20 @@ func getSortedSubTree(ctx context.Context, srv TreeService_GetSubTreeServer, cid
             }
         }

-        node := stack[len(stack)-1].values[0]
-        stack[len(stack)-1].values = stack[len(stack)-1].values[1:]
-
-        err = srv.Send(&GetSubTreeResponse{
-            Body: &GetSubTreeResponse_Body{
-                NodeId:    node.ID,
-                ParentId:  node.ParentID,
-                Timestamp: node.Meta.Time,
-                Meta:      metaToProto(node.Meta.Items),
-            },
-        })
+        node, err := stackPopAndSend(stack, srv)
         if err != nil {
             return err
         }

         if b.GetDepth() == 0 || uint32(len(stack)) < b.GetDepth() {
-            children, last, err := forest.TreeSortedByFilename(ctx, cid, b.GetTreeId(), node.ID, "", batchSize)
+            children, last, err := forest.TreeSortedByFilename(ctx, cid, b.GetTreeId(), node.Children, nil, batchSize)
             if err != nil {
                 return err
             }
             if len(children) != 0 {
                 stack = append(stack, stackItem{
                     values: children,
-                    parent: node.ID,
+                    parent: node.Children,
                     last:   last,
                 })
             }
@@ -518,19 +531,38 @@ func getSortedSubTree(ctx context.Context, srv TreeService_GetSubTreeServer, cid
     return nil
 }

+func stackPopAndSend(stack []stackItem, srv TreeService_GetSubTreeServer) (pilorama.MultiNodeInfo, error) {
+    node := stack[len(stack)-1].values[0]
+    stack[len(stack)-1].values = stack[len(stack)-1].values[1:]
+
+    return node, srv.Send(&GetSubTreeResponse{
+        Body: &GetSubTreeResponse_Body{
+            NodeId:    node.Children,
+            ParentId:  node.Parents,
+            Timestamp: node.Timestamps,
+            Meta:      metaToProto(node.Meta),
+        },
+    })
+}
+
 func getSubTree(ctx context.Context, srv TreeService_GetSubTreeServer, cid cidSDK.ID, b *GetSubTreeRequest_Body, forest pilorama.Forest) error {
     if b.GetOrderBy().GetDirection() == GetSubTreeRequest_Body_Order_Asc {
         return getSortedSubTree(ctx, srv, cid, b, forest)
     }

+    var rootID uint64
+    if len(b.GetRootId()) > 0 {
+        rootID = b.GetRootId()[0]
+    }
+
     // Traverse the tree in a DFS manner. Because we need to support arbitrary depth,
     // recursive implementation is not suitable here, so we maintain explicit stack.
-    m, p, err := forest.TreeGetMeta(ctx, cid, b.GetTreeId(), b.GetRootId())
+    m, p, err := forest.TreeGetMeta(ctx, cid, b.GetTreeId(), rootID)
     if err != nil {
         return err
     }

     stack := [][]pilorama.NodeInfo{{{
-        ID:       b.GetRootId(),
+        ID:       rootID,
         Meta:     m,
         ParentID: p,
     }}}
@@ -548,9 +580,9 @@ func getSubTree(ctx context.Context, srv TreeService_GetSubTreeServer, cid cidSD
         err = srv.Send(&GetSubTreeResponse{
             Body: &GetSubTreeResponse_Body{
-                NodeId:    node.ID,
-                ParentId:  node.ParentID,
-                Timestamp: node.Meta.Time,
+                NodeId:    []uint64{node.ID},
+                ParentId:  []uint64{node.ParentID},
+                Timestamp: []uint64{node.Meta.Time},
                 Meta:      metaToProto(node.Meta.Items),
             },
         })

Binary file not shown.


@@ -28,25 +28,25 @@ service TreeService {
   // Otherwise, a request is denied.

   // Add adds new node to the tree. Invoked by a client.
-  rpc Add (AddRequest) returns (AddResponse);
+  rpc Add(AddRequest) returns (AddResponse);
   // AddByPath adds new node to the tree by path. Invoked by a client.
-  rpc AddByPath (AddByPathRequest) returns (AddByPathResponse);
+  rpc AddByPath(AddByPathRequest) returns (AddByPathResponse);
   // Remove removes node from the tree. Invoked by a client.
-  rpc Remove (RemoveRequest) returns (RemoveResponse);
+  rpc Remove(RemoveRequest) returns (RemoveResponse);
   // Move moves node from one parent to another. Invoked by a client.
-  rpc Move (MoveRequest) returns (MoveResponse);
+  rpc Move(MoveRequest) returns (MoveResponse);
   // GetNodeByPath returns list of IDs corresponding to a specific filepath.
-  rpc GetNodeByPath (GetNodeByPathRequest) returns (GetNodeByPathResponse);
+  rpc GetNodeByPath(GetNodeByPathRequest) returns (GetNodeByPathResponse);
   // GetSubTree returns tree corresponding to a specific node.
-  rpc GetSubTree (GetSubTreeRequest) returns (stream GetSubTreeResponse);
+  rpc GetSubTree(GetSubTreeRequest) returns (stream GetSubTreeResponse);
   // TreeList return list of the existing trees in the container.
-  rpc TreeList (TreeListRequest) returns (TreeListResponse);
+  rpc TreeList(TreeListRequest) returns (TreeListResponse);

   /* Synchronization API */

   // Apply pushes log operation from another node to the current.
   // The request must be signed by a container node.
-  rpc Apply (ApplyRequest) returns (ApplyResponse);
+  rpc Apply(ApplyRequest) returns (ApplyResponse);
   // GetOpLog returns a stream of logged operations starting from some height.
   rpc GetOpLog(GetOpLogRequest) returns (stream GetOpLogResponse);
   // Healthcheck is a dummy rpc to check service availability
@@ -85,7 +85,6 @@ message AddResponse {
   Signature signature = 2;
 };
-
 message AddByPathRequest {
   message Body {
     // Container ID in V2 format.
@@ -122,7 +121,6 @@ message AddByPathResponse {
   Signature signature = 2;
 };
-
 message RemoveRequest {
   message Body {
     // Container ID in V2 format.
@@ -142,8 +140,7 @@ message RemoveRequest {
 }

 message RemoveResponse {
-  message Body {
-  }
+  message Body {}

   // Response body.
   Body body = 1;
@@ -151,7 +148,6 @@ message RemoveResponse {
   Signature signature = 2;
 };
-
 message MoveRequest {
   message Body {
     // TODO import neo.fs.v2.refs.ContainerID directly.
@@ -176,8 +172,7 @@ message MoveRequest {
 }

 message MoveResponse {
-  message Body {
-  }
+  message Body {}

   // Response body.
   Body body = 1;
@@ -185,7 +180,6 @@ message MoveResponse {
   Signature signature = 2;
 };
-
 message GetNodeByPathRequest {
   message Body {
     // Container ID in V2 format.
@@ -235,7 +229,6 @@ message GetNodeByPathResponse {
   Signature signature = 2;
 };
-
 message GetSubTreeRequest {
   message Body {
     message Order {
@@ -249,8 +242,8 @@ message GetSubTreeRequest {
     bytes container_id = 1;
     // The name of the tree.
     string tree_id = 2;
-    // ID of the root node of a subtree.
-    uint64 root_id = 3;
+    // IDs of the root nodes of a subtree forest.
+    repeated uint64 root_id = 3 [ packed = false ];
     // Optional depth of the traversal. Zero means return only root.
     // Maximum depth is 10.
     uint32 depth = 4;
@@ -269,11 +262,11 @@ message GetSubTreeRequest {
 message GetSubTreeResponse {
   message Body {
     // ID of the node.
-    uint64 node_id = 1;
+    repeated uint64 node_id = 1 [ packed = false ];
     // ID of the parent.
-    uint64 parent_id = 2;
+    repeated uint64 parent_id = 2 [ packed = false ];
     // Time node was first added to a tree.
-    uint64 timestamp = 3;
+    repeated uint64 timestamp = 3 [ packed = false ];
     // Node meta-information.
     repeated KeyValue meta = 4;
   }
@@ -307,7 +300,6 @@ message TreeListResponse {
   Signature signature = 2;
 }
-
 message ApplyRequest {
   message Body {
     // Container ID in V2 format.
@@ -325,8 +317,7 @@ message ApplyRequest {
 }

 message ApplyResponse {
-  message Body {
-  }
+  message Body {}

   // Response body.
   Body body = 1;
@@ -334,7 +325,6 @@ message ApplyResponse {
   Signature signature = 2;
 };
-
 message GetOpLogRequest {
   message Body {
     // Container ID in V2 format.
@@ -366,8 +356,7 @@ message GetOpLogResponse {
 };

 message HealthcheckResponse {
-  message Body {
-  }
+  message Body {}

   // Response body.
   Body body = 1;
@@ -376,8 +365,7 @@ message HealthcheckResponse {
 };

 message HealthcheckRequest {
-  message Body {
-  }
+  message Body {}

   // Request body.
   Body body = 1;
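Turning node_id, parent_id and timestamp into unpacked repeated fields keeps the wire format compatible with the old scalar definition (an unpacked repeated varint is encoded exactly like a sequence of scalar fields with the same tag), which is presumably why packed = false is spelled out. Callers that only understand the old single-node responses can keep working by reading the first element, as the updated tests above do; a small hedged sketch of that pattern:

    // firstID returns the single element of a repeated ID field, mirroring the
    // pre-change scalar semantics; ok is false when the field is empty or
    // carries several IDs (a multi-node entry the caller does not understand).
    func firstID(ids []uint64) (uint64, bool) {
        if len(ids) != 1 {
            return 0, false
        }
        return ids[0], true
    }

    // Usage against a response body, with the getters used in the tests above:
    //   id, _ := firstID(body.GetNodeId())
    //   parent, _ := firstID(body.GetParentId())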

Binary file not shown.


@@ -71,7 +71,7 @@ func (s *Service) verifyClient(req message, cid cidSDK.ID, rawBearer []byte, op
         return err
     }

-    role, err := roleFromReq(cnr, req, bt)
+    role, pubKey, err := roleAndPubKeyFromReq(cnr, req, bt)
     if err != nil {
         return fmt.Errorf("can't get request role: %w", err)
     }
@@ -79,8 +79,11 @@ func (s *Service) verifyClient(req message, cid cidSDK.ID, rawBearer []byte, op
     basicACL := cnr.Value.BasicACL()
     // Basic ACL mask can be unset, if container operations are performed
     // with strict APE checks only.
+    //
+    // FIXME(@aarifullin): tree service temporarily performs APE checks on
+    // object verbs, because tree verbs have not been introduced yet.
     if basicACL == 0x0 {
-        return nil
+        return s.checkAPE(cnr, cid, op, role, pubKey)
     }

     if !basicACL.IsOpAllowed(op, role) {
@@ -222,7 +225,7 @@ func SignMessage(m message, key *ecdsa.PrivateKey) error {
     return nil
 }

-func roleFromReq(cnr *core.Container, req message, bt *bearer.Token) (acl.Role, error) {
+func roleAndPubKeyFromReq(cnr *core.Container, req message, bt *bearer.Token) (acl.Role, *keys.PublicKey, error) {
     role := acl.RoleOthers
     owner := cnr.Value.Owner()
@@ -233,7 +236,7 @@ func roleAndPubKeyFromReq(cnr *core.Container, req message, bt *bearer.Token) (a
     pub, err := keys.NewPublicKeyFromBytes(rawKey, elliptic.P256())
     if err != nil {
-        return role, fmt.Errorf("invalid public key: %w", err)
+        return role, nil, fmt.Errorf("invalid public key: %w", err)
     }

     var reqSigner user.ID
@@ -243,7 +246,7 @@ func roleAndPubKeyFromReq(cnr *core.Container, req message, bt *bearer.Token) (a
         role = acl.RoleOwner
     }

-    return role, nil
+    return role, pub, nil
 }

 func eACLOp(op acl.Op) eacl.Operation {

Binary file not shown.


@@ -10,25 +10,25 @@ option go_package = "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tre
 // KeyValue represents key-value pair attached to an object.
 message KeyValue {
   // Attribute name.
-  string key = 1 [json_name = "key"];
+  string key = 1 [ json_name = "key" ];
   // Attribute value.
-  bytes value = 2 [json_name = "value"];
+  bytes value = 2 [ json_name = "value" ];
 }

 // LogMove represents log-entry for a single move operation.
 message LogMove {
   // ID of the parent node.
-  uint64 parent_id = 1 [json_name = "parentID"];
+  uint64 parent_id = 1 [ json_name = "parentID" ];
   // Node meta information, including operation timestamp.
-  bytes meta = 2 [json_name = "meta"];
+  bytes meta = 2 [ json_name = "meta" ];
   // ID of the node to move.
-  uint64 child_id = 3 [json_name = "childID"];
+  uint64 child_id = 3 [ json_name = "childID" ];
 }

 // Signature of a message.
 message Signature {
   // Serialized public key as defined in FrostFS API.
-  bytes key = 1 [json_name = "key"];
+  bytes key = 1 [ json_name = "key" ];
   // Signature of a message body.
-  bytes sign = 2 [json_name = "signature"];
+  bytes sign = 2 [ json_name = "signature" ];
 }