Compare commits

...

63 commits

Author SHA1 Message Date
Pavel Karpy
db981e9c99 [#2079] cli: Do not panic in object hash
Sign RPC requests with the provided key.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-23 18:30:29 +03:00
Pavel Karpy
28ad4c6ebc [#2081] ir: Set default key in IR's SDK clients
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-22 19:48:18 +03:00
c180c405b5 [#2078] adm: Pack parameters for setPrice invocation
Contract arguments have to be packed.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2022-11-22 15:07:57 +03:00
Evgenii Stratonikov
02049ca5b2 [#2075] morph/client: Ignore error if a transaction already exists
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-18 13:02:02 +03:00
b9a24e99dc [#2063] morph/client: Support new hash format in morph nns client
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-11-18 12:59:37 +03:00
Pavel Karpy
584f465eee [#2074] write-cache: Do not flush same object twice
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-18 11:38:52 +03:00
Pavel Karpy
2614aa1582 [#2074] write-cache: Remove unused variables
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-18 11:38:52 +03:00
Evgenii Stratonikov
15d2091f42 [#2069] innerring: Do not panic in Head
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-17 16:50:04 +03:00
056cb0d50e [#409] debian: Refactor storage service paths
Separate User data and Service data:
 - /var/lib/neofs/storage for service persistence
 - /srv/neofs for user data

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-17 15:19:34 +03:00
Evgenii Stratonikov
402bbba15a [#2062] services/policer: Use a proper key for object cache
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-16 11:19:31 +03:00
Pavel Karpy
7cc0986e0c [#2057] meta: Fail write operations in R/O mode
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-15 19:20:05 +03:00
Pavel Karpy
02676f05c3 [#2057] meta: Fix concurrent mode changes
Includes:
1. taking a mode-change read lock in every exported method that reads or
writes the underlying database;
2. returning the `ErrDegradedMode` logical error if any exported method is
called in degraded (no metabase) mode.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-15 19:20:05 +03:00
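The locking scheme described in the commit above can be sketched roughly as follows — a minimal, illustrative Go fragment (types and names are stand-ins, not the actual metabase code): exported methods hold a read lock on the mode mutex for their whole duration, `SetMode` takes the write lock, and degraded mode short-circuits with a logical error.

```go
package meta

import (
	"errors"
	"sync"
)

// ErrDegradedMode is returned by exported methods while the metabase
// is unavailable (degraded mode).
var ErrDegradedMode = errors.New("metabase is in a degraded mode")

// DB is a simplified stand-in for the metabase wrapper.
type DB struct {
	modeMtx  sync.RWMutex
	degraded bool
}

// SetMode changes the mode under the write lock, so it waits for all
// in-flight operations to finish and blocks new ones while switching.
func (db *DB) SetMode(degraded bool) {
	db.modeMtx.Lock()
	defer db.modeMtx.Unlock()
	db.degraded = degraded
}

// Put is an example of an exported method touching the underlying database:
// it holds the read lock for its whole duration and fails fast with a
// logical error in degraded mode.
func (db *DB) Put(obj []byte) error {
	db.modeMtx.RLock()
	defer db.modeMtx.RUnlock()

	if db.degraded {
		return ErrDegradedMode
	}
	// ...actual write to the underlying database would go here...
	_ = obj
	return nil
}
```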
Pavel Karpy
e8d401e28d [#2057] meta: Do not lock the whole meta on GET
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-15 19:20:05 +03:00
Pavel Karpy
479601ceb9 [#2057] blobstor: Block operations on a mode change
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-15 19:20:05 +03:00
Evgenii Stratonikov
27ca754dc1 [#2058] services/policer: Fix panic in shardPolicyWorker
```
2022/11/15 08:40:56 worker exits from a panic: runtime error: index out of range [0] with length 0
2022/11/15 08:40:56 worker exits from panic: goroutine 1188 [running]:
github.com/panjf2000/ants/v2.(*goWorker).run.func1.1()
	github.com/panjf2000/ants/v2@v2.4.0/worker.go:58 +0x10c
panic({0x1042b60, 0xc0015ae018})
	runtime/panic.go:1038 +0x215
github.com/nspcc-dev/neofs-node/pkg/services/policer.(*Policer).shardPolicyWorker.func1()
	github.com/nspcc-dev/neofs-node/pkg/services/policer/process.go:65 +0x366
github.com/panjf2000/ants/v2.(*goWorker).run.func1()
	github.com/panjf2000/ants/v2@v2.4.0/worker.go:68 +0x97
created by github.com/panjf2000/ants/v2.(*goWorker).run
	github.com/panjf2000/ants/v2@v2.4.0/worker.go:48 +0x68
```

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-15 18:28:45 +03:00
Pavel Karpy
56442c0be3 [#2053] engine: Do not switch mode because of logical errors
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-15 10:32:01 +03:00
b167700b6f [#1940] Removing all trees by container ID if tree ID is empty in pilorama.Forest.TreeDrop
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2022-11-14 17:12:16 +03:00
Evgenii Stratonikov
92cac5bbdf [#2026] neofs-adm: Make contract update idempotent
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-14 14:44:32 +03:00
Pavel Karpy
19a6ca7896 [#1502] node: Store lock object on every container node
Includes extending the Storage Engine's listing methods with object types.
It allows tuning the replication/policer algorithms: container nodes do
not remove `LOCK` objects as redundant and try to fulfill `LOCK` placement
on the other container nodes.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-14 12:29:40 +03:00
Pavel Karpy
114018a7bd [#1502] core: Add AddressWithType
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-14 12:29:40 +03:00
Pavel Karpy
84f545dacc [#1502] engine: Check all shards for LOCK'ing before inhuming
It allows keeping all the locked objects safe after metabase
resynchronization. Currently, all `LOCK` objects are broadcast to all nodes
in a container, which guarantees `LOCK` object presence in a regular situation.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-14 12:29:40 +03:00
Pavel Karpy
049ab58336 [#1502] shard: Add IsLocked method
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-14 12:29:40 +03:00
Pavel Karpy
4ed5dcd9c8 [#1502] meta: Add IsLocked method
It gets an object and returns its locking status.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-14 12:29:40 +03:00
Evgenii Stratonikov
cdbfd05704 [#2003] neofs-node: Allow to configure replicator pool size
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-12 17:15:32 +03:00
Evgenii Stratonikov
8212020165 [#2048] network/cache: Optimize ClientCache
1. Remove a layer of indirection for the mutex; `ClientCache` is already
   used by pointer.
2. Fix duplication of the `AllowExternal` field.

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-12 17:15:32 +03:00
Evgenii Stratonikov
e538291c59 [#2048] neofs-node: Use a separate client cache for client operations
Background workers can prevent user operations from completing because of
locking in the cache.

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-12 17:15:32 +03:00
Pavel Karpy
aa12fc57c9 [#2040] node: Do not attach tokens in the assembly process
A container node is expected to have full "get" access to assemble the
object.
A non-container node is expected to forward any request to a container node.
Any token is issued for the original request sender, not for a node, so any
newly spawned request carrying that token is invalid by design.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-12 17:00:51 +03:00
Pavel Karpy
5747187884 [#2040] node: Attach original meta to the spawned requests
Do not lose meta information of the original requests: carry the session and
bearer tokens of the original request over to the newly generated ones. Middle
request wrappers should not contain any meta information, since it is
useless (e.g. the ACL service checks only the original tokens).

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-12 17:00:51 +03:00
Evgenii Stratonikov
de2934aeaa [#1985] blobstor: Allow to report multiple errors to caller
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-12 10:27:09 +03:00
Evgenii Stratonikov
2ba3abde5c [#2035] engine: Allow moving to degraded from background workers
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-12 10:27:09 +03:00
Pavel Karpy
f6f911b50c [#1978] cli: Add children to the static session on DELETE
If an external session is provided and was not opened by the CLI itself, add
child objects to it too. This fixes "not found" errors when removing a big
object with a predefined session (read from a file).

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-12 10:26:48 +03:00
Pavel Karpy
af54295ac6 [#2029] cli: Fix panic caused by flag redefinition
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-11 17:24:49 +03:00
Pavel Karpy
ddba9180ef [#2029] cli: Allow attaching static session to object hash
All the other object commands already have it.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-11 12:40:01 +03:00
Pavel Karpy
6acb831248 [#2028] node: Check session token's NBF and IAT
The ACL service did not check the "Not Valid Before" and "Issued At" claims.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-11 12:39:53 +03:00
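The kind of check this commit adds can be sketched as follows (field and function names here are illustrative, not the SDK's API): besides the expiration epoch, a token whose "not valid before" or "issued at" epoch lies in the future must also be rejected.

```go
package acl

import "errors"

var errTokenNotValidYet = errors.New("session token is not valid yet")

// tokenLifetime mirrors the three lifetime claims of a session token.
type tokenLifetime struct {
	Exp uint64 // expiration epoch
	Nbf uint64 // not valid before
	Iat uint64 // issued at
}

// checkLifetime verifies all three claims against the current epoch.
func checkLifetime(lt tokenLifetime, curEpoch uint64) error {
	if lt.Exp < curEpoch {
		return errors.New("session token is expired")
	}
	// These two checks are the ones that were previously missing:
	// a token issued in the future or not yet valid must be rejected too.
	if lt.Nbf > curEpoch || lt.Iat > curEpoch {
		return errTokenNotValidYet
	}
	return nil
}
```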
Pavel Karpy
2a88b49bca [#2028] node: Do not wrap malformed request errors
Since request statuses were introduced at the API level, all errors are
unwrapped before being sent to the caller. This led to losing the context of
an invalid request.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-11 12:39:53 +03:00
Evgenii Stratonikov
01a226b3ec [#2037] services/object: Fix concurrent map writes in traverser
```
fatal error: concurrent map writes

goroutine 4337 [running]:
github.com/nspcc-dev/neofs-node/pkg/services/object/put.(*traversal).submitProcessed(...)
        github.com/nspcc-dev/neofs-node/pkg/services/object/put/distributed.go:78
github.com/nspcc-dev/neofs-node/pkg/services/object/put.(*distributedTarget).iteratePlacement.func1()
        github.com/nspcc-dev/neofs-node/pkg/services/object/put/distributed.go:198 +0x265
github.com/panjf2000/ants/v2.(*goWorker).run.func1()
        github.com/panjf2000/ants/v2@v2.4.0/worker.go:68 +0x97
created by github.com/panjf2000/ants/v2.(*goWorker).run
        github.com/panjf2000/ants/v2@v2.4.0/worker.go:48 +0x65
```

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-10 10:54:50 +03:00
Pavel Karpy
110f6e7864 [#2000] cli: Provide a bearer token to spawned HEAD by DELETE
If a `neofs-cli object delete` operation is performed using a bearer token,
attach it to the spawned `HEAD` requests that collect children OIDs.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-09 18:11:47 +03:00
Evgenii Stratonikov
c38ad2d339 [#1906] writecache: Do not require read-only mode in Flush
It was needed before we started flushing during the transition to
`degraded` mode. Now it is just confusing.

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-08 16:51:58 +03:00
Evgenii Stratonikov
51a9306e41 [#2024] services/object: Unify status errors
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-08 15:56:36 +03:00
Evgenii Stratonikov
871be9d63d [#2024] services/object: Cover corner cases for children OutOfRange
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-08 15:56:36 +03:00
Pavel Karpy
3eb2ac985d [#1972] node: Fix object format unit tests
Includes:
1. Removal of an unused func;
2. Checking the error returned by the `Sign` method.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-08 13:18:15 +03:00
Pavel Karpy
76b87c6d94 [#1972] node: Do not save objects if node not in a container
Do not use the node's local storage if it is clear that the object will be
removed anyway as redundant. This requires moving the logic that changes the
local storage from the validation step to the local target implementation.
It allows performing any relation checks (e.g. object locking) only if the
node is considered a valid container member and is expected to store (or to
have previously stored) all the helper objects (e.g. `LOCK`, `TOMBSTONE`, etc.).

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-08 13:18:15 +03:00
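A rough sketch of the membership check implied above (the `nodeInfo` type and helper name are hypothetical stand-ins, not the node's real placement code): the local target only stores the object when the node's key appears in one of the container's replica vectors.

```go
package policy

import "bytes"

// nodeInfo is a minimal stand-in for a netmap node descriptor.
type nodeInfo struct {
	publicKey []byte
}

// isLocalNodeInContainer reports whether the local node's key occurs in any
// replica vector built for the container: only then does the local target
// store the object and its helper objects (LOCK, TOMBSTONE, ...).
func isLocalNodeInContainer(vectors [][]nodeInfo, localKey []byte) bool {
	for i := range vectors {
		for j := range vectors[i] {
			if bytes.Equal(vectors[i][j].publicKey, localKey) {
				return true
			}
		}
	}
	return false
}
```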
Pavel Karpy
59eebc5eeb [#1972] cli: Fix lifetime flag in the lock command
That part of the code was refactored incorrectly.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-08 13:18:15 +03:00
Pavel Karpy
f79386e538 [#1972] node: Fix errors comments in the Put service
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-08 13:18:15 +03:00
Evgenii Stratonikov
d2cce62934 [#1818] writecache: Increase error counter on background errors
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-07 16:07:20 +03:00
Evgenii Stratonikov
a4a6d547a8 [#1818] writecache: Update storage ID during flush
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-07 16:07:20 +03:00
Evgenii Stratonikov
681400eed8 [#1818] metabase: Add UpdateStorageID operation
By default, the write-cache put the whole object just to update its storage ID.
This logic dates back to the times when the write-cache itself had to put
objects in the metabase. Now the blobstor does that, and unmarshaling objects
during flush only to update the storage ID is overkill.

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-07 16:07:20 +03:00
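The new operation can be sketched roughly like this (the bucket layout and names are illustrative, not the real metabase schema): instead of re-putting the whole object, only the small storage-ID record is rewritten.

```go
package meta

import (
	"errors"

	"go.etcd.io/bbolt"
)

// updateStorageID rewrites only the small storage-ID record of an already
// stored object instead of re-putting the whole object (the bucket layout
// here is illustrative, not the real metabase schema).
func updateStorageID(db *bbolt.DB, addrKey, storageID []byte) error {
	return db.Update(func(tx *bbolt.Tx) error {
		b := tx.Bucket([]byte("storage_id"))
		if b == nil {
			return errors.New("storage-ID bucket is missing")
		}
		return b.Put(addrKey, storageID)
	})
}
```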
Evgenii Stratonikov
b580846630 [#1818] writecache: Reuse FSTree flushing code between flushes
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-07 16:07:20 +03:00
Evgenii Stratonikov
6e2f7e291d [#1818] writecache: Remove unused variable
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-07 16:07:20 +03:00
Pavel Karpy
7a75b3aaaf [#1699] meta: Do not return SplitInfoError on Delete
It is not an error: removing a virtual object is expected and should simply be
skipped. Getting a virtual object with the `raw` flag is considered an
impossible action; all virtual object removals are handled implicitly via
their children's removals.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-07 16:04:09 +03:00
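Conceptually the handling becomes the following (a sketch, not the actual metabase code): a `SplitInfoError` during delete identifies a virtual object, which is skipped rather than reported as a failure.

```go
package meta

import (
	"errors"

	"github.com/nspcc-dev/neofs-sdk-go/object"
)

// deleteSingle sketches the new behaviour: a virtual object (signalled by
// *object.SplitInfoError from the underlying delete) is silently skipped,
// because it disappears together with its children; other errors still fail
// the operation.
func deleteSingle(del func() error) error {
	err := del()

	var siErr *object.SplitInfoError
	if errors.As(err, &siErr) {
		return nil // virtual object: nothing to remove physically
	}
	return err
}
```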
Evgenii Stratonikov
9cd8441dd5 [#1732] pilorama: Fill parent mark correctly
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-07 13:02:45 +03:00
Evgenii Stratonikov
163d8d778d [#1732] pilorama: Fix backwards log insertion
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-07 13:02:45 +03:00
Evgenii Stratonikov
52d85ca463 [#1732] pilorama: Improve logical error handling
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-07 13:02:45 +03:00
Evgenii Stratonikov
ba3db7fed5 [#1996] engine: Always select proper shard for a tree
Currently, modifying operations can fail because of I/O errors, causing a new
tree to be created on another shard. This commit adds an existence check for
modifying operations. Read operations remain as they are, so as not to slow
things down. `TreeDrop` is an exception: it removes a tree, so trying
multiple shards is not unwanted behaviour.

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-07 13:02:45 +03:00
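The rule above can be illustrated with a sketch like the following (the `Shard` interface and method names are hypothetical, not the engine's real API): a modifying tree operation first looks for a shard that already holds the tree and only falls back to the default shard when the tree does not exist anywhere yet.

```go
package engine

import "errors"

// Shard is a minimal view of a shard for this sketch.
type Shard interface {
	TreeExists(cid, treeID string) (bool, error)
}

// applyTreeOp sketches the shard-selection rule for modifying tree
// operations: prefer a shard that already stores the tree, so an I/O error
// on one shard cannot silently fork the tree onto another one.
func applyTreeOp(shards []Shard, cid, treeID string, op func(Shard) error) error {
	for _, sh := range shards {
		exists, err := sh.TreeExists(cid, treeID)
		if err != nil {
			continue // skip shards with I/O problems
		}
		if exists {
			return op(sh) // never create a second copy elsewhere
		}
	}
	if len(shards) == 0 {
		return errors.New("no shards configured")
	}
	// The tree does not exist anywhere yet: create it on the first shard.
	return op(shards[0])
}
```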
Pavel Karpy
2e89176892 [#1971] cli: Unify CID and OID flags provision
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-03 15:23:16 +03:00
Evgenii Stratonikov
855de87b62 [#2007] services/object: Allocate memory on-demand in GET_RANGE
For big objects we want to get an OutOfRange error before all the memory is
allocated.

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-03 15:02:32 +03:00
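A sketch of the on-demand strategy (not the service's real code; names and the chunk size are illustrative): validate the requested range against the payload size first, then grow the buffer chunk by chunk as data arrives, so an out-of-range request fails before a huge buffer is reserved.

```go
package getsvc

import (
	"errors"
	"io"
)

const chunkSize = 1 << 20 // grow the buffer by at most 1 MiB at a time

// readRange validates the requested range against the payload size before
// any allocation and then reads the payload chunk by chunk instead of
// pre-allocating the whole requested length. r is assumed to be positioned
// at offset off already.
func readRange(r io.Reader, payloadSize, off, ln uint64) ([]byte, error) {
	if off+ln < off || off+ln > payloadSize {
		return nil, errors.New("requested range is out of the payload bounds")
	}

	buf := make([]byte, 0, minU64(ln, chunkSize))
	for uint64(len(buf)) < ln {
		chunk := make([]byte, minU64(ln-uint64(len(buf)), chunkSize))
		if _, err := io.ReadFull(r, chunk); err != nil {
			return nil, err
		}
		buf = append(buf, chunk...)
	}
	return buf, nil
}

func minU64(a, b uint64) uint64 {
	if a < b {
		return a
	}
	return b
}
```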
Evgenii Stratonikov
289a7827c4 [#2007] services/object: Fix comment
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-03 15:02:32 +03:00
Pavel Karpy
abf4a63585 [#1991] cli: Refine container placement description
Avoid confusing the user by mixing a replication vector's index with the
number of its copies.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-02 15:14:19 +03:00
Pavel Karpy
3ee4260647 [#2009] github: Run CI on support branches
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-11-02 14:27:36 +03:00
Evgenii Stratonikov
245da1e50e [#1992] writecache: Allow to open in NOSYNC mode
Applicable only to FSTree as we cannot handle corrupted databases
properly yet.

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-01 15:29:01 +03:00
Evgenii Stratonikov
a386448ab9 [#1992] neofs-node: Allow to open fstree in NOSYNC mode
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-01 15:29:01 +03:00
Evgenii Stratonikov
3c1f788642 [#1994] docs: Update storage node configuration
Reflect the reality after a not so recent refactoring.

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-01 15:29:01 +03:00
Evgenii Stratonikov
0de9efa685 [#1992] fstree: Allow working in SYNC mode
Make O_SYNC the default and allow opting out explicitly.

Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-11-01 15:29:01 +03:00
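The O_SYNC default can be sketched like this (the helper is illustrative; only the `no_sync` behaviour mirrors the new config parameter): writes go through `O_SYNC` unless synchronous writes are explicitly disabled.

```go
package fstree

import "os"

// writeFlags returns the flags used for creating object files: O_SYNC is
// the default, and only an explicit no_sync opt-out removes it.
func writeFlags(noSync bool) int {
	flags := os.O_WRONLY | os.O_CREATE | os.O_TRUNC | os.O_EXCL
	if !noSync {
		flags |= os.O_SYNC
	}
	return flags
}

// writeFile writes object data with the selected flags.
func writeFile(path string, data []byte, noSync bool) error {
	f, err := os.OpenFile(path, writeFlags(noSync), 0o640)
	if err != nil {
		return err
	}
	_, err = f.Write(data)
	if closeErr := f.Close(); err == nil {
		err = closeErr
	}
	return err
}
```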
117 changed files with 1761 additions and 731 deletions

View file

@@ -4,6 +4,7 @@ on:
  pull_request:
    branches:
      - master
      - support/**
jobs:
  build:

View file

@@ -4,6 +4,7 @@ on:
  pull_request:
    branches:
      - master
      - support/**
jobs:
  build:

View file

@@ -4,6 +4,7 @@ on:
  pull_request:
    branches:
      - master
      - support/**
jobs:
  commits_check_job:

View file

@@ -4,11 +4,13 @@ on:
  push:
    branches:
      - master
      - support/**
    paths-ignore:
      - '*.md'
  pull_request:
    branches:
      - master
      - support/**
    paths-ignore:
      - '*.md'

View file

@@ -4,11 +4,48 @@ Changelog for NeoFS Node
## [Unreleased]
### Added
- `session` flag support to `neofs-cli object hash` (#2029)
- Shard can now change mode when encountering background disk errors (#2035)
- Background workers and object service now use separate client caches (#2048)
- `replicator.pool_size` config field to tune replicator pool size (#2049)
- Fix NNS hash parsing in morph client (#2063)
### Changed
- `object lock` command reads CID and OID the same way other commands do (#1971)
- `LOCK` objects are stored on every container node (#1502)
### Fixed
- Open FSTree in sync mode by default (#1992)
- `neofs-cli container nodes`'s output (#1991)
- Do not panic and return correct errors for bad inputs in `GET_RANGE` (#2007, #2024)
- Correctly select the shard for applying tree service operations (#1996)
- Physical child object removal by GC (#1699)
- Increase error counter for write-cache flush errors (#1818)
- Broadcasting helper objects (#1972)
- `neofs-cli lock object`'s `lifetime` flag handling (#1972)
- Do not move write-cache in read-only mode for flushing (#1906)
- Child object collection on CLI side with a bearer token (#2000)
- Fix concurrent map writes in `Object.Put` service (#2037)
- Malformed request errors' reasons in the responses (#2028)
- Session token's IAT and NBF checks in ACL service (#2028)
- Losing meta information on request forwarding (#2040)
- Assembly process triggered by a request with a bearer token (#2040)
- Losing locking context after metabase resync (#1502)
- Removing all trees by container ID if tree ID is empty in `pilorama.Forest.TreeDrop` (#1940)
- Concurrent mode changes in the metabase and blobstor (#2057)
- Panic in IR when performing HEAD requests (#2069)
- Write-cache flush duplication (#2074)
- Ignore error if a transaction already exists in a morph client (#2075)
- Pack arguments of `setPrice` invocation during contract update (#2078)
- `neofs-cli object hash` panic (#2079)
### Removed
### Updated
### Updating from v0.34.0
Pass CID and OID parameters via the `--cid` and `--oid` flags, not as the command arguments.
Replicator pool size can now be fine-tuned with the `replicator.pool_size` config field.
The default value is taken from `object.put.pool_size_remote` as in earlier versions.
## [0.34.0] - 2022-10-31 - Marado (마라도, 馬羅島)

View file

@ -27,6 +27,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/emit" "github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/nspcc-dev/neo-go/pkg/vm/opcode" "github.com/nspcc-dev/neo-go/pkg/vm/opcode"
"github.com/nspcc-dev/neo-go/pkg/vm/vmstate" "github.com/nspcc-dev/neo-go/pkg/vm/vmstate"
"github.com/nspcc-dev/neofs-contract/common"
"github.com/nspcc-dev/neofs-contract/nns" "github.com/nspcc-dev/neofs-contract/nns"
"github.com/nspcc-dev/neofs-node/pkg/innerring" "github.com/nspcc-dev/neofs-node/pkg/innerring"
morphClient "github.com/nspcc-dev/neofs-node/pkg/morph/client" morphClient "github.com/nspcc-dev/neofs-node/pkg/morph/client"
@ -106,7 +107,11 @@ func (c *initializeContext) deployNNS(method string) error {
nnsCs, err := c.nnsContractState() nnsCs, err := c.nnsContractState()
if err == nil { if err == nil {
if nnsCs.NEF.Checksum == cs.NEF.Checksum { if nnsCs.NEF.Checksum == cs.NEF.Checksum {
c.Command.Println("NNS contract is already deployed.") if method == deployMethodName {
c.Command.Println("NNS contract is already deployed.")
} else {
c.Command.Println("NNS contract is already updated.")
}
return nil return nil
} }
h = nnsCs.Hash h = nnsCs.Hash
@ -206,7 +211,10 @@ func (c *initializeContext) updateContracts() error {
} }
if err := c.sendCommitteeTx(w.Bytes(), false); err != nil { if err := c.sendCommitteeTx(w.Bytes(), false); err != nil {
return err if !strings.Contains(err.Error(), common.ErrAlreadyUpdated) {
return err
}
c.Command.Println("Alphabet contracts are already updated.")
} }
w.Reset() w.Reset()
@ -243,7 +251,11 @@ func (c *initializeContext) updateContracts() error {
params := getContractDeployParameters(cs, c.getContractDeployData(ctrName, keysParam)) params := getContractDeployParameters(cs, c.getContractDeployData(ctrName, keysParam))
res, err := c.CommitteeAct.MakeCall(invokeHash, method, params...) res, err := c.CommitteeAct.MakeCall(invokeHash, method, params...)
if err != nil { if err != nil {
return fmt.Errorf("deploy contract: %w", err) if method != updateMethodName || !strings.Contains(err.Error(), common.ErrAlreadyUpdated) {
return fmt.Errorf("deploy contract: %w", err)
}
c.Command.Printf("%s contract is already updated.\n", ctrName)
continue
} }
w.WriteBytes(res.Script) w.WriteBytes(res.Script)
@ -275,6 +287,8 @@ func (c *initializeContext) updateContracts() error {
c.Command.Printf("NNS: Set %s -> %s\n", morphClient.NNSGroupKeyName, hex.EncodeToString(groupKey.Bytes())) c.Command.Printf("NNS: Set %s -> %s\n", morphClient.NNSGroupKeyName, hex.EncodeToString(groupKey.Bytes()))
emit.Opcodes(w.BinWriter, opcode.LDSFLD0) emit.Opcodes(w.BinWriter, opcode.LDSFLD0)
emit.Int(w.BinWriter, 1)
emit.Opcodes(w.BinWriter, opcode.PACK)
emit.AppCallNoArgs(w.BinWriter, nnsHash, "setPrice", callflag.All) emit.AppCallNoArgs(w.BinWriter, nnsHash, "setPrice", callflag.All)
if err := c.sendCommitteeTx(w.Bytes(), false); err != nil { if err := c.sendCommitteeTx(w.Bytes(), false); err != nil {

View file

@@ -261,6 +261,8 @@ func parseNNSResolveResult(res stackitem.Item) (util.Uint160, error) {
			continue
		}

		// We support several formats for hash encoding, this logic should be maintained in sync
		// with nnsResolve from pkg/morph/client/nns.go
		h, err := util.Uint160DecodeStringLE(string(bs))
		if err == nil {
			return h, nil

View file

@ -39,12 +39,14 @@ var containerNodesCmd = &cobra.Command{
binCnr := make([]byte, sha256.Size) binCnr := make([]byte, sha256.Size)
id.Encode(binCnr) id.Encode(binCnr)
policy := cnr.PlacementPolicy()
var cnrNodes [][]netmap.NodeInfo var cnrNodes [][]netmap.NodeInfo
cnrNodes, err = resmap.NetMap().ContainerNodes(cnr.PlacementPolicy(), binCnr) cnrNodes, err = resmap.NetMap().ContainerNodes(policy, binCnr)
common.ExitOnErr(cmd, "could not build container nodes for given container: %w", err) common.ExitOnErr(cmd, "could not build container nodes for given container: %w", err)
for i := range cnrNodes { for i := range cnrNodes {
cmd.Printf("Rep %d\n", i+1) cmd.Printf("Descriptor #%d, REP %d:\n", i+1, policy.ReplicaNumberByIndex(i))
for j := range cnrNodes[i] { for j := range cnrNodes[i] {
common.PrettyPrintNodeInfo(cmd, cnrNodes[i][j], j, "\t", short) common.PrettyPrintNodeInfo(cmd, cnrNodes[i][j], j, "\t", short)
} }

View file

@ -32,6 +32,7 @@ var objectHashCmd = &cobra.Command{
func initObjectHashCmd() { func initObjectHashCmd() {
commonflags.Init(objectHashCmd) commonflags.Init(objectHashCmd)
initFlagSession(objectHashCmd, "RANGEHASH")
flags := objectHashCmd.Flags() flags := objectHashCmd.Flags()
@ -63,11 +64,13 @@ func getObjectHash(cmd *cobra.Command, _ []string) {
common.ExitOnErr(cmd, "could not decode salt: %w", err) common.ExitOnErr(cmd, "could not decode salt: %w", err)
pk := key.GetOrGenerate(cmd) pk := key.GetOrGenerate(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
tz := typ == hashTz tz := typ == hashTz
fullHash := len(ranges) == 0 fullHash := len(ranges) == 0
if fullHash { if fullHash {
var headPrm internalclient.HeadObjectPrm var headPrm internalclient.HeadObjectPrm
headPrm.SetClient(cli)
Prepare(cmd, &headPrm) Prepare(cmd, &headPrm)
headPrm.SetAddress(objAddr) headPrm.SetAddress(objAddr)
@ -93,8 +96,6 @@ func getObjectHash(cmd *cobra.Command, _ []string) {
return return
} }
cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
var hashPrm internalclient.HashPayloadRangesPrm var hashPrm internalclient.HashPayloadRangesPrm
hashPrm.SetClient(cli) hashPrm.SetClient(cli)
Prepare(cmd, &hashPrm) Prepare(cmd, &hashPrm)

View file

@ -21,22 +21,22 @@ import (
// object lock command. // object lock command.
var objectLockCmd = &cobra.Command{ var objectLockCmd = &cobra.Command{
Use: "lock CONTAINER OBJECT...", Use: "lock",
Short: "Lock object in container", Short: "Lock object in container",
Long: "Lock object in container", Long: "Lock object in container",
Args: cobra.MinimumNArgs(2), Run: func(cmd *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) { cidRaw, _ := cmd.Flags().GetString("cid")
var cnr cid.ID
err := cnr.DecodeString(args[0]) var cnr cid.ID
err := cnr.DecodeString(cidRaw)
common.ExitOnErr(cmd, "Incorrect container arg: %v", err) common.ExitOnErr(cmd, "Incorrect container arg: %v", err)
argsList := args[1:] oidsRaw, _ := cmd.Flags().GetStringSlice("oid")
lockList := make([]oid.ID, len(argsList)) lockList := make([]oid.ID, len(oidsRaw))
for i := range argsList { for i := range oidsRaw {
err = lockList[i].DecodeString(argsList[i]) err = lockList[i].DecodeString(oidsRaw[i])
common.ExitOnErr(cmd, fmt.Sprintf("Incorrect object arg #%d: %%v", i+1), err) common.ExitOnErr(cmd, fmt.Sprintf("Incorrect object arg #%d: %%v", i+1), err)
} }
@ -63,9 +63,11 @@ var objectLockCmd = &cobra.Command{
currEpoch, err := internalclient.GetCurrentEpoch(ctx, endpoint) currEpoch, err := internalclient.GetCurrentEpoch(ctx, endpoint)
common.ExitOnErr(cmd, "Request current epoch: %w", err) common.ExitOnErr(cmd, "Request current epoch: %w", err)
exp += currEpoch exp = currEpoch + lifetime
} }
common.PrintVerbose("Lock object will expire at %d epoch", exp)
var expirationAttr objectSDK.Attribute var expirationAttr objectSDK.Attribute
expirationAttr.SetKey(objectV2.SysAttributeExpEpoch) expirationAttr.SetKey(objectV2.SysAttributeExpEpoch)
expirationAttr.SetValue(strconv.FormatUint(exp, 10)) expirationAttr.SetValue(strconv.FormatUint(exp, 10))
@ -94,7 +96,16 @@ func initCommandObjectLock() {
commonflags.Init(objectLockCmd) commonflags.Init(objectLockCmd)
initFlagSession(objectLockCmd, "PUT") initFlagSession(objectLockCmd, "PUT")
objectLockCmd.Flags().Uint64P(commonflags.ExpireAt, "e", 0, "Lock expiration epoch") ff := objectLockCmd.Flags()
objectLockCmd.Flags().Uint64(commonflags.Lifetime, 0, "Lock lifetime")
ff.String("cid", "", "Container ID")
_ = objectLockCmd.MarkFlagRequired("cid")
ff.StringSlice("oid", nil, "Object ID")
_ = objectLockCmd.MarkFlagRequired("oid")
ff.Uint64P(commonflags.ExpireAt, "e", 0, "Lock expiration epoch")
ff.Uint64(commonflags.Lifetime, 0, "Lock lifetime")
objectLockCmd.MarkFlagsMutuallyExclusive(commonflags.ExpireAt, commonflags.Lifetime) objectLockCmd.MarkFlagsMutuallyExclusive(commonflags.ExpireAt, commonflags.Lifetime)
} }

View file

@@ -208,6 +208,12 @@ func ReadOrOpenSessionViaClient(cmd *cobra.Command, dst SessionPrm, cli *client.
	var objs []oid.ID
	if obj != nil {
		objs = []oid.ID{*obj}

		if _, ok := dst.(*internal.DeleteObjectPrm); ok {
			common.PrintVerbose("Collecting relatives of the removal object...")

			objs = append(objs, collectObjectRelatives(cmd, cli, cnr, *obj)...)
		}
	}

	finalizeSession(cmd, dst, tok, key, cnr, objs...)
@@ -328,6 +334,8 @@ func collectObjectRelatives(cmd *cobra.Command, cli *client.Client, cnr cid.ID,
	prmHead.SetAddress(addrObj)
	prmHead.SetRawFlag(true)

	Prepare(cmd, &prmHead)

	_, err := internal.HeadObject(prmHead)

	var errSplit *object.SplitInfoError

View file

@ -29,6 +29,7 @@ import (
metricsconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/metrics" metricsconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/metrics"
nodeconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/node" nodeconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/node"
objectconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/object" objectconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/object"
replicatorconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/replicator"
"github.com/nspcc-dev/neofs-node/pkg/core/container" "github.com/nspcc-dev/neofs-node/pkg/core/container"
netmapCore "github.com/nspcc-dev/neofs-node/pkg/core/netmap" netmapCore "github.com/nspcc-dev/neofs-node/pkg/core/netmap"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
@ -129,6 +130,7 @@ type shardCfg struct {
maxObjSize uint64 maxObjSize uint64
flushWorkerCount int flushWorkerCount int
sizeLimit uint64 sizeLimit uint64
noSync bool
} }
piloramaCfg struct { piloramaCfg struct {
@ -155,10 +157,11 @@ func (c *shardCfg) id() string {
type subStorageCfg struct { type subStorageCfg struct {
// common for all storages // common for all storages
typ string typ string
path string path string
perm fs.FileMode perm fs.FileMode
depth uint64 depth uint64
noSync bool
// blobovnicza-specific // blobovnicza-specific
size uint64 size uint64
@ -218,6 +221,7 @@ func (a *applicationConfiguration) readConfig(c *config.Config) error {
wc.smallObjectSize = writeCacheCfg.SmallObjectSize() wc.smallObjectSize = writeCacheCfg.SmallObjectSize()
wc.flushWorkerCount = writeCacheCfg.WorkersNumber() wc.flushWorkerCount = writeCacheCfg.WorkersNumber()
wc.sizeLimit = writeCacheCfg.SizeLimit() wc.sizeLimit = writeCacheCfg.SizeLimit()
wc.noSync = writeCacheCfg.NoSync()
} }
// blobstor with substorages // blobstor with substorages
@ -258,6 +262,7 @@ func (a *applicationConfiguration) readConfig(c *config.Config) error {
case fstree.Type: case fstree.Type:
sub := fstreeconfig.From((*config.Config)(storagesCfg[i])) sub := fstreeconfig.From((*config.Config)(storagesCfg[i]))
sCfg.depth = sub.Depth() sCfg.depth = sub.Depth()
sCfg.noSync = sub.NoSync()
default: default:
return fmt.Errorf("invalid storage type: %s", storagesCfg[i].Type()) return fmt.Errorf("invalid storage type: %s", storagesCfg[i].Type())
} }
@ -337,8 +342,9 @@ type shared struct {
privateTokenStore sessionStorage privateTokenStore sessionStorage
persistate *state.PersistentStorage persistate *state.PersistentStorage
clientCache *cache.ClientCache clientCache *cache.ClientCache
localAddr network.AddressGroup bgClientCache *cache.ClientCache
localAddr network.AddressGroup
key *keys.PrivateKey key *keys.PrivateKey
binPublicKey []byte binPublicKey []byte
@ -483,6 +489,8 @@ type cfgObjectRoutines struct {
putRemoteCapacity int putRemoteCapacity int
replicatorPoolSize int
replication *ants.Pool replication *ants.Pool
} }
@ -551,18 +559,21 @@ func initCfg(appCfg *config.Config) *cfg {
apiVersion: version.Current(), apiVersion: version.Current(),
healthStatus: atomic.NewInt32(int32(control.HealthStatus_HEALTH_STATUS_UNDEFINED)), healthStatus: atomic.NewInt32(int32(control.HealthStatus_HEALTH_STATUS_UNDEFINED)),
} }
cacheOpts := cache.ClientCacheOpts{
DialTimeout: apiclientconfig.DialTimeout(appCfg),
StreamTimeout: apiclientconfig.StreamTimeout(appCfg),
Key: &key.PrivateKey,
AllowExternal: apiclientconfig.AllowExternal(appCfg),
}
c.shared = shared{ c.shared = shared{
key: key, key: key,
binPublicKey: key.PublicKey().Bytes(), binPublicKey: key.PublicKey().Bytes(),
localAddr: netAddr, localAddr: netAddr,
respSvc: response.NewService(response.WithNetworkState(netState)), respSvc: response.NewService(response.WithNetworkState(netState)),
clientCache: cache.NewSDKClientCache(cache.ClientCacheOpts{ clientCache: cache.NewSDKClientCache(cacheOpts),
DialTimeout: apiclientconfig.DialTimeout(appCfg), bgClientCache: cache.NewSDKClientCache(cacheOpts),
StreamTimeout: apiclientconfig.StreamTimeout(appCfg), persistate: persistate,
Key: &key.PrivateKey,
AllowExternal: apiclientconfig.AllowExternal(appCfg),
}),
persistate: persistate,
} }
c.cfgAccounting = cfgAccounting{ c.cfgAccounting = cfgAccounting{
scriptHash: contractsconfig.Balance(appCfg), scriptHash: contractsconfig.Balance(appCfg),
@ -600,7 +611,8 @@ func initCfg(appCfg *config.Config) *cfg {
netState.metrics = c.metricsCollector netState.metrics = c.metricsCollector
} }
c.onShutdown(c.clientCache.CloseAll) // clean up connections c.onShutdown(c.clientCache.CloseAll) // clean up connections
c.onShutdown(c.bgClientCache.CloseAll) // clean up connections
c.onShutdown(func() { _ = c.persistate.Close() }) c.onShutdown(func() { _ = c.persistate.Close() })
return c return c
@ -642,7 +654,7 @@ func (c *cfg) shardOpts() []shardOptsWithID {
writecache.WithSmallObjectSize(wcRead.smallObjectSize), writecache.WithSmallObjectSize(wcRead.smallObjectSize),
writecache.WithFlushWorkersCount(wcRead.flushWorkerCount), writecache.WithFlushWorkersCount(wcRead.flushWorkerCount),
writecache.WithMaxCacheSize(wcRead.sizeLimit), writecache.WithMaxCacheSize(wcRead.sizeLimit),
writecache.WithNoSync(wcRead.noSync),
writecache.WithLogger(c.log), writecache.WithLogger(c.log),
) )
} }
@ -681,7 +693,8 @@ func (c *cfg) shardOpts() []shardOptsWithID {
Storage: fstree.New( Storage: fstree.New(
fstree.WithPath(sRead.path), fstree.WithPath(sRead.path),
fstree.WithPerm(sRead.perm), fstree.WithPerm(sRead.perm),
fstree.WithDepth(sRead.depth)), fstree.WithDepth(sRead.depth),
fstree.WithNoSync(sRead.noSync)),
Policy: func(_ *objectSDK.Object, data []byte) bool { Policy: func(_ *objectSDK.Object, data []byte) bool {
return true return true
}, },
@ -811,7 +824,12 @@ func initObjectPool(cfg *config.Config) (pool cfgObjectRoutines) {
pool.putRemote, err = ants.NewPool(pool.putRemoteCapacity, optNonBlocking) pool.putRemote, err = ants.NewPool(pool.putRemoteCapacity, optNonBlocking)
fatalOnErr(err) fatalOnErr(err)
pool.replication, err = ants.NewPool(pool.putRemoteCapacity) pool.replicatorPoolSize = replicatorconfig.PoolSize(cfg)
if pool.replicatorPoolSize <= 0 {
pool.replicatorPoolSize = pool.putRemoteCapacity
}
pool.replication, err = ants.NewPool(pool.replicatorPoolSize)
fatalOnErr(err) fatalOnErr(err)
return pool return pool

View file

@ -68,6 +68,7 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, pl.MaxBatchSize(), 200) require.Equal(t, pl.MaxBatchSize(), 200)
require.Equal(t, false, wc.Enabled()) require.Equal(t, false, wc.Enabled())
require.Equal(t, true, wc.NoSync())
require.Equal(t, "tmp/0/cache", wc.Path()) require.Equal(t, "tmp/0/cache", wc.Path())
require.EqualValues(t, 16384, wc.SmallObjectSize()) require.EqualValues(t, 16384, wc.SmallObjectSize())
@ -95,7 +96,10 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, "tmp/0/blob", ss[1].Path()) require.Equal(t, "tmp/0/blob", ss[1].Path())
require.EqualValues(t, 0644, ss[1].Perm()) require.EqualValues(t, 0644, ss[1].Perm())
require.EqualValues(t, 5, fstreeconfig.From((*config.Config)(ss[1])).Depth())
fst := fstreeconfig.From((*config.Config)(ss[1]))
require.EqualValues(t, 5, fst.Depth())
require.Equal(t, false, fst.NoSync())
require.EqualValues(t, 150, gc.RemoverBatchSize()) require.EqualValues(t, 150, gc.RemoverBatchSize())
require.Equal(t, 2*time.Minute, gc.RemoverSleepInterval()) require.Equal(t, 2*time.Minute, gc.RemoverSleepInterval())
@ -110,6 +114,7 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, 100, pl.MaxBatchSize()) require.Equal(t, 100, pl.MaxBatchSize())
require.Equal(t, true, wc.Enabled()) require.Equal(t, true, wc.Enabled())
require.Equal(t, false, wc.NoSync())
require.Equal(t, "tmp/1/cache", wc.Path()) require.Equal(t, "tmp/1/cache", wc.Path())
require.EqualValues(t, 16384, wc.SmallObjectSize()) require.EqualValues(t, 16384, wc.SmallObjectSize())
@ -137,7 +142,10 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, "tmp/1/blob", ss[1].Path()) require.Equal(t, "tmp/1/blob", ss[1].Path())
require.EqualValues(t, 0644, ss[1].Perm()) require.EqualValues(t, 0644, ss[1].Perm())
require.EqualValues(t, 5, fstreeconfig.From((*config.Config)(ss[1])).Depth())
fst := fstreeconfig.From((*config.Config)(ss[1]))
require.EqualValues(t, 5, fst.Depth())
require.Equal(t, true, fst.NoSync())
require.EqualValues(t, 200, gc.RemoverBatchSize()) require.EqualValues(t, 200, gc.RemoverBatchSize())
require.Equal(t, 5*time.Minute, gc.RemoverSleepInterval()) require.Equal(t, 5*time.Minute, gc.RemoverSleepInterval())

View file

@@ -38,3 +38,10 @@ func (x *Config) Depth() uint64 {
	return DepthDefault
}

// NoSync returns the value of "no_sync" config parameter.
//
// Returns false if the value is not a boolean or is missing.
func (x *Config) NoSync() bool {
	return config.BoolSafe((*config.Config)(x), "no_sync")
}

View file

@@ -115,6 +115,13 @@ func (x *Config) SizeLimit() uint64 {
	return SizeLimitDefault
}

// NoSync returns the value of "no_sync" config parameter.
//
// Returns false if the value is not a boolean.
func (x *Config) NoSync() bool {
	return config.BoolSafe((*config.Config)(x), "no_sync")
}

// BoltDB returns config instance for querying bolt db specific parameters.
func (x *Config) BoltDB() *boltdbconfig.Config {
	return (*boltdbconfig.Config)(x)

View file

@@ -25,3 +25,9 @@ func PutTimeout(c *config.Config) time.Duration {
	return PutTimeoutDefault
}

// PoolSize returns the value of "pool_size" config parameter
// from "replicator" section.
func PoolSize(c *config.Config) int {
	return int(config.IntSafe(c.Sub(subsection), "pool_size"))
}

View file

@@ -15,12 +15,14 @@ func TestReplicatorSection(t *testing.T) {
		empty := configtest.EmptyConfig()

		require.Equal(t, replicatorconfig.PutTimeoutDefault, replicatorconfig.PutTimeout(empty))
		require.Equal(t, 0, replicatorconfig.PoolSize(empty))
	})

	const path = "../../../../config/example/node"

	var fileConfigTest = func(c *config.Config) {
		require.Equal(t, 15*time.Second, replicatorconfig.PutTimeout(c))
		require.Equal(t, 10, replicatorconfig.PoolSize(c))
	}

	configtest.ForEachFileType(path, fileConfigTest)

View file

@ -162,7 +162,7 @@ func initContainerService(c *cfg) {
RemoteWriterProvider: &remoteLoadAnnounceProvider{ RemoteWriterProvider: &remoteLoadAnnounceProvider{
key: &c.key.PrivateKey, key: &c.key.PrivateKey,
netmapKeys: c, netmapKeys: c,
clientCache: c.clientCache, clientCache: c.bgClientCache,
deadEndProvider: loadcontroller.SimpleWriterProvider(loadAccumulator), deadEndProvider: loadcontroller.SimpleWriterProvider(loadAccumulator),
}, },
Builder: routeBuilder, Builder: routeBuilder,

View file

@ -97,20 +97,6 @@ func (s *objectSvc) GetRangeHash(ctx context.Context, req *object.GetRangeHashRe
return s.get.GetRangeHash(ctx, req) return s.get.GetRangeHash(ctx, req)
} }
type localObjectInhumer struct {
storage *engine.StorageEngine
log *logger.Logger
}
func (r *localObjectInhumer) DeleteObjects(ts oid.Address, addr ...oid.Address) error {
var prm engine.InhumePrm
prm.WithTarget(ts, addr...)
_, err := r.storage.Inhume(prm)
return err
}
type delNetInfo struct { type delNetInfo struct {
netmap.State netmap.State
tsLifetime uint64 tsLifetime uint64
@ -185,10 +171,16 @@ func initObjectService(c *cfg) {
nmSrc: c.netMapSource, nmSrc: c.netMapSource,
netState: c.cfgNetmap.state, netState: c.cfgNetmap.state,
trustStorage: c.cfgReputation.localTrustStorage, trustStorage: c.cfgReputation.localTrustStorage,
basicConstructor: c.clientCache, basicConstructor: c.bgClientCache,
} }
coreConstructor := (*coreClientConstructor)(clientConstructor) coreConstructor := &coreClientConstructor{
log: c.log,
nmSrc: c.netMapSource,
netState: c.cfgNetmap.state,
trustStorage: c.cfgReputation.localTrustStorage,
basicConstructor: c.clientCache,
}
var irFetcher v2.InnerRingFetcher var irFetcher v2.InnerRingFetcher
@ -202,11 +194,6 @@ func initObjectService(c *cfg) {
} }
} }
objInhumer := &localObjectInhumer{
storage: ls,
log: c.log,
}
c.replicator = replicator.New( c.replicator = replicator.New(
replicator.WithLogger(c.log), replicator.WithLogger(c.log),
replicator.WithPutTimeout( replicator.WithPutTimeout(
@ -244,7 +231,7 @@ func initObjectService(c *cfg) {
) )
} }
}), }),
policer.WithMaxCapacity(c.cfgObject.pool.putRemoteCapacity), policer.WithMaxCapacity(c.cfgObject.pool.replicatorPoolSize),
policer.WithPool(c.cfgObject.pool.replication), policer.WithPool(c.cfgObject.pool.replication),
policer.WithNodeLoader(c), policer.WithNodeLoader(c),
) )
@ -254,7 +241,7 @@ func initObjectService(c *cfg) {
c.workers = append(c.workers, pol) c.workers = append(c.workers, pol)
var os putsvc.ObjectStorage = engineWithoutNotifications{ var os putsvc.ObjectStorage = engineWithoutNotifications{
e: ls, engine: ls,
} }
if c.cfgNotifications.enabled { if c.cfgNotifications.enabled {
@ -274,10 +261,6 @@ func initObjectService(c *cfg) {
putsvc.WithContainerSource(c.cfgObject.cnrSource), putsvc.WithContainerSource(c.cfgObject.cnrSource),
putsvc.WithNetworkMapSource(c.netMapSource), putsvc.WithNetworkMapSource(c.netMapSource),
putsvc.WithNetmapKeys(c), putsvc.WithNetmapKeys(c),
putsvc.WithFormatValidatorOpts(
objectCore.WithDeleteHandler(objInhumer),
objectCore.WithLocker(ls),
),
putsvc.WithNetworkState(c.cfgNetmap.state), putsvc.WithNetworkState(c.cfgNetmap.state),
putsvc.WithWorkerPools(c.cfgObject.pool.putRemote), putsvc.WithWorkerPools(c.cfgObject.pool.putRemote),
putsvc.WithLogger(c.log), putsvc.WithLogger(c.log),
@ -561,6 +544,14 @@ type engineWithNotifications struct {
defaultTopic string defaultTopic string
} }
func (e engineWithNotifications) Delete(tombstone oid.Address, toDelete []oid.ID) error {
return e.base.Delete(tombstone, toDelete)
}
func (e engineWithNotifications) Lock(locker oid.Address, toLock []oid.ID) error {
return e.base.Lock(locker, toLock)
}
func (e engineWithNotifications) Put(o *objectSDK.Object) error { func (e engineWithNotifications) Put(o *objectSDK.Object) error {
if err := e.base.Put(o); err != nil { if err := e.base.Put(o); err != nil {
return err return err
@ -583,9 +574,28 @@ func (e engineWithNotifications) Put(o *objectSDK.Object) error {
} }
type engineWithoutNotifications struct { type engineWithoutNotifications struct {
e *engine.StorageEngine engine *engine.StorageEngine
}
func (e engineWithoutNotifications) Delete(tombstone oid.Address, toDelete []oid.ID) error {
var prm engine.InhumePrm
addrs := make([]oid.Address, len(toDelete))
for i := range addrs {
addrs[i].SetContainer(tombstone.Container())
addrs[i].SetObject(toDelete[i])
}
prm.WithTarget(tombstone, addrs...)
_, err := e.engine.Inhume(prm)
return err
}
func (e engineWithoutNotifications) Lock(locker oid.Address, toLock []oid.ID) error {
return e.engine.Lock(locker.Container(), locker.Object(), toLock)
} }
func (e engineWithoutNotifications) Put(o *objectSDK.Object) error { func (e engineWithoutNotifications) Put(o *objectSDK.Object) error {
return engine.Put(e.e, o) return engine.Put(e.engine, o)
} }

View file

@ -95,7 +95,7 @@ func initReputationService(c *cfg) {
common.RemoteProviderPrm{ common.RemoteProviderPrm{
NetmapKeys: c, NetmapKeys: c,
DeadEndProvider: daughterStorageWriterProvider, DeadEndProvider: daughterStorageWriterProvider,
ClientCache: c.clientCache, ClientCache: c.bgClientCache,
WriterProvider: localreputation.NewRemoteProvider( WriterProvider: localreputation.NewRemoteProvider(
localreputation.RemoteProviderPrm{ localreputation.RemoteProviderPrm{
Key: &c.key.PrivateKey, Key: &c.key.PrivateKey,
@ -110,7 +110,7 @@ func initReputationService(c *cfg) {
common.RemoteProviderPrm{ common.RemoteProviderPrm{
NetmapKeys: c, NetmapKeys: c,
DeadEndProvider: consumerStorageWriterProvider, DeadEndProvider: consumerStorageWriterProvider,
ClientCache: c.clientCache, ClientCache: c.bgClientCache,
WriterProvider: intermediatereputation.NewRemoteProvider( WriterProvider: intermediatereputation.NewRemoteProvider(
intermediatereputation.RemoteProviderPrm{ intermediatereputation.RemoteProviderPrm{
Key: &c.key.PrivateKey, Key: &c.key.PrivateKey,

View file

@@ -73,6 +73,7 @@ func initTreeService(c *cfg) {
		ev := e.(containerEvent.DeleteSuccess)

		// This is executed asynchronously, so we don't care about the operation taking some time.
		c.log.Debug("removing all trees for container", zap.Stringer("cid", ev.ID))
		err := c.treeService.DropTree(context.Background(), ev.ID, "")
		if err != nil && !errors.Is(err, pilorama.ErrTreeNotFound) {
			// Ignore pilorama.ErrTreeNotFound but other errors, including shard.ErrReadOnly, should be logged.

View file

@@ -78,6 +78,7 @@ NEOFS_POLICER_HEAD_TIMEOUT=15s
# Replicator section
NEOFS_REPLICATOR_PUT_TIMEOUT=15s
NEOFS_REPLICATOR_POOL_SIZE=10
# Object service section
NEOFS_OBJECT_PUT_POOL_SIZE_REMOTE=100
@@ -92,6 +93,7 @@ NEOFS_STORAGE_SHARD_0_RESYNC_METABASE=false
NEOFS_STORAGE_SHARD_0_MODE=read-only
### Write cache config
NEOFS_STORAGE_SHARD_0_WRITECACHE_ENABLED=false
NEOFS_STORAGE_SHARD_0_WRITECACHE_NO_SYNC=true
NEOFS_STORAGE_SHARD_0_WRITECACHE_PATH=tmp/0/cache
NEOFS_STORAGE_SHARD_0_WRITECACHE_SMALL_OBJECT_SIZE=16384
NEOFS_STORAGE_SHARD_0_WRITECACHE_MAX_OBJECT_SIZE=134217728
@@ -160,6 +162,7 @@ NEOFS_STORAGE_SHARD_1_BLOBSTOR_0_OPENED_CACHE_CAPACITY=50
NEOFS_STORAGE_SHARD_1_BLOBSTOR_1_TYPE=fstree
NEOFS_STORAGE_SHARD_1_BLOBSTOR_1_PATH=tmp/1/blob
NEOFS_STORAGE_SHARD_1_BLOBSTOR_1_PERM=0644
NEOFS_STORAGE_SHARD_1_BLOBSTOR_1_NO_SYNC=true
NEOFS_STORAGE_SHARD_1_BLOBSTOR_1_DEPTH=5
### Pilorama config
NEOFS_STORAGE_SHARD_1_PILORAMA_PATH="tmp/1/blob/pilorama.db"

View file

@@ -121,6 +121,7 @@
    "head_timeout": "15s"
  },
  "replicator": {
    "pool_size": 10,
    "put_timeout": "15s"
  },
  "object": {
@@ -137,6 +138,7 @@
      "resync_metabase": false,
      "writecache": {
        "enabled": false,
        "no_sync": true,
        "path": "tmp/0/cache",
        "small_object_size": 16384,
        "max_object_size": 134217728,
@@ -214,6 +216,7 @@
          {
            "type": "fstree",
            "path": "tmp/1/blob",
            "no_sync": true,
            "perm": "0644",
            "depth": 5
          }

View file

@@ -101,6 +101,7 @@ policer:
replicator:
  put_timeout: 15s # timeout for the Replicator PUT remote operation
  pool_size: 10 # maximum amount of concurrent replications
object:
  put:
@@ -157,6 +158,7 @@ storage:
    writecache:
      enabled: false
      no_sync: true
      path: tmp/0/cache # write-cache root directory
      capacity: 3221225472 # approximate write-cache total size, bytes
@@ -198,6 +200,7 @@ storage:
          path: tmp/1/blob/blobovnicza
        - type: fstree
          path: tmp/1/blob # blobstor path
          no_sync: true
    pilorama:
      path: tmp/1/blob/pilorama.db

View file

@ -21,9 +21,12 @@ case "$1" in
USERNAME=ir USERNAME=ir
id -u neofs-ir >/dev/null 2>&1 || useradd -s /usr/sbin/nologin -d /var/lib/neofs/ir --system -M -U -c "NeoFS InnerRing node" neofs-ir id -u neofs-ir >/dev/null 2>&1 || useradd -s /usr/sbin/nologin -d /var/lib/neofs/ir --system -M -U -c "NeoFS InnerRing node" neofs-ir
if ! dpkg-statoverride --list /etc/neofs/$USERNAME >/dev/null; then if ! dpkg-statoverride --list /etc/neofs/$USERNAME >/dev/null; then
chown -f root:neofs-$USERNAME /etc/neofs/$USERNAME/* chown -f root:neofs-$USERNAME /etc/neofs/$USERNAME
chmod -f 0750 /etc/neofs/$USERNAME chmod -f 0750 /etc/neofs/$USERNAME
chown -f root:neofs-$USERNAME /etc/neofs/$USERNAME/config.yml
chown -f root:neofs-$USERNAME /etc/neofs/$USERNAME/control.yml
chmod -f 0640 /etc/neofs/$USERNAME/config.yml || true
chmod -f 0640 /etc/neofs/$USERNAME/control.yml || true
fi fi
USERDIR=$(getent passwd "neofs-$USERNAME" | cut -d: -f6) USERDIR=$(getent passwd "neofs-$USERNAME" | cut -d: -f6)
if ! dpkg-statoverride --list neofs-$USERDIR >/dev/null; then if ! dpkg-statoverride --list neofs-$USERDIR >/dev/null; then

View file

@@ -1,2 +1,3 @@
/etc/neofs/storage
/srv/neofs
/var/lib/neofs/storage

View file

@ -19,15 +19,23 @@ set -e
case "$1" in case "$1" in
configure) configure)
USERNAME=storage USERNAME=storage
id -u neofs-storage >/dev/null 2>&1 || useradd -s /usr/sbin/nologin -d /srv/neofs --system -M -U -c "NeoFS Storage node" neofs-storage id -u neofs-$USERNAME >/dev/null 2>&1 || useradd -s /usr/sbin/nologin -d /var/lib/neofs/$USERNAME --system -M -U -c "NeoFS Storage node" neofs-$USERNAME
if ! dpkg-statoverride --list /etc/neofs/$USERNAME >/dev/null; then if ! dpkg-statoverride --list /etc/neofs/$USERNAME >/dev/null; then
chown -f root:neofs-$USERNAME /etc/neofs/$USERNAME/* chown -f root:neofs-$USERNAME /etc/neofs/$USERNAME
chmod -f 0750 /etc/neofs/$USERNAME chmod -f 0750 /etc/neofs/$USERNAME
chown -f root:neofs-$USERNAME /etc/neofs/$USERNAME/config.yml
chown -f root:neofs-$USERNAME /etc/neofs/$USERNAME/control.yml
chmod -f 0640 /etc/neofs/$USERNAME/config.yml || true
chmod -f 0640 /etc/neofs/$USERNAME/control.yml || true
fi fi
USERDIR=$(getent passwd "neofs-$USERNAME" | cut -d: -f6) USERDIR=$(getent passwd "neofs-$USERNAME" | cut -d: -f6)
if ! dpkg-statoverride --list neofs-$USERDIR >/dev/null; then if ! dpkg-statoverride --list neofs-$USERDIR >/dev/null; then
chown -f neofs-$USERNAME: $USERDIR chown -f neofs-$USERNAME: $USERDIR
fi fi
USERDIR=/srv/neofs
if ! dpkg-statoverride --list neofs-$USERDIR >/dev/null; then
chown -f neofs-$USERNAME: $USERDIR
fi
;; ;;
abort-upgrade|abort-remove|abort-deconfigure) abort-upgrade|abort-remove|abort-deconfigure)

View file

@ -20,7 +20,7 @@ set -e
case "$1" in case "$1" in
purge) purge)
rm -rf /srv/neofs/* rm -rf /var/lib/neofs/storage/*
;; ;;
remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear) remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)

View file

@ -169,52 +169,60 @@ Contains configuration for each shard. Keys must be consecutive numbers starting
`default` subsection has the same format and specifies defaults for missing values. `default` subsection has the same format and specifies defaults for missing values.
The following table describes configuration for each shard. The following table describes configuration for each shard.
| Parameter | Type | Default value | Description | | Parameter | Type | Default value | Description |
|-------------------|---------------------------------------------|---------------|-----------------------------------------------------------------------------------------------------------| |-------------------------------------|---------------------------------------------|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `mode` | `string` | `read-write` | Shard Mode.<br/>Possible values: `read-write`, `read-only`, `degraded`, `degraded-read-only`, `disabled` | | `compress` | `bool` | `false` | Flag to enable compression. |
| `resync_metabase` | `bool` | `false` | Flag to enable metabase resync on start. | | `compression_exclude_content_types` | `[]string` | | List of content-types to disable compression for. Content-type is taken from `Content-Type` object attribute. Each element can contain a star `*` as a first (last) character, which matches any prefix (suffix). |
| `writecache` | [Writecache config](#writecache-subsection) | | Write-cache configuration. | | `mode` | `string` | `read-write` | Shard Mode.<br/>Possible values: `read-write`, `read-only`, `degraded`, `degraded-read-only`, `disabled` |
| `metabase` | [Metabase config](#metabase-subsection) | | Metabase configuration. | | `resync_metabase` | `bool` | `false` | Flag to enable metabase resync on start. |
| `blobstor` | [Blobstor config](#blobstor-subsection) | | Blobstor configuration. | | `writecache` | [Writecache config](#writecache-subsection) | | Write-cache configuration. |
| `gc` | [GC config](#gc-subsection) | | GC configuration. | | `metabase` | [Metabase config](#metabase-subsection) | | Metabase configuration. |
| `blobstor` | [Blobstor config](#blobstor-subsection) | | Blobstor configuration. |
| `small_object_size` | `size` | `1M` | Maximum size of an object stored in blobovnicza tree. |
| `gc` | [GC config](#gc-subsection) | | GC configuration. |
### `blobstor` subsection ### `blobstor` subsection
Contains a list of substorages each with it's own type.
Currently only 2 types are supported: `fstree` and `blobovnicza`.
```yaml ```yaml
blobstor: blobstor:
path: /path/to/blobstor - type: blobovnicza
perm: 0644 path: /path/to/blobstor
compress: true depth: 1
compression_exclude_content_types: width: 4
- audio/* - type: fstree
- video/* path: /path/to/blobstor/blobovnicza
depth: 5 perm: 0644
small_object_size: 102400 size: 4194304
blobovnicza: depth: 1
size: 4194304 width: 4
depth: 1 opened_cache_capacity: 50
width: 4
opened_cache_capacity: 50
``` ```
#### Common options for sub-storages
| Parameter | Type | Default value | Description |
|-----------|------|---------------|-------------|
| `path` | `string` | | Path to the root of the blobstor. |
| `perm` | file mode | `0660` | Default permission for created files and directories. |
| `compress` | `bool` | `false` | Flag to enable compression. |
| `compression_exclude_content_types` | `[]string` | | List of content-types to disable compression for. Content-type is taken from `Content-Type` object attribute. Each element can contain a star `*` as a first (last) character, which matches any prefix (suffix). |

#### `fstree` type options
| Parameter | Type | Default value | Description |
|-----------|------|---------------|-------------|
| `path` | `string` | | Path to the root of the blobstor. |
| `perm` | file mode | `0660` | Default permission for created files and directories. |
| `depth` | `int` | `4` | File-system tree depth. Must be in range 1..31. |

#### `blobovnicza` type options
| Parameter | Type | Default value | Description |
|-----------|------|---------------|-------------|
| `path` | `string` | | Path to the root of the blobstor. |
| `perm` | file mode | `0660` | Default permission for created files and directories. |
| `size` | `size` | `1 G` | Maximum size of a single blobovnicza. |
| `depth` | `int` | `2` | Blobovnicza tree depth. |
| `width` | `int` | `16` | Blobovnicza tree width. |
| `opened_cache_capacity` | `int` | `16` | Maximum number of simultaneously opened blobovniczas. |
### `gc` subsection
@ -396,11 +404,13 @@ Configuration for the Replicator service.
```yaml
replicator:
  put_timeout: 15s
  pool_size: 10
```
| Parameter     | Type       | Default value                          | Description                                 |
|---------------|------------|----------------------------------------|---------------------------------------------|
| `put_timeout` | `duration` | `5s`                                   | Timeout for performing the `PUT` operation. |
| `pool_size`   | `int`      | Equal to `object.put.pool_size_remote` | Maximum amount of concurrent replications.  |
# `object` section

Contains pool sizes for object operations with remote nodes.
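A minimal sketch of what such a section could look like is given below; only the `put.pool_size_remote` key is taken from the replicator table above, and the value is purely illustrative.

```yaml
object:
  put:
    # Maximum amount of concurrent PUT operations with remote nodes;
    # replicator.pool_size defaults to this value.
    pool_size_remote: 100
```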


@ -0,0 +1,13 @@
package object
import (
"github.com/nspcc-dev/neofs-sdk-go/object"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
)
// AddressWithType groups object address with its NeoFS
// object type.
type AddressWithType struct {
Address oid.Address
Type object.Type
}
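As a usage illustration only (the wrapper function below is hypothetical; `AddressOf` is the existing address helper from this package), the new type can be filled from an object header like this:

```go
package example

import (
	objectcore "github.com/nspcc-dev/neofs-node/pkg/core/object"
	objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
)

// addressWithTypeOf is a hypothetical helper that pairs an object's
// address with its type, as ListWithCursor now does for each entry.
func addressWithTypeOf(obj *objectSDK.Object) objectcore.AddressWithType {
	return objectcore.AddressWithType{
		Address: objectcore.AddressOf(obj),
		Type:    obj.Type(),
	}
}
```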


@ -26,11 +26,7 @@ type FormatValidator struct {
type FormatValidatorOption func(*cfg)

type cfg struct {
-	deleteHandler DeleteHandler
	netState netmap.State
-	locker Locker
}

// DeleteHandler is an interface of delete queue processor.
@ -173,130 +169,141 @@ func (v *FormatValidator) checkOwnerKey(id user.ID, key neofsecdsa.PublicKey) er
	return nil
}
// ContentMeta describes NeoFS meta information that brings object's payload if the object
// is one of:
// - object.TypeTombstone;
// - object.TypeStorageGroup;
// - object.TypeLock.
type ContentMeta struct {
typ object.Type
objs []oid.ID
}
// Type returns object's type.
func (i ContentMeta) Type() object.Type {
return i.typ
}
// Objects returns objects that the original object's payload affects:
// - inhumed objects, if the original object is a Tombstone;
// - locked objects, if the original object is a Lock;
// - members of a storage group, if the original object is a Storage group;
// - nil, if the original object is a Regular object.
func (i ContentMeta) Objects() []oid.ID {
return i.objs
}
// ValidateContent validates payload content according to the object type.
-func (v *FormatValidator) ValidateContent(o *object.Object) error {
+func (v *FormatValidator) ValidateContent(o *object.Object) (ContentMeta, error) {
+	meta := ContentMeta{
+		typ: o.Type(),
+	}
	switch o.Type() {
	case object.TypeRegular:
		// ignore regular objects, they do not need payload formatting
	case object.TypeTombstone:
		if len(o.Payload()) == 0 {
-			return fmt.Errorf("(%T) empty payload in tombstone", v)
+			return ContentMeta{}, fmt.Errorf("(%T) empty payload in tombstone", v)
		}
		tombstone := object.NewTombstone()
		if err := tombstone.Unmarshal(o.Payload()); err != nil {
-			return fmt.Errorf("(%T) could not unmarshal tombstone content: %w", v, err)
+			return ContentMeta{}, fmt.Errorf("(%T) could not unmarshal tombstone content: %w", v, err)
		}
		// check if the tombstone has the same expiration in the body and the header
		exp, err := expirationEpochAttribute(o)
		if err != nil {
-			return err
+			return ContentMeta{}, err
		}
		if exp != tombstone.ExpirationEpoch() {
-			return errTombstoneExpiration
+			return ContentMeta{}, errTombstoneExpiration
		}
		// mark all objects from the tombstone body as removed in the storage engine
-		cnr, ok := o.ContainerID()
+		_, ok := o.ContainerID()
		if !ok {
-			return errors.New("missing container ID")
+			return ContentMeta{}, errors.New("missing container ID")
		}
		idList := tombstone.Members()
-		addrList := make([]oid.Address, len(idList))
-		for i := range idList {
-			addrList[i].SetContainer(cnr)
-			addrList[i].SetObject(idList[i])
-		}
-		if v.deleteHandler != nil {
-			err = v.deleteHandler.DeleteObjects(AddressOf(o), addrList...)
-			if err != nil {
-				return fmt.Errorf("delete objects from %s object content: %w", o.Type(), err)
-			}
-		}
+		meta.objs = idList
	case object.TypeStorageGroup:
		if len(o.Payload()) == 0 {
-			return fmt.Errorf("(%T) empty payload in SG", v)
+			return ContentMeta{}, fmt.Errorf("(%T) empty payload in SG", v)
		}
		var sg storagegroup.StorageGroup
		if err := sg.Unmarshal(o.Payload()); err != nil {
-			return fmt.Errorf("(%T) could not unmarshal SG content: %w", v, err)
+			return ContentMeta{}, fmt.Errorf("(%T) could not unmarshal SG content: %w", v, err)
		}
		mm := sg.Members()
+		meta.objs = mm
		lenMM := len(mm)
		if lenMM == 0 {
-			return errEmptySGMembers
+			return ContentMeta{}, errEmptySGMembers
		}
		uniqueFilter := make(map[oid.ID]struct{}, lenMM)
		for i := 0; i < lenMM; i++ {
			if _, alreadySeen := uniqueFilter[mm[i]]; alreadySeen {
-				return fmt.Errorf("storage group contains non-unique member: %s", mm[i])
+				return ContentMeta{}, fmt.Errorf("storage group contains non-unique member: %s", mm[i])
			}
			uniqueFilter[mm[i]] = struct{}{}
		}
	case object.TypeLock:
		if len(o.Payload()) == 0 {
-			return errors.New("empty payload in lock")
+			return ContentMeta{}, errors.New("empty payload in lock")
		}
-		cnr, ok := o.ContainerID()
+		_, ok := o.ContainerID()
		if !ok {
-			return errors.New("missing container")
+			return ContentMeta{}, errors.New("missing container")
		}
-		id, ok := o.ID()
+		_, ok = o.ID()
		if !ok {
-			return errors.New("missing ID")
+			return ContentMeta{}, errors.New("missing ID")
		}
		// check that LOCK object has correct expiration epoch
		lockExp, err := expirationEpochAttribute(o)
		if err != nil {
-			return fmt.Errorf("lock object expiration epoch: %w", err)
+			return ContentMeta{}, fmt.Errorf("lock object expiration epoch: %w", err)
		}
		if currEpoch := v.netState.CurrentEpoch(); lockExp < currEpoch {
-			return fmt.Errorf("lock object expiration: %d; current: %d", lockExp, currEpoch)
+			return ContentMeta{}, fmt.Errorf("lock object expiration: %d; current: %d", lockExp, currEpoch)
		}
		var lock object.Lock
		err = lock.Unmarshal(o.Payload())
		if err != nil {
-			return fmt.Errorf("decode lock payload: %w", err)
+			return ContentMeta{}, fmt.Errorf("decode lock payload: %w", err)
		}
-		if v.locker != nil {
-			num := lock.NumberOfMembers()
-			if num == 0 {
-				return errors.New("missing locked members")
-			}
-			// mark all objects from lock list as locked in the storage engine
-			locklist := make([]oid.ID, num)
-			lock.ReadMembers(locklist)
-			err = v.locker.Lock(cnr, id, locklist)
-			if err != nil {
-				return fmt.Errorf("lock objects from %s object content: %w", o.Type(), err)
-			}
-		}
+		num := lock.NumberOfMembers()
+		if num == 0 {
+			return ContentMeta{}, errors.New("missing locked members")
+		}
+		meta.objs = make([]oid.ID, num)
+		lock.ReadMembers(meta.objs)
	default:
		// ignore all other object types, they do not need payload formatting
	}
-	return nil
+	return meta, nil
}
var errExpired = errors.New("object has expired") var errExpired = errors.New("object has expired")
@ -373,17 +380,3 @@ func WithNetState(netState netmap.State) FormatValidatorOption {
		c.netState = netState
	}
}
// WithDeleteHandler returns an option to set delete queue processor.
func WithDeleteHandler(v DeleteHandler) FormatValidatorOption {
return func(c *cfg) {
c.deleteHandler = v
}
}
// WithLocker returns an option to set object lock storage.
func WithLocker(v Locker) FormatValidatorOption {
return func(c *cfg) {
c.locker = v
}
}
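With the delete and lock callbacks removed from the validator, a caller is now expected to act on the returned `ContentMeta` itself. A minimal sketch of such a caller is shown below; the `deleteObjects` and `lockObjects` helpers are assumptions for illustration, not the actual service code.

```go
package example

import (
	"fmt"

	objectCore "github.com/nspcc-dev/neofs-node/pkg/core/object"
	objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
	oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
)

// processContent validates the payload and then reacts to the returned
// meta information instead of relying on validator callbacks.
func processContent(v *objectCore.FormatValidator, o *objectSDK.Object,
	deleteObjects func([]oid.ID) error, lockObjects func([]oid.ID) error) error {
	meta, err := v.ValidateContent(o)
	if err != nil {
		return fmt.Errorf("invalid payload content: %w", err)
	}

	switch meta.Type() {
	case objectSDK.TypeTombstone:
		return deleteObjects(meta.Objects()) // inhume tombstoned members
	case objectSDK.TypeLock:
		return lockObjects(meta.Objects()) // lock the listed members
	default:
		return nil
	}
}
```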


@ -2,8 +2,6 @@ package object
import ( import (
"crypto/ecdsa" "crypto/ecdsa"
"crypto/rand"
"crypto/sha256"
"strconv" "strconv"
"testing" "testing"
@ -19,15 +17,6 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
func testSHA(t *testing.T) [sha256.Size]byte {
cs := [sha256.Size]byte{}
_, err := rand.Read(cs[:])
require.NoError(t, err)
return cs
}
func blankValidObject(key *ecdsa.PrivateKey) *object.Object { func blankValidObject(key *ecdsa.PrivateKey) *object.Object {
var idOwner user.ID var idOwner user.ID
user.IDFromKey(&idOwner, key.PublicKey) user.IDFromKey(&idOwner, key.PublicKey)
@ -89,7 +78,8 @@ func TestFormatValidator_Validate(t *testing.T) {
user.IDFromKey(&idOwner, ownerKey.PrivateKey.PublicKey) user.IDFromKey(&idOwner, ownerKey.PrivateKey.PublicKey)
tok := sessiontest.Object() tok := sessiontest.Object()
tok.Sign(ownerKey.PrivateKey) err := tok.Sign(ownerKey.PrivateKey)
require.NoError(t, err)
obj := object.New() obj := object.New()
obj.SetContainerID(cidtest.ID()) obj.SetContainerID(cidtest.ID())
@ -114,7 +104,8 @@ func TestFormatValidator_Validate(t *testing.T) {
obj.SetType(object.TypeTombstone) obj.SetType(object.TypeTombstone)
obj.SetContainerID(cidtest.ID()) obj.SetContainerID(cidtest.ID())
require.Error(t, v.ValidateContent(obj)) // no tombstone content _, err := v.ValidateContent(obj)
require.Error(t, err) // no tombstone content
content := object.NewTombstone() content := object.NewTombstone()
content.SetMembers([]oid.ID{oidtest.ID()}) content.SetMembers([]oid.ID{oidtest.ID()})
@ -124,7 +115,8 @@ func TestFormatValidator_Validate(t *testing.T) {
obj.SetPayload(data) obj.SetPayload(data)
require.Error(t, v.ValidateContent(obj)) // no members in tombstone _, err = v.ValidateContent(obj)
require.Error(t, err) // no members in tombstone
content.SetMembers([]oid.ID{oidtest.ID()}) content.SetMembers([]oid.ID{oidtest.ID()})
@ -133,7 +125,8 @@ func TestFormatValidator_Validate(t *testing.T) {
obj.SetPayload(data) obj.SetPayload(data)
require.Error(t, v.ValidateContent(obj)) // no expiration epoch in tombstone _, err = v.ValidateContent(obj)
require.Error(t, err) // no expiration epoch in tombstone
var expirationAttribute object.Attribute var expirationAttribute object.Attribute
expirationAttribute.SetKey(objectV2.SysAttributeExpEpoch) expirationAttribute.SetKey(objectV2.SysAttributeExpEpoch)
@ -141,15 +134,23 @@ func TestFormatValidator_Validate(t *testing.T) {
obj.SetAttributes(expirationAttribute) obj.SetAttributes(expirationAttribute)
require.Error(t, v.ValidateContent(obj)) // different expiration values _, err = v.ValidateContent(obj)
require.Error(t, err) // different expiration values
id := oidtest.ID()
content.SetExpirationEpoch(10) content.SetExpirationEpoch(10)
content.SetMembers([]oid.ID{id})
data, err = content.Marshal() data, err = content.Marshal()
require.NoError(t, err) require.NoError(t, err)
obj.SetPayload(data) obj.SetPayload(data)
require.NoError(t, v.ValidateContent(obj)) // all good contentGot, err := v.ValidateContent(obj)
require.NoError(t, err) // all good
require.EqualValues(t, []oid.ID{id}, contentGot.Objects())
require.Equal(t, object.TypeTombstone, contentGot.Type())
}) })
t.Run("storage group content", func(t *testing.T) { t.Run("storage group content", func(t *testing.T) {
@ -157,7 +158,8 @@ func TestFormatValidator_Validate(t *testing.T) {
obj.SetType(object.TypeStorageGroup) obj.SetType(object.TypeStorageGroup)
t.Run("empty payload", func(t *testing.T) { t.Run("empty payload", func(t *testing.T) {
require.Error(t, v.ValidateContent(obj)) _, err := v.ValidateContent(obj)
require.Error(t, err)
}) })
var content storagegroup.StorageGroup var content storagegroup.StorageGroup
@ -168,7 +170,9 @@ func TestFormatValidator_Validate(t *testing.T) {
require.NoError(t, err) require.NoError(t, err)
obj.SetPayload(data) obj.SetPayload(data)
require.ErrorIs(t, v.ValidateContent(obj), errEmptySGMembers)
_, err = v.ValidateContent(obj)
require.ErrorIs(t, err, errEmptySGMembers)
}) })
t.Run("non-unique members", func(t *testing.T) { t.Run("non-unique members", func(t *testing.T) {
@ -180,17 +184,25 @@ func TestFormatValidator_Validate(t *testing.T) {
require.NoError(t, err) require.NoError(t, err)
obj.SetPayload(data) obj.SetPayload(data)
require.Error(t, v.ValidateContent(obj))
_, err = v.ValidateContent(obj)
require.Error(t, err)
}) })
t.Run("correct SG", func(t *testing.T) { t.Run("correct SG", func(t *testing.T) {
content.SetMembers([]oid.ID{oidtest.ID(), oidtest.ID()}) ids := []oid.ID{oidtest.ID(), oidtest.ID()}
content.SetMembers(ids)
data, err := content.Marshal() data, err := content.Marshal()
require.NoError(t, err) require.NoError(t, err)
obj.SetPayload(data) obj.SetPayload(data)
require.NoError(t, v.ValidateContent(obj))
content, err := v.ValidateContent(obj)
require.NoError(t, err)
require.EqualValues(t, ids, content.Objects())
require.Equal(t, object.TypeStorageGroup, content.Type())
}) })
}) })


@ -202,6 +202,7 @@ func (x Client) HeadObject(prm HeadObjectPrm) (*HeadObjectRes, error) {
	cliPrm.FromContainer(prm.objAddr.Container())
	cliPrm.ByID(prm.objAddr.Object())
	cliPrm.UseKey(*x.key)

	cliRes, err := x.c.ObjectHead(prm.ctx, cliPrm)
	if err == nil {


@ -47,7 +47,7 @@ type (
func newClientCache(p *clientCacheParams) *ClientCache {
	return &ClientCache{
		log:         p.Log,
-		cache:       cache.NewSDKClientCache(cache.ClientCacheOpts{AllowExternal: p.AllowExternal}),
+		cache:       cache.NewSDKClientCache(cache.ClientCacheOpts{AllowExternal: p.AllowExternal, Key: p.Key}),
		key:         p.Key,
		sgTimeout:   p.SGTimeout,
		headTimeout: p.HeadTimeout,


@ -1,9 +1,9 @@
package blobovnicza

import (
-	"errors"
	"fmt"

	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/util/logicerr"
	oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
	"go.etcd.io/bbolt"
)

@ -21,7 +21,7 @@ type PutRes struct {
// ErrFull is returned when trying to save an
// object to a filled blobovnicza.
-var ErrFull = errors.New("blobovnicza is full")
+var ErrFull = logicerr.New("blobovnicza is full")

// SetAddress sets the address of the saving object.
func (p *PutPrm) SetAddress(addr oid.Address) {
@ -62,7 +62,7 @@ func (b *Blobovnicza) Put(prm PutPrm) (PutRes, error) {
			// expected to happen:
			// - before initialization step (incorrect usage by design)
			// - if DB is corrupted (in future this case should be handled)
-			return fmt.Errorf("(%T) bucket for size %d not created", b, sz)
+			return logicerr.Wrap(fmt.Errorf("(%T) bucket for size %d not created", b, sz))
		}

		// save the object in bucket


@ -12,6 +12,7 @@ import (
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobovnicza" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobovnicza"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/common" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/common"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/compression" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/compression"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/util/logicerr"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id" oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
"go.uber.org/zap" "go.uber.org/zap"
) )
@ -157,7 +158,7 @@ func (b *Blobovniczas) updateAndGet(p string, old *uint64) (blobovniczaWithIndex
if ok { if ok {
if old != nil { if old != nil {
if active.ind == b.blzShallowWidth-1 { if active.ind == b.blzShallowWidth-1 {
return active, errors.New("no more Blobovniczas") return active, logicerr.New("no more Blobovniczas")
} else if active.ind != *old { } else if active.ind != *old {
// sort of CAS in order to control concurrent // sort of CAS in order to control concurrent
// updateActive calls // updateActive calls
@ -246,3 +247,8 @@ func (b *Blobovniczas) Path() string {
func (b *Blobovniczas) SetCompressor(cc *compression.Config) { func (b *Blobovniczas) SetCompressor(cc *compression.Config) {
b.compression = cc b.compression = cc
} }
// SetReportErrorFunc implements common.Storage.
func (b *Blobovniczas) SetReportErrorFunc(f func(string, error)) {
b.reportError = f
}


@ -1,165 +0,0 @@
package blobovniczatree
import (
"math"
"math/rand"
"os"
"strconv"
"strings"
"testing"
"github.com/nspcc-dev/neofs-node/pkg/core/object"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/common"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/internal/blobstortest"
"github.com/nspcc-dev/neofs-node/pkg/util/logger"
"github.com/nspcc-dev/neofs-node/pkg/util/logger/test"
objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
"github.com/stretchr/testify/require"
"go.uber.org/zap/zaptest"
)
func TestOpenedAndActive(t *testing.T) {
rand.Seed(1024)
l := test.NewLogger(true)
p, err := os.MkdirTemp("", "*")
require.NoError(t, err)
const (
width = 2
depth = 1
dbSize = 64 * 1024
)
b := NewBlobovniczaTree(
WithLogger(l),
WithObjectSizeLimit(2048),
WithBlobovniczaShallowWidth(width),
WithBlobovniczaShallowDepth(depth),
WithRootPath(p),
WithOpenedCacheSize(1),
WithBlobovniczaSize(dbSize))
defer os.RemoveAll(p)
require.NoError(t, b.Open(false))
require.NoError(t, b.Init())
type pair struct {
obj *objectSDK.Object
sid []byte
}
objects := make([]pair, 10)
for i := range objects {
var prm common.PutPrm
prm.Object = blobstortest.NewObject(1024)
prm.Address = object.AddressOf(prm.Object)
prm.RawData, err = prm.Object.Marshal()
require.NoError(t, err)
res, err := b.Put(prm)
require.NoError(t, err)
objects[i].obj = prm.Object
objects[i].sid = res.StorageID
}
for i := range objects {
var prm common.GetPrm
prm.Address = object.AddressOf(objects[i].obj)
// It is important to provide StorageID because
// we want to open a single blobovnicza, without other
// unpredictable cache effects.
prm.StorageID = objects[i].sid
_, err := b.Get(prm)
require.NoError(t, err)
}
require.NoError(t, b.Close())
}
func TestBlobovniczas(t *testing.T) {
rand.Seed(1024)
l := test.NewLogger(false)
p, err := os.MkdirTemp("", "*")
require.NoError(t, err)
var width, depth uint64 = 2, 2
// sizeLim must be big enough, to hold at least multiple pages.
// 32 KiB is the initial size after all by-size buckets are created.
var szLim uint64 = 32*1024 + 1
b := NewBlobovniczaTree(
WithLogger(l),
WithObjectSizeLimit(szLim),
WithBlobovniczaShallowWidth(width),
WithBlobovniczaShallowDepth(depth),
WithRootPath(p),
WithBlobovniczaSize(szLim))
defer os.RemoveAll(p)
require.NoError(t, b.Init())
objSz := uint64(szLim / 2)
addrList := make([]oid.Address, 0)
minFitObjNum := width * depth * szLim / objSz
for i := uint64(0); i < minFitObjNum; i++ {
obj := blobstortest.NewObject(objSz)
addr := object.AddressOf(obj)
addrList = append(addrList, addr)
d, err := obj.Marshal()
require.NoError(t, err)
// save object in blobovnicza
_, err = b.Put(common.PutPrm{Address: addr, RawData: d})
require.NoError(t, err, i)
}
}
func TestFillOrder(t *testing.T) {
for _, depth := range []uint64{1, 2, 4} {
t.Run("depth="+strconv.FormatUint(depth, 10), func(t *testing.T) {
testFillOrder(t, depth)
})
}
}
func testFillOrder(t *testing.T, depth uint64) {
p, err := os.MkdirTemp("", "*")
require.NoError(t, err)
b := NewBlobovniczaTree(
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithObjectSizeLimit(2048),
WithBlobovniczaShallowWidth(3),
WithBlobovniczaShallowDepth(depth),
WithRootPath(p),
WithBlobovniczaSize(1024*1024)) // big enough for some objects.
require.NoError(t, b.Open(false))
require.NoError(t, b.Init())
t.Cleanup(func() {
b.Close()
})
objCount := 10 /* ~ objects per blobovnicza */ *
int(math.Pow(3, float64(depth)-1)) /* blobovniczas on a previous to last level */
for i := 0; i < objCount; i++ {
obj := blobstortest.NewObject(1024)
addr := object.AddressOf(obj)
d, err := obj.Marshal()
require.NoError(t, err)
res, err := b.Put(common.PutPrm{Address: addr, RawData: d, DontCompress: true})
require.NoError(t, err, i)
require.True(t, strings.HasSuffix(string(res.StorageID), "/0"))
}
}


@ -3,9 +3,14 @@ package blobovniczatree
import (
	"errors"

	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/util/logicerr"
	apistatus "github.com/nspcc-dev/neofs-sdk-go/client/status"
)

func isErrOutOfRange(err error) bool {
	return errors.As(err, new(apistatus.ObjectOutOfRange))
}

func isLogical(err error) bool {
	return errors.As(err, new(logicerr.Logical))
}
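The shape of the `logicerr` helper used here is roughly the sketch below; the field and method names are assumptions inferred from how the package is used in this diff (`logicerr.New`, `logicerr.Wrap`, matching via `errors.As`), not the actual implementation.

```go
package logicerr

import "errors"

// Logical wraps an error to mark it as a logical (non-disk) error,
// so callers can skip error counters and mode switches for it.
type Logical struct {
	error
}

// New returns a new logical error with the given message.
func New(msg string) Logical {
	return Wrap(errors.New(msg))
}

// Wrap marks an arbitrary error as logical.
func Wrap(err error) Logical {
	return Logical{error: err}
}

// Unwrap exposes the underlying error for errors.Is/errors.As.
func (e Logical) Unwrap() error {
	return e.error
}
```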


@ -19,6 +19,8 @@ type cfg struct {
blzShallowWidth uint64 blzShallowWidth uint64
compression *compression.Config compression *compression.Config
blzOpts []blobovnicza.Option blzOpts []blobovnicza.Option
// reportError is the function called when encountering disk errors.
reportError func(string, error)
} }
type Option func(*cfg) type Option func(*cfg)
@ -37,6 +39,7 @@ func initConfig(c *cfg) {
openedCacheSize: defaultOpenedCacheSize, openedCacheSize: defaultOpenedCacheSize,
blzShallowDepth: defaultBlzShallowDepth, blzShallowDepth: defaultBlzShallowDepth,
blzShallowWidth: defaultBlzShallowWidth, blzShallowWidth: defaultBlzShallowWidth,
reportError: func(string, error) {},
} }
} }


@ -34,9 +34,12 @@ func (b *Blobovniczas) Put(prm common.PutPrm) (common.PutRes, error) {
	fn = func(p string) (bool, error) {
		active, err := b.getActivated(p)
		if err != nil {
-			b.log.Debug("could not get active blobovnicza",
-				zap.String("error", err.Error()),
-			)
+			if !isLogical(err) {
+				b.reportError("could not get active blobovnicza", err)
+			} else {
+				b.log.Debug("could not get active blobovnicza",
+					zap.String("error", err.Error()))
+			}

			return false, nil
		}

@ -49,10 +52,13 @@ func (b *Blobovniczas) Put(prm common.PutPrm) (common.PutRes, error) {
			)

			if err := b.updateActive(p, &active.ind); err != nil {
-				b.log.Debug("could not update active blobovnicza",
-					zap.String("level", p),
-					zap.String("error", err.Error()),
-				)
+				if !isLogical(err) {
+					b.reportError("could not update active blobovnicza", err)
+				} else {
+					b.log.Debug("could not update active blobovnicza",
+						zap.String("level", p),
+						zap.String("error", err.Error()))
+				}

				return false, nil
			}

@ -61,10 +67,13 @@ func (b *Blobovniczas) Put(prm common.PutPrm) (common.PutRes, error) {
			}

			allFull = false
-			b.log.Debug("could not put object to active blobovnicza",
-				zap.String("path", filepath.Join(p, u64ToHexString(active.ind))),
-				zap.String("error", err.Error()),
-			)
+			if !isLogical(err) {
+				b.reportError("could not put object to active blobovnicza", err)
+			} else {
+				b.log.Debug("could not put object to active blobovnicza",
+					zap.String("path", filepath.Join(p, u64ToHexString(active.ind))),
+					zap.String("error", err.Error()))
+			}

			return false, nil
		}


@ -105,3 +105,11 @@ func WithUncompressableContentTypes(values []string) Option {
c.compression.UncompressableContentTypes = values c.compression.UncompressableContentTypes = values
} }
} }
// SetReportErrorFunc allows to provide a function to be called on disk errors.
// This function MUST be called before Open.
func (b *BlobStor) SetReportErrorFunc(f func(string, error)) {
for i := range b.storage {
b.storage[i].Storage.SetReportErrorFunc(f)
}
}
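For context, this function is typically wired by the component that owns the blobstor so that disk errors reported from deep inside sub-storages feed the same error accounting. A rough sketch under that assumption (the surrounding owner and its fields are hypothetical):

```go
package example

import (
	"go.uber.org/zap"

	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
)

// wireBlobstorErrors forwards sub-storage disk errors into the owner's
// own error reporting; per the contract above it must run before Open.
func wireBlobstorErrors(b *blobstor.BlobStor, log *zap.Logger, report func(msg string, err error)) {
	b.SetReportErrorFunc(func(msg string, err error) {
		log.Warn(msg, zap.Error(err)) // local log for operators
		report(msg, err)              // bump the shard/engine error counter
	})
}
```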


@ -12,6 +12,9 @@ type Storage interface {
Type() string Type() string
Path() string Path() string
SetCompressor(cc *compression.Config) SetCompressor(cc *compression.Config)
// SetReportErrorFunc allows to provide a function to be called on disk errors.
// This function MUST be called before Open.
SetReportErrorFunc(f func(string, error))
Get(GetPrm) (GetRes, error) Get(GetPrm) (GetRes, error)
GetRange(GetRangePrm) (GetRangeRes, error) GetRange(GetRangePrm) (GetRangeRes, error)


@ -8,6 +8,9 @@ import (
) )
func (b *BlobStor) Delete(prm common.DeletePrm) (common.DeleteRes, error) { func (b *BlobStor) Delete(prm common.DeletePrm) (common.DeleteRes, error) {
b.modeMtx.RLock()
defer b.modeMtx.RUnlock()
if prm.StorageID == nil { if prm.StorageID == nil {
for i := range b.storage { for i := range b.storage {
res, err := b.storage[i].Storage.Delete(prm) res, err := b.storage[i].Storage.Delete(prm)


@ -10,6 +10,9 @@ import (
// Returns any error encountered that did not allow // Returns any error encountered that did not allow
// to completely check object existence. // to completely check object existence.
func (b *BlobStor) Exists(prm common.ExistsPrm) (common.ExistsRes, error) { func (b *BlobStor) Exists(prm common.ExistsPrm) (common.ExistsRes, error) {
b.modeMtx.RLock()
defer b.modeMtx.RUnlock()
// If there was an error during existence check below, // If there was an error during existence check below,
// it will be returned unless object was found in blobovnicza. // it will be returned unless object was found in blobovnicza.
// Otherwise, it is logged and the latest error is returned. // Otherwise, it is logged and the latest error is returned.


@ -28,6 +28,7 @@ type FSTree struct {
Depth uint64 Depth uint64
DirNameLen int DirNameLen int
noSync bool
readOnly bool readOnly bool
} }
@ -238,16 +239,39 @@ func (t *FSTree) Put(prm common.PutPrm) (common.PutRes, error) {
		prm.RawData = t.Compress(prm.RawData)
	}

-	err := os.WriteFile(p, prm.RawData, t.Permissions)
+	err := t.writeFile(p, prm.RawData)
	if err != nil {
		var pe *fs.PathError
		if errors.As(err, &pe) && pe.Err == syscall.ENOSPC {
			err = common.ErrNoSpace
		}
	}

	return common.PutRes{StorageID: []byte{}}, err
}
func (t *FSTree) writeFlags() int {
flags := os.O_WRONLY | os.O_CREATE | os.O_TRUNC
if t.noSync {
return flags
}
return flags | os.O_SYNC
}
// writeFile writes data to a file with path p.
// The code is copied from `os.WriteFile` with minor corrections for flags.
func (t *FSTree) writeFile(p string, data []byte) error {
f, err := os.OpenFile(p, t.writeFlags(), t.Permissions)
if err != nil {
return err
}
_, err = f.Write(data)
if err1 := f.Close(); err1 != nil && err == nil {
err = err1
}
return err
}
// PutStream puts executes handler on a file opened for write. // PutStream puts executes handler on a file opened for write.
func (t *FSTree) PutStream(addr oid.Address, handler func(*os.File) error) error { func (t *FSTree) PutStream(addr oid.Address, handler func(*os.File) error) error {
if t.readOnly { if t.readOnly {
@ -260,7 +284,7 @@ func (t *FSTree) PutStream(addr oid.Address, handler func(*os.File) error) error
		return err
	}

-	f, err := os.OpenFile(p, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, t.Permissions)
+	f, err := os.OpenFile(p, t.writeFlags(), t.Permissions)
	if err != nil {
		return err
	}
@ -355,3 +379,8 @@ func (t *FSTree) Path() string {
func (t *FSTree) SetCompressor(cc *compression.Config) { func (t *FSTree) SetCompressor(cc *compression.Config) {
t.Config = cc t.Config = cc
} }
// SetReportErrorFunc implements common.Storage.
func (t *FSTree) SetReportErrorFunc(f func(string, error)) {
// Do nothing, FSTree can encounter only one error which is returned.
}


@ -29,3 +29,9 @@ func WithPath(p string) Option {
f.RootPath = p f.RootPath = p
} }
} }
func WithNoSync(noSync bool) Option {
return func(f *FSTree) {
f.noSync = noSync
}
}
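A possible way to construct an FSTree that skips `O_SYNC` writes with this new option is sketched below; the `New` constructor is assumed to exist alongside the options shown in this file, so treat the snippet as illustrative only.

```go
package example

import (
	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/fstree"
)

// newFastFSTree builds an FSTree that trades write durability for speed:
// with WithNoSync(true), writeFile will not add os.O_SYNC to the flags.
func newFastFSTree(root string) *fstree.FSTree {
	return fstree.New(
		fstree.WithPath(root),
		fstree.WithNoSync(true),
	)
}
```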


@ -12,6 +12,9 @@ import (
// If the descriptor is present, only one sub-storage is tried, // If the descriptor is present, only one sub-storage is tried,
// Otherwise, each sub-storage is tried in order. // Otherwise, each sub-storage is tried in order.
func (b *BlobStor) Get(prm common.GetPrm) (common.GetRes, error) { func (b *BlobStor) Get(prm common.GetPrm) (common.GetRes, error) {
b.modeMtx.RLock()
defer b.modeMtx.RUnlock()
if prm.StorageID == nil { if prm.StorageID == nil {
for i := range b.storage { for i := range b.storage {
res, err := b.storage[i].Storage.Get(prm) res, err := b.storage[i].Storage.Get(prm)


@ -12,6 +12,9 @@ import (
// If the descriptor is present, only one sub-storage is tried, // If the descriptor is present, only one sub-storage is tried,
// Otherwise, each sub-storage is tried in order. // Otherwise, each sub-storage is tried in order.
func (b *BlobStor) GetRange(prm common.GetRangePrm) (common.GetRangeRes, error) { func (b *BlobStor) GetRange(prm common.GetRangePrm) (common.GetRangeRes, error) {
b.modeMtx.RLock()
defer b.modeMtx.RUnlock()
if prm.StorageID == nil { if prm.StorageID == nil {
for i := range b.storage { for i := range b.storage {
res, err := b.storage[i].Storage.GetRange(prm) res, err := b.storage[i].Storage.GetRange(prm)


@ -2,6 +2,9 @@ package blobstor
// DumpInfo returns information about blob stor. // DumpInfo returns information about blob stor.
func (b *BlobStor) DumpInfo() Info { func (b *BlobStor) DumpInfo() Info {
b.modeMtx.RLock()
defer b.modeMtx.RUnlock()
sub := make([]SubStorageInfo, len(b.storage)) sub := make([]SubStorageInfo, len(b.storage))
for i := range b.storage { for i := range b.storage {
sub[i].Path = b.storage[i].Storage.Path() sub[i].Path = b.storage[i].Storage.Path()


@ -16,6 +16,9 @@ import (
// //
// If handler returns an error, method wraps and returns it immediately. // If handler returns an error, method wraps and returns it immediately.
func (b *BlobStor) Iterate(prm common.IteratePrm) (common.IterateRes, error) { func (b *BlobStor) Iterate(prm common.IteratePrm) (common.IterateRes, error) {
b.modeMtx.RLock()
defer b.modeMtx.RUnlock()
for i := range b.storage { for i := range b.storage {
_, err := b.storage[i].Storage.Iterate(prm) _, err := b.storage[i].Storage.Iterate(prm)
if err != nil && !prm.IgnoreErrors { if err != nil && !prm.IgnoreErrors {


@ -22,6 +22,9 @@ var ErrNoPlaceFound = logicerr.New("couldn't find a place to store an object")
// Returns any error encountered that // Returns any error encountered that
// did not allow to completely save the object. // did not allow to completely save the object.
func (b *BlobStor) Put(prm common.PutPrm) (common.PutRes, error) { func (b *BlobStor) Put(prm common.PutPrm) (common.PutRes, error) {
b.modeMtx.RLock()
defer b.modeMtx.RUnlock()
if prm.Object != nil { if prm.Object != nil {
prm.Address = object.AddressOf(prm.Object) prm.Address = object.AddressOf(prm.Object)
} }


@ -92,6 +92,9 @@ func (e *StorageEngine) Init() error {
return errors.New("failed initialization on all shards") return errors.New("failed initialization on all shards")
} }
e.wg.Add(1)
go e.setModeLoop()
return nil return nil
} }
@ -100,8 +103,10 @@ var errClosed = errors.New("storage engine is closed")
// Close releases all StorageEngine's components. Waits for all data-related operations to complete. // Close releases all StorageEngine's components. Waits for all data-related operations to complete.
// After the call, all the next ones will fail. // After the call, all the next ones will fail.
// //
// The method is supposed to be called when the application exits. // The method MUST only be called when the application exits.
func (e *StorageEngine) Close() error { func (e *StorageEngine) Close() error {
close(e.closeCh)
defer e.wg.Wait()
return e.setBlockExecErr(errClosed) return e.setBlockExecErr(errClosed)
} }


@ -20,7 +20,6 @@ import (
func TestExecBlocks(t *testing.T) { func TestExecBlocks(t *testing.T) {
e := testNewEngineWithShardNum(t, 2) // number doesn't matter in this test, 2 is several but not many e := testNewEngineWithShardNum(t, 2) // number doesn't matter in this test, 2 is several but not many
t.Cleanup(func() { t.Cleanup(func() {
e.Close()
os.RemoveAll(t.Name()) os.RemoveAll(t.Name())
}) })


@ -23,6 +23,10 @@ type StorageEngine struct {
shardPools map[string]util.WorkerPool shardPools map[string]util.WorkerPool
closeCh chan struct{}
setModeCh chan setModeRequest
wg sync.WaitGroup
blockExec struct { blockExec struct {
mtx sync.RWMutex mtx sync.RWMutex
@ -35,33 +39,51 @@ type shardWrapper struct {
	*shard.Shard
}

-// reportShardError checks that the amount of errors doesn't exceed the configured threshold.
-// If it does, shard is set to read-only mode.
-func (e *StorageEngine) reportShardError(
-	sh hashedShard,
-	msg string,
-	err error,
-	fields ...zap.Field) {
-	if isLogical(err) {
-		e.log.Warn(msg,
-			zap.Stringer("shard_id", sh.ID()),
-			zap.String("error", err.Error()))
-		return
-	}
+type setModeRequest struct {
+	sh         *shard.Shard
+	errorCount uint32
+}
+
+// setModeLoop listens setModeCh to perform degraded mode transition of a single shard.
+// Instead of creating a worker per single shard we use a single goroutine.
+func (e *StorageEngine) setModeLoop() {
+	defer e.wg.Done()
+
+	var (
+		mtx        sync.RWMutex // protects inProgress map
+		inProgress = make(map[string]struct{})
+	)
+
+	for {
+		select {
+		case <-e.closeCh:
+			return
+		case r := <-e.setModeCh:
+			sid := r.sh.ID().String()
+
+			mtx.Lock()
+			_, ok := inProgress[sid]
+			if !ok {
+				inProgress[sid] = struct{}{}
+				go func() {
+					e.moveToDegraded(r.sh, r.errorCount)
+
+					mtx.Lock()
+					delete(inProgress, sid)
+					mtx.Unlock()
+				}()
+			}
+			mtx.Unlock()
+		}
+	}
+}
+
+func (e *StorageEngine) moveToDegraded(sh *shard.Shard, errCount uint32) {
+	e.mtx.RLock()
+	defer e.mtx.RUnlock()

	sid := sh.ID()
-	errCount := sh.errorCount.Inc()
-	e.log.Warn(msg, append([]zap.Field{
-		zap.Stringer("shard_id", sid),
-		zap.Uint32("error count", errCount),
-		zap.String("error", err.Error()),
-	}, fields...)...)
-	if e.errorsThreshold == 0 || errCount < e.errorsThreshold {
-		return
-	}
-	err = sh.SetMode(mode.DegradedReadOnly)
+	err := sh.SetMode(mode.DegradedReadOnly)
	if err != nil {
		e.log.Error("failed to move shard in degraded-read-only mode, moving to read-only",
			zap.Stringer("shard_id", sid),
@ -86,6 +108,85 @@ func (e *StorageEngine) reportShardError(
} }
} }
// reportShardErrorBackground increases shard error counter and logs an error.
// It is intended to be used from background workers and
// doesn't change shard mode because of possible deadlocks.
func (e *StorageEngine) reportShardErrorBackground(id string, msg string, err error) {
e.mtx.RLock()
sh, ok := e.shards[id]
e.mtx.RUnlock()
if !ok {
return
}
if isLogical(err) {
e.log.Warn(msg,
zap.Stringer("shard_id", sh.ID()),
zap.String("error", err.Error()))
return
}
errCount := sh.errorCount.Inc()
e.reportShardErrorWithFlags(sh.Shard, errCount, false, msg, err)
}
// reportShardError checks that the amount of errors doesn't exceed the configured threshold.
// If it does, shard is set to read-only mode.
func (e *StorageEngine) reportShardError(
sh hashedShard,
msg string,
err error,
fields ...zap.Field) {
if isLogical(err) {
e.log.Warn(msg,
zap.Stringer("shard_id", sh.ID()),
zap.String("error", err.Error()))
return
}
errCount := sh.errorCount.Inc()
e.reportShardErrorWithFlags(sh.Shard, errCount, true, msg, err, fields...)
}
func (e *StorageEngine) reportShardErrorWithFlags(
sh *shard.Shard,
errCount uint32,
block bool,
msg string,
err error,
fields ...zap.Field) {
sid := sh.ID()
e.log.Warn(msg, append([]zap.Field{
zap.Stringer("shard_id", sid),
zap.Uint32("error count", errCount),
zap.String("error", err.Error()),
}, fields...)...)
if e.errorsThreshold == 0 || errCount < e.errorsThreshold {
return
}
if block {
e.moveToDegraded(sh, errCount)
} else {
req := setModeRequest{
errorCount: errCount,
sh: sh,
}
select {
case e.setModeCh <- req:
default:
// For background workers we can have a lot of such errors,
// thus logging is done with DEBUG level.
e.log.Debug("mode change is in progress, ignoring set-mode request",
zap.Stringer("shard_id", sid),
zap.Uint32("error_count", errCount))
}
}
}
func isLogical(err error) bool { func isLogical(err error) bool {
return errors.As(err, &logicerr.Logical{}) return errors.As(err, &logicerr.Logical{})
} }
@ -124,6 +225,8 @@ func New(opts ...Option) *StorageEngine {
mtx: new(sync.RWMutex), mtx: new(sync.RWMutex),
shards: make(map[string]shardWrapper), shards: make(map[string]shardWrapper),
shardPools: make(map[string]util.WorkerPool), shardPools: make(map[string]util.WorkerPool),
closeCh: make(chan struct{}),
setModeCh: make(chan setModeRequest),
} }
} }


@ -200,8 +200,8 @@ func checkShardState(t *testing.T, e *StorageEngine, id *shard.ID, errCount uint
sh := e.shards[id.String()] sh := e.shards[id.String()]
e.mtx.RUnlock() e.mtx.RUnlock()
require.Equal(t, mode, sh.GetMode())
require.Equal(t, errCount, sh.errorCount.Load()) require.Equal(t, errCount, sh.errorCount.Load())
require.Equal(t, mode, sh.GetMode())
} }
// corruptSubDir makes random directory except "blobovnicza" in blobstor FSTree unreadable. // corruptSubDir makes random directory except "blobovnicza" in blobstor FSTree unreadable.


@ -136,8 +136,10 @@ mainLoop:
loop: loop:
for i := range lst { for i := range lst {
addr := lst[i].Address
var getPrm shard.GetPrm var getPrm shard.GetPrm
getPrm.SetAddress(lst[i]) getPrm.SetAddress(addr)
getRes, err := sh.Get(getPrm) getRes, err := sh.Get(getPrm)
if err != nil { if err != nil {
@ -147,18 +149,18 @@ mainLoop:
return res, err return res, err
} }
hrw.SortSliceByWeightValue(shards, weights, hrw.Hash([]byte(lst[i].EncodeToString()))) hrw.SortSliceByWeightValue(shards, weights, hrw.Hash([]byte(addr.EncodeToString())))
for j := range shards { for j := range shards {
if _, ok := shardMap[shards[j].ID().String()]; ok { if _, ok := shardMap[shards[j].ID().String()]; ok {
continue continue
} }
putDone, exists := e.putToShard(shards[j].hashedShard, j, shards[j].pool, lst[i], getRes.Object()) putDone, exists := e.putToShard(shards[j].hashedShard, j, shards[j].pool, addr, getRes.Object())
if putDone || exists { if putDone || exists {
if putDone { if putDone {
e.log.Debug("object is moved to another shard", e.log.Debug("object is moved to another shard",
zap.String("from", sidList[n]), zap.String("from", sidList[n]),
zap.Stringer("to", shards[j].ID()), zap.Stringer("to", shards[j].ID()),
zap.Stringer("addr", lst[i])) zap.Stringer("addr", addr))
res.count++ res.count++
} }
@ -172,7 +174,7 @@ mainLoop:
return res, fmt.Errorf("%w: %s", errPutShard, lst[i]) return res, fmt.Errorf("%w: %s", errPutShard, lst[i])
} }
err = prm.handler(lst[i], getRes.Object()) err = prm.handler(addr, getRes.Object())
if err != nil { if err != nil {
return res, err return res, err
} }


@ -9,6 +9,7 @@ import (
apistatus "github.com/nspcc-dev/neofs-sdk-go/client/status" apistatus "github.com/nspcc-dev/neofs-sdk-go/client/status"
objectSDK "github.com/nspcc-dev/neofs-sdk-go/object" objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id" oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
"go.uber.org/zap"
) )
// InhumePrm encapsulates parameters for inhume operation. // InhumePrm encapsulates parameters for inhume operation.
@ -79,6 +80,18 @@ func (e *StorageEngine) inhume(prm InhumePrm) (InhumeRes, error) {
} }
for i := range prm.addrs { for i := range prm.addrs {
if !prm.forceRemoval {
locked, err := e.isLocked(prm.addrs[i])
if err != nil {
e.log.Warn("removing an object without full locking check",
zap.Error(err),
zap.Stringer("addr", prm.addrs[i]))
} else if locked {
var lockedErr apistatus.ObjectLocked
return InhumeRes{}, lockedErr
}
}
if prm.tombstone != nil { if prm.tombstone != nil {
shPrm.SetTarget(*prm.tombstone, prm.addrs[i]) shPrm.SetTarget(*prm.tombstone, prm.addrs[i])
} else { } else {
@ -166,6 +179,29 @@ func (e *StorageEngine) inhumeAddr(addr oid.Address, prm shard.InhumePrm, checkE
return ok, retErr return ok, retErr
} }
func (e *StorageEngine) isLocked(addr oid.Address) (bool, error) {
var locked bool
var err error
var outErr error
e.iterateOverUnsortedShards(func(h hashedShard) (stop bool) {
locked, err = h.Shard.IsLocked(addr)
if err != nil {
e.reportShardError(h, "can't check object's lockers", err, zap.Stringer("addr", addr))
outErr = err
return false
}
return locked
})
if locked {
return locked, nil
}
return locked, outErr
}
func (e *StorageEngine) processExpiredTombstones(ctx context.Context, addrs []meta.TombstonedObject) { func (e *StorageEngine) processExpiredTombstones(ctx context.Context, addrs []meta.TombstonedObject) {
e.iterateOverUnsortedShards(func(sh hashedShard) (stop bool) { e.iterateOverUnsortedShards(func(sh hashedShard) (stop bool) {
sh.HandleExpiredTombstones(addrs) sh.HandleExpiredTombstones(addrs)


@ -3,8 +3,8 @@ package engine
import ( import (
"sort" "sort"
objectcore "github.com/nspcc-dev/neofs-node/pkg/core/object"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
) )
// ErrEndOfListing is returned from an object listing with cursor // ErrEndOfListing is returned from an object listing with cursor
@ -38,12 +38,12 @@ func (p *ListWithCursorPrm) WithCursor(cursor *Cursor) {
// ListWithCursorRes contains values returned from ListWithCursor operation. // ListWithCursorRes contains values returned from ListWithCursor operation.
type ListWithCursorRes struct { type ListWithCursorRes struct {
addrList []oid.Address addrList []objectcore.AddressWithType
cursor *Cursor cursor *Cursor
} }
// AddressList returns addresses selected by ListWithCursor operation. // AddressList returns addresses selected by ListWithCursor operation.
func (l ListWithCursorRes) AddressList() []oid.Address { func (l ListWithCursorRes) AddressList() []objectcore.AddressWithType {
return l.addrList return l.addrList
} }
@ -60,7 +60,7 @@ func (l ListWithCursorRes) Cursor() *Cursor {
// Returns ErrEndOfListing if there are no more objects to return or count // Returns ErrEndOfListing if there are no more objects to return or count
// parameter set to zero. // parameter set to zero.
func (e *StorageEngine) ListWithCursor(prm ListWithCursorPrm) (ListWithCursorRes, error) { func (e *StorageEngine) ListWithCursor(prm ListWithCursorPrm) (ListWithCursorRes, error) {
result := make([]oid.Address, 0, prm.count) result := make([]objectcore.AddressWithType, 0, prm.count)
// 1. Get available shards and sort them. // 1. Get available shards and sort them.
e.mtx.RLock() e.mtx.RLock()


@ -8,7 +8,7 @@ import (
"github.com/nspcc-dev/neofs-node/pkg/core/object" "github.com/nspcc-dev/neofs-node/pkg/core/object"
cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test" cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id" objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -24,8 +24,8 @@ func TestListWithCursor(t *testing.T) {
const total = 20 const total = 20
expected := make([]oid.Address, 0, total) expected := make([]object.AddressWithType, 0, total)
got := make([]oid.Address, 0, total) got := make([]object.AddressWithType, 0, total)
for i := 0; i < total; i++ { for i := 0; i < total; i++ {
containerID := cidtest.ID() containerID := cidtest.ID()
@ -36,7 +36,7 @@ func TestListWithCursor(t *testing.T) {
_, err := e.Put(prm) _, err := e.Put(prm)
require.NoError(t, err) require.NoError(t, err)
expected = append(expected, object.AddressOf(obj)) expected = append(expected, object.AddressWithType{Type: objectSDK.TypeRegular, Address: object.AddressOf(obj)})
} }
expected = sortAddresses(expected) expected = sortAddresses(expected)
@ -68,9 +68,9 @@ func TestListWithCursor(t *testing.T) {
require.Equal(t, expected, got) require.Equal(t, expected, got)
} }
func sortAddresses(addr []oid.Address) []oid.Address { func sortAddresses(addrWithType []object.AddressWithType) []object.AddressWithType {
sort.Slice(addr, func(i, j int) bool { sort.Slice(addrWithType, func(i, j int) bool {
return addr[i].EncodeToString() < addr[j].EncodeToString() return addrWithType[i].Address.EncodeToString() < addrWithType[j].Address.EncodeToString()
}) })
return addr return addrWithType
} }


@ -87,6 +87,7 @@ func (e *StorageEngine) createShard(opts []shard.Option) (*shard.Shard, error) {
shard.WithExpiredTombstonesCallback(e.processExpiredTombstones), shard.WithExpiredTombstonesCallback(e.processExpiredTombstones),
shard.WithExpiredLocksCallback(e.processExpiredLocks), shard.WithExpiredLocksCallback(e.processExpiredLocks),
shard.WithDeletedLockCallback(e.processDeletedLocks), shard.WithDeletedLockCallback(e.processDeletedLocks),
shard.WithReportErrorFunc(e.reportShardErrorBackground),
)...) )...)
if err := sh.UpdateID(); err != nil { if err := sh.UpdateID(); err != nil {


@ -13,62 +13,60 @@ var _ pilorama.Forest = (*StorageEngine)(nil)
// TreeMove implements the pilorama.Forest interface. // TreeMove implements the pilorama.Forest interface.
func (e *StorageEngine) TreeMove(d pilorama.CIDDescriptor, treeID string, m *pilorama.Move) (*pilorama.LogMove, error) { func (e *StorageEngine) TreeMove(d pilorama.CIDDescriptor, treeID string, m *pilorama.Move) (*pilorama.LogMove, error) {
var err error index, lst, err := e.getTreeShard(d.CID, treeID)
var lm *pilorama.LogMove if err != nil && !errors.Is(err, pilorama.ErrTreeNotFound) {
for _, sh := range e.sortShardsByWeight(d.CID) { return nil, err
lm, err = sh.TreeMove(d, treeID, m) }
if err != nil {
if errors.Is(err, shard.ErrReadOnlyMode) || err == shard.ErrPiloramaDisabled { lm, err := lst[index].TreeMove(d, treeID, m)
return nil, err if err != nil {
} if !errors.Is(err, shard.ErrReadOnlyMode) && err != shard.ErrPiloramaDisabled {
e.reportShardError(sh, "can't perform `TreeMove`", err, e.reportShardError(lst[index], "can't perform `TreeMove`", err,
zap.Stringer("cid", d.CID), zap.Stringer("cid", d.CID),
zap.String("tree", treeID)) zap.String("tree", treeID))
continue
} }
return lm, nil
return nil, err
} }
return nil, err return lm, nil
} }
// TreeAddByPath implements the pilorama.Forest interface. // TreeAddByPath implements the pilorama.Forest interface.
func (e *StorageEngine) TreeAddByPath(d pilorama.CIDDescriptor, treeID string, attr string, path []string, m []pilorama.KeyValue) ([]pilorama.LogMove, error) { func (e *StorageEngine) TreeAddByPath(d pilorama.CIDDescriptor, treeID string, attr string, path []string, m []pilorama.KeyValue) ([]pilorama.LogMove, error) {
var err error index, lst, err := e.getTreeShard(d.CID, treeID)
var lm []pilorama.LogMove if err != nil && !errors.Is(err, pilorama.ErrTreeNotFound) {
for _, sh := range e.sortShardsByWeight(d.CID) { return nil, err
lm, err = sh.TreeAddByPath(d, treeID, attr, path, m) }
if err != nil {
if errors.Is(err, shard.ErrReadOnlyMode) || err == shard.ErrPiloramaDisabled { lm, err := lst[index].TreeAddByPath(d, treeID, attr, path, m)
return nil, err if err != nil {
} if !errors.Is(err, shard.ErrReadOnlyMode) && err != shard.ErrPiloramaDisabled {
e.reportShardError(sh, "can't perform `TreeAddByPath`", err, e.reportShardError(lst[index], "can't perform `TreeAddByPath`", err,
zap.Stringer("cid", d.CID), zap.Stringer("cid", d.CID),
zap.String("tree", treeID)) zap.String("tree", treeID))
continue
} }
return lm, nil return nil, err
} }
return nil, err return lm, nil
} }
// TreeApply implements the pilorama.Forest interface. // TreeApply implements the pilorama.Forest interface.
func (e *StorageEngine) TreeApply(d pilorama.CIDDescriptor, treeID string, m *pilorama.Move) error { func (e *StorageEngine) TreeApply(d pilorama.CIDDescriptor, treeID string, m *pilorama.Move) error {
var err error index, lst, err := e.getTreeShard(d.CID, treeID)
for _, sh := range e.sortShardsByWeight(d.CID) { if err != nil && !errors.Is(err, pilorama.ErrTreeNotFound) {
err = sh.TreeApply(d, treeID, m) return err
if err != nil {
if errors.Is(err, shard.ErrReadOnlyMode) || err == shard.ErrPiloramaDisabled {
return err
}
e.reportShardError(sh, "can't perform `TreeApply`", err,
zap.Stringer("cid", d.CID),
zap.String("tree", treeID))
continue
}
return nil
} }
return err err = lst[index].TreeApply(d, treeID, m)
if err != nil {
if !errors.Is(err, shard.ErrReadOnlyMode) && err != shard.ErrPiloramaDisabled {
e.reportShardError(lst[index], "can't perform `TreeApply`", err,
zap.Stringer("cid", d.CID),
zap.String("tree", treeID))
}
return err
}
return nil
} }
// TreeGetByPath implements the pilorama.Forest interface. // TreeGetByPath implements the pilorama.Forest interface.
@ -205,3 +203,27 @@ func (e *StorageEngine) TreeList(cid cidSDK.ID) ([]string, error) {
return resIDs, nil return resIDs, nil
} }
// TreeExists implements the pilorama.Forest interface.
func (e *StorageEngine) TreeExists(cid cidSDK.ID, treeID string) (bool, error) {
_, _, err := e.getTreeShard(cid, treeID)
if errors.Is(err, pilorama.ErrTreeNotFound) {
return false, nil
}
return err == nil, err
}
func (e *StorageEngine) getTreeShard(cid cidSDK.ID, treeID string) (int, []hashedShard, error) {
lst := e.sortShardsByWeight(cid)
for i, sh := range lst {
exists, err := sh.TreeExists(cid, treeID)
if err != nil {
return 0, nil, err
}
if exists {
return i, lst, err
}
}
return 0, lst, pilorama.ErrTreeNotFound
}
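To illustrate the new lookup flow, a caller can now probe for the tree before applying operations instead of spraying them across all shards. The service-side caller below is hypothetical; only the `Forest` methods and errors it uses come from this diff.

```go
package example

import (
	"fmt"

	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
	cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
)

// applyIfExists checks tree existence first and only then applies the move;
// forest is any pilorama.Forest implementation, e.g. the storage engine.
func applyIfExists(forest pilorama.Forest, cnr cidSDK.ID, treeID string,
	d pilorama.CIDDescriptor, m *pilorama.Move) error {
	ok, err := forest.TreeExists(cnr, treeID)
	if err != nil {
		return fmt.Errorf("check tree existence: %w", err)
	}
	if !ok {
		return pilorama.ErrTreeNotFound
	}
	return forest.TreeApply(d, treeID, m)
}
```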


@ -42,6 +42,13 @@ func (db *DB) containers(tx *bbolt.Tx) ([]cid.ID, error) {
} }
func (db *DB) ContainerSize(id cid.ID) (size uint64, err error) { func (db *DB) ContainerSize(id cid.ID) (size uint64, err error) {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return 0, ErrDegradedMode
}
err = db.boltDB.View(func(tx *bbolt.Tx) error { err = db.boltDB.View(func(tx *bbolt.Tx) error {
size, err = db.containerSize(tx, id) size, err = db.containerSize(tx, id)


@ -15,6 +15,9 @@ import (
// ErrDegradedMode is returned when metabase is in a degraded mode. // ErrDegradedMode is returned when metabase is in a degraded mode.
var ErrDegradedMode = logicerr.New("metabase is in a degraded mode") var ErrDegradedMode = logicerr.New("metabase is in a degraded mode")
// ErrReadOnlyMode is returned when metabase is in a read-only mode.
var ErrReadOnlyMode = logicerr.New("metabase is in a read-only mode")
// Open boltDB instance for metabase. // Open boltDB instance for metabase.
func (db *DB) Open(readOnly bool) error { func (db *DB) Open(readOnly bool) error {
err := util.MkdirAllX(filepath.Dir(db.info.Path), db.info.Permission) err := util.MkdirAllX(filepath.Dir(db.info.Path), db.info.Permission)
@ -81,6 +84,13 @@ func (db *DB) Init() error {
// Reset resets metabase. Works similar to Init but cleans up all static buckets and // Reset resets metabase. Works similar to Init but cleans up all static buckets and
// removes all dynamic (CID-dependent) ones in non-blank BoltDB instances. // removes all dynamic (CID-dependent) ones in non-blank BoltDB instances.
func (db *DB) Reset() error { func (db *DB) Reset() error {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
}
return db.init(true) return db.init(true)
} }
@ -147,6 +157,15 @@ func (db *DB) init(reset bool) error {
// SyncCounters forces to synchronize the object counters. // SyncCounters forces to synchronize the object counters.
func (db *DB) SyncCounters() error { func (db *DB) SyncCounters() error {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
} else if db.mode.ReadOnly() {
return ErrReadOnlyMode
}
return db.boltDB.Update(func(tx *bbolt.Tx) error { return db.boltDB.Update(func(tx *bbolt.Tx) error {
return syncCounter(tx, true) return syncCounter(tx, true)
}) })


@ -43,6 +43,13 @@ func (o ObjectCounters) Phy() uint64 {
// Returns only the errors that do not allow reading counter // Returns only the errors that do not allow reading counter
// in Bolt database. // in Bolt database.
func (db *DB) ObjectCounters() (cc ObjectCounters, err error) { func (db *DB) ObjectCounters() (cc ObjectCounters, err error) {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ObjectCounters{}, ErrDegradedMode
}
err = db.boltDB.View(func(tx *bbolt.Tx) error { err = db.boltDB.View(func(tx *bbolt.Tx) error {
b := tx.Bucket(shardInfoBucket) b := tx.Bucket(shardInfoBucket)
if b != nil { if b != nil {


@ -57,6 +57,12 @@ func (db *DB) Delete(prm DeletePrm) (DeleteRes, error) {
db.modeMtx.RLock() db.modeMtx.RLock()
defer db.modeMtx.RUnlock() defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return DeleteRes{}, ErrDegradedMode
} else if db.mode.ReadOnly() {
return DeleteRes{}, ErrReadOnlyMode
}
var rawRemoved uint64 var rawRemoved uint64
var availableRemoved uint64 var availableRemoved uint64
var err error var err error
@ -157,7 +163,10 @@ func (db *DB) delete(tx *bbolt.Tx, addr oid.Address, refCounter referenceCounter
	// unmarshal object, work only with physically stored (raw == true) objects
	obj, err := db.get(tx, addr, key, false, true, currEpoch)
	if err != nil {
-		if errors.As(err, new(apistatus.ObjectNotFound)) {
+		var siErr *objectSDK.SplitInfoError
+		var notFoundErr apistatus.ObjectNotFound
+
+		if errors.As(err, &notFoundErr) || errors.As(err, &siErr) {
			return false, false, nil
		}


@ -39,9 +39,9 @@ func TestDB_Delete(t *testing.T) {
	require.NoError(t, err)
	require.Len(t, l, 1)

-	// try to remove parent unsuccessfully
+	// try to remove parent, should be no-op, error-free
	err = metaDelete(db, object.AddressOf(parent))
-	require.Error(t, err)
+	require.NoError(t, err)

	// inhume parent and child so they will be on graveyard
	ts := generateObjectWithCID(t, cnr)


@ -44,6 +44,10 @@ func (db *DB) Exists(prm ExistsPrm) (res ExistsRes, err error) {
db.modeMtx.RLock() db.modeMtx.RLock()
defer db.modeMtx.RUnlock() defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
}
currEpoch := db.epochState.CurrentEpoch() currEpoch := db.epochState.CurrentEpoch()
err = db.boltDB.View(func(tx *bbolt.Tx) error { err = db.boltDB.View(func(tx *bbolt.Tx) error {

View file

@ -47,8 +47,12 @@ func (r GetRes) Header() *objectSDK.Object {
// Returns an error of type apistatus.ObjectAlreadyRemoved if object has been placed in graveyard. // Returns an error of type apistatus.ObjectAlreadyRemoved if object has been placed in graveyard.
// Returns the object.ErrObjectIsExpired if the object is presented but already expired. // Returns the object.ErrObjectIsExpired if the object is presented but already expired.
func (db *DB) Get(prm GetPrm) (res GetRes, err error) { func (db *DB) Get(prm GetPrm) (res GetRes, err error) {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
}
currEpoch := db.epochState.CurrentEpoch() currEpoch := db.epochState.CurrentEpoch()

View file

@ -58,6 +58,13 @@ func (g *GarbageIterationPrm) SetOffset(offset oid.Address) {
// If h returns ErrInterruptIterator, nil returns immediately. // If h returns ErrInterruptIterator, nil returns immediately.
// Returns other errors of h directly. // Returns other errors of h directly.
func (db *DB) IterateOverGarbage(p GarbageIterationPrm) error { func (db *DB) IterateOverGarbage(p GarbageIterationPrm) error {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
}
return db.boltDB.View(func(tx *bbolt.Tx) error { return db.boltDB.View(func(tx *bbolt.Tx) error {
return db.iterateDeletedObj(tx, gcHandler{p.h}, p.offset) return db.iterateDeletedObj(tx, gcHandler{p.h}, p.offset)
}) })
@ -118,6 +125,13 @@ func (g *GraveyardIterationPrm) SetOffset(offset oid.Address) {
// If h returns ErrInterruptIterator, nil returns immediately. // If h returns ErrInterruptIterator, nil returns immediately.
// Returns other errors of h directly. // Returns other errors of h directly.
func (db *DB) IterateOverGraveyard(p GraveyardIterationPrm) error { func (db *DB) IterateOverGraveyard(p GraveyardIterationPrm) error {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
}
return db.boltDB.View(func(tx *bbolt.Tx) error { return db.boltDB.View(func(tx *bbolt.Tx) error {
return db.iterateDeletedObj(tx, graveyardHandler{p.h}, p.offset) return db.iterateDeletedObj(tx, graveyardHandler{p.h}, p.offset)
}) })
@ -218,6 +232,15 @@ func graveFromKV(k, v []byte) (res TombstonedObject, err error) {
// //
// Returns any error appeared during deletion process. // Returns any error appeared during deletion process.
func (db *DB) DropGraves(tss []TombstonedObject) error { func (db *DB) DropGraves(tss []TombstonedObject) error {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
} else if db.mode.ReadOnly() {
return ErrReadOnlyMode
}
buf := make([]byte, addressKeySize) buf := make([]byte, addressKeySize)
return db.boltDB.Update(func(tx *bbolt.Tx) error { return db.boltDB.Update(func(tx *bbolt.Tx) error {

View file

@ -15,5 +15,8 @@ type Info struct {
// DumpInfo returns information about the DB. // DumpInfo returns information about the DB.
func (db *DB) DumpInfo() Info { func (db *DB) DumpInfo() Info {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
return db.info return db.info
} }

View file

@ -95,6 +95,8 @@ func (db *DB) Inhume(prm InhumePrm) (res InhumeRes, err error) {
if db.mode.NoMetabase() { if db.mode.NoMetabase() {
return InhumeRes{}, ErrDegradedMode return InhumeRes{}, ErrDegradedMode
} else if db.mode.ReadOnly() {
return InhumeRes{}, ErrReadOnlyMode
} }
currEpoch := db.epochState.CurrentEpoch() currEpoch := db.epochState.CurrentEpoch()

View file

@ -44,6 +44,13 @@ var ErrInterruptIterator = logicerr.New("iterator is interrupted")
// If h returns ErrInterruptIterator, nil returns immediately. // If h returns ErrInterruptIterator, nil returns immediately.
// Returns other errors of h directly. // Returns other errors of h directly.
func (db *DB) IterateExpired(epoch uint64, h ExpiredObjectHandler) error { func (db *DB) IterateExpired(epoch uint64, h ExpiredObjectHandler) error {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
}
return db.boltDB.View(func(tx *bbolt.Tx) error { return db.boltDB.View(func(tx *bbolt.Tx) error {
return db.iterateExpired(tx, epoch, h) return db.iterateExpired(tx, epoch, h)
}) })
@ -119,6 +126,13 @@ func (db *DB) iterateExpired(tx *bbolt.Tx, epoch uint64, h ExpiredObjectHandler)
// //
// Does not modify tss. // Does not modify tss.
func (db *DB) IterateCoveredByTombstones(tss map[string]oid.Address, h func(oid.Address) error) error { func (db *DB) IterateCoveredByTombstones(tss map[string]oid.Address, h func(oid.Address) error) error {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
}
return db.boltDB.View(func(tx *bbolt.Tx) error { return db.boltDB.View(func(tx *bbolt.Tx) error {
return db.iterateCoveredByTombstones(tx, tss, h) return db.iterateCoveredByTombstones(tx, tss, h)
}) })

View file

@ -1,8 +1,10 @@
package meta package meta
import ( import (
objectcore "github.com/nspcc-dev/neofs-node/pkg/core/object"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/util/logicerr" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/util/logicerr"
cid "github.com/nspcc-dev/neofs-sdk-go/container/id" cid "github.com/nspcc-dev/neofs-sdk-go/container/id"
"github.com/nspcc-dev/neofs-sdk-go/object"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id" oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
"go.etcd.io/bbolt" "go.etcd.io/bbolt"
) )
@ -38,12 +40,12 @@ func (l *ListPrm) SetCursor(cursor *Cursor) {
// ListRes contains values returned from ListWithCursor operation. // ListRes contains values returned from ListWithCursor operation.
type ListRes struct {
	addrList []objectcore.AddressWithType
	cursor   *Cursor
}

// AddressList returns addresses selected by ListWithCursor operation.
func (l ListRes) AddressList() []objectcore.AddressWithType {
	return l.addrList
}
@ -62,7 +64,11 @@ func (db *DB) ListWithCursor(prm ListPrm) (res ListRes, err error) {
db.modeMtx.RLock() db.modeMtx.RLock()
defer db.modeMtx.RUnlock() defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
}
result := make([]objectcore.AddressWithType, 0, prm.count)
err = db.boltDB.View(func(tx *bbolt.Tx) error { err = db.boltDB.View(func(tx *bbolt.Tx) error {
res.addrList, res.cursor, err = db.listWithCursor(tx, result, prm.count, prm.cursor) res.addrList, res.cursor, err = db.listWithCursor(tx, result, prm.count, prm.cursor)
@ -72,7 +78,7 @@ func (db *DB) ListWithCursor(prm ListPrm) (res ListRes, err error) {
return res, err return res, err
} }
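Since ListWithCursor now returns addresses together with object types, a caller-side sketch may help; the prm setters and the AddressWithType fields are taken from this diff, everything else (the db handle, cursor handling) is assumed:

var prm meta.ListPrm
prm.SetCount(100)
prm.SetCursor(nil) // nil cursor starts listing from the beginning

res, err := db.ListWithCursor(prm)
if err != nil {
	return err // e.g. ErrDegradedMode
}
for _, a := range res.AddressList() {
	_ = a.Address // oid.Address of the object
	_ = a.Type    // object.Type: regular, tombstone, storage group or lock
}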
func (db *DB) listWithCursor(tx *bbolt.Tx, result []objectcore.AddressWithType, count int, cursor *Cursor) ([]objectcore.AddressWithType, *Cursor, error) {
threshold := cursor == nil // threshold is a flag to ignore cursor threshold := cursor == nil // threshold is a flag to ignore cursor
var bucketName []byte var bucketName []byte
@ -97,12 +103,17 @@ loop:
continue continue
} }
var objType object.Type
switch prefix {
case primaryPrefix:
	objType = object.TypeRegular
case storageGroupPrefix:
	objType = object.TypeStorageGroup
case lockersPrefix:
	objType = object.TypeLock
case tombstonePrefix:
	objType = object.TypeTombstone
default:
	continue
}
@ -110,7 +121,7 @@ loop:
bkt := tx.Bucket(name) bkt := tx.Bucket(name)
if bkt != nil { if bkt != nil {
copy(rawAddr, cidRaw) copy(rawAddr, cidRaw)
result, offset, cursor = selectNFromBucket(bkt, objType, graveyardBkt, garbageBkt, rawAddr, containerID,
result, count, cursor, threshold) result, count, cursor, threshold)
} }
bucketName = name bucketName = name
@ -145,14 +156,15 @@ loop:
// selectNFromBucket similar to selectAllFromBucket but uses cursor to find // selectNFromBucket similar to selectAllFromBucket but uses cursor to find
// object to start selecting from. Ignores inhumed objects. // object to start selecting from. Ignores inhumed objects.
func selectNFromBucket(bkt *bbolt.Bucket, // main bucket
	objType object.Type, // type of the objects stored in the main bucket
	graveyardBkt, garbageBkt *bbolt.Bucket, // cached graveyard buckets
	cidRaw []byte, // container ID prefix, optimization
	cnt cid.ID, // container ID
	to []objectcore.AddressWithType, // listing result
	limit int, // stop listing at `limit` items in result
	cursor *Cursor, // start from cursor object
	threshold bool, // ignore cursor and start immediately
) ([]objectcore.AddressWithType, []byte, *Cursor) {
if cursor == nil { if cursor == nil {
cursor = new(Cursor) cursor = new(Cursor)
} }
@ -186,7 +198,7 @@ func selectNFromBucket(bkt *bbolt.Bucket, // main bucket
var a oid.Address var a oid.Address
a.SetContainer(cnt) a.SetContainer(cnt)
a.SetObject(obj) a.SetObject(obj)
to = append(to, objectcore.AddressWithType{Address: a, Type: objType})
count++ count++
} }

View file

@ -9,7 +9,6 @@ import (
meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase" meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test" cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test"
objectSDK "github.com/nspcc-dev/neofs-sdk-go/object" objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
oidtest "github.com/nspcc-dev/neofs-sdk-go/object/id/test" oidtest "github.com/nspcc-dev/neofs-sdk-go/object/id/test"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"go.etcd.io/bbolt" "go.etcd.io/bbolt"
@ -73,7 +72,7 @@ func TestLisObjectsWithCursor(t *testing.T) {
total = containers * 5 // regular + ts + sg + child + lock total = containers * 5 // regular + ts + sg + child + lock
) )
expected := make([]object.AddressWithType, 0, total)
// fill metabase with objects // fill metabase with objects
for i := 0; i < containers; i++ { for i := 0; i < containers; i++ {
@ -84,28 +83,28 @@ func TestLisObjectsWithCursor(t *testing.T) {
obj.SetType(objectSDK.TypeRegular) obj.SetType(objectSDK.TypeRegular)
err := putBig(db, obj) err := putBig(db, obj)
require.NoError(t, err) require.NoError(t, err)
expected = append(expected, object.AddressWithType{Address: object.AddressOf(obj), Type: objectSDK.TypeRegular})
// add one tombstone // add one tombstone
obj = generateObjectWithCID(t, containerID) obj = generateObjectWithCID(t, containerID)
obj.SetType(objectSDK.TypeTombstone) obj.SetType(objectSDK.TypeTombstone)
err = putBig(db, obj) err = putBig(db, obj)
require.NoError(t, err) require.NoError(t, err)
expected = append(expected, object.AddressWithType{Address: object.AddressOf(obj), Type: objectSDK.TypeTombstone})
// add one storage group // add one storage group
obj = generateObjectWithCID(t, containerID) obj = generateObjectWithCID(t, containerID)
obj.SetType(objectSDK.TypeStorageGroup) obj.SetType(objectSDK.TypeStorageGroup)
err = putBig(db, obj) err = putBig(db, obj)
require.NoError(t, err) require.NoError(t, err)
expected = append(expected, object.AddressWithType{Address: object.AddressOf(obj), Type: objectSDK.TypeStorageGroup})
// add one lock // add one lock
obj = generateObjectWithCID(t, containerID) obj = generateObjectWithCID(t, containerID)
obj.SetType(objectSDK.TypeLock) obj.SetType(objectSDK.TypeLock)
err = putBig(db, obj) err = putBig(db, obj)
require.NoError(t, err) require.NoError(t, err)
expected = append(expected, object.AddressWithType{Address: object.AddressOf(obj), Type: objectSDK.TypeLock})
// add one inhumed (do not include into expected) // add one inhumed (do not include into expected)
obj = generateObjectWithCID(t, containerID) obj = generateObjectWithCID(t, containerID)
@ -127,14 +126,14 @@ func TestLisObjectsWithCursor(t *testing.T) {
child.SetSplitID(splitID) child.SetSplitID(splitID)
err = putBig(db, child) err = putBig(db, child)
require.NoError(t, err) require.NoError(t, err)
expected = append(expected, object.AddressWithType{Address: object.AddressOf(child), Type: objectSDK.TypeRegular})
} }
expected = sortAddresses(expected) expected = sortAddresses(expected)
t.Run("success with various count", func(t *testing.T) { t.Run("success with various count", func(t *testing.T) {
for countPerReq := 1; countPerReq <= total; countPerReq++ { for countPerReq := 1; countPerReq <= total; countPerReq++ {
got := make([]object.AddressWithType, 0, total)
res, cursor, err := metaListWithCursor(db, uint32(countPerReq), nil) res, cursor, err := metaListWithCursor(db, uint32(countPerReq), nil)
require.NoError(t, err, "count:%d", countPerReq) require.NoError(t, err, "count:%d", countPerReq)
@ -184,8 +183,8 @@ func TestAddObjectDuringListingWithCursor(t *testing.T) {
got, cursor, err := metaListWithCursor(db, total/2, nil) got, cursor, err := metaListWithCursor(db, total/2, nil)
require.NoError(t, err) require.NoError(t, err)
for _, obj := range got { for _, obj := range got {
if _, ok := expected[obj.Address.EncodeToString()]; ok {
	expected[obj.Address.EncodeToString()]++
} }
} }
@ -203,8 +202,8 @@ func TestAddObjectDuringListingWithCursor(t *testing.T) {
break break
} }
for _, obj := range got { for _, obj := range got {
if _, ok := expected[obj.Address.EncodeToString()]; ok {
	expected[obj.Address.EncodeToString()]++
} }
} }
} }
@ -216,14 +215,14 @@ func TestAddObjectDuringListingWithCursor(t *testing.T) {
} }
func sortAddresses(addrWithType []object.AddressWithType) []object.AddressWithType {
	sort.Slice(addrWithType, func(i, j int) bool {
		return addrWithType[i].Address.EncodeToString() < addrWithType[j].Address.EncodeToString()
	})
	return addrWithType
}
func metaListWithCursor(db *meta.DB, count uint32, cursor *meta.Cursor) ([]object.AddressWithType, *meta.Cursor, error) {
var listPrm meta.ListPrm var listPrm meta.ListPrm
listPrm.SetCount(count) listPrm.SetCount(count)
listPrm.SetCursor(cursor) listPrm.SetCursor(cursor)

View file

@ -29,6 +29,12 @@ func (db *DB) Lock(cnr cid.ID, locker oid.ID, locked []oid.ID) error {
db.modeMtx.RLock() db.modeMtx.RLock()
defer db.modeMtx.RUnlock() defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
} else if db.mode.ReadOnly() {
return ErrReadOnlyMode
}
if len(locked) == 0 { if len(locked) == 0 {
panic("empty locked list") panic("empty locked list")
} }
@ -91,6 +97,13 @@ func (db *DB) Lock(cnr cid.ID, locker oid.ID, locked []oid.ID) error {
// FreeLockedBy unlocks all objects in DB which are locked by lockers. // FreeLockedBy unlocks all objects in DB which are locked by lockers.
func (db *DB) FreeLockedBy(lockers []oid.Address) error { func (db *DB) FreeLockedBy(lockers []oid.Address) error {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
}
return db.boltDB.Update(func(tx *bbolt.Tx) error { return db.boltDB.Update(func(tx *bbolt.Tx) error {
var err error var err error
@ -175,3 +188,42 @@ func freePotentialLocks(tx *bbolt.Tx, idCnr cid.ID, locker oid.ID) error {
return nil return nil
} }
// IsLockedPrm groups the parameters of IsLocked operation.
type IsLockedPrm struct {
addr oid.Address
}
// SetAddress sets object address that will be checked for lock relations.
func (i *IsLockedPrm) SetAddress(addr oid.Address) {
i.addr = addr
}
// IsLockedRes groups the resulting values of IsLocked operation.
type IsLockedRes struct {
locked bool
}
// Locked describes the requested object status according to the metabase
// current state.
func (i IsLockedRes) Locked() bool {
return i.locked
}
// IsLocked checks whether the provided object is locked by any `LOCK`
// object. An object that is not found is considered non-locked.
//
// Returns only non-logical errors related to underlying database.
func (db *DB) IsLocked(prm IsLockedPrm) (res IsLockedRes, err error) {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
}
return res, db.boltDB.View(func(tx *bbolt.Tx) error {
res.locked = objectLocked(tx, prm.addr.Container(), prm.addr.Object())
return nil
})
}
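A minimal caller-side sketch of the new lock-status query; the prm/res types are the ones defined above, addr is an assumed oid.Address:

var prm meta.IsLockedPrm
prm.SetAddress(addr)

res, err := db.IsLocked(prm)
if err != nil {
	return err // ErrDegradedMode when the metabase is unavailable
}
if res.Locked() {
	// the object is protected by at least one LOCK object
}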

View file

@ -169,6 +169,50 @@ func TestDB_Lock(t *testing.T) {
}) })
} }
func TestDB_IsLocked(t *testing.T) {
db := newDB(t)
// existing and locked objs
objs, _ := putAndLockObj(t, db, 5)
var prm meta.IsLockedPrm
for _, obj := range objs {
prm.SetAddress(objectcore.AddressOf(obj))
res, err := db.IsLocked(prm)
require.NoError(t, err)
require.True(t, res.Locked())
}
// some rand obj
prm.SetAddress(oidtest.Address())
res, err := db.IsLocked(prm)
require.NoError(t, err)
require.False(t, res.Locked())
// existing but not locked obj
obj := objecttest.Object()
var putPrm meta.PutPrm
putPrm.SetObject(obj)
_, err = db.Put(putPrm)
require.NoError(t, err)
prm.SetAddress(objectcore.AddressOf(obj))
res, err = db.IsLocked(prm)
require.NoError(t, err)
require.False(t, res.Locked())
}
// putAndLockObj puts object, returns it and its locker. // putAndLockObj puts object, returns it and its locker.
func putAndLockObj(t *testing.T, db *meta.DB, numOfLockedObjs int) ([]*object.Object, *object.Object) { func putAndLockObj(t *testing.T, db *meta.DB, numOfLockedObjs int) ([]*object.Object, *object.Object) {
cnr := cidtest.ID() cnr := cidtest.ID()

View file

@ -52,6 +52,12 @@ func (db *DB) ToMoveIt(prm ToMoveItPrm) (res ToMoveItRes, err error) {
db.modeMtx.RLock() db.modeMtx.RLock()
defer db.modeMtx.RUnlock() defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
} else if db.mode.ReadOnly() {
return res, ErrReadOnlyMode
}
key := make([]byte, addressKeySize) key := make([]byte, addressKeySize)
key = addressKey(prm.addr, key) key = addressKey(prm.addr, key)
@ -68,6 +74,12 @@ func (db *DB) DoNotMove(prm DoNotMovePrm) (res DoNotMoveRes, err error) {
db.modeMtx.RLock() db.modeMtx.RLock()
defer db.modeMtx.RUnlock() defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
} else if db.mode.ReadOnly() {
return res, ErrReadOnlyMode
}
key := make([]byte, addressKeySize) key := make([]byte, addressKeySize)
key = addressKey(prm.addr, key) key = addressKey(prm.addr, key)
@ -84,6 +96,10 @@ func (db *DB) Movable(_ MovablePrm) (MovableRes, error) {
db.modeMtx.RLock() db.modeMtx.RLock()
defer db.modeMtx.RUnlock() defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return MovableRes{}, ErrDegradedMode
}
var strAddrs []string var strAddrs []string
err := db.boltDB.View(func(tx *bbolt.Tx) error { err := db.boltDB.View(func(tx *bbolt.Tx) error {

View file

@ -56,6 +56,12 @@ func (db *DB) Put(prm PutPrm) (res PutRes, err error) {
db.modeMtx.RLock() db.modeMtx.RLock()
defer db.modeMtx.RUnlock() defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
} else if db.mode.ReadOnly() {
return res, ErrReadOnlyMode
}
currEpoch := db.epochState.CurrentEpoch() currEpoch := db.epochState.CurrentEpoch()
err = db.boltDB.Batch(func(tx *bbolt.Tx) error { err = db.boltDB.Batch(func(tx *bbolt.Tx) error {

View file

@ -59,6 +59,10 @@ func (db *DB) Select(prm SelectPrm) (res SelectRes, err error) {
db.modeMtx.RLock() db.modeMtx.RLock()
defer db.modeMtx.RUnlock() defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
}
if blindlyProcess(prm.filters) { if blindlyProcess(prm.filters) {
return res, nil return res, nil
} }

View file

@ -13,6 +13,13 @@ var (
// ReadShardID reads shard id from db. // ReadShardID reads shard id from db.
// If id is missing, returns nil, nil. // If id is missing, returns nil, nil.
func (db *DB) ReadShardID() ([]byte, error) { func (db *DB) ReadShardID() ([]byte, error) {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return nil, ErrDegradedMode
}
var id []byte var id []byte
err := db.boltDB.View(func(tx *bbolt.Tx) error { err := db.boltDB.View(func(tx *bbolt.Tx) error {
b := tx.Bucket(shardInfoBucket) b := tx.Bucket(shardInfoBucket)
@ -26,6 +33,15 @@ func (db *DB) ReadShardID() ([]byte, error) {
// WriteShardID writes shard id to db.
func (db *DB) WriteShardID(id []byte) error { func (db *DB) WriteShardID(id []byte) error {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return ErrDegradedMode
} else if db.mode.ReadOnly() {
return ErrReadOnlyMode
}
return db.boltDB.Update(func(tx *bbolt.Tx) error { return db.boltDB.Update(func(tx *bbolt.Tx) error {
b, err := tx.CreateBucketIfNotExists(shardInfoBucket) b, err := tx.CreateBucketIfNotExists(shardInfoBucket)
if err != nil { if err != nil {

View file

@ -29,6 +29,13 @@ func (r StorageIDRes) StorageID() []byte {
// StorageID returns storage descriptor for objects from the blobstor.
// It is stored together with the object and makes get/delete operations faster.
func (db *DB) StorageID(prm StorageIDPrm) (res StorageIDRes, err error) { func (db *DB) StorageID(prm StorageIDPrm) (res StorageIDRes, err error) {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
}
err = db.boltDB.View(func(tx *bbolt.Tx) error { err = db.boltDB.View(func(tx *bbolt.Tx) error {
res.id, err = db.storageID(tx, prm.addr) res.id, err = db.storageID(tx, prm.addr)
@ -52,3 +59,46 @@ func (db *DB) storageID(tx *bbolt.Tx, addr oid.Address) ([]byte, error) {
return slice.Copy(storageID), nil return slice.Copy(storageID), nil
} }
// UpdateStorageIDPrm groups the parameters of UpdateStorageID operation.
type UpdateStorageIDPrm struct {
addr oid.Address
id []byte
}
// UpdateStorageIDRes groups the resulting values of UpdateStorageID operation.
type UpdateStorageIDRes struct{}
// SetAddress is an UpdateStorageID option to set the object address to check.
func (p *UpdateStorageIDPrm) SetAddress(addr oid.Address) {
p.addr = addr
}
// SetStorageID is an UpdateStorageID option to set the storage ID.
func (p *UpdateStorageIDPrm) SetStorageID(id []byte) {
p.id = id
}
// UpdateStorageID updates storage descriptor for objects from the blobstor.
func (db *DB) UpdateStorageID(prm UpdateStorageIDPrm) (res UpdateStorageIDRes, err error) {
db.modeMtx.RLock()
defer db.modeMtx.RUnlock()
if db.mode.NoMetabase() {
return res, ErrDegradedMode
} else if db.mode.ReadOnly() {
return res, ErrReadOnlyMode
}
currEpoch := db.epochState.CurrentEpoch()
err = db.boltDB.Batch(func(tx *bbolt.Tx) error {
exists, err := db.exists(tx, prm.addr, currEpoch)
if !exists || err != nil {
return err
}
return updateStorageID(tx, prm.addr, prm.id)
})
return
}
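A short sketch of how the new method can be called; unlike a full Put it only rewrites the storage descriptor of an object that already exists in the metabase. addr and storageID are assumed inputs:

var prm meta.UpdateStorageIDPrm
prm.SetAddress(addr)
prm.SetStorageID(storageID)

if _, err := db.UpdateStorageID(prm); err != nil {
	return err // ErrReadOnlyMode and ErrDegradedMode are possible logical errors
}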

View file

@ -9,7 +9,7 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
func TestDB_StorageID(t *testing.T) {
db := newDB(t) db := newDB(t)
raw1 := generateObject(t) raw1 := generateObject(t)
@ -39,6 +39,23 @@ func TestDB_IsSmall(t *testing.T) {
fetchedStorageID, err = metaStorageID(db, object.AddressOf(raw1)) fetchedStorageID, err = metaStorageID(db, object.AddressOf(raw1))
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, storageID, fetchedStorageID) require.Equal(t, storageID, fetchedStorageID)
t.Run("update", func(t *testing.T) {
require.NoError(t, metaUpdateStorageID(db, object.AddressOf(raw2), storageID))
fetchedStorageID, err = metaStorageID(db, object.AddressOf(raw2))
require.NoError(t, err)
require.Equal(t, storageID, fetchedStorageID)
})
}
func metaUpdateStorageID(db *meta.DB, addr oid.Address, id []byte) error {
var sidPrm meta.UpdateStorageIDPrm
sidPrm.SetAddress(addr)
sidPrm.SetStorageID(id)
_, err := db.UpdateStorageID(sidPrm)
return err
} }
func metaStorageID(db *meta.DB, addr oid.Address) ([]byte, error) { func metaStorageID(db *meta.DB, addr oid.Address) ([]byte, error) {

View file

@ -154,6 +154,19 @@ func (t *boltForest) TreeMove(d CIDDescriptor, treeID string, m *Move) (*LogMove
}) })
} }
// TreeExists implements the Forest interface.
func (t *boltForest) TreeExists(cid cidSDK.ID, treeID string) (bool, error) {
var exists bool
err := t.db.View(func(tx *bbolt.Tx) error {
treeRoot := tx.Bucket(bucketName(cid, treeID))
exists = treeRoot != nil
return nil
})
return exists, err
}
// TreeAddByPath implements the Forest interface. // TreeAddByPath implements the Forest interface.
func (t *boltForest) TreeAddByPath(d CIDDescriptor, treeID string, attr string, path []string, meta []KeyValue) ([]LogMove, error) { func (t *boltForest) TreeAddByPath(d CIDDescriptor, treeID string, attr string, path []string, meta []KeyValue) ([]LogMove, error) {
if !d.checkValid() { if !d.checkValid() {
@ -311,6 +324,12 @@ func (t *boltForest) applyOperation(logBucket, treeBucket *bbolt.Bucket, lm *Log
return err return err
} }
} }
if key == nil {
// The operation is inserted at the beginning, so reposition the cursor.
// Otherwise, the `Next` call would return the operation that has just been inserted.
c.First()
}
key, value = c.Next() key, value = c.Next()
// 3. Re-apply all other operations. // 3. Re-apply all other operations.
@ -332,8 +351,8 @@ func (t *boltForest) do(lb *bbolt.Bucket, b *bbolt.Bucket, key []byte, op *LogMo
shouldPut := !t.isAncestor(b, key, op.Child, op.Parent) shouldPut := !t.isAncestor(b, key, op.Child, op.Parent)
currParent := b.Get(parentKey(key, op.Child)) currParent := b.Get(parentKey(key, op.Child))
op.HasOld = currParent != nil
if currParent != nil { // node is already in tree if currParent != nil { // node is already in tree
op.HasOld = true
op.Old.Parent = binary.LittleEndian.Uint64(currParent) op.Old.Parent = binary.LittleEndian.Uint64(currParent)
if err := op.Old.Meta.FromBytes(b.Get(metaKey(key, op.Child))); err != nil { if err := op.Old.Meta.FromBytes(b.Get(metaKey(key, op.Child))); err != nil {
return err return err
@ -608,6 +627,17 @@ func (t *boltForest) TreeGetOpLog(cid cidSDK.ID, treeID string, height uint64) (
// TreeDrop implements the pilorama.Forest interface. // TreeDrop implements the pilorama.Forest interface.
func (t *boltForest) TreeDrop(cid cidSDK.ID, treeID string) error { func (t *boltForest) TreeDrop(cid cidSDK.ID, treeID string) error {
return t.db.Batch(func(tx *bbolt.Tx) error { return t.db.Batch(func(tx *bbolt.Tx) error {
if treeID == "" {
c := tx.Cursor()
prefix := []byte(cid.EncodeToString())
for k, _ := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, _ = c.Next() {
err := tx.DeleteBucket(k)
if err != nil {
return err
}
}
return nil
}
err := tx.DeleteBucket(bucketName(cid, treeID)) err := tx.DeleteBucket(bucketName(cid, treeID))
if errors.Is(err, bbolt.ErrBucketNotFound) { if errors.Is(err, bbolt.ErrBucketNotFound) {
return ErrTreeNotFound return ErrTreeNotFound

View file

@ -183,13 +183,21 @@ func (f *memoryForest) TreeGetOpLog(cid cidSDK.ID, treeID string, height uint64)
// TreeDrop implements the pilorama.Forest interface. // TreeDrop implements the pilorama.Forest interface.
func (f *memoryForest) TreeDrop(cid cidSDK.ID, treeID string) error {
	cidStr := cid.String()
	if treeID == "" {
		for k := range f.treeMap {
			if strings.HasPrefix(k, cidStr) {
				delete(f.treeMap, k)
			}
		}
	} else {
		fullID := cidStr + "/" + treeID
		_, ok := f.treeMap[fullID]
		if !ok {
			return ErrTreeNotFound
		}
		delete(f.treeMap, fullID)
	}
	return nil
}
@ -209,3 +217,10 @@ func (f *memoryForest) TreeList(cid cidSDK.ID) ([]string, error) {
return res, nil return res, nil
} }
// TreeExists implements the pilorama.Forest interface.
func (f *memoryForest) TreeExists(cid cidSDK.ID, treeID string) (bool, error) {
fullID := cid.EncodeToString() + "/" + treeID
_, ok := f.treeMap[fullID]
return ok, nil
}

View file

@ -180,14 +180,26 @@ func TestForest_TreeDrop(t *testing.T) {
} }
func testForestTreeDrop(t *testing.T, s Forest) { func testForestTreeDrop(t *testing.T, s Forest) {
const cidsSize = 3
var cids [cidsSize]cidSDK.ID
for i := range cids {
cids[i] = cidtest.ID()
}
cid := cids[0]
t.Run("return nil if not found", func(t *testing.T) { t.Run("return nil if not found", func(t *testing.T) {
require.ErrorIs(t, s.TreeDrop(cid, "123"), ErrTreeNotFound) require.ErrorIs(t, s.TreeDrop(cid, "123"), ErrTreeNotFound)
}) })
require.NoError(t, s.TreeDrop(cid, ""))
trees := []string{"tree1", "tree2"} trees := []string{"tree1", "tree2"}
var descs [cidsSize]CIDDescriptor
for i := range descs {
descs[i] = CIDDescriptor{cids[i], 0, 1}
}
d := descs[0]
for i := range trees { for i := range trees {
_, err := s.TreeAddByPath(d, trees[i], AttributeFilename, []string{"path"}, _, err := s.TreeAddByPath(d, trees[i], AttributeFilename, []string{"path"},
[]KeyValue{{Key: "TreeName", Value: []byte(trees[i])}}) []KeyValue{{Key: "TreeName", Value: []byte(trees[i])}})
@ -202,6 +214,28 @@ func testForestTreeDrop(t *testing.T, s Forest) {
_, err = s.TreeGetByPath(cid, trees[1], AttributeFilename, []string{"path"}, true) _, err = s.TreeGetByPath(cid, trees[1], AttributeFilename, []string{"path"}, true)
require.NoError(t, err) require.NoError(t, err)
for j := range descs {
for i := range trees {
_, err := s.TreeAddByPath(descs[j], trees[i], AttributeFilename, []string{"path"},
[]KeyValue{{Key: "TreeName", Value: []byte(trees[i])}})
require.NoError(t, err)
}
}
list, err := s.TreeList(cid)
require.NotEmpty(t, list)
require.NoError(t, s.TreeDrop(cid, ""))
list, err = s.TreeList(cid)
require.NoError(t, err)
require.Empty(t, list)
for j := 1; j < len(cids); j++ {
list, err = s.TreeList(cids[j])
require.NoError(t, err)
require.Equal(t, len(list), len(trees))
}
} }
func TestForest_TreeAdd(t *testing.T) { func TestForest_TreeAdd(t *testing.T) {
@ -478,6 +512,140 @@ func testForestTreeGetOpLog(t *testing.T, constructor func(t testing.TB) Forest)
}) })
} }
func TestForest_TreeExists(t *testing.T) {
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
testForestTreeExists(t, providers[i].construct)
})
}
}
func testForestTreeExists(t *testing.T, constructor func(t testing.TB) Forest) {
s := constructor(t)
checkExists := func(t *testing.T, expected bool, cid cidSDK.ID, treeID string) {
actual, err := s.TreeExists(cid, treeID)
require.NoError(t, err)
require.Equal(t, expected, actual)
}
cid := cidtest.ID()
treeID := "version"
d := CIDDescriptor{cid, 0, 1}
t.Run("empty state, no panic", func(t *testing.T) {
checkExists(t, false, cid, treeID)
})
require.NoError(t, s.TreeApply(d, treeID, &Move{Parent: 0, Child: 1}))
checkExists(t, true, cid, treeID)
checkExists(t, false, cidtest.ID(), treeID) // different CID, same tree
checkExists(t, false, cid, "another tree") // same CID, different tree
t.Run("can be removed", func(t *testing.T) {
require.NoError(t, s.TreeDrop(cid, treeID))
checkExists(t, false, cid, treeID)
})
}
func TestApplyTricky1(t *testing.T) {
ops := []Move{
{
Parent: 1,
Meta: Meta{Time: 100},
Child: 2,
},
{
Parent: 0,
Meta: Meta{Time: 80},
Child: 1,
},
}
expected := []struct{ child, parent Node }{
{1, 0},
{2, 1},
}
treeID := "version"
d := CIDDescriptor{CID: cidtest.ID(), Position: 0, Size: 1}
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
s := providers[i].construct(t)
for i := range ops {
require.NoError(t, s.TreeApply(d, treeID, &ops[i]))
}
for i := range expected {
_, parent, err := s.TreeGetMeta(d.CID, treeID, expected[i].child)
require.NoError(t, err)
require.Equal(t, expected[i].parent, parent)
}
})
}
}
func TestApplyTricky2(t *testing.T) {
// Apply operations in the reverse order and then insert an operation in the middle
// so that previous "old" parent becomes invalid.
ops := []Move{
{
Parent: 10000,
Meta: Meta{Time: 100},
Child: 5,
},
{
Parent: 3,
Meta: Meta{Time: 80},
Child: 5,
},
{
Parent: 5,
Meta: Meta{Time: 40},
Child: 3,
},
{
Parent: 5,
Meta: Meta{Time: 60},
Child: 1,
},
{
Parent: 1,
Meta: Meta{Time: 90},
Child: 2,
},
{
Parent: 0,
Meta: Meta{Time: 10},
Child: 5,
},
}
expected := []struct{ child, parent Node }{
{5, 10_000},
{3, 5},
{2, 1},
{1, 5},
}
treeID := "version"
d := CIDDescriptor{CID: cidtest.ID(), Position: 0, Size: 1}
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
s := providers[i].construct(t)
for i := range ops {
require.NoError(t, s.TreeApply(d, treeID, &ops[i]))
}
for i := range expected {
_, parent, err := s.TreeGetMeta(d.CID, treeID, expected[i].child)
require.NoError(t, err)
require.Equal(t, expected[i].parent, parent)
}
})
}
}
func TestForest_ApplyRandom(t *testing.T) { func TestForest_ApplyRandom(t *testing.T) {
for i := range providers { for i := range providers {
t.Run(providers[i].name, func(t *testing.T) { t.Run(providers[i].name, func(t *testing.T) {

View file

@ -35,10 +35,14 @@ type Forest interface {
TreeGetOpLog(cid cidSDK.ID, treeID string, height uint64) (Move, error) TreeGetOpLog(cid cidSDK.ID, treeID string, height uint64) (Move, error)
// TreeDrop drops a tree from the database. // TreeDrop drops a tree from the database.
// If the tree is not found, ErrTreeNotFound should be returned. // If the tree is not found, ErrTreeNotFound should be returned.
// If treeID is empty, all trees related to the container are dropped.
TreeDrop(cid cidSDK.ID, treeID string) error TreeDrop(cid cidSDK.ID, treeID string) error
// TreeList returns all the tree IDs that have been added to the // TreeList returns all the tree IDs that have been added to the
// passed container ID. Nil slice should be returned if no tree found. // passed container ID. Nil slice should be returned if no tree found.
TreeList(cid cidSDK.ID) ([]string, error) TreeList(cid cidSDK.ID) ([]string, error)
// TreeExists checks if a tree exists locally.
// If the tree is not found, false and a nil error should be returned.
TreeExists(cid cidSDK.ID, treeID string) (bool, error)
} }
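An illustrative caller-side sketch of the extended interface; forest and cnr are assumed to exist in the caller, and the tree ID is an arbitrary example:

ok, err := forest.TreeExists(cnr, "version")
if err != nil {
	return err
}
if !ok {
	// the tree has not been created locally yet
}

// An empty tree ID drops every tree of the container.
if err := forest.TreeDrop(cnr, ""); err != nil {
	return err
}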
type ForestStorage interface { type ForestStorage interface {

View file

@ -1,8 +1,9 @@
package pilorama package pilorama
import ( import (
"errors"
"math" "math"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/util/logicerr"
) )
// Timestamp is an alias for integer timestamp type. // Timestamp is an alias for integer timestamp type.
@ -50,10 +51,10 @@ const (
var (
	// ErrTreeNotFound is returned when the requested tree is not found.
	ErrTreeNotFound = logicerr.New("tree not found")

	// ErrNotPathAttribute is returned when the path is trying to be constructed with a non-internal
	// attribute. Currently the only attribute allowed is AttributeFilename.
	ErrNotPathAttribute = logicerr.New("attribute can't be used in path construction")
)
// isAttributeInternal returns true iff key can be used in `*ByPath` methods. // isAttributeInternal returns true iff key can be used in `*ByPath` methods.

View file

@ -3,10 +3,10 @@ package shard
import ( import (
"fmt" "fmt"
objectcore "github.com/nspcc-dev/neofs-node/pkg/core/object"
meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase" meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
cid "github.com/nspcc-dev/neofs-sdk-go/container/id" cid "github.com/nspcc-dev/neofs-sdk-go/container/id"
"github.com/nspcc-dev/neofs-sdk-go/object" "github.com/nspcc-dev/neofs-sdk-go/object"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
"go.uber.org/zap" "go.uber.org/zap"
) )
@ -36,7 +36,7 @@ type ListWithCursorPrm struct {
// ListWithCursorRes contains values returned from ListWithCursor operation. // ListWithCursorRes contains values returned from ListWithCursor operation.
type ListWithCursorRes struct { type ListWithCursorRes struct {
addrList []objectcore.AddressWithType
cursor *Cursor cursor *Cursor
} }
@ -53,7 +53,7 @@ func (p *ListWithCursorPrm) WithCursor(cursor *Cursor) {
} }
// AddressList returns addresses selected by ListWithCursor operation. // AddressList returns addresses selected by ListWithCursor operation.
func (r ListWithCursorRes) AddressList() []objectcore.AddressWithType {
return r.addrList return r.addrList
} }

View file

@ -3,6 +3,7 @@ package shard
import ( import (
"fmt" "fmt"
meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
cid "github.com/nspcc-dev/neofs-sdk-go/container/id" cid "github.com/nspcc-dev/neofs-sdk-go/container/id"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id" oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
) )
@ -28,3 +29,22 @@ func (s *Shard) Lock(idCnr cid.ID, locker oid.ID, locked []oid.ID) error {
return nil return nil
} }
// IsLocked checks whether the provided object is locked. An object that is not
// found is considered not locked. Requires a healthy metabase; returns ErrDegradedMode otherwise.
func (s *Shard) IsLocked(addr oid.Address) (bool, error) {
m := s.GetMode()
if m.NoMetabase() {
return false, ErrDegradedMode
}
var prm meta.IsLockedPrm
prm.SetAddress(addr)
res, err := s.metaBase.IsLocked(prm)
if err != nil {
return false, err
}
return res.Locked(), nil
}

View file

@ -16,6 +16,7 @@ import (
cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test" cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test"
"github.com/nspcc-dev/neofs-sdk-go/object" "github.com/nspcc-dev/neofs-sdk-go/object"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id" oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
oidtest "github.com/nspcc-dev/neofs-sdk-go/object/id/test"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"go.uber.org/zap" "go.uber.org/zap"
) )
@ -138,5 +139,39 @@ func TestShard_Lock(t *testing.T) {
_, err = sh.Get(getPrm) _, err = sh.Get(getPrm)
require.ErrorAs(t, err, new(apistatus.ObjectNotFound)) require.ErrorAs(t, err, new(apistatus.ObjectNotFound))
}) })
}
func TestShard_IsLocked(t *testing.T) {
sh := newShard(t, false)
cnr := cidtest.ID()
obj := generateObjectWithCID(t, cnr)
cnrID, _ := obj.ContainerID()
objID, _ := obj.ID()
lockID := oidtest.ID()
// put the object
var putPrm shard.PutPrm
putPrm.SetObject(obj)
_, err := sh.Put(putPrm)
require.NoError(t, err)
// not locked object is not locked
locked, err := sh.IsLocked(objectcore.AddressOf(obj))
require.NoError(t, err)
require.False(t, locked)
// locked object is locked
require.NoError(t, sh.Lock(cnrID, lockID, []oid.ID{objID}))
locked, err = sh.IsLocked(objectcore.AddressOf(obj))
require.NoError(t, err)
require.True(t, locked)
} }

View file

@ -96,13 +96,16 @@ type cfg struct {
tsSource TombstoneSource tsSource TombstoneSource
metricsWriter MetricsWriter metricsWriter MetricsWriter
reportErrorFunc func(selfID string, message string, err error)
} }
func defaultCfg() *cfg { func defaultCfg() *cfg {
return &cfg{ return &cfg{
rmBatchSize: 100, rmBatchSize: 100,
log: &logger.Logger{Logger: zap.L()}, log: &logger.Logger{Logger: zap.L()},
gcCfg: defaultGCCfg(), gcCfg: defaultGCCfg(),
reportErrorFunc: func(string, string, error) {},
} }
} }
@ -117,20 +120,25 @@ func New(opts ...Option) *Shard {
bs := blobstor.New(c.blobOpts...) bs := blobstor.New(c.blobOpts...)
mb := meta.New(c.metaOpts...) mb := meta.New(c.metaOpts...)
s := &Shard{
	cfg:      c,
	blobStor: bs,
	metaBase: mb,
	tsSource: c.tsSource,
}

reportFunc := func(msg string, err error) {
	s.reportErrorFunc(s.ID().String(), msg, err)
}

s.blobStor.SetReportErrorFunc(reportFunc)

if c.useWriteCache {
	s.writeCache = writecache.New(
		append(c.writeCacheOpts,
			writecache.WithReportErrorFunc(reportFunc),
			writecache.WithBlobstor(bs),
			writecache.WithMetabase(mb))...)
}
if s.piloramaOpts != nil { if s.piloramaOpts != nil {
@ -281,6 +289,14 @@ func WithMetricsWriter(v MetricsWriter) Option {
} }
} }
// WithReportErrorFunc returns option to specify callback for handling storage-related errors
// in the background workers.
func WithReportErrorFunc(f func(selfID string, message string, err error)) Option {
return func(c *cfg) {
c.reportErrorFunc = f
}
}
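A hedged wiring sketch for the new option: the callback receives the shard ID along with the message and error reported by background workers. Only the new option is shown; the remaining shard options and the log variable (an assumed *zap.Logger) are placeholders:

sh := shard.New(
	// ...blobstor, metabase, write-cache and other options...
	shard.WithReportErrorFunc(func(shardID, msg string, err error) {
		log.Warn("background storage error",
			zap.String("shard", shardID),
			zap.String("msg", msg),
			zap.Error(err))
	}),
)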
func (s *Shard) fillInfo() { func (s *Shard) fillInfo() {
s.cfg.info.MetaBaseInfo = s.metaBase.DumpInfo() s.cfg.info.MetaBaseInfo = s.metaBase.DumpInfo()
s.cfg.info.BlobStorInfo = s.blobStor.DumpInfo() s.cfg.info.BlobStorInfo = s.blobStor.DumpInfo()

View file

@ -91,3 +91,11 @@ func (s *Shard) TreeList(cid cidSDK.ID) ([]string, error) {
} }
return s.pilorama.TreeList(cid) return s.pilorama.TreeList(cid)
} }
// TreeExists implements the pilorama.Forest interface.
func (s *Shard) TreeExists(cid cidSDK.ID, treeID string) (bool, error) {
if s.pilorama == nil {
return false, ErrPiloramaDisabled
}
return s.pilorama.TreeExists(cid, treeID)
}

View file

@ -2,8 +2,6 @@ package shard
import ( import (
"errors" "errors"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard/mode"
) )
// FlushWriteCachePrm represents parameters of a `FlushWriteCache` operation. // FlushWriteCachePrm represents parameters of a `FlushWriteCache` operation.
@ -20,8 +18,7 @@ func (p *FlushWriteCachePrm) SetIgnoreErrors(ignore bool) {
// but write-cache is disabled. // but write-cache is disabled.
var errWriteCacheDisabled = errors.New("write-cache is disabled") var errWriteCacheDisabled = errors.New("write-cache is disabled")
// FlushWriteCache flushes all data from the write-cache.
func (s *Shard) FlushWriteCache(p FlushWriteCachePrm) error { func (s *Shard) FlushWriteCache(p FlushWriteCachePrm) error {
if !s.hasWriteCache() { if !s.hasWriteCache() {
return errWriteCacheDisabled return errWriteCacheDisabled
@ -38,9 +35,5 @@ func (s *Shard) FlushWriteCache(p FlushWriteCachePrm) error {
return ErrDegradedMode return ErrDegradedMode
} }
if err := s.writeCache.SetMode(mode.ReadOnly); err != nil {
return err
}
return s.writeCache.Flush(p.ignoreErrors) return s.writeCache.Flush(p.ignoreErrors)
} }

View file

@ -1,12 +1,14 @@
package writecache package writecache
import ( import (
"bytes"
"errors" "errors"
"time" "time"
"github.com/mr-tron/base58" "github.com/mr-tron/base58"
"github.com/nspcc-dev/neo-go/pkg/util/slice" "github.com/nspcc-dev/neo-go/pkg/util/slice"
objectCore "github.com/nspcc-dev/neofs-node/pkg/core/object" objectCore "github.com/nspcc-dev/neofs-node/pkg/core/object"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/common" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/common"
meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase" meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
"github.com/nspcc-dev/neofs-sdk-go/object" "github.com/nspcc-dev/neofs-sdk-go/object"
@ -26,10 +28,6 @@ const (
defaultFlushInterval = time.Second defaultFlushInterval = time.Second
) )
// errMustBeReadOnly is returned when write-cache must be
// in read-only mode to perform an operation.
var errMustBeReadOnly = errors.New("write-cache must be in read-only mode")
// runFlushLoop starts background workers which periodically flush objects to the blobstor. // runFlushLoop starts background workers which periodically flush objects to the blobstor.
func (c *cache) runFlushLoop() { func (c *cache) runFlushLoop() {
for i := 0; i < c.workersCount; i++ { for i := 0; i < c.workersCount; i++ {
@ -60,7 +58,7 @@ func (c *cache) runFlushLoop() {
} }
func (c *cache) flushDB() { func (c *cache) flushDB() {
var lastKey []byte
var m []objectInfo var m []objectInfo
for { for {
select { select {
@ -70,7 +68,6 @@ func (c *cache) flushDB() {
} }
m = m[:0] m = m[:0]
sz := 0
c.modeMtx.RLock() c.modeMtx.RLock()
if c.readOnly() { if c.readOnly() {
@ -83,12 +80,29 @@ func (c *cache) flushDB() {
_ = c.db.View(func(tx *bbolt.Tx) error { _ = c.db.View(func(tx *bbolt.Tx) error {
b := tx.Bucket(defaultBucket) b := tx.Bucket(defaultBucket)
cs := b.Cursor() cs := b.Cursor()
for k, v := cs.Seek(lastKey); k != nil && len(m) < flushBatchSize; k, v = cs.Next() {
var k, v []byte
if len(lastKey) == 0 {
k, v = cs.First()
} else {
k, v = cs.Seek(lastKey)
if bytes.Equal(k, lastKey) {
k, v = cs.Next()
}
}
for ; k != nil && len(m) < flushBatchSize; k, v = cs.Next() {
if len(lastKey) == len(k) {
copy(lastKey, k)
} else {
lastKey = slice.Copy(k)
}
if _, ok := c.flushed.Peek(string(k)); ok { if _, ok := c.flushed.Peek(string(k)); ok {
continue continue
} }
sz += len(k) + len(v)
m = append(m, objectInfo{ m = append(m, objectInfo{
addr: string(k), addr: string(k),
data: slice.Copy(v), data: slice.Copy(v),
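A minimal illustration of the cursor repositioning in the hunk above, assuming an open bucket b (*bbolt.Bucket): Seek returns the first key greater than or equal to lastKey, so when it lands exactly on the already processed key the cursor is advanced once before the next batch is collected.

cs := b.Cursor()
k, v := cs.Seek(lastKey)
if bytes.Equal(k, lastKey) {
	k, v = cs.Next()
}
for ; k != nil; k, v = cs.Next() {
	_ = v // collect the entry into the batch
}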
@ -137,51 +151,7 @@ func (c *cache) flushBigObjects() {
break break
} }
_ = c.flushFSTree(true)
var prm common.IteratePrm
prm.LazyHandler = func(addr oid.Address, f func() ([]byte, error)) error {
sAddr := addr.EncodeToString()
if _, ok := c.store.flushed.Peek(sAddr); ok {
return nil
}
data, err := f()
if err != nil {
c.log.Error("can't read a file", zap.Stringer("address", addr))
return nil
}
c.mtx.Lock()
_, compress := c.compressFlags[sAddr]
c.mtx.Unlock()
var prm common.PutPrm
prm.Address = addr
prm.RawData = data
prm.DontCompress = !compress
if _, err := c.blobstor.Put(prm); err != nil {
c.log.Error("cant flush object to blobstor", zap.Error(err))
return nil
}
if compress {
c.mtx.Lock()
delete(c.compressFlags, sAddr)
c.mtx.Unlock()
}
// mark object as flushed
c.flushed.Add(sAddr, false)
evictNum++
return nil
}
_, _ = c.fsTree.Iterate(prm)
c.modeMtx.RUnlock() c.modeMtx.RUnlock()
case <-c.closeCh: case <-c.closeCh:
@ -190,6 +160,63 @@ func (c *cache) flushBigObjects() {
} }
} }
func (c *cache) reportFlushError(msg string, addr string, err error) {
if c.reportError != nil {
c.reportError(msg, err)
} else {
c.log.Error(msg,
zap.String("address", addr),
zap.Error(err))
}
}
func (c *cache) flushFSTree(ignoreErrors bool) error {
var prm common.IteratePrm
prm.IgnoreErrors = ignoreErrors
prm.LazyHandler = func(addr oid.Address, f func() ([]byte, error)) error {
sAddr := addr.EncodeToString()
if _, ok := c.store.flushed.Peek(sAddr); ok {
return nil
}
data, err := f()
if err != nil {
c.reportFlushError("can't read a file", sAddr, err)
if ignoreErrors {
return nil
}
return err
}
var obj object.Object
err = obj.Unmarshal(data)
if err != nil {
c.reportFlushError("can't unmarshal an object", sAddr, err)
if ignoreErrors {
return nil
}
return err
}
err = c.flushObject(&obj, data)
if err != nil {
if ignoreErrors {
return nil
}
return err
}
// mark object as flushed
c.flushed.Add(sAddr, false)
return nil
}
_, err := c.fsTree.Iterate(prm)
return err
}
// flushWorker writes objects to the main storage. // flushWorker writes objects to the main storage.
func (c *cache) flushWorker(_ int) { func (c *cache) flushWorker(_ int) {
defer c.wg.Done() defer c.wg.Done()
@ -203,30 +230,40 @@ func (c *cache) flushWorker(_ int) {
return return
} }
err := c.flushObject(obj, nil)
if err == nil {
	c.flushed.Add(objectCore.AddressOf(obj).EncodeToString(), true)
}
}
}
// flushObject is used to write object directly to the main storage.
func (c *cache) flushObject(obj *object.Object, data []byte) error {
addr := objectCore.AddressOf(obj)
var prm common.PutPrm var prm common.PutPrm
prm.Object = obj prm.Object = obj
prm.RawData = data
res, err := c.blobstor.Put(prm) res, err := c.blobstor.Put(prm)
if err != nil { if err != nil {
if !errors.Is(err, common.ErrNoSpace) && !errors.Is(err, common.ErrReadOnly) &&
!errors.Is(err, blobstor.ErrNoPlaceFound) {
c.reportFlushError("can't flush an object to blobstor",
addr.EncodeToString(), err)
}
return err return err
} }
var updPrm meta.UpdateStorageIDPrm
updPrm.SetAddress(addr)
updPrm.SetStorageID(res.StorageID)

_, err = c.metabase.UpdateStorageID(updPrm)
if err != nil {
c.reportFlushError("can't update object storage ID",
addr.EncodeToString(), err)
}
return err return err
} }
@ -237,44 +274,11 @@ func (c *cache) Flush(ignoreErrors bool) error {
c.modeMtx.RLock() c.modeMtx.RLock()
defer c.modeMtx.RUnlock() defer c.modeMtx.RUnlock()
if !c.mode.ReadOnly() {
return errMustBeReadOnly
}
return c.flush(ignoreErrors) return c.flush(ignoreErrors)
} }
func (c *cache) flush(ignoreErrors bool) error { func (c *cache) flush(ignoreErrors bool) error {
var prm common.IteratePrm if err := c.flushFSTree(ignoreErrors); err != nil {
prm.IgnoreErrors = ignoreErrors
prm.LazyHandler = func(addr oid.Address, f func() ([]byte, error)) error {
_, ok := c.flushed.Peek(addr.EncodeToString())
if ok {
return nil
}
data, err := f()
if err != nil {
if ignoreErrors {
return nil
}
return err
}
var obj object.Object
err = obj.Unmarshal(data)
if err != nil {
if ignoreErrors {
return nil
}
return err
}
return c.flushObject(&obj)
}
_, err := c.fsTree.Iterate(prm)
if err != nil {
return err return err
} }
@ -290,6 +294,7 @@ func (c *cache) flush(ignoreErrors bool) error {
} }
if err := addr.DecodeString(sa); err != nil { if err := addr.DecodeString(sa); err != nil {
c.reportFlushError("can't decode object address from the DB", sa, err)
if ignoreErrors { if ignoreErrors {
continue continue
} }
@ -298,13 +303,14 @@ func (c *cache) flush(ignoreErrors bool) error {
var obj object.Object var obj object.Object
if err := obj.Unmarshal(data); err != nil { if err := obj.Unmarshal(data); err != nil {
c.reportFlushError("can't unmarshal an object from the DB", sa, err)
if ignoreErrors { if ignoreErrors {
continue continue
} }
return err return err
} }
if err := c.flushObject(&obj, data); err != nil {
return err return err
} }
} }

View file

@ -21,6 +21,7 @@ import (
versionSDK "github.com/nspcc-dev/neofs-sdk-go/version" versionSDK "github.com/nspcc-dev/neofs-sdk-go/version"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"go.etcd.io/bbolt" "go.etcd.io/bbolt"
"go.uber.org/atomic"
"go.uber.org/zap/zaptest" "go.uber.org/zap/zaptest"
) )
@ -35,7 +36,7 @@ func TestFlush(t *testing.T) {
obj *object.Object obj *object.Object
} }
newCache := func(t *testing.T, opts ...Option) (Cache, *blobstor.BlobStor, *meta.DB) {
dir := t.TempDir() dir := t.TempDir()
mb := meta.New( mb := meta.New(
meta.WithPath(filepath.Join(dir, "meta")), meta.WithPath(filepath.Join(dir, "meta")),
@ -54,11 +55,13 @@ func TestFlush(t *testing.T) {
require.NoError(t, bs.Init()) require.NoError(t, bs.Init())
wc := New(
	append([]Option{
		WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
		WithPath(filepath.Join(dir, "writecache")),
		WithSmallObjectSize(smallSize),
		WithMetabase(mb),
		WithBlobstor(bs),
	}, opts...)...)
require.NoError(t, wc.Open(false)) require.NoError(t, wc.Open(false))
require.NoError(t, wc.Init()) require.NoError(t, wc.Init())
@ -110,7 +113,6 @@ func TestFlush(t *testing.T) {
wc, bs, mb := newCache(t) wc, bs, mb := newCache(t)
objects := putObjects(t, wc) objects := putObjects(t, wc)
require.NoError(t, wc.SetMode(mode.ReadOnly))
require.NoError(t, bs.SetMode(mode.ReadWrite)) require.NoError(t, bs.SetMode(mode.ReadWrite))
require.NoError(t, mb.SetMode(mode.ReadWrite)) require.NoError(t, mb.SetMode(mode.ReadWrite))
@ -164,7 +166,10 @@ func TestFlush(t *testing.T) {
t.Run("ignore errors", func(t *testing.T) { t.Run("ignore errors", func(t *testing.T) {
testIgnoreErrors := func(t *testing.T, f func(*cache)) { testIgnoreErrors := func(t *testing.T, f func(*cache)) {
var errCount atomic.Uint32
wc, bs, mb := newCache(t, WithReportErrorFunc(func(message string, err error) {
errCount.Inc()
}))
objects := putObjects(t, wc) objects := putObjects(t, wc)
f(wc.(*cache)) f(wc.(*cache))
@ -172,7 +177,9 @@ func TestFlush(t *testing.T) {
require.NoError(t, bs.SetMode(mode.ReadWrite)) require.NoError(t, bs.SetMode(mode.ReadWrite))
require.NoError(t, mb.SetMode(mode.ReadWrite)) require.NoError(t, mb.SetMode(mode.ReadWrite))
require.Equal(t, uint32(0), errCount.Load())
require.Error(t, wc.Flush(false)) require.Error(t, wc.Flush(false))
require.True(t, errCount.Load() > 0)
require.NoError(t, wc.Flush(true)) require.NoError(t, wc.Flush(true))
check(t, mb, bs, objects) check(t, mb, bs, objects)

View file

@ -16,8 +16,8 @@ type Option func(*options)
// meta is an interface for a metabase. // meta is an interface for a metabase.
type metabase interface { type metabase interface {
Put(meta.PutPrm) (meta.PutRes, error)
Exists(meta.ExistsPrm) (meta.ExistsRes, error) Exists(meta.ExistsPrm) (meta.ExistsRes, error)
UpdateStorageID(meta.UpdateStorageIDPrm) (meta.UpdateStorageIDRes, error)
} }
// blob is an interface for the blobstor. // blob is an interface for the blobstor.
@ -50,6 +50,10 @@ type options struct {
maxBatchSize int maxBatchSize int
// maxBatchDelay is the maximum batch wait time for the small object database. // maxBatchDelay is the maximum batch wait time for the small object database.
maxBatchDelay time.Duration maxBatchDelay time.Duration
// noSync is true iff FSTree allows unsynchronized writes.
noSync bool
// reportError is the function called when encountering disk errors in background workers.
reportError func(string, error)
} }
// WithLogger sets logger. // WithLogger sets logger.
@ -130,3 +134,20 @@ func WithMaxBatchDelay(d time.Duration) Option {
} }
} }
} }
// WithNoSync sets an option to allow returning to the caller on PUT before the write is persisted.
// Note that we use this flag for the FSTree only and DO NOT use it for the bolt DB because
// we cannot yet properly handle a corrupted database during startup. This SHOULD NOT
// be relied upon and may be changed in the future.
func WithNoSync(noSync bool) Option {
return func(o *options) {
o.noSync = noSync
}
}
// WithReportErrorFunc sets error reporting function.
func WithReportErrorFunc(f func(string, error)) Option {
return func(o *options) {
o.reportError = f
}
}

View file
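WithReportErrorFunc leaves options.reportError nil unless a caller sets it, so a background worker presumably has to guard the call before invoking it. A hedged sketch of that guard; reportFlushError is a made-up helper, only the options.reportError field mirrors the diff above:

package main

import (
	"errors"
	"fmt"
)

type options struct {
	// reportError is called on background disk errors; it may be nil.
	reportError func(string, error)
}

// reportFlushError forwards the error to the configured callback and
// falls back to plain logging when no callback was set.
func (o *options) reportFlushError(msg string, err error) {
	if o.reportError != nil {
		o.reportError(msg, err)
		return
	}
	fmt.Println(msg+":", err)
}

func main() {
	var seen int
	o := &options{reportError: func(string, error) { seen++ }}
	o.reportFlushError("can't flush object", errors.New("disk failure"))
	fmt.Println("reported errors:", seen) // prints: reported errors: 1
}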

@@ -56,14 +56,12 @@ func (c *cache) openStore(readOnly bool) error {
 		}
 	}

-	c.fsTree = &fstree.FSTree{
-		Info: fstree.Info{
-			Permissions: os.ModePerm,
-			RootPath:    c.path,
-		},
-		Depth:      1,
-		DirNameLen: 1,
-	}
+	c.fsTree = fstree.New(
+		fstree.WithPath(c.path),
+		fstree.WithPerm(os.ModePerm),
+		fstree.WithDepth(1),
+		fstree.WithDirNameLen(1),
+		fstree.WithNoSync(c.noSync))

 	// Write-cache can be opened multiple times during `SetMode`.
 	// flushed map must not be re-created in this case.

View file

@@ -8,6 +8,7 @@ import (
 	"github.com/nspcc-dev/neo-go/pkg/core/transaction"
 	"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
+	"github.com/nspcc-dev/neo-go/pkg/encoding/address"
 	"github.com/nspcc-dev/neo-go/pkg/rpcclient"
 	"github.com/nspcc-dev/neo-go/pkg/smartcontract"
 	"github.com/nspcc-dev/neo-go/pkg/util"
@@ -149,7 +150,20 @@ func nnsResolve(c *rpcclient.WSClient, nnsHash util.Uint160, domain string) (util.Uint160, error) {
 	if err != nil {
 		return util.Uint160{}, fmt.Errorf("malformed response: %w", err)
 	}
-	return util.Uint160DecodeStringLE(string(bs))
+
+	// We support several formats for hash encoding, this logic should be maintained in sync
+	// with parseNNSResolveResult from cmd/neofs-adm/internal/modules/morph/initialize_nns.go
+	h, err := util.Uint160DecodeStringLE(string(bs))
+	if err == nil {
+		return h, nil
+	}
+
+	h, err = address.StringToUint160(string(bs))
+	if err == nil {
+		return h, nil
+	}
+
+	return util.Uint160{}, errors.New("no valid hashes are found")
 }

 func exists(c *rpcclient.WSClient, nnsHash util.Uint160, domain string) (bool, error) {

View file
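nnsResolve now accepts either a little-endian hex script hash or a Neo address. A small illustration of the two encodings the fallback has to handle, using neo-go's encoding helpers (parseUint160 is a local stand-in and the sample script hash is arbitrary):

package main

import (
	"errors"
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/encoding/address"
	"github.com/nspcc-dev/neo-go/pkg/util"
)

// parseUint160 mirrors the fallback above: try LE hex first, then the address form.
func parseUint160(s string) (util.Uint160, error) {
	if h, err := util.Uint160DecodeStringLE(s); err == nil {
		return h, nil
	}
	if h, err := address.StringToUint160(s); err == nil {
		return h, nil
	}
	return util.Uint160{}, errors.New("no valid hashes are found")
}

func main() {
	var h util.Uint160
	h[0] = 0x42 // arbitrary script hash for the demo

	fromHex, _ := parseUint160(h.StringLE())
	fromAddr, _ := parseUint160(address.Uint160ToString(h))
	fmt.Println(fromHex.Equals(h), fromAddr.Equals(h)) // prints: true true
}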

@@ -13,6 +13,7 @@ import (
 	"github.com/nspcc-dev/neo-go/pkg/crypto/hash"
 	"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
 	"github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
+	"github.com/nspcc-dev/neo-go/pkg/neorpc"
 	"github.com/nspcc-dev/neo-go/pkg/smartcontract"
 	sc "github.com/nspcc-dev/neo-go/pkg/smartcontract"
 	"github.com/nspcc-dev/neo-go/pkg/util"
@@ -175,7 +176,18 @@ func (c *Client) DepositNotary(amount fixedn.Fixed8, delta uint32) (res util.Uint256, err error) {
 		big.NewInt(int64(amount)),
 		[]interface{}{c.acc.PrivateKey().GetScriptHash(), till})
 	if err != nil {
-		return util.Uint256{}, fmt.Errorf("can't make notary deposit: %w", err)
+		if !errors.Is(err, neorpc.ErrAlreadyExists) {
+			return util.Uint256{}, fmt.Errorf("can't make notary deposit: %w", err)
+		}
+
+		// Transaction is already in mempool waiting to be processed.
+		// This is an expected situation if we restart the service.
+		c.logger.Debug("notary deposit has already been made",
+			zap.Int64("amount", int64(amount)),
+			zap.Int64("expire_at", till),
+			zap.Uint32("vub", vub),
+			zap.Error(err))
+		return util.Uint256{}, nil
 	}

 	c.logger.Debug("notary deposit invoke",

View file
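Ignoring neorpc.ErrAlreadyExists makes the notary deposit idempotent: a restarted service that re-submits a deposit already sitting in the mempool no longer fails. A hedged sketch of the pattern with a stand-in submit function (errAlreadyExists here is a local placeholder, not the real neorpc error):

package main

import (
	"errors"
	"fmt"
)

// errAlreadyExists stands in for neorpc.ErrAlreadyExists in this sketch.
var errAlreadyExists = errors.New("transaction already exists")

// submitDeposit pretends the relayed transaction is already queued in the mempool.
func submitDeposit(amount int64) error {
	return fmt.Errorf("relay deposit of %d: %w", amount, errAlreadyExists)
}

// ensureDeposit treats a duplicate submission as success, so a service restart
// does not turn an expected mempool state into a fatal error.
func ensureDeposit(amount int64) error {
	err := submitDeposit(amount)
	if err != nil && !errors.Is(err, errAlreadyExists) {
		return fmt.Errorf("can't make notary deposit: %w", err)
	}
	if err != nil {
		fmt.Println("notary deposit has already been made:", err)
	}
	return nil
}

func main() {
	fmt.Println("deposit error:", ensureDeposit(100)) // prints: deposit error: <nil>
}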

@@ -13,10 +13,9 @@ type (
 	// ClientCache is a structure around neofs-sdk-go/client to reuse
 	// already created clients.
 	ClientCache struct {
-		mu            *sync.RWMutex
-		clients       map[string]*multiClient
-		opts          ClientCacheOpts
-		allowExternal bool
+		mu      sync.RWMutex
+		clients map[string]*multiClient
+		opts    ClientCacheOpts
 	}

 	ClientCacheOpts struct {
@@ -32,17 +31,15 @@ type (
 // `opts` are used for new client creation.
 func NewSDKClientCache(opts ClientCacheOpts) *ClientCache {
 	return &ClientCache{
-		mu:            new(sync.RWMutex),
-		clients:       make(map[string]*multiClient),
-		opts:          opts,
-		allowExternal: opts.AllowExternal,
+		clients: make(map[string]*multiClient),
+		opts:    opts,
 	}
 }

 // Get function returns existing client or creates a new one.
 func (c *ClientCache) Get(info clientcore.NodeInfo) (clientcore.Client, error) {
 	netAddr := info.AddressGroup()
-	if c.allowExternal {
+	if c.opts.AllowExternal {
 		netAddr = append(netAddr, info.ExternalAddressGroup()...)
 	}
 	cacheKey := string(info.PublicKey())
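Switching ClientCache.mu from *sync.RWMutex to a value removes the constructor allocation and the nil-pointer risk, while the cache is still shared through its pointer receivers; dropping allowExternal in favor of opts.AllowExternal avoids keeping the same flag in two places. A rough sketch of the resulting shape, with illustrative names rather than the actual SDK client types:

package main

import (
	"fmt"
	"sync"
)

// clientCache is an illustrative stand-in for ClientCache.
type clientCache struct {
	mu      sync.RWMutex // zero value is ready to use, nothing to allocate in the constructor
	clients map[string]string
}

func newClientCache() *clientCache {
	return &clientCache{clients: make(map[string]string)}
}

// get returns a cached entry or creates one under the write lock.
func (c *clientCache) get(key string) string {
	c.mu.RLock()
	v, ok := c.clients[key]
	c.mu.RUnlock()
	if ok {
		return v
	}

	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.clients[key]; ok { // re-check: another goroutine may have won the race
		return v
	}
	c.clients[key] = "client-for-" + key
	return c.clients[key]
}

func main() {
	c := newClientCache()
	fmt.Println(c.get("node1")) // creates the entry
	fmt.Println(c.get("node1")) // returns the cached entry
}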

Some files were not shown because too many files have changed in this diff.