Compare commits

7 commits

| Author | SHA1 | Message | Date |
| ------ | ---- | ------- | ---- |
| Aleksey Kravchenko | 118d5809e0 | [#13] Add info FrostFS backend cmd | 2025-03-11 10:29:21 +03:00 |
| Aleksey Kravchenko | 407bb16cdc | [#11] Add leading slash to FilePath attr | 2025-03-11 10:29:21 +03:00 |
| Aleksey Kravchenko | f031906764 | [#9] refactor access to the CID cache | 2025-03-11 10:29:21 +03:00 |
| Aleksey Kravchenko | 858a774e45 | [#5] Update frostfs backend docs | 2025-03-11 10:29:18 +03:00 |
| Aleksey Kravchenko | 734bb44a5f | [#5] Add container zone names support. | 2025-03-11 10:26:49 +03:00 |
| Aleksey Kravchenko | c85dd6cd2c | [#1] Add forgejo actions | 2025-03-11 10:26:49 +03:00 |
| Aleksey Kravchenko | a995db6689 | [#1] Add frostfs backend docs | 2025-03-11 10:26:43 +03:00 |

All commits carry `Signed-off-by: Aleksey Kravchenko <al.kravchenko@yadro.com>`.
11 changed files with 951 additions and 85 deletions

View file

@ -0,0 +1,45 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: community, triage, bug
assignees: ''
---
<!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!--- Not obligatory -->
<!--- If no reason/fix/additions for the bug can be suggested, -->
<!--- uncomment the following phrase: -->
<!--- No fix can be suggested by a QA engineer. Further solutions shall be up to developers. -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. -->
1.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Regression
<!-- Is this issue a regression? (Yes / No) -->
<!-- If Yes, optionally please include version or commit id or PR# that caused this regression, if you have these details. -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used:
* Server setup and configuration:
* Operating System and version (`uname -a`):

View file

@ -0,0 +1 @@
blank_issues_enabled: false

View file

@ -0,0 +1,24 @@
on:
pull_request:
push:
branches:
- tcl/master
jobs:
builds:
name: Builds
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.22', '1.23' ]
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '${{ matrix.go_versions }}'
- name: Build binary
run: make

View file

@ -0,0 +1,20 @@
on: [pull_request]
jobs:
dco:
name: DCO
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.23'
- name: Run commit format checker
uses: https://git.frostfs.info/TrueCloudLab/dco-go@v3
with:
from: 'origin/${{ github.event.pull_request.base.ref }}'
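
A note for contributors: the dco-go check above is expected to fail when a commit in the pull request lacks a Signed-off-by trailer. The trailer can be added at commit time with git's `-s` flag, for example (the message below is illustrative):

```
git commit -s -m "[#13] Add info FrostFS backend cmd"
```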

View file

@ -0,0 +1,67 @@
on:
pull_request:
push:
branches:
- tcl/master
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.23'
cache: true
- name: Install linters
run: go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
- name: Run linters
run: make check
test:
name: Test
runs-on: oci-runner
strategy:
matrix:
go_versions: [ '1.23' ]
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '${{ matrix.go_versions }}'
- name: Tests for the FrostFS backend
env:
RESTIC_TEST_FUSE: false
AIO_IMAGE: truecloudlab/frostfs-aio
AIO_VERSION: 1.7.0-nightly.4
RCLONE_CONFIG: /config/rclone.conf
# run only tests related to FrostFS backend
run: |-
podman-service.sh
podman info
mkdir /config
printf "[TestFrostFS]\ntype = frostfs\nendpoint = localhost:8080\nwallet = /config/wallet.json\nplacement_policy = REP 1\nrequest_timeout = 20s\nconnection_timeout = 21s" > /config/rclone.conf
echo "Run frostfs aio container"
docker run -d --net=host --restart always -p 8080:8080 --name aio $AIO_IMAGE:$AIO_VERSION
echo "Wait for frostfs to start"
until docker exec aio curl --fail http://localhost:8083 > /dev/null 2>&1; do sleep 0.2; done;
echo "Issue creds"
docker exec aio /usr/bin/issue-creds.sh native
echo "Copy wallet"
docker cp aio:/config/user-wallet.json /config/wallet.json
echo "Start tests"
go test -v github.com/rclone/rclone/backend/frostfs
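
For clarity, the single-line `printf` in the test step above expands into a config file along these lines at `/config/rclone.conf`:

```
[TestFrostFS]
type = frostfs
endpoint = localhost:8080
wallet = /config/wallet.json
placement_policy = REP 1
request_timeout = 20s
connection_timeout = 21s
```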

View file

@ -5,6 +5,7 @@ import (
"bytes"
"context"
"encoding/hex"
"errors"
"fmt"
"io"
"math"
@ -16,6 +17,8 @@ import (
"time"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/ape"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/refs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/checksum"
sdkClient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
@ -40,6 +43,7 @@ func init() {
Name: "frostfs",
Description: "Distributed, decentralized object storage FrostFS",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{
{
Name: "endpoint",
@ -127,6 +131,11 @@ func init() {
},
},
},
{
Name: "default_container_zone",
Default: "container",
Help: "The name of the zone in which containers will be created or resolved if the zone name is not explicitly specified with the container name.",
},
{
Name: "container_creation_policy",
Default: "private",
@ -150,6 +159,61 @@ func init() {
})
}
var commandHelp = []fs.CommandHelp{
{
Name: "info",
Short: "Show information about the FrostFS objects and containers",
Long: `This command can be used to get information about the FrostFS objects and containers.
Usage Examples:
rclone backend info frostfs:container/path/to/dir
rclone backend info frostfs:container/path/to/dir path/to/file/in/dir.txt
rclone backend info frostfs:container/path/to/dir path/to/file/in/dir.txt -o "format={cid}:{oid}"
The optional "format" flag overrides the information output. In this example, if an object is stored in
a container with the identifier "9mvN7hsUcYoGoHjxpRWtqmDipnmaeRmGVDqRxxPyy2n1" and
its own identifier is "4VPCNFsZ2SQt1GNfYw2uTBNnz5bLgC7i4k4ovtuXKyJP", the output of this command will be
"9mvN7hsucoGoHjxPqrWmDipnMaemGVDqrxxPyynn1:4VpcNFsZqsQt1Gnfw2utBnzn5Blgc7i4kvtuXyKyJp".
The default output format is the same as that of the frostfs-cli utility,
with the "container get" and "object head" options. Here is an example of output:
--- Container info ---
CID: 9mvN7hsUcYoGoHjxpRWtqmDipnmaeRmGVDqRxxPyy2n1
Owner ID: NQL7q6PvPaisWNwdWfoR1LsEsAyje8P3jX
Created: 2025-02-17 15:07:51 +0300 MSK
Attributes:
Timestamp=1739794071
Name=test
__SYSTEM__NAME=test
__SYSTEM__ZONE=container
__SYSTEM__DISABLE_HOMOMORPHIC_HASHING=true
Placement policy:
REP 3
--- Object info ---
ID: 4VPCNFsZ2SQt1GNfYw2uTBNnz5bLgC7i4k4ovtuXKyJP
CID: 9mvN7hsUcYoGoHjxpRWtqmDipnmaeRmGVDqRxxPyy2n1
Owner: NQL7q6PvPaisWNwdWfoR1LsEsAyje8P3jX
CreatedAt: 559
Size: 402905
HomoHash: <empty>
Checksum: 2a068fe24c53bc8bf7d6bbb997414f7938b080305dc45f9fd3ff684bc11fbb7b
Type: REGULAR
Attributes:
FileName=cat.png
FilePath=/dir1/dir2/dir3/cat.png
Timestamp=1733410524 (2024-12-05 17:55:24 +0300 MSK)
ID signature:
public key: 026b7c7a7a16225eb13a5a733495a1bcdd1f016dfa9193498821379b0de2ba6870
signature: 049f6712c8378d323269b605a282bcacd7565ce2eefe1f10a9739c48945f739d95102c478b9cb1d429cd3330b4b5262e725392e322de3bbfa4ce18a9c842289219
`,
},
}
var errMalformedObject = errors.New("malformed object")
// Options defines the configuration for this backend
type Options struct {
FrostfsEndpoint string `config:"endpoint"`
@ -166,6 +230,7 @@ type Options struct {
Address string `config:"address"`
Password string `config:"password"`
PlacementPolicy string `config:"placement_policy"`
DefaultContainerZone string `config:"default_container_zone"`
ContainerCreationPolicy string `config:"container_creation_policy"`
APERules []chain.Rule `config:"-"`
}
@ -204,6 +269,272 @@ type Object struct {
timestamp time.Time
}
// Command the backend to run a named command
//
// The command run is name
// args may be used to read arguments from
// opts may be used to read optional arguments from
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
switch name {
case "info":
return f.infoCmd(ctx, arg, opt)
default:
return nil, fs.ErrorCommandNotFound
}
}
func (f *Fs) containerInfo(ctx context.Context, cnrID cid.ID) (container.Container, error) {
prm := pool.PrmContainerGet{
ContainerID: cnrID,
}
var cnr container.Container
cnr, err := f.pool.GetContainer(ctx, prm)
if err != nil {
return container.Container{}, fmt.Errorf("couldn't get container '%s': %w", cnrID, err)
}
return cnr, err
}
func (f *Fs) getObjectsHead(ctx context.Context, cnrID cid.ID, objIDs []oid.ID) ([]object.Object, error) {
var res []object.Object
for _, objID := range objIDs {
var prmHead pool.PrmObjectHead
prmHead.SetAddress(newAddress(cnrID, objID))
obj, err := f.pool.HeadObject(ctx, prmHead)
if err != nil {
return nil, err
}
res = append(res, obj)
}
return res, nil
}
type printer struct {
w io.Writer
err error
}
func newPrinter(w io.Writer) *printer {
return &printer{w: w}
}
func (p *printer) printf(format string, a ...interface{}) {
if p.err != nil {
return
}
if _, err := fmt.Fprintf(p.w, format, a...); err != nil {
p.err = err
}
}
func (p *printer) lastError() error {
return p.err
}
func (p *printer) printContainerInfo(cnrID cid.ID, cnr container.Container) {
p.printf("CID: %v\nOwner ID: %v", cnrID, cnr.Owner())
var timestamp time.Time
var attrs []string
cnr.IterateAttributes(func(key string, value string) {
attrs = append(attrs, fmt.Sprintf(" %v=%v", key, value))
if key == object.AttributeTimestamp {
val, err := strconv.ParseInt(value, 10, 64)
if err == nil {
timestamp = time.Unix(val, 0)
}
}
})
if !timestamp.IsZero() {
p.printf("\nCreated: %v", timestamp)
}
if len(attrs) > 0 {
p.printf("\nAttributes:\n%s", strings.Join(attrs, "\n"))
}
s := bytes.NewBufferString("")
if err := cnr.PlacementPolicy().WriteStringTo(s); err != nil {
return
}
p.printf("\nPlacement policy:\n%s", s.String())
}
func (p *printer) printChecksum(name string, recv func() (checksum.Checksum, bool)) {
var strVal string
cs, csSet := recv()
if csSet {
strVal = hex.EncodeToString(cs.Value())
} else {
strVal = "<empty>"
}
p.printf("\n%s: %s", name, strVal)
}
func (p *printer) printObject(obj *object.Object) {
objIDStr := "<empty>"
cnrIDStr := objIDStr
if objID, ok := obj.ID(); ok {
objIDStr = objID.String()
}
if cnrID, ok := obj.ContainerID(); ok {
cnrIDStr = cnrID.String()
}
p.printf("\nID: %v", objIDStr)
p.printf("\nCID: %v", cnrIDStr)
p.printf("\nOwner: %s", obj.OwnerID())
p.printf("\nCreatedAt: %d", obj.CreationEpoch())
p.printf("\nSize: %d", obj.PayloadSize())
p.printChecksum("HomoHash", obj.PayloadHomomorphicHash)
p.printChecksum("Checksum", obj.PayloadChecksum)
p.printf("\nType: %s", obj.Type())
p.printf("\nAttributes:")
for _, attr := range obj.Attributes() {
if attr.Key() == object.AttributeTimestamp {
var strVal string
val, err := strconv.ParseInt(attr.Value(), 10, 64)
if err == nil {
strVal = time.Unix(val, 0).String()
} else {
strVal = "malformed"
}
p.printf("\n %s=%s (%s)",
attr.Key(),
attr.Value(),
strVal)
continue
}
p.printf("\n %s=%s", attr.Key(), attr.Value())
}
if signature := obj.Signature(); signature != nil {
p.printf("\nID signature:")
var sigV2 refs.Signature
signature.WriteToV2(&sigV2)
p.printf("\n public key: %s", hex.EncodeToString(sigV2.GetKey()))
p.printf("\n signature: %s", hex.EncodeToString(sigV2.GetSign()))
}
if ecHeader := obj.ECHeader(); ecHeader != nil {
p.printf("\nEC header:")
p.printf("\n parent object ID: %s", ecHeader.Parent().EncodeToString())
p.printf("\n index: %d", ecHeader.Index())
p.printf("\n total: %d", ecHeader.Total())
p.printf("\n header length: %d", ecHeader.HeaderLength())
}
p.printSplitHeader(obj)
}
func (p *printer) printSplitHeader(obj *object.Object) {
if splitID := obj.SplitID(); splitID != nil {
p.printf("Split ID: %s\n", splitID)
}
if objID, ok := obj.ParentID(); ok {
p.printf("Split ParentID: %s\n", objID)
}
if prev, ok := obj.PreviousID(); ok {
p.printf("\nSplit PreviousID: %s", prev)
}
for _, child := range obj.Children() {
p.printf("\nSplit ChildID: %s", child.String())
}
parent := obj.Parent()
if parent != nil {
p.printf("\n\nSplit Parent Header:")
p.printObject(parent)
}
}
func formattedInfoOutput(format string, cnrID cid.ID, objHeads []object.Object) (string, error) {
format = strings.ReplaceAll(format, "{cid}", cnrID.String())
objIDStr := "<empty>"
if len(objHeads) > 0 {
objID, ok := objHeads[0].ID()
if ok {
objIDStr = objID.String()
}
}
return strings.ReplaceAll(format, "{oid}", objIDStr), nil
}
func (f *Fs) infoCmd(ctx context.Context, arg []string, opt map[string]string) (out interface{}, err error) {
var cnrID cid.ID
if cnrID, err = f.resolveContainerID(ctx, f.rootContainer); err != nil {
return nil, err
}
var format string
for k, v := range opt {
switch k {
case "format":
format = v
default:
return nil, fmt.Errorf("unknown option \"%s\"", k)
}
}
var objIDs []oid.ID
var filePath string
if len(arg) > 0 {
filePath = strings.TrimPrefix(arg[0], "/")
if f.rootDirectory != "" {
filePath = f.rootDirectory + "/" + filePath
}
if objIDs, err = f.findObjectsFilePath(ctx, cnrID, filePath); err != nil {
return
}
}
cnr, err := f.containerInfo(ctx, cnrID)
if err != nil {
return
}
var objHeads []object.Object
if objHeads, err = f.getObjectsHead(ctx, cnrID, objIDs); err != nil {
return
}
if format != "" {
return formattedInfoOutput(format, cnrID, objHeads)
}
w := bytes.NewBufferString("")
p := newPrinter(w)
p.printf(" --- Container info ---\n")
p.printContainerInfo(cnrID, cnr)
if len(arg) > 0 {
p.printf("\n\n --- Object info ---")
if len(objHeads) > 0 {
// Print info about the first object only
p.printObject(&objHeads[0])
} else {
p.printf("\nNo object with \"%s\" file path was found", filePath)
}
}
if err := p.lastError(); err != nil {
return nil, err
}
return w.String(), nil
}
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(_ context.Context) error {
@ -286,7 +617,7 @@ func NewFs(ctx context.Context, name string, root string, m configmap.Mapper) (f
return f, nil
}
func newObject(f *Fs, obj object.Object, container string) *Object {
func newObject(f *Fs, obj object.Object, container string) (*Object, error) {
// we should not include rootDirectory into remote name
prefix := f.rootDirectory
if prefix != "" {
@ -320,9 +651,12 @@ func newObject(f *Fs, obj object.Object, container string) *Object {
}
}
if objInfo.filePath == "" {
objInfo.filePath = objInfo.name
// We expect that the FilePath attribute is present in the object and that it starts with a leading slash
if objInfo.filePath == "" || objInfo.filePath[0] != '/' {
return nil, errMalformedObject
}
// Don't include a leading slash in the resulting object's file path.
objInfo.filePath = objInfo.filePath[1:]
objInfo.remote = objInfo.filePath
if strings.Contains(objInfo.remote, prefix) {
@ -333,7 +667,7 @@ func newObject(f *Fs, obj object.Object, container string) *Object {
}
}
return objInfo
return objInfo, nil
}
// MimeType of an Object if known, "" otherwise
@ -501,26 +835,26 @@ func (f *Fs) Features() *fs.Features {
// List the objects and directories in dir into entries.
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
containerStr, containerPath := bucket.Split(path.Join(f.root, dir))
rootDirName, containerPath := bucket.Split(path.Join(f.root, dir))
if containerStr == "" {
if rootDirName == "" {
if containerPath != "" {
return nil, fs.ErrorListBucketRequired
}
return f.listContainers(ctx)
}
return f.listEntries(ctx, containerStr, containerPath, dir, false)
return f.listEntries(ctx, rootDirName, containerPath, dir, false)
}
// ListR lists the objects and directories of the Fs starting
// from dir recursively into out.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) error {
containerStr, containerPath := bucket.Split(path.Join(f.root, dir))
rootDirName, containerPath := bucket.Split(path.Join(f.root, dir))
list := walk.NewListRHelper(callback)
if containerStr == "" {
if rootDirName == "" {
if containerPath != "" {
return fs.ErrorListBucketRequired
}
@ -536,15 +870,15 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) e
return list.Flush()
}
if err := f.listR(ctx, list, containerStr, containerPath, dir); err != nil {
if err := f.listR(ctx, list, rootDirName, containerPath, dir); err != nil {
return err
}
return list.Flush()
}
func (f *Fs) listR(ctx context.Context, list *walk.ListRHelper, containerStr, containerPath, dir string) error {
entries, err := f.listEntries(ctx, containerStr, containerPath, dir, true)
func (f *Fs) listR(ctx context.Context, list *walk.ListRHelper, rootDirName, containerPath, dir string) error {
entries, err := f.listEntries(ctx, rootDirName, containerPath, dir, true)
if err != nil {
return err
}
@ -557,31 +891,37 @@ func (f *Fs) listR(ctx context.Context, list *walk.ListRHelper, containerStr, co
return nil
}
func (f *Fs) resolveOrCreateContainer(ctx context.Context, containerStr string) (cid.ID, error) {
func (f *Fs) resolveOrCreateContainer(ctx context.Context, rootDirName string) (cid.ID, error) {
// This method is called when performing "put" operations, which can run in
// parallel in several goroutines, so we take a global lock here: if a requested
// container is missing, multiple goroutines must not attempt to create a
// container with the same name simultaneously, which could cause unexpected behavior.
f.m.Lock()
defer f.m.Unlock()
cnrID, err := f.resolveContainerIDHelper(ctx, containerStr)
if err == nil {
return cnrID, err
cnrIDStr, ok := f.containerIDCache[rootDirName]
if ok {
return parseContainerID(cnrIDStr)
}
if cnrID, err = f.createContainer(ctx, containerStr); err != nil {
delete(f.containerIDCache, containerStr)
return cid.ID{}, fmt.Errorf("createContainer: %w", err)
cnrID, err := f.resolveCIDByRootDirName(ctx, rootDirName)
if err != nil {
if cnrID, err = f.createContainer(ctx, rootDirName); err != nil {
return cid.ID{}, fmt.Errorf("createContainer: %w", err)
}
}
f.containerIDCache[containerStr] = cnrID.String()
f.containerIDCache[rootDirName] = cnrID.String()
return cnrID, nil
}
// Put the Object into the container
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
containerStr, containerPath := bucket.Split(filepath.Join(f.root, src.Remote()))
rootDirName, containerPath := bucket.Split(filepath.Join(f.root, src.Remote()))
cnrID, err := parseContainerID(containerStr)
cnrID, err := parseContainerID(rootDirName)
if err != nil {
if cnrID, err = f.resolveOrCreateContainer(ctx, containerStr); err != nil {
if cnrID, err = f.resolveOrCreateContainer(ctx, rootDirName); err != nil {
return nil, err
}
}
@ -619,7 +959,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
_ = f.pool.DeleteObject(ctx, prmDelete)
}
return newObject(f, obj, ""), nil
return newObject(f, obj, "")
}
func fillHeaders(ctx context.Context, filePath string, src fs.ObjectInfo, options ...fs.OpenOption) map[string]string {
@ -630,7 +970,7 @@ func fillHeaders(ctx context.Context, filePath string, src fs.ObjectInfo, option
})
}
headers := map[string]string{object.AttributeFilePath: filePath}
headers := map[string]string{object.AttributeFilePath: "/" + filePath}
for _, option := range options {
key, value := option.Header()
@ -663,10 +1003,10 @@ func fillHeaders(ctx context.Context, filePath string, src fs.ObjectInfo, option
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
// When updating an object, the path to it should not change.
src = robject.NewStaticObjectInfo(o.Remote(), src.ModTime(ctx), src.Size(), src.Storable(), nil, src.Fs())
containerStr, containerPath := bucket.Split(filepath.Join(o.fs.root, src.Remote()))
rootDirName, containerPath := bucket.Split(filepath.Join(o.fs.root, src.Remote()))
var cnrID cid.ID
var err error
if cnrID, err = o.fs.parseContainer(ctx, containerStr); err != nil {
if cnrID, err = o.fs.parseContainer(ctx, rootDirName); err != nil {
return fmt.Errorf("parse container: %w", err)
}
@ -706,8 +1046,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return fmt.Errorf("fetch head object: %w", err)
}
objInfo := newObject(o.fs, obj, "")
var objInfo *Object
if objInfo, err = newObject(o.fs, obj, ""); err != nil {
return err
}
o.filePath = objInfo.filePath
o.remote = objInfo.remote
@ -718,6 +1060,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return nil
}
func (f *Fs) getContainerNameAndZone(containerStr string) (string, string) {
return getContainerNameAndZone(containerStr, f.opt.DefaultContainerZone)
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
cnrID, _ := o.ContainerID()
@ -753,7 +1099,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return nil, fmt.Errorf("head object: %w", err)
}
return newObject(f, obj, ""), nil
return newObject(f, obj, "")
}
func (f *Fs) waitForAPECacheInvalidated(ctx context.Context, expectedCh chain.Chain, cnrID cid.ID) error {
@ -805,7 +1151,7 @@ func (f *Fs) waitForAPECacheInvalidated(ctx context.Context, expectedCh chain.Ch
}
}
func (f *Fs) createContainer(ctx context.Context, containerName string) (cid.ID, error) {
func (f *Fs) createContainer(ctx context.Context, rootDirName string) (cid.ID, error) {
var policy netmap.PlacementPolicy
if err := policy.DecodeString(f.opt.PlacementPolicy); err != nil {
return cid.ID{}, fmt.Errorf("parse placement policy: %w", err)
@ -817,10 +1163,12 @@ func (f *Fs) createContainer(ctx context.Context, containerName string) (cid.ID,
cnr.SetOwner(*f.owner)
container.SetCreationTime(&cnr, time.Now())
container.SetName(&cnr, containerName)
container.SetName(&cnr, rootDirName)
cnrName, cnrZone := f.getContainerNameAndZone(rootDirName)
var domain container.Domain
domain.SetName(containerName)
domain.SetZone(cnrZone)
domain.SetName(cnrName)
container.WriteDomain(&cnr, domain)
if err := pool.SyncContainerWithNetwork(ctx, &cnr, f.pool); err != nil {
@ -866,14 +1214,14 @@ func (f *Fs) createContainer(ctx context.Context, containerName string) (cid.ID,
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
containerStr, _ := bucket.Split(path.Join(f.root, dir))
if containerStr == "" {
rootDirName, _ := bucket.Split(path.Join(f.root, dir))
if rootDirName == "" {
return nil
}
_, err := parseContainerID(containerStr)
_, err := parseContainerID(rootDirName)
if err != nil {
if _, err = f.resolveOrCreateContainer(ctx, containerStr); err != nil {
if _, err = f.resolveOrCreateContainer(ctx, rootDirName); err != nil {
return err
}
}
@ -883,12 +1231,12 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// Rmdir deletes the bucket if the fs is at the root
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
containerStr, containerPath := bucket.Split(path.Join(f.root, dir))
if containerStr == "" || containerPath != "" {
rootDirName, containerPath := bucket.Split(path.Join(f.root, dir))
if rootDirName == "" || containerPath != "" {
return nil
}
cnrID, err := f.parseContainer(ctx, containerStr)
cnrID, err := f.parseContainer(ctx, rootDirName)
if err != nil {
return fs.ErrorDirNotFound
}
@ -908,18 +1256,18 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
f.m.Lock()
defer f.m.Unlock()
if err = f.pool.DeleteContainer(ctx, prm); err != nil {
return fmt.Errorf("couldn't delete container %s '%s': %w", cnrID, containerStr, err)
return fmt.Errorf("couldn't delete container %s '%s': %w", cnrID, rootDirName, err)
}
delete(f.containerIDCache, containerStr)
delete(f.containerIDCache, rootDirName)
return nil
}
// Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context, dir string) error {
containerStr, containerPath := bucket.Split(path.Join(f.root, dir))
rootDirName, containerPath := bucket.Split(path.Join(f.root, dir))
cnrID, err := f.parseContainer(ctx, containerStr)
cnrID, err := f.parseContainer(ctx, rootDirName)
if err != nil {
return nil
}
@ -944,8 +1292,9 @@ func parseContainerID(containerStr string) (cid.ID, error) {
return cnrID, err
}
func getContainerIDByName(dirEntry fs.DirEntry, containerName string) (ok bool, cnrID cid.ID, err error) {
if dirEntry.Remote() != containerName {
func getContainerIDByNameAndZone(dirEntry fs.DirEntry, cnrName, cnrZone, defaultZone string) (cnrID cid.ID, ok bool, err error) {
actualName, actualZone := getContainerNameAndZone(dirEntry.Remote(), defaultZone)
if cnrName != actualName || cnrZone != actualZone {
return
}
var idEr fs.IDer
@ -956,61 +1305,62 @@ func getContainerIDByName(dirEntry fs.DirEntry, containerName string) (ok bool,
return
}
func resolveContainerIDWithNNS(resolver *resolver.NNS, containerName string) (cid.ID, error) {
func resolveContainerIDWithNNS(resolver *resolver.NNS, cnrName, cnrZone string) (cid.ID, error) {
var d container.Domain
d.SetName(containerName)
d.SetZone(cnrZone)
d.SetName(cnrName)
if cnrID, err := resolver.ResolveContainerDomain(d); err == nil {
return cnrID, err
}
return cid.ID{}, fmt.Errorf("couldn't resolve container '%s'", containerName)
return cid.ID{}, fmt.Errorf("couldn't resolve container with name '%s' and zone '%s'", cnrName, cnrZone)
}
func (f *Fs) resolveContainerIDHelper(ctx context.Context, containerName string) (cid.ID, error) {
cnrIDStr, ok := f.containerIDCache[containerName]
if ok {
return parseContainerID(cnrIDStr)
func (f *Fs) resolveCIDByRootDirName(ctx context.Context, rootDirName string) (cid.ID, error) {
cnrName, cnrZone := f.getContainerNameAndZone(rootDirName)
if cnrName == "" {
return cid.ID{}, fmt.Errorf("couldn't resolve container '%s'", rootDirName)
}
if f.resolver != nil {
var err error
var cnrID cid.ID
if cnrID, err = resolveContainerIDWithNNS(f.resolver, containerName); err == nil {
f.containerIDCache[containerName] = cnrID.String()
}
return cnrID, err
return resolveContainerIDWithNNS(f.resolver, cnrName, cnrZone)
}
if dirEntries, err := f.listContainers(ctx); err == nil {
for _, dirEntry := range dirEntries {
if ok, cnrID, err := getContainerIDByName(dirEntry, containerName); ok {
if err == nil {
f.containerIDCache[containerName] = cnrID.String()
}
if cnrID, ok, err := getContainerIDByNameAndZone(dirEntry, cnrName, cnrZone, f.opt.DefaultContainerZone); ok {
return cnrID, err
}
}
}
return cid.ID{}, fmt.Errorf("couldn't resolve container '%s'", containerName)
return cid.ID{}, fmt.Errorf("couldn't resolve container '%s'", rootDirName)
}
func (f *Fs) resolveContainerID(ctx context.Context, containerName string) (cid.ID, error) {
func (f *Fs) resolveContainerID(ctx context.Context, rootDirName string) (cid.ID, error) {
f.m.Lock()
defer f.m.Unlock()
return f.resolveContainerIDHelper(ctx, containerName)
}
func (f *Fs) parseContainer(ctx context.Context, containerName string) (cid.ID, error) {
cnrID, err := parseContainerID(containerName)
if err == nil {
return cnrID, err
cnrIDStr, ok := f.containerIDCache[rootDirName]
if ok {
return parseContainerID(cnrIDStr)
}
return f.resolveContainerID(ctx, containerName)
cnrID, err := f.resolveCIDByRootDirName(ctx, rootDirName)
if err != nil {
return cid.ID{}, err
}
f.containerIDCache[rootDirName] = cnrID.String()
return cnrID, nil
}
func (f *Fs) listEntries(ctx context.Context, containerStr, containerPath, directory string, recursive bool) (fs.DirEntries, error) {
cnrID, err := f.parseContainer(ctx, containerStr)
func (f *Fs) parseContainer(ctx context.Context, rootDirName string) (cid.ID, error) {
cnrID, err := parseContainerID(rootDirName)
if err != nil {
return f.resolveContainerID(ctx, rootDirName)
}
return cnrID, nil
}
func (f *Fs) listEntries(ctx context.Context, rootDirName, containerPath, directory string, recursive bool) (fs.DirEntries, error) {
cnrID, err := f.parseContainer(ctx, rootDirName)
if err != nil {
return nil, fs.ErrorDirNotFound
}
@ -1033,7 +1383,11 @@ func (f *Fs) listEntries(ctx context.Context, containerStr, containerPath, direc
return nil, err
}
objInf := newObject(f, obj, containerStr)
objInf, err := newObject(f, obj, rootDirName)
if err != nil {
// skip an erroneous object
continue
}
if !recursive {
withoutPath := strings.TrimPrefix(objInf.filePath, containerPath)
@ -1083,7 +1437,7 @@ func (f *Fs) listContainers(ctx context.Context) (fs.DirEntries, error) {
return nil, fmt.Errorf("couldn't get container '%s': %w", containerID, err)
}
res[i] = newDir(containerID, cnr)
res[i] = newDir(containerID, cnr, f.opt.DefaultContainerZone)
}
return res, nil
@ -1092,7 +1446,7 @@ func (f *Fs) listContainers(ctx context.Context) (fs.DirEntries, error) {
func (f *Fs) findObjectsFilePath(ctx context.Context, cnrID cid.ID, filePath string) ([]oid.ID, error) {
return f.findObjects(ctx, cnrID, searchFilter{
Header: object.AttributeFilePath,
Value: filePath,
Value: "/" + filePath,
MatchType: object.MatchStringEqual,
})
}
@ -1100,7 +1454,7 @@ func (f *Fs) findObjectsFilePath(ctx context.Context, cnrID cid.ID, filePath str
func (f *Fs) findObjectsPrefix(ctx context.Context, cnrID cid.ID, prefix string) ([]oid.ID, error) {
return f.findObjects(ctx, cnrID, searchFilter{
Header: object.AttributeFilePath,
Value: prefix,
Value: "/" + prefix,
MatchType: object.MatchCommonPrefix,
})
}
@ -1119,7 +1473,7 @@ func (f *Fs) findObjects(ctx context.Context, cnrID cid.ID, filters ...searchFil
func (f *Fs) deleteByPrefix(ctx context.Context, cnrID cid.ID, prefix string) error {
filters := object.NewSearchFilters()
filters.AddRootFilter()
filters.AddFilter(object.AttributeFilePath, prefix, object.MatchCommonPrefix)
filters.AddFilter(object.AttributeFilePath, "/"+prefix, object.MatchCommonPrefix)
var prmSearch pool.PrmObjectSearch
prmSearch.SetContainerID(cnrID)

View file

@ -296,16 +296,31 @@ func formObject(own *user.ID, cnrID cid.ID, name string, header map[string]strin
return obj
}
func newDir(cnrID cid.ID, cnr container.Container) *fs.Dir {
func newDir(cnrID cid.ID, cnr container.Container, defaultZone string) *fs.Dir {
remote := cnrID.EncodeToString()
timestamp := container.CreatedAt(cnr)
if domain := container.ReadDomain(cnr); domain.Name() != "" {
remote = domain.Name()
if defaultZone != domain.Zone() {
remote = domain.Name() + "." + domain.Zone()
} else {
remote = domain.Name()
}
}
dir := fs.NewDir(remote, timestamp)
dir.SetID(cnrID.String())
return dir
}
func getContainerNameAndZone(containerStr, defaultZone string) (cnrName string, cnrZone string) {
defer func() {
if len(cnrZone) == 0 {
cnrZone = defaultZone
}
}()
if idx := strings.Index(containerStr, "."); idx >= 0 {
return containerStr[:idx], containerStr[idx+1:]
}
return containerStr, defaultZone
}

View file

@ -7,6 +7,57 @@ import (
"github.com/stretchr/testify/require"
)
func TestGetZoneAndContainerNames(t *testing.T) {
for i, tc := range []struct {
cnrStr string
defZone string
expectedName string
expectedZone string
}{
{
cnrStr: "",
defZone: "def_zone",
expectedName: "",
expectedZone: "def_zone",
},
{
cnrStr: "",
defZone: "def_zone",
expectedName: "",
expectedZone: "def_zone",
},
{
cnrStr: "cnr_name",
defZone: "def_zone",
expectedName: "cnr_name",
expectedZone: "def_zone",
},
{
cnrStr: "cnr_name.",
defZone: "def_zone",
expectedName: "cnr_name",
expectedZone: "def_zone",
},
{
cnrStr: ".cnr_zone",
defZone: "def_zone",
expectedName: "",
expectedZone: "cnr_zone",
}, {
cnrStr: ".cnr_zone",
defZone: "def_zone",
expectedName: "",
expectedZone: "cnr_zone",
},
} {
t.Run(strconv.Itoa(i), func(t *testing.T) {
actualName, actualZone := getContainerNameAndZone(tc.cnrStr, tc.defZone)
require.Equal(t, tc.expectedZone, actualZone)
require.Equal(t, tc.expectedName, actualName)
})
}
}
func TestParseContainerCreationPolicy(t *testing.T) {
for i, tc := range []struct {
ACLString string

View file

@ -42,6 +42,7 @@ docs = [
"dropbox.md",
"filefabric.md",
"filescom.md",
"frostfs.md",
"ftp.md",
"gofile.md",
"googlecloudstorage.md",

View file

@ -123,6 +123,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Enterprise File Fabric" home="https://storagemadeeasy.com/about/" config="/filefabric/" >}}
{{< provider name="Fastmail Files" home="https://www.fastmail.com/" config="/webdav/#fastmail-files" >}}
{{< provider name="Files.com" home="https://www.files.com/" config="/filescom/" >}}
{{< provider name="FrostFS" home="https://git.frostfs.info/TrueCloudLab/" config="/frostfs/" >}}
{{< provider name="FTP" home="https://en.wikipedia.org/wiki/File_Transfer_Protocol" config="/ftp/" >}}
{{< provider name="Gofile" home="https://gofile.io/" config="/gofile/" >}}
{{< provider name="Google Cloud Storage" home="https://cloud.google.com/storage/" config="/googlecloudstorage/" >}}

docs/content/frostfs.md (new file, 287 lines)
View file

@ -0,0 +1,287 @@
---
title: "FrostFS"
description: "Rclone docs for FrostFS backend"
versionIntroduced: "---"
---
# {{< icon "fa fa-file" >}} FrostFS
Rclone FrostFS support is provided using the
[git.frostfs.info/TrueCloudLab/frostfs-sdk-go](https://git.frostfs.info/TrueCloudLab/frostfs-sdk-go) package.
## Configuration
To create a FrostFS configuration named `remote`, run
rclone config
Rclone config guides you through an interactive setup process. A minimal
rclone FrostFS remote definition only requires an endpoint and the path to a FrostFS user wallet.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> remote
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
1 / 1Fichier
\ (fichier)
2 / Akamai NetStorage
\ (netstorage)
3 / Alias for an existing remote
\ (alias)
4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
\ (s3)
5 / Backblaze B2
\ (b2)
6 / Better checksums for other remotes
\ (hasher)
7 / Box
\ (box)
8 / Cache a remote
\ (cache)
9 / Citrix Sharefile
\ (sharefile)
10 / Combine several remotes into one
\ (combine)
11 / Compress a remote
\ (compress)
12 / Distributed, decentralized object storage FrostFS
\ (frostfs)
13 / Dropbox
\ (dropbox)
14 / Encrypt/Decrypt a remote
\ (crypt)
15 / Enterprise File Fabric
\ (filefabric)
16 / FTP
\ (ftp)
17 / Files.com
\ (filescom)
18 / Gofile
\ (gofile)
19 / Google Cloud Storage (this is not Google Drive)
\ (google cloud storage)
20 / Google Drive
\ (drive)
21 / Google Photos
\ (google photos)
22 / HTTP
\ (http)
23 / Hadoop distributed file system
\ (hdfs)
24 / HiDrive
\ (hidrive)
25 / ImageKit.io
\ (imagekit)
26 / In memory object storage system.
\ (memory)
27 / Internet Archive
\ (internetarchive)
28 / Jottacloud
\ (jottacloud)
29 / Koofr, Digi Storage and other Koofr-compatible storage providers
\ (koofr)
30 / Linkbox
\ (linkbox)
31 / Local Disk
\ (local)
32 / Mail.ru Cloud
\ (mailru)
33 / Mega
\ (mega)
34 / Microsoft Azure Blob Storage
\ (azureblob)
35 / Microsoft Azure Files
\ (azurefiles)
36 / Microsoft OneDrive
\ (onedrive)
37 / OpenDrive
\ (opendrive)
38 / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
\ (swift)
39 / Oracle Cloud Infrastructure Object Storage
\ (oracleobjectstorage)
40 / Pcloud
\ (pcloud)
41 / PikPak
\ (pikpak)
42 / Pixeldrain Filesystem
\ (pixeldrain)
43 / Proton Drive
\ (protondrive)
44 / Put.io
\ (putio)
45 / QingCloud Object Storage
\ (qingstor)
46 / Quatrix by Maytech
\ (quatrix)
47 / SMB / CIFS
\ (smb)
48 / SSH/SFTP
\ (sftp)
49 / Sia Decentralized Cloud
\ (sia)
50 / Storj Decentralized Cloud Storage
\ (storj)
51 / Sugarsync
\ (sugarsync)
52 / Transparently chunk/split large files
\ (chunker)
53 / Uloz.to
\ (ulozto)
54 / Union merges the contents of several upstream fs
\ (union)
55 / Uptobox
\ (uptobox)
56 / WebDAV
\ (webdav)
57 / Yandex Disk
\ (yandex)
58 / Zoho
\ (zoho)
59 / premiumize.me
\ (premiumizeme)
60 / seafile
\ (seafile)
Storage> frostfs
Option endpoint.
Endpoints to connect to FrostFS node
Choose a number from below, or type in your own value.
1 / One endpoint.
\ (s01.frostfs.devenv:8080)
2 / Multiple endpoints to form pool.
\ (s01.frostfs.devenv:8080 s02.frostfs.devenv:8080)
3 / Multiple endpoints with priority (a lower value means a higher priority). While s01 is healthy, all requests will be sent to it.
\ (s01.frostfs.devenv:8080,1 s02.frostfs.devenv:8080,2)
4 / Multiple endpoints with priority and weights. After s01 becomes unhealthy, requests will be sent to s02 and s03 in proportions of 10% and 90% respectively.
\ (s01.frostfs.devenv:8080,1,1 s02.frostfs.devenv:8080,2,1 s03.frostfs.devenv:8080,2,9)
endpoint> s01.frostfs.devenv:8080,1 s02.frostfs.devenv:8080,2
Option connection_timeout.
FrostFS connection timeout
Enter a value of type Duration. Press Enter for the default (4s).
connection_timeout>
Option request_timeout.
FrostFS request timeout
Enter a value of type Duration. Press Enter for the default (12s).
request_timeout>
Option rebalance_interval.
FrostFS rebalance connections interval
Enter a value of type Duration. Press Enter for the default (15s).
rebalance_interval>
Option session_expiration.
FrostFS session expiration epoch
Enter a signed integer. Press Enter for the default (4294967295).
session_expiration>
Option ape_cache_invalidation_duration.
APE cache invalidation duration
Enter a value of type Duration. Press Enter for the default (8s).
ape_cache_invalidation_duration>
Option ape_cache_invalidation_timeout.
APE cache invalidation timeout
Enter a value of type Duration. Press Enter for the default (24s).
ape_cache_invalidation_timeout>
Option ape_chain_check_interval.
The interval for verifying that the APE chain is saved in FrostFS.
Enter a value of type Duration. Press Enter for the default (500ms).
ape_chain_check_interval>
Option rpc_endpoint.
Endpoint to connect to Neo rpc node
Enter a value. Press Enter to leave empty.
rpc_endpoint>
Option wallet.
Path to wallet
Enter a value.
wallet> /wallets/wallet.conf
Option address.
Address of account
Enter a value. Press Enter to leave empty.
address>
Option password.
Password to decrypt wallet
Enter a value. Press Enter to leave empty.
password>
Option placement_policy.
Placement policy for new containers
Choose a number from below, or type in your own value of type string.
Press Enter for the default (REP 3).
1 / Container will have 3 replicas
\ (REP 3)
placement_policy> REP 1
Option default_container_zone.
The name of the zone in which containers will be created or resolved if the zone name is not explicitly specified with the container name. Can be empty.
Enter a value of type string. Press Enter for the default (container).
default_container_zone>
Option container_creation_policy.
Container creation policy for new containers
Choose a number from below, or type in your own value of type string.
Press Enter for the default (private).
1 / Public container, anyone can read and write
\ (public-read-write)
2 / Public container, owner can read and write, others can only read
\ (public-read)
3 / Private container, only owner has access to it
\ (private)
container_creation_policy>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: frostfs
- endpoint: s01.frostfs.devenv:8080,1 s02.frostfs.devenv:8080,2
- password: M@zPGpWDravchenko/develop/go/playground/play_with_ape/cfg/wallet_akrav.js
- placement_policy: REP 1
Keep this "remote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
As a remote root directory, you can use either the container identifier or its user-friendly name.
If you choose to use a container name that cannot be resolved to an existing container, rclone will
create a new container with that name when it is first used.
For example, both of the following commands would be valid if there was a `~/test-copy` directory and a container with
the identifier `23fk3Bcw5mPZ4YtYkTLJbQebtM2WXHz4HL8FgsrTJkSf`:
rclone copy ~/test-copy remote:23fk3Bcw5mPZ4YtYkTLJbQebtM2WXHz4HL8FgsrTJkSf/test-copy
rclone copy ~/test-copy remote:container-name/test-copy
Also, for user-friendly container names, you can explicitly specify the name of the zone in which you want
to create or search for a container:
rclone copy ~/test-copy remote:container-name.container-zone/test-copy
If the zone is not explicitly specified, its name will be obtained from the configuration parameter
`default_container_zone`.
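For reference, the same remote can also be defined non-interactively by writing an entry to `rclone.conf`. The snippet below is a minimal sketch using the endpoint, wallet path and placement policy from the walkthrough above, with `default_container_zone` spelled out at its default value:

```
[remote]
type = frostfs
endpoint = s01.frostfs.devenv:8080,1 s02.frostfs.devenv:8080,2
wallet = /wallets/wallet.conf
placement_policy = REP 1
default_container_zone = container
```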
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/frostfs/frostfs.go then run make backenddocs" >}}
{{< rem autogenerated options stop >}}