Compare commits

..

68 commits

Author SHA1 Message Date
Evgenii Stratonikov
9426fd5046 WIP: pilorama: add custom batches
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-20 12:13:14 +03:00
Evgenii Stratonikov
34d20fd592 services/tree: allow to customize some parameters
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-19 11:03:13 +03:00
Evgenii Stratonikov
609dbe83db [#1559] engine: Do not count logical errors as storage ones
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-13 10:09:30 +03:00
Evgenii Stratonikov
f9eb15254e engine: remove default error threshold
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:47:39 +03:00
Evgenii Stratonikov
c5bd51e934 neofs-node: initialize storage before other services
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:47:39 +03:00
Evgenii Stratonikov
85aa30e89c local_object_storage: ignore pilorama errors
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:47:39 +03:00
Evgenii Stratonikov
61ae8b0a2c shard: ignore errors in UpdateID
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:47:39 +03:00
Evgenii Stratonikov
b193352d1e [#1548] morph/client: Execute close callback without switch mutex
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:45:57 +03:00
Evgenii Stratonikov
7b5b735fb2 [#1550] engine: Split errors on write- and meta- errors
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:45:57 +03:00
Evgenii Stratonikov
dafc21b052 [#1550] engine: Set default error threshold
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:45:57 +03:00
Leonard Lyubich
a6d1eefeff [#1549] shard: Always close metabase
Make `meta.DB` call the `Close` method on the `bbolt.DB` instance only if it
is non-nil. Call `meta.DB.Close` in `shard.Shard.Close` in any case.

Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
2022-07-08 13:45:57 +03:00
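
A minimal Go sketch of the nil-guarded close described above; the package and field names are illustrative, not the metabase's actual layout:

```go
package meta

import "go.etcd.io/bbolt"

// DB wraps the bbolt database used by the metabase; boltDB stays nil when
// Open was never called or failed early (illustrative structure).
type DB struct {
	boltDB *bbolt.DB
}

// Close releases the underlying bbolt.DB only when it was actually opened,
// so callers such as shard.Shard.Close can invoke it unconditionally.
func (db *DB) Close() error {
	if db.boltDB == nil {
		return nil
	}
	return db.boltDB.Close()
}
```
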
Leonard Lyubich
596d877a44 [#1549] engine: Disable shard on blobovnicza init failure
There is a need to support working without a shard if it has problems with
its blobovnicza tree.

Make `BlobStor.Init` return a new `ErrInitBlobovniczas` error. Remove the
shard from the storage engine's shard set if it returned this error from
the `Init` call. So if some of the shards (but not all) return this error,
the node will be able to continue working without them.

Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
2022-07-08 13:45:57 +03:00
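
A rough sketch of the init flow this commit describes; the `StorageEngine` fields and the error value are hypothetical stand-ins for the real engine types:

```go
package engine

import "errors"

// ErrInitBlobovniczas stands in for the sentinel error returned by
// BlobStor.Init when the blobovnicza tree cannot be initialized.
var ErrInitBlobovniczas = errors.New("failure on blobovnicza initialization stage")

type shard interface{ Init() error }

type StorageEngine struct {
	shards map[string]shard
}

// init drops shards whose blobovnicza tree failed to initialize instead of
// aborting the whole node, as long as at least one shard survives.
func (e *StorageEngine) init() error {
	for id, sh := range e.shards {
		err := sh.Init()
		switch {
		case err == nil:
		case errors.Is(err, ErrInitBlobovniczas):
			delete(e.shards, id) // continue without this shard
		default:
			return err // other initialization errors are still fatal
		}
	}
	if len(e.shards) == 0 {
		return errors.New("all shards failed to initialize")
	}
	return nil
}
```
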
Leonard Lyubich
263497a92b [#1549] shard: Turn to ModeDegraded on metabase failure
Make `Shard` work in degraded mode if the metabase is unavailable at the
opening/init stage. Close the metabase in non-degraded mode only.

Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
2022-07-08 13:45:57 +03:00
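
An illustrative sketch of the mode switch; the real shard has more modes and richer open/init logic than shown here:

```go
package shard

import "fmt"

// Mode is a stand-in for the shard mode enum.
type Mode uint32

const (
	ModeReadWrite Mode = iota
	ModeReadOnly
	ModeDegraded // metabase is unavailable, metadata operations are skipped
)

type metabase interface {
	Open() error
	Init() error
	Close() error
}

type Shard struct {
	mode Mode
	meta metabase
}

// Open switches the shard to degraded mode instead of failing when the
// metabase cannot be opened or initialized.
func (s *Shard) Open() error {
	if err := s.meta.Open(); err != nil {
		s.mode = ModeDegraded
		return nil
	}
	if err := s.meta.Init(); err != nil {
		s.mode = ModeDegraded
		return nil
	}
	s.mode = ModeReadWrite
	return nil
}

// Close closes the metabase only when the shard is not degraded.
func (s *Shard) Close() error {
	if s.mode == ModeDegraded {
		return nil
	}
	if err := s.meta.Close(); err != nil {
		return fmt.Errorf("close metabase: %w", err)
	}
	return nil
}
```
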
Pavel Karpy
1684cd63fa [#1558] node: Do not put SHA256 hash as homomorphic
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:45:16 +03:00
Pavel Karpy
33676ad832 [#1370] adm: Support changing NeoFS config value
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:45:13 +03:00
Pavel Karpy
83dd963ab7 [#1367] adm: Support homomorphic hashing config in dump-config
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:44:13 +03:00
Evgenii Stratonikov
e0e4f1f7ee engine: initialize shards in parallel
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:44:05 +03:00
Pavel Karpy
90b4820ee0 [#1365] morph: Do not return errors if config key is missing
Return default values instead of cast errors from the `HomomorphicHashDisabled`
method.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:43:46 +03:00
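
A hedged sketch of the fallback behaviour: the contract-reading details and the `Client` layout below are assumptions, the point is only that a missing key yields the default value rather than an error:

```go
package morph

import "errors"

// errKeyNotFound marks the "config key is missing" condition (illustrative;
// the real client detects this from the contract call result).
var errKeyNotFound = errors.New("config key not found")

const defaultHomomorphicHashDisabled = false

// Client is a stand-in for the morph client; raw mimics the network config
// stored in the Netmap contract.
type Client struct {
	raw map[string][]byte
}

func (c *Client) readBoolConfig(key string) (bool, error) {
	v, ok := c.raw[key]
	if !ok {
		return false, errKeyNotFound
	}
	return len(v) > 0 && v[0] != 0, nil // simplified boolean decoding
}

// HomomorphicHashDisabled falls back to the default value when the key is
// absent instead of propagating a lookup/cast error to the caller.
func (c *Client) HomomorphicHashDisabled() (bool, error) {
	v, err := c.readBoolConfig("HomomorphicHashingDisabled")
	if err != nil {
		if errors.Is(err, errKeyNotFound) {
			return defaultHomomorphicHashDisabled, nil
		}
		return false, err
	}
	return v, nil
}
```
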
Pavel Karpy
2d9c805c81 [#1365] adm: Add homomorphic hash disabling option
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:43:45 +03:00
Pavel Karpy
adcda361a7 [#1365] node: Calculate object homomorphic hash flexibly
Do not calculate or write the homomorphic hash for containers that were
configured to store objects without it.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:41:35 +03:00
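
A simplified illustration of the per-container decision; the real node attaches a Tillich-Zémor checksum computed by the SDK, which is replaced by a placeholder here:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// objectChecksums shows which checksums get attached to an object payload:
// SHA-256 always, the homomorphic one only when the container allows it.
func objectChecksums(payload []byte, homomorphicDisabled bool) map[string][]byte {
	sum := sha256.Sum256(payload)
	checksums := map[string][]byte{"SHA256": sum[:]}
	if !homomorphicDisabled {
		checksums["TZ"] = tzHash(payload)
	}
	return checksums
}

// tzHash is a placeholder for the real Tillich-Zémor homomorphic hash.
func tzHash(payload []byte) []byte {
	return []byte(fmt.Sprintf("tz(%d bytes)", len(payload)))
}

func main() {
	fmt.Println(objectChecksums([]byte("data"), true))  // SHA256 only
	fmt.Println(objectChecksums([]byte("data"), false)) // SHA256 + TZ
}
```
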
Pavel Karpy
e9c534b0a0 [#1365] ir: Check homomorphic hash flexibly in audit
Do not perform that check if it was turned off for the container being
checked.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:39:19 +03:00
Pavel Karpy
455096ab53 [#1365] ir: Check homomorphic hash setting on ContainerPut
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:36:22 +03:00
Pavel Karpy
fdc934a360 [#1365] morph: Add HomomorphicHashDisabled config getter
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:34:40 +03:00
Pavel Karpy
ab749460cd [#1365] cli: Calculate homomorphic hash flexibly
Do not use the homomorphic hash in storage groups for containers that have
`homomorphic_hashing_disabled` set to `true`.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:34:39 +03:00
Pavel Karpy
a455f4e3a7 [#1365] cli: Sync container with network config
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:32:42 +03:00
Pavel Karpy
7308c333cc [#1365] cli: Add SyncContainerSettings func to internal client
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:32:19 +03:00
Evgenii Stratonikov
9857a20c0d [#1505] pilorama: Provide timeout to bbolt.Open
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:28:23 +03:00
Evgenii Stratonikov
1fed255c5b [#1505] pilorama: Allow to customize database parameters
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:59 +03:00
Evgenii Stratonikov
2c8a87a469 [#1334] services/tree: Document *.proto files
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:57 +03:00
Evgenii Stratonikov
c8fce0d3e4 [#1333] neofs-cli: add control synchronize-tree command
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:54 +03:00
Evgenii Stratonikov
681df24547 [#1333] services/control: allow to synchronize local trees
Do not check that a node indeed belongs to the container, because the
synchronization will fail in this case anyway.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:49 +03:00
Evgenii Stratonikov
5af89b4bbe [#1333] neofs-node: initialize tree service before the control one
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:46 +03:00
Evgenii Stratonikov
982cb987a3 [#1333] engine: Increase error counter for pilorama errors
1. Modifying operations are not expected to fail unless the shard is
   read-only.
2. `Get*` operations should increase the error counter too, unless the
   error is `ErrTreeNotFound`.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:38 +03:00
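
The policy above boils down to a small filter; a sketch with assumed names (the real counter is atomic and lives on the shard):

```go
package engine

import "errors"

var (
	ErrReadOnly     = errors.New("shard is in read-only mode")
	ErrTreeNotFound = errors.New("tree not found")
)

type shardErrors struct {
	count uint32
}

// reportTreeError bumps the shard error counter only for real storage
// faults: a read-only shard or a missing tree is an expected, logical outcome.
func (s *shardErrors) reportTreeError(err error) {
	if err == nil || errors.Is(err, ErrReadOnly) || errors.Is(err, ErrTreeNotFound) {
		return
	}
	s.count++
}
```
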
Evgenii Stratonikov
5408efef82 [#1333] services/control: Return pilorama info in ListShards RPC
Do not return the backend type from the service for now, because the in-memory
backend is expected to vanish.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:17 +03:00
Evgenii Stratonikov
62b2769a66 [#1333] local_object_storage: Support ReadOnly mode in pilorama
The tricky part here is the engine itself: we stop iteration on
`ErrReadOnly` because it is better to synchronize the shard later than
to have partial trees stored in 2 shards.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:26:53 +03:00
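
A sketch of the engine-side iteration described above; the types are simplified stand-ins for the engine, shard, and pilorama operation:

```go
package engine

import "errors"

// ErrReadOnly is the sentinel returned by a read-only shard (stand-in name).
var ErrReadOnly = errors.New("shard is in read-only mode")

// Move is a single tree operation (simplified stand-in for the pilorama type).
type Move struct{ Parent, Child uint64 }

type treeShard interface {
	TreeApply(Move) error
}

type StorageEngine struct{ shards []treeShard }

// TreeApply tries shards in order but stops as soon as one reports
// ErrReadOnly: it is better to synchronize that shard later than to split a
// single tree across two shards.
func (e *StorageEngine) TreeApply(op Move) error {
	var lastErr error
	for _, sh := range e.shards {
		err := sh.TreeApply(op)
		switch {
		case err == nil:
			return nil
		case errors.Is(err, ErrReadOnly):
			return err
		default:
			lastErr = err
		}
	}
	return lastErr
}
```
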
Evgenii Stratonikov
199ee3a680 [#1481] pilorama: Fix TreeApply
The current implementation prevents invalid operations from becoming valid at
some later point (consider adding a child to a non-existent parent and
then adding the parent). This seems to diverge from the paper's algorithm
and complicates the implementation. Make it simpler.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:26:30 +03:00
Evgenii Stratonikov
73df95b8d3 [#1456] services/tree: wait some time before reconnecting after failure
If a node is down or failing for some reason, we can expect `Dial` to
fail. If we actively try to replicate and every `Dial` takes 2
seconds, the replication-related channels quickly become full. That affects
the latency of all other write operations.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:26:26 +03:00
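
An illustrative sketch of the reconnect throttling this commit describes: after a failed `Dial`, remember the failure time and skip new attempts for a cool-down period. The interval and all names here are assumptions, not the service's real values:

```go
package tree

import (
	"context"
	"errors"
	"sync"
	"time"
)

// defaultReconnectInterval is an assumed cool-down, not the real default.
const defaultReconnectInterval = 15 * time.Second

var errRecentlyFailed = errors.New("peer recently failed to dial, skipping")

// connThrottle remembers when a Dial to a peer last failed and skips new
// attempts for a while, so replication channels do not pile up behind a
// dead node.
type connThrottle struct {
	mu       sync.Mutex
	lastFail map[string]time.Time
}

func (c *connThrottle) dial(ctx context.Context, addr string,
	dialFn func(context.Context, string) error) error {
	c.mu.Lock()
	last := c.lastFail[addr]
	c.mu.Unlock()

	if time.Since(last) < defaultReconnectInterval {
		return errRecentlyFailed
	}
	if err := dialFn(ctx, addr); err != nil {
		c.mu.Lock()
		if c.lastFail == nil {
			c.lastFail = make(map[string]time.Time)
		}
		c.lastFail[addr] = time.Now()
		c.mu.Unlock()
		return err
	}
	return nil
}
```
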
Evgenii Stratonikov
96277c650f [#1445] services/tree: Cache the list of container nodes
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:26:24 +03:00
Evgenii Stratonikov
879c1de59d [#1446] services/tree: Use grpc.WithInsecure only for nodes without TLS
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:23:45 +03:00
Evgenii Stratonikov
6b02df7b8c [#1444] pilorama: Fix TreeMove in bbolt backend
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:23:23 +03:00
Evgenii Stratonikov
578fbdca57 [#1427] services/tree: Parallelize replicator
Before this commit the replication channel filled up quickly under
heavy load. This led to continuously increasing latency for all
write operations. Now it looks better.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:23:19 +03:00
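
A minimal worker-pool sketch of the parallel replicator; the task type, worker count, and error handling are assumptions for illustration:

```go
package tree

import (
	"context"
	"sync"
)

// replicationTask is one tree operation to forward to another node (sketch).
type replicationTask struct {
	addr string
	op   []byte
}

// runReplicator starts several workers draining one task channel instead of
// a single goroutine, so one slow peer no longer backs up the whole queue.
func runReplicator(ctx context.Context, tasks <-chan replicationTask, workers int,
	send func(context.Context, replicationTask) error) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-ctx.Done():
					return
				case t, ok := <-tasks:
					if !ok {
						return
					}
					_ = send(ctx, t) // the real service logs send errors
				}
			}
		}()
	}
	wg.Wait()
}
```
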
Evgenii Stratonikov
aec4f54a00 [#1444] pilorama: Optimize internal encoding/decoding
```
name                      old time/op    new time/op    delta
ApplySequential/bbolt-8     55.5µs ± 4%    55.5µs ± 3%     ~     (p=1.000 n=10+7)
ApplyReorderLast/bbolt-8     108µs ± 6%     112µs ± 8%     ~     (p=0.077 n=9+9)

name                      old alloc/op   new alloc/op   delta
ApplySequential/bbolt-8     28.8kB ± 3%    27.7kB ± 6%   -3.79%  (p=0.005 n=10+10)
ApplyReorderLast/bbolt-8    41.4kB ± 5%    38.9kB ± 5%   -6.19%  (p=0.001 n=10+9)

name                      old allocs/op  new allocs/op  delta
ApplySequential/bbolt-8        262 ± 2%       235 ±10%  -10.41%  (p=0.000 n=10+10)
ApplyReorderLast/bbolt-8       684 ± 6%       616 ± 7%  -10.04%  (p=0.000 n=10+9)
```

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:22:00 +03:00
Evgenii Stratonikov
c9ddc8fbeb [#1446] services/tree: Cache connections to the container nodes
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:20:33 +03:00
Evgenii Stratonikov
06f2681178 [#1442] pilorama: Generate timestamp based on node position in the container
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:19:50 +03:00
Evgenii Stratonikov
55a9a39f9e [#1442] services/tree: Fix log message for failed Apply
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:19:45 +03:00
Evgenii Stratonikov
d244b2658a [#1401] services/tree: Marshal public key once
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:19:31 +03:00
Evgenii Stratonikov
86c6c24b86 [#1401] services/tree: Retransmit queries to container nodes
Also fix a bug where the replicator used the multiaddress instead of the
<host>:<port> format expected by the gRPC library.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:19:21 +03:00
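
The multiaddress bug comes down to address conversion: gRPC wants "host:port", not "/ip4/.../tcp/...". Below is a simplified illustrative parser, not the converter the node actually uses:

```go
package tree

import (
	"fmt"
	"net"
	"strings"
)

// toGRPCAddress converts a simple TCP multiaddress such as
// "/ip4/192.0.2.10/tcp/8080" into the "host:port" form that grpc.Dial expects.
func toGRPCAddress(maddr string) (string, error) {
	parts := strings.Split(strings.TrimPrefix(maddr, "/"), "/")
	if len(parts) != 4 || parts[2] != "tcp" {
		return "", fmt.Errorf("unsupported multiaddress: %q", maddr)
	}
	switch parts[0] {
	case "ip4", "ip6", "dns", "dns4", "dns6":
		return net.JoinHostPort(parts[1], parts[3]), nil
	default:
		return "", fmt.Errorf("unsupported protocol: %q", parts[0])
	}
}
```
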
Evgenii Stratonikov
fa57a8be44 [#1431] pilorama: Use Batch for write transactions
Helps a lot under a concurrent request flow.

```
name                      old time/op    new time/op    delta
ApplySequential/bbolt-8     78.0µs ± 9%    59.8µs ± 4%  -23.39%  (p=0.000 n=10+9)
ApplyReorderLast/bbolt-8     143µs ± 5%     113µs ±15%  -21.06%  (p=0.000 n=10+10)

name                      old alloc/op   new alloc/op   delta
ApplySequential/bbolt-8     56.9kB ± 8%    28.9kB ± 3%  -49.22%  (p=0.000 n=10+10)
ApplyReorderLast/bbolt-8    87.3kB ± 3%    40.9kB ±10%  -53.16%  (p=0.000 n=10+10)

name                      old allocs/op  new allocs/op  delta
ApplySequential/bbolt-8        224 ±11%       262 ± 5%  +16.93%  (p=0.000 n=9+10)
ApplyReorderLast/bbolt-8       518 ± 4%       674 ±11%  +30.09%  (p=0.000 n=10+10)
```

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:55 +03:00
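
The gain comes from bbolt's `(*DB).Batch`, which coalesces write transactions from concurrent callers into a single commit, whereas `Update` commits each call separately. A sketch with an illustrative bucket/key layout:

```go
package pilorama

import "go.etcd.io/bbolt"

// applyOp stores one serialized tree operation. Using db.Batch instead of
// db.Update lets bbolt group concurrent writers into one transaction.
func applyOp(db *bbolt.DB, tree, key, op []byte) error {
	return db.Batch(func(tx *bbolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists(tree)
		if err != nil {
			return err
		}
		return b.Put(key, op)
	})
}
```

Note that `Batch` only pays off when several goroutines write at the same time; a lone caller simply waits out the batch delay.
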
Evgenii Stratonikov
d6d7e35454 [#1431] pilorama: Cache attributes in the index
Currently, to find a node by path we iterate over all the children on
each level. This is far from optimal and scales badly with the number of
nodes on a single level. Thus we introduce "indexed attributes" for
which additional information is stored and which can be used in
`*ByPath` operations. Currently this set only includes the `FileName`
attribute, but this may change in the future.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:23 +03:00
Evgenii Stratonikov
241d4d6810 [#1431] engine: Add benchmark for Select vs TreeGetByPath
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:15 +03:00
Evgenii Stratonikov
b3ca9ce775 [#1329] services/tree: Synchronize from the last stored height
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:09 +03:00
Evgenii Stratonikov
35fa445195 [#1329] pilorama: Allow to benchmark all tree backends
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:01 +03:00
Evgenii Stratonikov
9cbd4271f1 [#1329] services/tree: Implement GetOpLog RPC
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:17:22 +03:00
Evgenii Stratonikov
b19de6116f [#1426] services/tree: Do not replicate to a local node
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:17:15 +03:00
Evgenii Stratonikov
3cc67db083 [#1419] pilorama: Create new nodes in path if needed
Consider a node `{FileName: "dir", Attribute: "xxx"}`. If we add
a new node by path `["dir", "file.txt"]`, create a new intermediate node
with a single attribute.

`GetByPath` now also considers only nodes with a single attribute while building a path.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:16:16 +03:00
Evgenii Stratonikov
730f14e4eb [#1406] pilorama: Return parent from TreeGetMeta
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:15:52 +03:00
Denis Kirillov
7af3424bad [#1404] services/tree: fix nodeId in GetSubTree
Signed-off-by: Denis Kirillov <denis@nspcc.ru>
2022-07-08 13:15:48 +03:00
Evgenii Stratonikov
427f63e359 [#1328] services/tree: Fix grpc import path
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:15:43 +03:00
Evgenii Stratonikov
035963d147 [#1328] services/tree: Implement access control
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:15:41 +03:00
Evgenii Stratonikov
f6589331b6 [#1328] services/tree: Fix proto field numbers
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:53 +03:00
Evgenii Stratonikov
319fd212dc [#1342] neofs-node: Use the default endpoint for tree service
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:48 +03:00
Evgenii Stratonikov
34cab7be82 [#1344] pilorama: Document errors for Get* methods
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:39 +03:00
Evgenii Stratonikov
59bd5ac973 [#1344] engine: Log errors in Tree* operations
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:32 +03:00
Evgenii Stratonikov
e2c88a9983 [#1344] pilorama: Use require.ErrorIs in tests
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:27 +03:00
Evgenii Stratonikov
dd7c4385c6 [#1326] services/tree: Implement GetSubTree RPC
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 12:50:13 +03:00
Evgenii Stratonikov
375c30e687 [#1324] services/tree: Implement Object Tree Service
The Object Tree Service allows changing trees associated with
a container at runtime.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 12:50:12 +03:00
Evgenii Stratonikov
4a65eb7e5f [#1324] engine: Implement Forest interface for storage engine
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 12:47:40 +03:00
Evgenii Stratonikov
cf73feb3f8 [#1324] local_object_storage: Implement tree service backend
In this commit we implement the algorithm for CRDT trees from
https://martin.klepmann.com/papers/move-op.pdf

Each tree is identified by the ID of the container it belongs to
and the tree name itself. Essentially, it is a sequence of operations
which should be applied in chronological order to get the usual tree
representation.

There are two backends for now: a bbolt database and an in-memory one.
The in-memory backend is here for debugging and will eventually act
as a memory cache for the on-disk database.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 12:47:40 +03:00
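
A compact sketch of the operation-log idea behind the move-op CRDT; the field set and in-memory representation are simplifications of the real pilorama types:

```go
package pilorama

import "sort"

// Timestamp orders operations globally; Node identifies a tree node.
type (
	Timestamp = uint64
	Node      = uint64
)

// Move is the single operation type of the move-op CRDT (simplified).
type Move struct {
	Time   Timestamp
	Parent Node
	Child  Node
	Meta   []byte // serialized attributes such as FileName
}

// Tree is identified externally by (container ID, tree name); internally it
// is just the operation log. Replaying the log in chronological order yields
// the usual parent/child representation.
type Tree struct {
	log []Move
}

// Apply inserts the operation keeping the log sorted by time, so replicas
// that receive operations out of order still converge. The real backend also
// undoes and redoes later operations, as in the paper.
func (t *Tree) Apply(op Move) {
	i := sort.Search(len(t.log), func(j int) bool { return t.log[j].Time >= op.Time })
	t.log = append(t.log, Move{})
	copy(t.log[i+1:], t.log[i:])
	t.log[i] = op
}

// Render replays the log into a child-to-parent map.
func (t *Tree) Render() map[Node]Node {
	parent := make(map[Node]Node, len(t.log))
	for _, op := range t.log {
		parent[op.Child] = op.Parent
	}
	return parent
}
```
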
1318 changed files with 46166 additions and 62501 deletions


@ -1,19 +1,19 @@
FROM golang:1.21 as builder
FROM golang:1.17 as builder
ARG BUILD=now
ARG VERSION=dev
ARG REPO=repository
WORKDIR /src
COPY . /src
RUN make bin/frostfs-adm
RUN make bin/neofs-adm
# Executable image
FROM alpine AS frostfs-adm
FROM alpine AS neofs-adm
RUN apk add --no-cache bash
WORKDIR /
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /src/bin/frostfs-adm /bin/frostfs-adm
COPY --from=builder /src/bin/neofs-adm /bin/neofs-adm
CMD ["frostfs-adm"]
CMD ["neofs-adm"]


@ -1,25 +0,0 @@
FROM golang:1.21
WORKDIR /tmp
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
pip \
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# Dash → Bash
RUN echo "dash dash/sh boolean false" | debconf-set-selections
RUN DEBIAN_FRONTEND=noninteractive dpkg-reconfigure dash
RUN useradd -u 1234 -d /home/ci -m ci
USER ci
ENV PATH="$PATH:/home/ci/.local/bin"
COPY .pre-commit-config.yaml .
RUN pip install "pre-commit==3.1.1" \
&& git init . \
&& pre-commit install-hooks \
&& rm -rf /tmp/*


@ -1,19 +1,19 @@
FROM golang:1.21 as builder
FROM golang:1.17 as builder
ARG BUILD=now
ARG VERSION=dev
ARG REPO=repository
WORKDIR /src
COPY . /src
RUN make bin/frostfs-cli
RUN make bin/neofs-cli
# Executable image
FROM alpine AS frostfs-cli
FROM alpine AS neofs-cli
RUN apk add --no-cache bash
WORKDIR /
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /src/bin/frostfs-cli /bin/frostfs-cli
COPY --from=builder /src/bin/neofs-cli /bin/neofs-cli
CMD ["frostfs-cli"]
CMD ["neofs-cli"]


@ -3,6 +3,6 @@ RUN apk add --no-cache bash ca-certificates
WORKDIR /
COPY bin/frostfs-adm /bin/frostfs-adm
COPY bin/neofs-adm /bin/neofs-adm
CMD ["frostfs-adm"]
CMD ["neofs-adm"]


@ -3,6 +3,6 @@ RUN apk add --no-cache bash ca-certificates
WORKDIR /
COPY bin/frostfs-cli /bin/frostfs-cli
COPY bin/neofs-cli /bin/neofs-cli
CMD ["frostfs-cli"]
CMD ["neofs-cli"]


@ -3,6 +3,6 @@ RUN apk add --no-cache bash ca-certificates
WORKDIR /
COPY bin/frostfs-ir /bin/frostfs-ir
COPY bin/neofs-ir /bin/neofs-ir
CMD ["frostfs-ir"]
CMD ["neofs-ir"]


@ -3,6 +3,6 @@ RUN apk add --no-cache bash ca-certificates
WORKDIR /
COPY bin/frostfs-node /bin/frostfs-node
COPY bin/neofs-node /bin/neofs-node
CMD ["frostfs-node"]
CMD ["neofs-node"]


@ -1,18 +1,18 @@
FROM golang:1.21 as builder
FROM golang:1.17 as builder
ARG BUILD=now
ARG VERSION=dev
ARG REPO=repository
WORKDIR /src
COPY . /src
RUN make bin/frostfs-ir
RUN make bin/neofs-ir
# Executable image
FROM alpine AS frostfs-ir
FROM alpine AS neofs-ir
RUN apk add --no-cache bash
WORKDIR /
COPY --from=builder /src/bin/frostfs-ir /bin/frostfs-ir
COPY --from=builder /src/bin/neofs-ir /bin/neofs-ir
CMD ["frostfs-ir"]
CMD ["neofs-ir"]


@ -1,18 +1,18 @@
FROM golang:1.21 as builder
FROM golang:1.17 as builder
ARG BUILD=now
ARG VERSION=dev
ARG REPO=repository
WORKDIR /src
COPY . /src
RUN make bin/frostfs-node
RUN make bin/neofs-node
# Executable image
FROM alpine AS frostfs-node
FROM alpine AS neofs-node
RUN apk add --no-cache bash
WORKDIR /
COPY --from=builder /src/bin/frostfs-node /bin/frostfs-node
COPY --from=builder /src/bin/neofs-node /bin/neofs-node
CMD ["frostfs-node"]
CMD ["neofs-node"]


@ -0,0 +1,19 @@
FROM golang:1.17 as builder
ARG BUILD=now
ARG VERSION=dev
ARG REPO=repository
WORKDIR /src
COPY . /src
RUN make bin/neofs-node
# Executable image
FROM alpine AS neofs-node
RUN apk add --no-cache bash
WORKDIR /
COPY --from=builder /src/bin/neofs-node /bin/neofs-node
COPY --from=builder /src/config/testnet/config.yml /config.yml
CMD ["neofs-node", "--config", "/config.yml"]


@ -5,5 +5,4 @@ docker-compose.yml
Dockerfile
temp
.dockerignore
docker
.cache
docker


@ -1,41 +0,0 @@
name: Build
on: [pull_request]
jobs:
build:
name: Build Components
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.20', '1.21' ]
steps:
- uses: actions/checkout@v3
with:
# Allows to fetch all history for all branches and tags.
# Need this for proper versioning.
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '${{ matrix.go_versions }}'
- name: Build CLI
run: make bin/frostfs-cli
- run: bin/frostfs-cli --version
- name: Build NODE
run: make bin/frostfs-node
- name: Build IR
run: make bin/frostfs-ir
- name: Build ADM
run: make bin/frostfs-adm
- run: bin/frostfs-adm --version
- name: Build LENS
run: make bin/frostfs-lens
- run: bin/frostfs-lens --version


@ -1,21 +0,0 @@
name: DCO action
on: [pull_request]
jobs:
dco:
name: DCO
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
- name: Run commit format checker
uses: https://git.frostfs.info/TrueCloudLab/dco-go@v2
with:
from: 'origin/${{ github.event.pull_request.base.ref }}'


@ -1,73 +0,0 @@
name: Tests and linters
on: [pull_request]
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
cache: true
- name: Install linters
run: make lint-install
- name: Run linters
run: make lint
tests:
name: Tests
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.20', '1.21' ]
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '${{ matrix.go_versions }}'
cache: true
- name: Run tests
run: make test
tests-race:
name: Tests with -race
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
cache: true
- name: Run tests
run: go test ./... -count=1 -race
staticcheck:
name: Staticcheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
cache: true
- name: Install staticcheck
run: make staticcheck-install
- name: Run staticcheck
run: make staticcheck-run


@ -1,22 +0,0 @@
name: Vulncheck
on: [pull_request]
jobs:
vulncheck:
name: Vulncheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
- name: Run govulncheck
run: govulncheck ./...

.gitattributes (1 change)

@ -1,3 +1,2 @@
/**/*.pb.go -diff -merge
/**/*.pb.go linguist-generated=true
/go.sum -diff


@ -2,7 +2,7 @@
name: Bug report
about: Create a report to help us improve
title: ''
labels: community, triage, bug
labels: community, triage
assignees: ''
---
@ -18,11 +18,8 @@ assignees: ''
If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!-- Not obligatory
If no reason/fix/additions for the bug can be suggested,
uncomment the following phrase:
No fix can be suggested by a QA engineer. Further solutions shall be up to developers. -->
<!-- Not obligatory, but suggest a fix/reason for the bug,
or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!-- Provide a link to a live example, or an unambiguous set of steps


@ -18,11 +18,3 @@ assignees: ''
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
## Don't forget to add labels!
- component label (`neofs-adm`, `neofs-storage`, ...)
- issue type (`enhancement`, `refactor`, ...)
- `goodfirstissue`, `helpwanted` if needed
- does this issue belong to an epic?
- priority (`P0`-`P4`) if already triaged
- quarter label (`202XQY`) if possible

.github/logo.svg (197 changes)

@ -1,70 +1,129 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 25.0.1, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Слой_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 184.2 51.8" style="enable-background:new 0 0 184.2 51.8;" xml:space="preserve">
<style type="text/css">
.st0{display:none;}
.st1{display:inline;}
.st2{fill:#01E397;}
.st3{display:inline;fill:#010032;}
.st4{display:inline;fill:#00E599;}
.st5{display:inline;fill:#00AF92;}
.st6{fill:#00C3E5;}
</style>
<g id="Layer_2">
<g id="Layer_1-2" class="st0">
<g class="st1">
<path class="st2" d="M146.6,18.3v7.2h10.9V29h-10.9v10.7h-4V14.8h18v3.5H146.6z"/>
<path class="st2" d="M180,15.7c1.7,0.9,3,2.2,4,3.8l-3,2.7c-0.6-1.3-1.5-2.4-2.6-3.3c-1.3-0.7-2.8-1-4.3-1
c-1.4-0.1-2.8,0.3-4,1.1c-0.9,0.5-1.5,1.5-1.4,2.6c0,1,0.5,1.9,1.4,2.4c1.5,0.8,3.2,1.3,4.9,1.5c1.9,0.3,3.7,0.8,5.4,1.6
c1.2,0.5,2.2,1.3,2.9,2.3c0.6,1,1,2.2,0.9,3.4c0,1.4-0.5,2.7-1.3,3.8c-0.9,1.2-2.1,2.1-3.5,2.6c-1.7,0.6-3.4,0.9-5.2,0.8
c-5,0-8.6-1.6-10.7-5l2.9-2.8c0.7,1.4,1.8,2.5,3.1,3.3c1.5,0.7,3.1,1.1,4.7,1c1.5,0.1,2.9-0.2,4.2-0.9c0.9-0.5,1.5-1.5,1.5-2.6
c0-0.9-0.5-1.8-1.3-2.2c-1.5-0.7-3.1-1.2-4.8-1.5c-1.9-0.3-3.7-0.8-5.5-1.5c-1.2-0.5-2.2-1.4-3-2.4c-0.6-1-1-2.2-0.9-3.4
c0-1.4,0.4-2.7,1.2-3.8c0.8-1.2,2-2.2,3.3-2.8c1.6-0.7,3.4-1.1,5.2-1C176.1,14.3,178.2,14.8,180,15.7z"/>
</g>
<path class="st3" d="M73.3,16.3c1.9,1.9,2.9,4.5,2.7,7.1v15.9h-4V24.8c0-2.6-0.5-4.5-1.6-5.7c-1.2-1.2-2.8-1.8-4.5-1.7
c-1.3,0-2.5,0.3-3.7,0.8c-1.2,0.7-2.2,1.7-2.9,2.9c-0.8,1.5-1.1,3.2-1.1,4.9v13.3h-4V15.1l3.6,1.5v1.7c0.8-1.5,2.1-2.6,3.6-3.3
c1.5-0.8,3.2-1.2,4.9-1.1C68.9,13.8,71.3,14.7,73.3,16.3z"/>
<path class="st3" d="M104.4,28.3H85.6c0.1,2.2,1,4.3,2.5,5.9c1.5,1.4,3.5,2.2,5.6,2.1c1.6,0.1,3.2-0.2,4.6-0.9
c1.1-0.6,2-1.6,2.5-2.8l3.3,1.8c-0.9,1.7-2.3,3.1-4,4c-2,1-4.2,1.5-6.4,1.4c-3.7,0-6.7-1.1-8.8-3.4s-3.2-5.5-3.2-9.6s1-7.2,3-9.5
s5-3.4,8.7-3.4c2.1-0.1,4.2,0.5,6.1,1.5c1.6,1,3,2.5,3.8,4.2c0.9,1.8,1.3,3.9,1.3,5.9C104.6,26.4,104.6,27.4,104.4,28.3z
M88.1,19.3c-1.4,1.5-2.2,3.4-2.4,5.5h15.1c-0.2-2-1-3.9-2.3-5.5c-1.4-1.3-3.2-2-5.1-1.9C91.5,17.3,89.6,18,88.1,19.3z"/>
<path class="st3" d="M131,17.3c2.2,2.3,3.2,5.5,3.2,9.5s-1,7.3-3.2,9.6s-5.1,3.4-8.8,3.4s-6.7-1.1-8.9-3.4s-3.2-5.5-3.2-9.6
s1.1-7.2,3.2-9.5s5.1-3.4,8.9-3.4S128.9,15,131,17.3z M116.2,19.9c-1.5,2-2.2,4.4-2.1,6.9c-0.2,2.5,0.6,5,2.1,7
c1.5,1.7,3.7,2.7,6,2.6c2.3,0.1,4.4-0.9,5.9-2.6c1.5-2,2.3-4.5,2.1-7c0.1-2.5-0.6-4.9-2.1-6.9c-1.5-1.7-3.6-2.7-5.9-2.6
C119.9,17.2,117.7,18.2,116.2,19.9z"/>
<polygon class="st4" points="0,9.1 0,43.7 22.5,51.8 22.5,16.9 46.8,7.9 24.8,0 "/>
<polygon class="st5" points="24.3,17.9 24.3,36.8 46.8,44.9 46.8,9.6 "/>
</g>
<g>
<g>
<path class="st6" d="M41.6,17.5H28.2v6.9h10.4v3.3H28.2v10.2h-3.9V14.2h17.2V17.5z"/>
<path class="st6" d="M45.8,37.9v-18h3.3l0.4,3.2c0.5-1.2,1.2-2.1,2.1-2.7c0.9-0.6,2.1-0.9,3.5-0.9c0.4,0,0.7,0,1.1,0.1
c0.4,0.1,0.7,0.2,0.9,0.3l-0.5,3.4c-0.3-0.1-0.6-0.2-0.9-0.2C55.4,23,54.9,23,54.4,23c-0.7,0-1.5,0.2-2.2,0.6
c-0.7,0.4-1.3,1-1.8,1.8s-0.7,1.8-0.7,3v9.5H45.8z"/>
<path class="st6" d="M68.6,19.6c1.8,0,3.3,0.4,4.6,1.1c1.3,0.7,2.4,1.8,3.1,3.2s1.1,3.1,1.1,5c0,1.9-0.4,3.6-1.1,5
c-0.8,1.4-1.8,2.5-3.1,3.2c-1.3,0.7-2.9,1.1-4.6,1.1s-3.3-0.4-4.6-1.1c-1.3-0.7-2.4-1.8-3.2-3.2c-0.8-1.4-1.2-3.1-1.2-5
c0-1.9,0.4-3.6,1.2-5s1.8-2.5,3.2-3.2C65.3,19.9,66.8,19.6,68.6,19.6z M68.6,22.6c-1.1,0-2,0.2-2.8,0.7c-0.8,0.5-1.3,1.2-1.7,2.1
s-0.6,2.1-0.6,3.5c0,1.3,0.2,2.5,0.6,3.4s1,1.7,1.7,2.2s1.7,0.7,2.8,0.7c1.1,0,2-0.2,2.7-0.7c0.7-0.5,1.3-1.2,1.7-2.2
s0.6-2.1,0.6-3.4c0-1.4-0.2-2.5-0.6-3.5s-1-1.6-1.7-2.1C70.6,22.8,69.6,22.6,68.6,22.6z"/>
<path class="st6" d="M89.2,38.3c-1.8,0-3.4-0.3-4.9-1c-1.5-0.7-2.7-1.7-3.5-3l2.7-2.3c0.5,1,1.3,1.8,2.3,2.4
c1,0.6,2.2,0.9,3.6,0.9c1.1,0,2-0.2,2.6-0.6c0.6-0.4,1-0.9,1-1.6c0-0.5-0.2-0.9-0.5-1.2s-0.9-0.6-1.7-0.8l-3.8-0.8
c-1.9-0.4-3.3-1-4.1-1.9c-0.8-0.9-1.2-1.9-1.2-3.3c0-1,0.3-1.9,0.9-2.7c0.6-0.8,1.4-1.5,2.5-2s2.5-0.8,4-0.8c1.8,0,3.3,0.3,4.6,1
c1.3,0.6,2.2,1.5,2.9,2.7l-2.7,2.2c-0.5-1-1.1-1.7-2-2.1c-0.9-0.5-1.8-0.7-2.8-0.7c-0.8,0-1.4,0.1-2,0.3c-0.6,0.2-1,0.5-1.3,0.8
c-0.3,0.3-0.4,0.7-0.4,1.2c0,0.5,0.2,0.9,0.5,1.3s1,0.6,1.9,0.8l4.1,0.9c1.7,0.3,2.9,0.9,3.7,1.7c0.7,0.8,1.1,1.8,1.1,2.9
c0,1.2-0.3,2.2-0.9,3c-0.6,0.9-1.5,1.6-2.6,2C92.1,38.1,90.7,38.3,89.2,38.3z"/>
<path class="st6" d="M112.8,19.9v3H99.3v-3H112.8z M106.6,14.6v17.9c0,0.9,0.2,1.5,0.7,1.9c0.5,0.4,1.1,0.6,1.9,0.6
c0.6,0,1.2-0.1,1.7-0.3c0.5-0.2,0.9-0.5,1.3-0.8l0.9,2.8c-0.6,0.5-1.2,0.9-2,1.1c-0.8,0.3-1.7,0.4-2.7,0.4c-1,0-2-0.2-2.8-0.5
s-1.5-0.9-2-1.6c-0.5-0.8-0.7-1.7-0.8-3V15.7L106.6,14.6z"/>
<path d="M137.9,17.5h-13.3v6.9h10.4v3.3h-10.4v10.2h-3.9V14.2h17.2V17.5z"/>
<path d="M150.9,13.8c2.1,0,4,0.4,5.5,1.2c1.6,0.8,2.9,2,4,3.5l-2.6,2.5c-0.9-1.4-1.9-2.4-3.1-3c-1.1-0.6-2.5-0.9-4-0.9
c-1.2,0-2.1,0.2-2.8,0.5c-0.7,0.3-1.3,0.7-1.6,1.2c-0.3,0.5-0.5,1.1-0.5,1.7c0,0.7,0.3,1.4,0.8,1.9c0.5,0.6,1.5,1,2.9,1.3
l4.8,1.1c2.3,0.5,3.9,1.3,4.9,2.3c1,1,1.4,2.3,1.4,3.9c0,1.5-0.4,2.7-1.2,3.8c-0.8,1.1-1.9,1.9-3.3,2.5s-3.1,0.9-5,0.9
c-1.7,0-3.2-0.2-4.5-0.6c-1.3-0.4-2.5-1-3.5-1.8c-1-0.7-1.8-1.6-2.5-2.6l2.7-2.7c0.5,0.8,1.1,1.6,1.9,2.2
c0.8,0.7,1.7,1.2,2.7,1.5c1,0.4,2.2,0.5,3.4,0.5c1.1,0,2.1-0.1,2.9-0.4c0.8-0.3,1.4-0.7,1.8-1.2c0.4-0.5,0.6-1.1,0.6-1.9
c0-0.7-0.2-1.3-0.7-1.8c-0.5-0.5-1.3-0.9-2.6-1.2l-5.2-1.2c-1.4-0.3-2.6-0.8-3.6-1.3c-0.9-0.6-1.6-1.3-2.1-2.1s-0.7-1.8-0.7-2.8
c0-1.3,0.4-2.6,1.1-3.7c0.7-1.1,1.8-2,3.2-2.6C147.3,14.1,148.9,13.8,150.9,13.8z"/>
</g>
</g>
</g>
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
sodipodi:docname="logo_fs.svg"
inkscape:version="1.0 (4035a4fb49, 2020-05-01)"
id="svg57"
version="1.1"
viewBox="0 0 105 25"
height="25mm"
width="105mm">
<defs
id="defs51">
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath434">
<path
d="M 0,0 H 1366 V 768 H 0 Z"
id="path432" />
</clipPath>
</defs>
<sodipodi:namedview
inkscape:window-maximized="0"
inkscape:window-y="0"
inkscape:window-x="130"
inkscape:window-height="1040"
inkscape:window-width="1274"
height="50mm"
units="mm"
showgrid="false"
inkscape:document-rotation="0"
inkscape:current-layer="layer1"
inkscape:document-units="mm"
inkscape:cy="344.49897"
inkscape:cx="468.64708"
inkscape:zoom="0.7"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
borderopacity="1.0"
bordercolor="#666666"
pagecolor="#ffffff"
id="base" />
<metadata
id="metadata54">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
id="layer1"
inkscape:groupmode="layer"
inkscape:label="Layer 1">
<g
id="g424"
transform="matrix(0.35277777,0,0,-0.35277777,63.946468,10.194047)">
<path
d="m 0,0 v -8.093 h 12.287 v -3.94 H 0 V -24.067 H -4.534 V 3.898 H 15.677 V 0 Z"
style="fill:#00e396;fill-opacity:1;fill-rule:nonzero;stroke:none"
id="path426" />
</g>
<g
transform="matrix(0.35277777,0,0,-0.35277777,-315.43002,107.34005)"
id="g428">
<g
id="g430"
clip-path="url(#clipPath434)">
<g
id="g436"
transform="translate(1112.874,278.2981)">
<path
d="M 0,0 C 1.822,-0.932 3.354,-2.359 4.597,-4.28 L 1.165,-7.373 c -0.791,1.695 -1.779,2.924 -2.966,3.686 -1.186,0.763 -2.768,1.145 -4.745,1.145 -1.949,0 -3.461,-0.389 -4.534,-1.166 -1.074,-0.777 -1.61,-1.772 -1.61,-2.987 0,-1.13 0.523,-2.027 1.568,-2.69 1.045,-0.664 2.909,-1.236 5.593,-1.716 2.514,-0.452 4.512,-1.024 5.995,-1.716 1.483,-0.693 2.564,-1.554 3.242,-2.585 0.677,-1.031 1.016,-2.309 1.016,-3.834 0,-1.639 -0.466,-3.079 -1.398,-4.322 -0.932,-1.243 -2.239,-2.197 -3.919,-2.86 -1.681,-0.664 -3.623,-0.996 -5.826,-0.996 -5.678,0 -9.689,1.892 -12.033,5.678 l 3.178,3.178 c 0.903,-1.695 2.068,-2.939 3.495,-3.729 1.426,-0.791 3.199,-1.186 5.318,-1.186 2.005,0 3.58,0.345 4.724,1.038 1.144,0.692 1.716,1.674 1.716,2.945 0,1.017 -0.516,1.835 -1.547,2.457 -1.031,0.621 -2.832,1.172 -5.402,1.653 -2.571,0.479 -4.618,1.073 -6.143,1.779 -1.526,0.706 -2.635,1.582 -3.326,2.627 -0.693,1.045 -1.039,2.316 -1.039,3.813 0,1.582 0.438,3.023 1.314,4.322 0.875,1.299 2.14,2.33 3.792,3.093 1.653,0.763 3.58,1.144 5.783,1.144 C -4.018,1.398 -1.822,0.932 0,0"
style="fill:#00e396;fill-opacity:1;fill-rule:nonzero;stroke:none"
id="path438" />
</g>
<g
id="g440"
transform="translate(993.0239,277.5454)">
<path
d="m 0,0 c 2.054,-1.831 3.083,-4.465 3.083,-7.902 v -17.935 h -4.484 v 16.366 c 0,2.914 -0.626,5.024 -1.877,6.332 -1.253,1.308 -2.924,1.962 -5.016,1.962 -1.495,0 -2.896,-0.327 -4.204,-0.981 -1.308,-0.654 -2.381,-1.719 -3.222,-3.194 -0.841,-1.477 -1.261,-3.335 -1.261,-5.576 v -14.909 h -4.484 V 1.328 l 4.086,-1.674 0.118,-1.84 c 0.933,1.681 2.222,2.923 3.867,3.727 1.643,0.803 3.493,1.205 5.548,1.205 C -4.671,2.746 -2.055,1.83 0,0"
style="fill:#000033;fill-opacity:1;fill-rule:nonzero;stroke:none"
id="path442" />
</g>
<g
id="g444"
transform="translate(1027.9968,264.0386)">
<path
d="m 0,0 h -21.128 c 0.261,-2.84 1.205,-5.044 2.83,-6.613 1.625,-1.57 3.727,-2.355 6.305,-2.355 2.054,0 3.763,0.356 5.128,1.065 1.363,0.71 2.288,1.738 2.774,3.083 l 3.755,-1.961 c -1.121,-1.981 -2.616,-3.495 -4.484,-4.54 -1.868,-1.046 -4.259,-1.569 -7.173,-1.569 -4.223,0 -7.538,1.289 -9.948,3.867 -2.41,2.578 -3.615,6.146 -3.615,10.704 0,4.558 1.149,8.127 3.447,10.705 2.298,2.578 5.557,3.867 9.779,3.867 2.615,0 4.876,-0.58 6.782,-1.738 1.905,-1.158 3.343,-2.728 4.315,-4.707 C -0.262,7.827 0.224,5.605 0.224,3.139 0.224,2.092 0.149,1.046 0,0 m -18.298,10.144 c -1.513,-1.457 -2.438,-3.512 -2.775,-6.165 h 16.982 c -0.3,2.615 -1.159,4.661 -2.578,6.137 -1.42,1.476 -3.307,2.214 -5.661,2.214 -2.466,0 -4.455,-0.728 -5.968,-2.186"
style="fill:#000033;fill-opacity:1;fill-rule:nonzero;stroke:none"
id="path446" />
</g>
<g
id="g448"
transform="translate(1057.8818,276.4246)">
<path
d="m 0,0 c 2.41,-2.578 3.615,-6.147 3.615,-10.705 0,-4.558 -1.205,-8.126 -3.615,-10.704 -2.41,-2.578 -5.726,-3.867 -9.948,-3.867 -4.222,0 -7.537,1.289 -9.947,3.867 -2.41,2.578 -3.615,6.146 -3.615,10.704 0,4.558 1.205,8.127 3.615,10.705 2.41,2.578 5.725,3.867 9.947,3.867 C -5.726,3.867 -2.41,2.578 0,0 m -16.617,-2.858 c -1.607,-1.906 -2.41,-4.522 -2.41,-7.847 0,-3.326 0.803,-5.94 2.41,-7.846 1.607,-1.905 3.83,-2.858 6.669,-2.858 2.839,0 5.063,0.953 6.67,2.858 1.606,1.906 2.41,4.52 2.41,7.846 0,3.325 -0.804,5.941 -2.41,7.847 C -4.885,-0.953 -7.109,0 -9.948,0 c -2.839,0 -5.062,-0.953 -6.669,-2.858"
style="fill:#000033;fill-opacity:1;fill-rule:nonzero;stroke:none"
id="path450" />
</g>
</g>
</g>
<g
id="g452"
transform="matrix(0.35277777,0,0,-0.35277777,5.8329581,6.5590171)">
<path
d="m 0,0 0.001,-38.946 25.286,-9.076 V -8.753 L 52.626,1.321 27.815,10.207 Z"
style="fill:#00e599;fill-opacity:1;fill-rule:nonzero;stroke:none"
id="path454" />
</g>
<g
id="g456"
transform="matrix(0.35277777,0,0,-0.35277777,15.479008,10.041927)">
<path
d="M 0,0 V -21.306 L 25.293,-30.364 25.282,9.347 Z"
style="fill:#00b091;fill-opacity:1;fill-rule:nonzero;stroke:none"
id="path458" />
</g>
</g>
</svg>

Before: 5.5 KiB | After: 6.5 KiB

.github/workflows/dco.yml (new file, 21 lines)

@ -0,0 +1,21 @@
name: DCO check
on:
pull_request:
branches:
- master
jobs:
commits_check_job:
runs-on: ubuntu-latest
name: Commits Check
steps:
- name: Get PR Commits
id: 'get-pr-commits'
uses: tim-actions/get-pr-commits@master
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: DCO Check
uses: tim-actions/dco@master
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}

.github/workflows/go.yml (new file, 57 lines)

@ -0,0 +1,57 @@
name: neofs-node tests
on:
push:
branches:
- master
paths-ignore:
- '*.md'
pull_request:
branches:
- master
paths-ignore:
- '*.md'
jobs:
test:
runs-on: ubuntu-20.04
strategy:
matrix:
go: [ '1.17.x', '1.18.x' ]
steps:
- name: Setup go
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go }}
- name: Check out code
uses: actions/checkout@v2
- name: Cache go mod
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ matrix.go }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ matrix.go }}-
- name: Run go test
run: go test -coverprofile=coverage.txt -covermode=atomic ./...
- name: Codecov
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
run: bash <(curl -s https://codecov.io/bash)
lint:
runs-on: ubuntu-20.04
steps:
- name: Check out code
uses: actions/checkout@v2
- name: golangci-lint
uses: golangci/golangci-lint-action@v2
with:
version: v1.45.2
args: --timeout=5m
only-new-issues: true

.github/workflows/project.yml (new file, 19 lines)

@ -0,0 +1,19 @@
name: Add issues to NeoFS Core project
on:
issues:
types:
- opened
- transferred
- labeled
jobs:
add-to-project:
name: Add issue to project
runs-on: ubuntu-latest
steps:
- uses: actions/add-to-project@main
with:
project-url: https://github.com/orgs/nspcc-dev/projects/1
github-token: ${{ secrets.GITHUB_TOKEN }}
labeled: triage

.gitignore (21 changes)

@ -28,24 +28,3 @@ testfile
# misc
.neofs-cli.yml
# debhelpers
debian/*debhelper*
# logfiles
debian/*.log
# .substvars
debian/*.substvars
# .bash-completion
debian/*.bash-completion
# Install folders and files
debian/frostfs-cli/
debian/frostfs-ir/
debian/files
debian/frostfs-storage/
debian/changelog
man/
debs/


@ -1,11 +0,0 @@
[general]
fail-without-commits=True
regex-style-search=True
contrib=CC1
[title-match-regex]
regex=^\[\#[0-9Xx]+\]\s
[ignore-by-title]
regex=^Release(.*)
ignore=title-match-regex


@ -4,7 +4,7 @@
# options for analysis running
run:
# timeout for analysis, e.g. 30s, 5m, default is 1m
timeout: 20m
timeout: 5m
# include test files or not, default is true
tests: false
@ -24,28 +24,6 @@ linters-settings:
govet:
# report about shadowed variables
check-shadowing: false
staticcheck:
checks: ["all", "-SA1019"] # TODO Enable SA1019 after deprecated warning are fixed.
funlen:
lines: 80 # default 60
statements: 60 # default 40
gocognit:
min-complexity: 40 # default 30
importas:
no-unaliased: true
no-extra-aliases: false
alias:
pkg: git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object
alias: objectSDK
custom:
truecloudlab-linters:
path: bin/external_linters.so
original-url: git.frostfs.info/TrueCloudLab/linters.git
settings:
noliteral:
target-methods : ["reportFlushError", "reportError"]
disable-packages: ["codes", "err", "res","exec"]
constants-package: "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
linters:
enable:
@ -56,28 +34,16 @@ linters:
# some default golangci-lint linters
- errcheck
- gosimple
- godot
- ineffassign
- staticcheck
- typecheck
- unused
# extra linters
- bidichk
- durationcheck
- exhaustive
- exportloopref
- gofmt
- goimports
- misspell
- predeclared
- reassign
- whitespace
- containedctx
- funlen
- gocognit
- contextcheck
- importas
- truecloudlab-linters
- goimports
- unused
disable-all: true
fast: false


@ -1,54 +0,0 @@
ci:
autofix_prs: false
repos:
- repo: https://github.com/jorisroovers/gitlint
rev: v0.19.1
hooks:
- id: gitlint
stages: [commit-msg]
- id: gitlint-ci
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-added-large-files
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: check-shebang-scripts-are-executable
- id: check-merge-conflict
- id: check-json
- id: check-xml
- id: check-yaml
- id: trailing-whitespace
args: [--markdown-linebreak-ext=md]
- id: end-of-file-fixer
exclude: ".key$"
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.9.0.5
hooks:
- id: shellcheck
- repo: local
hooks:
- id: make-lint
name: Run Make Lint
entry: make lint
language: system
pass_filenames: false
- repo: local
hooks:
- id: go-unit-tests
name: go unit tests
entry: make test
pass_filenames: false
types: [go]
language: system
- repo: https://github.com/TekWizely/pre-commit-golang
rev: v1.0.0-rc.1
hooks:
- id: go-staticcheck-repo-mod
- id: go-mod-tidy


@ -1,11 +0,0 @@
pipeline:
# Kludge for non-root containers under WoodPecker
fix-ownership:
image: alpine:latest
commands: chown -R 1234:1234 .
pre-commit:
image: git.frostfs.info/truecloudlab/frostfs-ci:v0.36
commands:
- export HOME="$(getent passwd $(id -u) | cut '-d:' -f6)"
- pre-commit run --hook-stage manual

File diff suppressed because it is too large.

@ -3,8 +3,8 @@
First, thank you for contributing! We love and encourage pull requests from
everyone. Please follow the guidelines:
- Check the open [issues](https://git.frostfs.info/TrueCloudLab/frostfs-node/issues) and
[pull requests](https://git.frostfs.info/TrueCloudLab/frostfs-node/pulls) for existing
- Check the open [issues](https://github.com/nspcc-dev/neofs-node/issues) and
[pull requests](https://github.com/nspcc-dev/neofs-node/pulls) for existing
discussions.
- Open an issue first, to discuss a new feature or enhancement.
@ -23,23 +23,23 @@ everyone. Please follow the guidelines:
## Development Workflow
Start by forking the `frostfs-node` repository, make changes in a branch and then
Start by forking the `neofs-node` repository, make changes in a branch and then
send a pull request. We encourage pull requests to discuss code changes. Here
are the steps in details:
### Set up your Forgejo repository
Fork [FrostFS node upstream](https://git.frostfs.info/TrueCloudLab/frostfs-node) source
### Set up your GitHub Repository
Fork [NeoFS node upstream](https://github.com/nspcc-dev/neofs-node/fork) source
repository to your own personal repository. Copy the URL of your fork (you will
need it for the `git clone` command below).
```sh
$ git clone https://git.frostfs.info/TrueCloudLab/frostfs-node
$ git clone https://github.com/nspcc-dev/neofs-node
```
### Set up git remote as ``upstream``
```sh
$ cd frostfs-node
$ git remote add upstream https://git.frostfs.info/TrueCloudLab/frostfs-node
$ cd neofs-node
$ git remote add upstream https://github.com/nspcc-dev/neofs-node
$ git fetch upstream
$ git merge upstream/master
...
@ -58,7 +58,7 @@ $ git checkout -b feature/123-something_awesome
After your code changes, make sure
- To add test cases for the new code.
- To run `make lint` and `make staticcheck-run`
- To run `make lint`
- To squash your commits into a single commit or a series of logically separated
commits run `git rebase -i`. It's okay to force update your pull request.
- To run `make test` and `make all` completes.
@ -79,7 +79,7 @@ Description
```
```
$ git commit -sam '[#123] Add some feature'
$ git commit -am '[#123] Add some feature'
```
### Push to the branch
@ -89,8 +89,8 @@ $ git push origin feature/123-something_awesome
```
### Create a Pull Request
Pull requests can be created via Forgejo. Refer to [this
document](https://docs.codeberg.org/collaborating/pull-requests-and-git-flow/) for
Pull requests can be created via GitHub. Refer to [this
document](https://help.github.com/articles/creating-a-pull-request/) for
detailed steps on how to create a pull request. After a Pull Request gets peer
reviewed and approved, it will be merged.
@ -106,8 +106,7 @@ contributors".
To sign your work, just add a line like this at the end of your commit message:
```
Signed-off-by: Samii Sakisaka <samii@ivunojikan.co.jp>
Signed-off-by: Samii Sakisaka <samii@nspcc.ru>
```
This can easily be done with the `--signoff` option to `git commit`.


@ -1,7 +1,5 @@
# Credits
FrostFS continues the development of NeoFS.
Initial NeoFS research and development (2018-2020) was done by
[NeoSPCC](https://nspcc.ru) team.
@ -10,7 +8,7 @@ In alphabetical order:
- Alexey Vanin
- Anastasia Prasolova
- Anatoly Bogatyrev
- Evgeny Kulikov
- Evgeny Kulikov
- Evgeny Stratonikov
- Leonard Liubich
- Sergei Liubich

Makefile (98 changes, executable file → normal file)

@ -2,14 +2,15 @@
SHELL = bash
REPO ?= $(shell go list -m)
VERSION ?= $(shell git describe --tags --dirty --match "v*" --always --abbrev=8 2>/dev/null || cat VERSION 2>/dev/null || echo "develop")
VERSION ?= $(shell git describe --tags --dirty --always 2>/dev/null || cat VERSION 2>/dev/null || echo "develop")
BUILD ?= $(shell date -u --iso=seconds)
DEBUG ?= false
HUB_IMAGE ?= truecloudlab/frostfs
HUB_IMAGE ?= nspccdev/neofs
HUB_TAG ?= "$(shell echo ${VERSION} | sed 's/^v//')"
GO_VERSION ?= 1.21
LINT_VERSION ?= 1.54.0
TRUECLOUDLAB_LINT_VERSION ?= 0.0.2
GO_VERSION ?= 1.17
LINT_VERSION ?= 1.46.2
ARCH = amd64
BIN = bin
@ -17,24 +18,13 @@ RELEASE = release
DIRS = $(BIN) $(RELEASE)
# List of binaries to build.
CMDS = $(notdir $(basename $(wildcard cmd/frostfs-*)))
CMDS = $(notdir $(basename $(wildcard cmd/*)))
BINS = $(addprefix $(BIN)/, $(CMDS))
# .deb package versioning
OS_RELEASE = $(shell lsb_release -cs)
PKG_VERSION ?= $(shell echo $(VERSION) | sed "s/^v//" | \
sed -E "s/(.*)-(g[a-fA-F0-9]{6,8})(.*)/\1\3~\2/" | \
sed "s/-/~/")-${OS_RELEASE}
OUTPUT_LINT_DIR ?= $(shell pwd)/bin
LINT_DIR = $(OUTPUT_LINT_DIR)/golangci-lint-$(LINT_VERSION)-v$(TRUECLOUDLAB_LINT_VERSION)
TMP_DIR := .cache
.PHONY: help all images dep clean fmts fmt imports test lint docker/lint
prepare-release debpackage pre-commit unpre-commit
.PHONY: help all images dep clean fmts fmt imports test lint docker/lint prepare-release
# To build a specific binary, use it's name prefix with bin/ as a target
# For example `make bin/frostfs-node` will build only storage node binary
# For example `make bin/neofs-node` will build only storage node binary
# Just `make` will build all possible binaries
all: $(DIRS) $(BINS)
@ -45,7 +35,9 @@ $(BINS): $(DIRS) dep
@echo "⇒ Build $@"
CGO_ENABLED=0 \
go build -v -trimpath \
-ldflags "-X $(REPO)/misc.Version=$(VERSION)" \
-ldflags "-X $(REPO)/misc.Version=$(VERSION) \
-X $(REPO)/misc.Build=$(BUILD) \
-X $(REPO)/misc.Debug=$(DEBUG)" \
-o $@ ./cmd/$(notdir $@)
$(DIRS):
@ -55,7 +47,7 @@ $(DIRS):
# Prepare binaries and archives for release
.ONESHELL:
prepare-release: docker/all
@for file in `ls -1 $(BIN)/frostfs-*`; do
@for file in `ls -1 $(BIN)/neofs-*`; do
cp $$file $(RELEASE)/`basename $$file`-$(ARCH)
strip $(RELEASE)/`basename $$file`-$(ARCH)
tar -czf $(RELEASE)/`basename $$file`-$(ARCH).tar.gz $(RELEASE)/`basename $$file`-$(ARCH)
@ -72,26 +64,26 @@ dep:
# Regenerate proto files:
protoc:
@GOPRIVATE=github.com/TrueCloudLab go mod vendor
@GOPRIVATE=github.com/nspcc-dev go mod vendor
# Install specific version for protobuf lib
@go list -f '{{.Path}}/...@{{.Version}}' -m github.com/golang/protobuf | xargs go install -v
@GOBIN=$(abspath $(BIN)) go install -mod=mod -v git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/util/protogen
@GOBIN=$(abspath $(BIN)) go install -mod=mod -v github.com/nspcc-dev/neofs-api-go/v2/util/protogen
# Protoc generate
@for f in `find . -type f -name '*.proto' -not -path './vendor/*'`; do \
echo "⇒ Processing $$f "; \
protoc \
--proto_path=.:./vendor:/usr/local/include \
--plugin=protoc-gen-go-frostfs=$(BIN)/protogen \
--go-frostfs_out=. --go-frostfs_opt=paths=source_relative \
--plugin=protoc-gen-go-neofs=$(BIN)/protogen \
--go-neofs_out=. --go-neofs_opt=paths=source_relative \
--go_out=. --go_opt=paths=source_relative \
--go-grpc_opt=require_unimplemented_servers=false \
--go-grpc_out=. --go-grpc_opt=paths=source_relative $$f; \
done
rm -rf vendor
# Build FrostFS component's docker image
# Build NeoFS component's docker image
image-%:
@echo "⇒ Build FrostFS $* docker image "
@echo "⇒ Build NeoFS $* docker image "
@docker build \
--build-arg REPO=$(REPO) \
--build-arg VERSION=$(VERSION) \
@ -100,7 +92,7 @@ image-%:
-t $(HUB_IMAGE)-$*:$(HUB_TAG) .
# Build all Docker images
images: image-storage image-ir image-cli image-adm
images: image-storage image-ir image-cli image-adm image-storage-testnet
# Build dirty local Docker images
dirty-images: image-dirty-storage image-dirty-ir image-dirty-cli image-dirty-adm
@ -131,36 +123,11 @@ imports:
# Run Unit Test with go test
test:
@echo "⇒ Running go test"
@go test ./... -count=1
pre-commit-run:
@pre-commit run -a --hook-stage manual
# Install linters
lint-install:
@mkdir -p $(TMP_DIR)
@rm -rf $(TMP_DIR)/linters
@git -c advice.detachedHead=false clone --branch v$(TRUECLOUDLAB_LINT_VERSION) https://git.frostfs.info/TrueCloudLab/linters.git $(TMP_DIR)/linters
@@make -C $(TMP_DIR)/linters lib CGO_ENABLED=1 OUT_DIR=$(OUTPUT_LINT_DIR)
@rm -rf $(TMP_DIR)/linters
@rmdir $(TMP_DIR) 2>/dev/null || true
@CGO_ENABLED=1 GOBIN=$(LINT_DIR) go install github.com/golangci/golangci-lint/cmd/golangci-lint@v$(LINT_VERSION)
@go test ./...
# Run linters
lint:
@if [ ! -d "$(LINT_DIR)" ]; then \
echo "Run make lint-install"; \
exit 1; \
fi
$(LINT_DIR)/golangci-lint run
# Install staticcheck
staticcheck-install:
@go install honnef.co/go/tools/cmd/staticcheck@latest
# Run staticcheck
staticcheck-run:
@staticcheck ./...
@golangci-lint --timeout=5m run
# Run linters in Docker
docker/lint:
@ -170,33 +137,12 @@ docker/lint:
--env HOME=/src \
golangci/golangci-lint:v$(LINT_VERSION) bash -c 'cd /src/ && make lint'
# Activate pre-commit hooks
pre-commit:
pre-commit install -t pre-commit -t commit-msg
# Deactivate pre-commit hooks
unpre-commit:
pre-commit uninstall -t pre-commit -t commit-msg
# Print version
version:
@echo $(VERSION)
# Delete built artifacts
clean:
rm -rf vendor
rm -rf .cache
rm -rf $(BIN)
rm -rf $(RELEASE)
# Package for Debian
debpackage:
dch -b --package frostfs-node \
--controlmaint \
--newversion $(PKG_VERSION) \
--distribution $(OS_RELEASE) \
"Please see CHANGELOG.md for code changes for $(VERSION)"
dpkg-buildpackage --no-sign -b
debclean:
dh clean


@ -1,40 +1,39 @@
<p align="center">
<img src="./.github/logo.svg" width="500px" alt="FrostFS">
<img src="./.github/logo.svg" width="500px" alt="NeoFS">
</p>
<p align="center">
<a href="https://frostfs.info">FrostFS</a> is a decentralized distributed object storage integrated with the <a href="https://neo.org">NEO Blockchain</a>.
<a href="https://fs.neo.org">NeoFS</a> is a decentralized distributed object storage integrated with the <a href="https://neo.org">NEO Blockchain</a>.
</p>
---
[![Report](https://goreportcard.com/badge/github.com/TrueCloudLab/frostfs-node)](https://goreportcard.com/report/github.com/TrueCloudLab/frostfs-node)
![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/TrueCloudLab/frostfs-node?sort=semver)
![License](https://img.shields.io/github/license/TrueCloudLab/frostfs-node.svg?style=popout)
[![Report](https://goreportcard.com/badge/github.com/nspcc-dev/neofs-node)](https://goreportcard.com/report/github.com/nspcc-dev/neofs-node)
![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/nspcc-dev/neofs-node?sort=semver)
![License](https://img.shields.io/github/license/nspcc-dev/neofs-node.svg?style=popout)
# Overview
FrostFS Nodes are organized in a peer-to-peer network that takes care of storing
NeoFS Nodes are organized in a peer-to-peer network that takes care of storing
and distributing user's data. Any Neo user may participate in the network and
get paid for providing storage resources to other users or store their data in
FrostFS and pay a competitive price for it.
NeoFS and pay a competitive price for it.
Users can reliably store object data in the FrostFS network and have a transparent
Users can reliably store object data in the NeoFS network and have a transparent
data placement process due to a decentralized architecture and flexible storage
policies. Each node is responsible for executing the storage policies that the
users select for geographical location, reliability level, number of nodes, type
of disks, capacity, etc. Thus, FrostFS gives full control over data to users.
of disks, capacity, etc. Thus, NeoFS gives full control over data to users.
Deep [Neo Blockchain](https://neo.org) integration allows FrostFS to be used by
Deep [Neo Blockchain](https://neo.org) integration allows NeoFS to be used by
dApps directly from
[NeoVM](https://docs.neo.org/docs/en-us/basic/technology/neovm.html) on the
[Smart Contract](https://docs.neo.org/docs/en-us/intro/glossary.html)
code level. This way dApps are not limited to on-chain storage and can
manipulate large amounts of data without paying a prohibitive price.
FrostFS has a native [gRPC API](https://git.frostfs.info/TrueCloudLab/frostfs-api) and has
NeoFS has a native [gRPC API](https://github.com/nspcc-dev/neofs-api) and has
protocol gateways for popular protocols such as [AWS
S3](https://github.com/TrueCloudLab/frostfs-s3-gw),
[HTTP](https://github.com/TrueCloudLab/frostfs-http-gw),
S3](https://github.com/nspcc-dev/neofs-s3-gw),
[HTTP](https://github.com/nspcc-dev/neofs-http-gw),
[FUSE](https://wikipedia.org/wiki/Filesystem_in_Userspace) and
[sFTP](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol) allowing
developers to integrate applications without rewriting their code.
@ -44,12 +43,12 @@ developers to integrate applications without rewriting their code.
Now, we only support GNU/Linux on amd64 CPUs with AVX/AVX2 instructions. More
platforms will be officially supported after release `1.0`.
The latest version of frostfs-node works with frostfs-contract
[v0.16.0](https://github.com/TrueCloudLab/frostfs-contract/releases/tag/v0.16.0).
The latest version of neofs-node works with neofs-contract
[v0.13.0](https://github.com/nspcc-dev/neofs-contract/releases/tag/v0.13.0).
# Building
To make all binaries you need Go 1.20+ and `make`:
To make all binaries you need Go 1.17+ and `make`:
```
make all
```
@ -57,7 +56,7 @@ The resulting binaries will appear in `bin/` folder.
To make a specific binary use:
```
make bin/frostfs-<name>
make bin/neofs-<name>
```
See the list of all available commands in the `cmd` folder.
@ -66,12 +65,12 @@ See the list of all available commands in the `cmd` folder.
Building can also be performed in a container:
```
make docker/all # build all binaries
make docker/bin/frostfs-<name> # build a specific binary
make docker/bin/neofs-<name> # build a specific binary
```
## Docker images
To make docker images suitable for use in [frostfs-dev-env](https://github.com/TrueCloudLab/frostfs-dev-env/) use:
To make docker images suitable for use in [neofs-dev-env](https://github.com/nspcc-dev/neofs-dev-env/) use:
```
make images
```
@ -86,7 +85,7 @@ the feature/topic you are going to implement.
# Credits
FrostFS is maintained by [True Cloud Lab](https://github.com/TrueCloudLab/) with the help and
NeoFS is maintained by [NeoSPCC](https://nspcc.ru) with the help and
contributions from community members.
Please see [CREDITS](CREDITS.md) for details.


@ -1 +1 @@
v0.36.0
v0.28.3


@ -1,14 +0,0 @@
package commonflags
const (
ConfigFlag = "config"
ConfigFlagShorthand = "c"
ConfigFlagUsage = "Config file"
ConfigDirFlag = "config-dir"
ConfigDirFlagUsage = "Config directory"
Verbose = "verbose"
VerboseShorthand = "v"
VerboseUsage = "Verbose output"
)


@ -1,29 +0,0 @@
package config
import (
"github.com/spf13/cobra"
)
const configPathFlag = "path"
var (
// RootCmd is a root command of config section.
RootCmd = &cobra.Command{
Use: "config",
Short: "Section for frostfs-adm config related commands",
}
initCmd = &cobra.Command{
Use: "init",
Short: "Initialize basic frostfs-adm configuration file",
Example: `frostfs-adm config init
frostfs-adm config init --path .config/frostfs-adm.yml`,
RunE: initConfig,
}
)
func init() {
RootCmd.AddCommand(initCmd)
initCmd.Flags().String(configPathFlag, "", "Path to config (default ~/.frostfs/adm/config.yml)")
}


@ -1,242 +0,0 @@
package morph
import (
"crypto/elliptic"
"errors"
"fmt"
"math/big"
"git.frostfs.info/TrueCloudLab/frostfs-contract/nns"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"github.com/nspcc-dev/neo-go/pkg/core/native/noderoles"
"github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/address"
"github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
"github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/gas"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/rolemgmt"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
"github.com/nspcc-dev/neo-go/pkg/vm/vmstate"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
type accBalancePair struct {
scriptHash util.Uint160
balance *big.Int
}
const (
dumpBalancesStorageFlag = "storage"
dumpBalancesAlphabetFlag = "alphabet"
dumpBalancesProxyFlag = "proxy"
dumpBalancesUseScriptHashFlag = "script-hash"
)
func dumpBalances(cmd *cobra.Command, _ []string) error {
var (
dumpStorage, _ = cmd.Flags().GetBool(dumpBalancesStorageFlag)
dumpAlphabet, _ = cmd.Flags().GetBool(dumpBalancesAlphabetFlag)
dumpProxy, _ = cmd.Flags().GetBool(dumpBalancesProxyFlag)
nnsCs *state.Contract
nmHash util.Uint160
)
c, err := getN3Client(viper.GetViper())
if err != nil {
return err
}
inv := invoker.New(c, nil)
if dumpStorage || dumpAlphabet || dumpProxy {
nnsCs, err = c.GetContractStateByID(1)
if err != nil {
return fmt.Errorf("can't get NNS contract info: %w", err)
}
nmHash, err = nnsResolveHash(inv, nnsCs.Hash, netmapContract+".frostfs")
if err != nil {
return fmt.Errorf("can't get netmap contract hash: %w", err)
}
}
irList, err := fetchIRNodes(c, rolemgmt.Hash)
if err != nil {
return err
}
if err := fetchBalances(inv, gas.Hash, irList); err != nil {
return err
}
printBalances(cmd, "Inner ring nodes balances:", irList)
if dumpStorage {
if err := printStorageNodeBalances(cmd, inv, nmHash); err != nil {
return err
}
}
if dumpProxy {
if err := printProxyContractBalance(cmd, inv, nnsCs.Hash); err != nil {
return err
}
}
if dumpAlphabet {
if err := printAlphabetContractBalances(cmd, c, inv, len(irList), nnsCs.Hash); err != nil {
return err
}
}
return nil
}
func printStorageNodeBalances(cmd *cobra.Command, inv *invoker.Invoker, nmHash util.Uint160) error {
arr, err := unwrap.Array(inv.Call(nmHash, "netmap"))
if err != nil {
return errors.New("can't fetch the list of storage nodes")
}
snList := make([]accBalancePair, len(arr))
for i := range arr {
node, ok := arr[i].Value().([]stackitem.Item)
if !ok || len(node) == 0 {
return errors.New("can't parse the list of storage nodes")
}
bs, err := node[0].TryBytes()
if err != nil {
return errors.New("can't parse the list of storage nodes")
}
var ni netmap.NodeInfo
if err := ni.Unmarshal(bs); err != nil {
return fmt.Errorf("can't parse the list of storage nodes: %w", err)
}
pub, err := keys.NewPublicKeyFromBytes(ni.PublicKey(), elliptic.P256())
if err != nil {
return fmt.Errorf("can't parse storage node public key: %w", err)
}
snList[i].scriptHash = pub.GetScriptHash()
}
if err := fetchBalances(inv, gas.Hash, snList); err != nil {
return err
}
printBalances(cmd, "\nStorage node balances:", snList)
return nil
}
func printProxyContractBalance(cmd *cobra.Command, inv *invoker.Invoker, nnsHash util.Uint160) error {
h, err := nnsResolveHash(inv, nnsHash, proxyContract+".frostfs")
if err != nil {
return fmt.Errorf("can't get hash of the proxy contract: %w", err)
}
proxyList := []accBalancePair{{scriptHash: h}}
if err := fetchBalances(inv, gas.Hash, proxyList); err != nil {
return err
}
printBalances(cmd, "\nProxy contract balance:", proxyList)
return nil
}
func printAlphabetContractBalances(cmd *cobra.Command, c Client, inv *invoker.Invoker, count int, nnsHash util.Uint160) error {
alphaList := make([]accBalancePair, count)
w := io.NewBufBinWriter()
for i := range alphaList {
emit.AppCall(w.BinWriter, nnsHash, "resolve", callflag.ReadOnly,
getAlphabetNNSDomain(i),
int64(nns.TXT))
}
if w.Err != nil {
panic(w.Err)
}
alphaRes, err := c.InvokeScript(w.Bytes(), nil)
if err != nil {
return fmt.Errorf("can't fetch info from NNS: %w", err)
}
for i := range alphaList {
h, err := parseNNSResolveResult(alphaRes.Stack[i])
if err != nil {
return fmt.Errorf("can't fetch the alphabet contract #%d hash: %w", i, err)
}
alphaList[i].scriptHash = h
}
if err := fetchBalances(inv, gas.Hash, alphaList); err != nil {
return err
}
printBalances(cmd, "\nAlphabet contracts balances:", alphaList)
return nil
}
func fetchIRNodes(c Client, desigHash util.Uint160) ([]accBalancePair, error) {
inv := invoker.New(c, nil)
height, err := c.GetBlockCount()
if err != nil {
return nil, fmt.Errorf("can't get block height: %w", err)
}
arr, err := getDesignatedByRole(inv, desigHash, noderoles.NeoFSAlphabet, height)
if err != nil {
return nil, errors.New("can't fetch list of IR nodes from the netmap contract")
}
irList := make([]accBalancePair, len(arr))
for i := range arr {
irList[i].scriptHash = arr[i].GetScriptHash()
}
return irList, nil
}
func printBalances(cmd *cobra.Command, prefix string, accounts []accBalancePair) {
useScriptHash, _ := cmd.Flags().GetBool(dumpBalancesUseScriptHashFlag)
cmd.Println(prefix)
for i := range accounts {
var addr string
if useScriptHash {
addr = accounts[i].scriptHash.StringLE()
} else {
addr = address.Uint160ToString(accounts[i].scriptHash)
}
cmd.Printf("%s: %s\n", addr, fixedn.ToString(accounts[i].balance, 8))
}
}
func fetchBalances(c *invoker.Invoker, gasHash util.Uint160, accounts []accBalancePair) error {
w := io.NewBufBinWriter()
for i := range accounts {
emit.AppCall(w.BinWriter, gasHash, "balanceOf", callflag.ReadStates, accounts[i].scriptHash)
}
if w.Err != nil {
panic(w.Err)
}
res, err := c.Run(w.Bytes())
if err != nil || res.State != vmstate.Halt.String() || len(res.Stack) != len(accounts) {
return errors.New("can't fetch account balances")
}
for i := range accounts {
bal, err := res.Stack[i].TryInteger()
if err != nil {
return fmt.Errorf("can't parse account balance: %w", err)
}
accounts[i].balance = bal
}
return nil
}

View file

@ -1,164 +0,0 @@
package morph
import (
"bytes"
"encoding/binary"
"encoding/hex"
"errors"
"fmt"
"strconv"
"strings"
"text/tabwriter"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
const forceConfigSet = "force"
func dumpNetworkConfig(cmd *cobra.Command, _ []string) error {
c, err := getN3Client(viper.GetViper())
if err != nil {
return fmt.Errorf("can't create N3 client: %w", err)
}
inv := invoker.New(c, nil)
cs, err := c.GetContractStateByID(1)
if err != nil {
return fmt.Errorf("can't get NNS contract info: %w", err)
}
nmHash, err := nnsResolveHash(inv, cs.Hash, netmapContract+".frostfs")
if err != nil {
return fmt.Errorf("can't get netmap contract hash: %w", err)
}
arr, err := unwrap.Array(inv.Call(nmHash, "listConfig"))
if err != nil {
return errors.New("can't fetch list of network config keys from the netmap contract")
}
buf := bytes.NewBuffer(nil)
tw := tabwriter.NewWriter(buf, 0, 2, 2, ' ', 0)
m, err := parseConfigFromNetmapContract(arr)
if err != nil {
return err
}
for k, v := range m {
switch k {
case netmap.ContainerFeeConfig, netmap.ContainerAliasFeeConfig,
netmap.EpochDurationConfig, netmap.IrCandidateFeeConfig,
netmap.MaxObjectSizeConfig, netmap.WithdrawFeeConfig:
nbuf := make([]byte, 8)
copy(nbuf[:], v)
n := binary.LittleEndian.Uint64(nbuf)
_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%d (int)\n", k, n)))
case netmap.HomomorphicHashingDisabledKey, netmap.MaintenanceModeAllowedConfig:
if len(v) == 0 || len(v) > 1 {
return invalidConfigValueErr(k)
}
_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%t (bool)\n", k, v[0] == 1)))
default:
_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%s (hex)\n", k, hex.EncodeToString(v))))
}
}
_ = tw.Flush()
cmd.Print(buf.String())
return nil
}
func setConfigCmd(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return errors.New("empty config pairs")
}
wCtx, err := newInitializeContext(cmd, viper.GetViper())
if err != nil {
return fmt.Errorf("can't initialize context: %w", err)
}
cs, err := wCtx.Client.GetContractStateByID(1)
if err != nil {
return fmt.Errorf("can't get NNS contract info: %w", err)
}
nmHash, err := nnsResolveHash(wCtx.ReadOnlyInvoker, cs.Hash, netmapContract+".frostfs")
if err != nil {
return fmt.Errorf("can't get netmap contract hash: %w", err)
}
forceFlag, _ := cmd.Flags().GetBool(forceConfigSet)
bw := io.NewBufBinWriter()
for _, arg := range args {
k, v, err := parseConfigPair(arg, forceFlag)
if err != nil {
return err
}
// In NeoFS this is done via Notary contract. Here, however, we can form the
// transaction locally. The first `nil` argument is required only for a
// notary-disabled environment, which is not supported by this command.
emit.AppCall(bw.BinWriter, nmHash, "setConfig", callflag.All, nil, k, v)
if bw.Err != nil {
return fmt.Errorf("can't form raw transaction: %w", bw.Err)
}
}
err = wCtx.sendConsensusTx(bw.Bytes())
if err != nil {
return err
}
return wCtx.awaitTx()
}
func parseConfigPair(kvStr string, force bool) (key string, val any, err error) {
k, v, found := strings.Cut(kvStr, "=")
if !found {
return "", nil, fmt.Errorf("invalid parameter format: must be 'key=val', got: %s", kvStr)
}
key = k
valRaw := v
switch key {
case netmap.ContainerFeeConfig, netmap.ContainerAliasFeeConfig,
netmap.EpochDurationConfig, netmap.IrCandidateFeeConfig,
netmap.MaxObjectSizeConfig, netmap.WithdrawFeeConfig:
val, err = strconv.ParseInt(valRaw, 10, 64)
if err != nil {
err = fmt.Errorf("could not parse %s's value '%s' as int: %w", key, valRaw, err)
}
case netmap.HomomorphicHashingDisabledKey, netmap.MaintenanceModeAllowedConfig:
val, err = strconv.ParseBool(valRaw)
if err != nil {
err = fmt.Errorf("could not parse %s's value '%s' as bool: %w", key, valRaw, err)
}
default:
if !force {
return "", nil, fmt.Errorf(
"'%s' key is not well-known, use '--%s' flag if want to set it anyway",
key, forceConfigSet)
}
val = valRaw
}
return
}
func invalidConfigValueErr(key string) error {
return fmt.Errorf("invalid %s config value from netmap contract", key)
}

View file

@ -1,232 +0,0 @@
package morph
import (
"encoding/json"
"fmt"
"os"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-contract/nns"
"github.com/nspcc-dev/neo-go/cli/cmdargs"
"github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/encoding/address"
"github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
"github.com/nspcc-dev/neo-go/pkg/services/rpcsrv/params"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/nspcc-dev/neo-go/pkg/vm/opcode"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
const (
contractPathFlag = "contract"
updateFlag = "update"
customZoneFlag = "domain"
)
var deployCmd = &cobra.Command{
Use: "deploy",
Short: "Deploy additional smart-contracts",
Long: `Deploy additional smart-contracts which are not related to the core.
All contracts are deployed by the committee, so access to the alphabet wallets is required.
Optionally, arguments can be provided to be passed to a contract's _deploy function.
The syntax is the same as for 'neo-go contract testinvokefunction' command.
Compiled contract file name must contain '_contract.nef' suffix.
Contract's manifest file name must be 'config.json'.
NNS name is taken by stripping '_contract.nef' from the NEF file (similar to frostfs contracts).`,
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(alphabetWalletsFlag, cmd.Flags().Lookup(alphabetWalletsFlag))
_ = viper.BindPFlag(endpointFlag, cmd.Flags().Lookup(endpointFlag))
},
RunE: deployContractCmd,
}
func init() {
ff := deployCmd.Flags()
ff.String(alphabetWalletsFlag, "", "Path to alphabet wallets dir")
_ = deployCmd.MarkFlagFilename(alphabetWalletsFlag)
ff.StringP(endpointFlag, "r", "", "N3 RPC node endpoint")
ff.String(contractPathFlag, "", "Path to the contract directory")
_ = deployCmd.MarkFlagFilename(contractPathFlag)
ff.Bool(updateFlag, false, "Update an existing contract")
ff.String(customZoneFlag, "frostfs", "Custom zone for NNS")
}
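// Example of a hypothetical invocation (it assumes the command is registered
// under "frostfs-adm morph"; flag names are taken from the definitions above,
// other details such as the RPC endpoint and contract directory are placeholders):
//
//	frostfs-adm morph deploy -r <N3_RPC_endpoint> --contract ./mycontract-dir
//	frostfs-adm morph deploy -r <N3_RPC_endpoint> --contract ./mycontract-dir --update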
func deployContractCmd(cmd *cobra.Command, args []string) error {
v := viper.GetViper()
c, err := newInitializeContext(cmd, v)
if err != nil {
return fmt.Errorf("initialization error: %w", err)
}
defer c.close()
ctrPath, _ := cmd.Flags().GetString(contractPathFlag)
ctrName, err := probeContractName(ctrPath)
if err != nil {
return err
}
cs, err := readContract(ctrPath, ctrName)
if err != nil {
return err
}
nnsCs, err := c.Client.GetContractStateByID(1)
if err != nil {
return fmt.Errorf("can't fetch NNS contract state: %w", err)
}
callHash := management.Hash
method := deployMethodName
zone, _ := cmd.Flags().GetString(customZoneFlag)
domain := ctrName + "." + zone
isUpdate, _ := cmd.Flags().GetBool(updateFlag)
if isUpdate {
cs.Hash, err = nnsResolveHash(c.ReadOnlyInvoker, nnsCs.Hash, domain)
if err != nil {
return fmt.Errorf("can't fetch contract hash from NNS: %w", err)
}
callHash = cs.Hash
method = updateMethodName
} else {
cs.Hash = state.CreateContractHash(
c.CommitteeAcc.Contract.ScriptHash(),
cs.NEF.Checksum,
cs.Manifest.Name)
}
writer := io.NewBufBinWriter()
if err := emitDeploymentArguments(writer.BinWriter, args); err != nil {
return err
}
emit.Bytes(writer.BinWriter, cs.RawManifest)
emit.Bytes(writer.BinWriter, cs.RawNEF)
emit.Int(writer.BinWriter, 3)
emit.Opcodes(writer.BinWriter, opcode.PACK)
emit.AppCallNoArgs(writer.BinWriter, callHash, method, callflag.All)
emit.Opcodes(writer.BinWriter, opcode.DROP) // contract state on stack
if !isUpdate {
err := registerNNS(nnsCs, c, zone, domain, cs, writer)
if err != nil {
return err
}
}
if writer.Err != nil {
panic(fmt.Errorf("BUG: can't create deployment script: %w", writer.Err))
}
if err := c.sendCommitteeTx(writer.Bytes(), false); err != nil {
return err
}
return c.awaitTx()
}
func registerNNS(nnsCs *state.Contract, c *initializeContext, zone string, domain string, cs *contractState, writer *io.BufBinWriter) error {
bw := io.NewBufBinWriter()
emit.Instruction(bw.BinWriter, opcode.INITSSLOT, []byte{1})
emit.AppCall(bw.BinWriter, nnsCs.Hash, "getPrice", callflag.All)
emit.Opcodes(bw.BinWriter, opcode.STSFLD0)
emit.AppCall(bw.BinWriter, nnsCs.Hash, "setPrice", callflag.All, 1)
start := bw.Len()
needRecord := false
ok, err := c.nnsRootRegistered(nnsCs.Hash, zone)
if err != nil {
return err
} else if !ok {
needRecord = true
emit.AppCall(bw.BinWriter, nnsCs.Hash, "register", callflag.All,
zone, c.CommitteeAcc.Contract.ScriptHash(),
frostfsOpsEmail, int64(3600), int64(600), int64(defaultExpirationTime), int64(3600))
emit.Opcodes(bw.BinWriter, opcode.ASSERT)
emit.AppCall(bw.BinWriter, nnsCs.Hash, "register", callflag.All,
domain, c.CommitteeAcc.Contract.ScriptHash(),
frostfsOpsEmail, int64(3600), int64(600), int64(defaultExpirationTime), int64(3600))
emit.Opcodes(bw.BinWriter, opcode.ASSERT)
} else {
s, ok, err := c.nnsRegisterDomainScript(nnsCs.Hash, cs.Hash, domain)
if err != nil {
return err
}
needRecord = !ok
if len(s) != 0 {
bw.WriteBytes(s)
}
}
if needRecord {
emit.AppCall(bw.BinWriter, nnsCs.Hash, "deleteRecords", callflag.All, domain, int64(nns.TXT))
emit.AppCall(bw.BinWriter, nnsCs.Hash, "addRecord", callflag.All,
domain, int64(nns.TXT), address.Uint160ToString(cs.Hash))
}
if bw.Err != nil {
panic(fmt.Errorf("BUG: can't create deployment script: %w", writer.Err))
} else if bw.Len() != start {
writer.WriteBytes(bw.Bytes())
emit.Opcodes(writer.BinWriter, opcode.LDSFLD0, opcode.PUSH1, opcode.PACK)
emit.AppCallNoArgs(writer.BinWriter, nnsCs.Hash, "setPrice", callflag.All)
if needRecord {
c.Command.Printf("NNS: Set %s -> %s\n", domain, cs.Hash.StringLE())
}
}
return nil
}
func emitDeploymentArguments(w *io.BinWriter, args []string) error {
_, ps, err := cmdargs.ParseParams(args, true)
if err != nil {
return err
}
if len(ps) == 0 {
emit.Opcodes(w, opcode.NEWARRAY0)
return nil
}
if len(ps) != 1 {
return fmt.Errorf("at most one argument is expected for deploy, got %d", len(ps))
}
// We could emit this directly, but round-trip through JSON is more robust.
// This is a CLI, so optimizing the conversion is not worth the effort.
data, err := json.Marshal(ps)
if err != nil {
return err
}
var pp params.Params
if err := json.Unmarshal(data, &pp); err != nil {
return err
}
return params.ExpandArrayIntoScript(w, pp)
}
func probeContractName(ctrPath string) (string, error) {
ds, err := os.ReadDir(ctrPath)
if err != nil {
return "", fmt.Errorf("can't read directory: %w", err)
}
var ctrName string
for i := range ds {
if strings.HasSuffix(ds[i].Name(), "_contract.nef") {
ctrName = strings.TrimSuffix(ds[i].Name(), "_contract.nef")
break
}
}
if ctrName == "" {
return "", fmt.Errorf("can't find any NEF files in %s", ctrPath)
}
return ctrName, nil
}

View file

@ -1,251 +0,0 @@
package morph
import (
"bytes"
"errors"
"fmt"
"strings"
"text/tabwriter"
"git.frostfs.info/TrueCloudLab/frostfs-contract/nns"
morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
"github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/nspcc-dev/neo-go/pkg/vm/opcode"
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
"github.com/nspcc-dev/neo-go/pkg/vm/vmstate"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
const lastGlagoliticLetter = 41
type contractDumpInfo struct {
hash util.Uint160
name string
version string
}
func dumpContractHashes(cmd *cobra.Command, _ []string) error {
c, err := getN3Client(viper.GetViper())
if err != nil {
return fmt.Errorf("can't create N3 client: %w", err)
}
cs, err := c.GetContractStateByID(1)
if err != nil {
return err
}
zone, _ := cmd.Flags().GetString(customZoneFlag)
if zone != "" {
return dumpCustomZoneHashes(cmd, cs.Hash, zone, c)
}
infos := []contractDumpInfo{{name: nnsContract, hash: cs.Hash}}
irSize := 0
for ; irSize < lastGlagoliticLetter; irSize++ {
ok, err := nnsIsAvailable(c, cs.Hash, getAlphabetNNSDomain(irSize))
if err != nil {
return err
} else if ok {
break
}
}
bw := io.NewBufBinWriter()
if irSize != 0 {
bw.Reset()
for i := 0; i < irSize; i++ {
emit.AppCall(bw.BinWriter, cs.Hash, "resolve", callflag.ReadOnly,
getAlphabetNNSDomain(i),
int64(nns.TXT))
}
alphaRes, err := c.InvokeScript(bw.Bytes(), nil)
if err != nil {
return fmt.Errorf("can't fetch info from NNS: %w", err)
}
for i := 0; i < irSize; i++ {
info := contractDumpInfo{name: fmt.Sprintf("alphabet %d", i)}
if h, err := parseNNSResolveResult(alphaRes.Stack[i]); err == nil {
info.hash = h
}
infos = append(infos, info)
}
}
for _, ctrName := range contractList {
bw.Reset()
emit.AppCall(bw.BinWriter, cs.Hash, "resolve", callflag.ReadOnly,
ctrName+".frostfs", int64(nns.TXT))
res, err := c.InvokeScript(bw.Bytes(), nil)
if err != nil {
return fmt.Errorf("can't fetch info from NNS: %w", err)
}
info := contractDumpInfo{name: ctrName}
if len(res.Stack) != 0 {
if h, err := parseNNSResolveResult(res.Stack[0]); err == nil {
info.hash = h
}
}
infos = append(infos, info)
}
fillContractVersion(cmd, c, infos)
printContractInfo(cmd, infos)
return nil
}
func dumpCustomZoneHashes(cmd *cobra.Command, nnsHash util.Uint160, zone string, c Client) error {
const nnsMaxTokens = 100
inv := invoker.New(c, nil)
if !strings.HasPrefix(zone, ".") {
zone = "." + zone
}
var infos []contractDumpInfo
processItem := func(item stackitem.Item) {
bs, err := item.TryBytes()
if err != nil {
cmd.PrintErrf("Invalid NNS record: %v\n", err)
return
}
if !bytes.HasSuffix(bs, []byte(zone)) || bytes.HasPrefix(bs, []byte(morphClient.NNSGroupKeyName)) {
// Related https://github.com/nspcc-dev/neofs-contract/issues/316.
return
}
h, err := nnsResolveHash(inv, nnsHash, string(bs))
if err != nil {
cmd.PrintErrf("Could not resolve name %s: %v\n", string(bs), err)
return
}
infos = append(infos, contractDumpInfo{
hash: h,
name: strings.TrimSuffix(string(bs), zone),
})
}
sessionID, iter, err := unwrap.SessionIterator(inv.Call(nnsHash, "tokens"))
if err != nil {
if errors.Is(err, unwrap.ErrNoSessionID) {
items, err := unwrap.Array(inv.CallAndExpandIterator(nnsHash, "tokens", nnsMaxTokens))
if err != nil {
return fmt.Errorf("can't get a list of NNS domains: %w", err)
}
if len(items) == nnsMaxTokens {
cmd.PrintErrln("Provided RPC endpoint doesn't support sessions, some hashes might be lost.")
}
for i := range items {
processItem(items[i])
}
} else {
return err
}
} else {
defer func() {
_ = inv.TerminateSession(sessionID)
}()
items, err := inv.TraverseIterator(sessionID, &iter, nnsMaxTokens)
for err == nil && len(items) != 0 {
for i := range items {
processItem(items[i])
}
items, err = inv.TraverseIterator(sessionID, &iter, nnsMaxTokens)
}
if err != nil {
return fmt.Errorf("error during NNS domains iteration: %w", err)
}
}
fillContractVersion(cmd, c, infos)
printContractInfo(cmd, infos)
return nil
}
func parseContractVersion(item stackitem.Item) string {
bi, err := item.TryInteger()
if err != nil || bi.Sign() == 0 || !bi.IsInt64() {
return "unknown"
}
v := bi.Int64()
major := v / 1_000_000
minor := (v % 1_000_000) / 1000
patch := v % 1_000
return fmt.Sprintf("v%d.%d.%d", major, minor, patch)
}
func printContractInfo(cmd *cobra.Command, infos []contractDumpInfo) {
if len(infos) == 0 {
return
}
buf := bytes.NewBuffer(nil)
tw := tabwriter.NewWriter(buf, 0, 2, 2, ' ', 0)
for _, info := range infos {
if info.version == "" {
info.version = "unknown"
}
_, _ = tw.Write([]byte(fmt.Sprintf("%s\t(%s):\t%s\n",
info.name, info.version, info.hash.StringLE())))
}
_ = tw.Flush()
cmd.Print(buf.String())
}
func fillContractVersion(cmd *cobra.Command, c Client, infos []contractDumpInfo) {
bw := io.NewBufBinWriter()
sub := io.NewBufBinWriter()
for i := range infos {
if infos[i].hash.Equals(util.Uint160{}) {
emit.Int(bw.BinWriter, 0)
} else {
sub.Reset()
emit.AppCall(sub.BinWriter, infos[i].hash, "version", callflag.NoneFlag)
if sub.Err != nil {
panic(fmt.Errorf("BUG: can't create version script: %w", bw.Err))
}
script := sub.Bytes()
emit.Instruction(bw.BinWriter, opcode.TRY, []byte{byte(3 + len(script) + 2), 0})
bw.BinWriter.WriteBytes(script)
emit.Instruction(bw.BinWriter, opcode.ENDTRY, []byte{2 + 1})
emit.Opcodes(bw.BinWriter, opcode.PUSH0)
}
}
emit.Opcodes(bw.BinWriter, opcode.NOP) // for the last ENDTRY target
if bw.Err != nil {
panic(fmt.Errorf("BUG: can't create version script: %w", bw.Err))
}
res, err := c.InvokeScript(bw.Bytes(), nil)
if err != nil {
cmd.Printf("Can't fetch version from NNS: %v\n", err)
return
}
if res.State == vmstate.Halt.String() {
for i := range res.Stack {
infos[i].version = parseContractVersion(res.Stack[i])
}
}
}

View file

@ -1,303 +0,0 @@
package morph
import (
"encoding/hex"
"errors"
"fmt"
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-contract/nns"
morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
"github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/address"
"github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
nnsClient "github.com/nspcc-dev/neo-go/pkg/rpcclient/nns"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/nspcc-dev/neo-go/pkg/vm/opcode"
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
"github.com/nspcc-dev/neo-go/pkg/vm/vmstate"
)
const defaultExpirationTime = 10 * 365 * 24 * time.Hour / time.Second
const frostfsOpsEmail = "ops@frostfs.info"
func (c *initializeContext) setNNS() error {
nnsCs, err := c.Client.GetContractStateByID(1)
if err != nil {
return err
}
ok, err := c.nnsRootRegistered(nnsCs.Hash, "frostfs")
if err != nil {
return err
} else if !ok {
bw := io.NewBufBinWriter()
emit.AppCall(bw.BinWriter, nnsCs.Hash, "register", callflag.All,
"frostfs", c.CommitteeAcc.Contract.ScriptHash(),
frostfsOpsEmail, int64(3600), int64(600), int64(defaultExpirationTime), int64(3600))
emit.Opcodes(bw.BinWriter, opcode.ASSERT)
if err := c.sendCommitteeTx(bw.Bytes(), true); err != nil {
return fmt.Errorf("can't add domain root to NNS: %w", err)
}
if err := c.awaitTx(); err != nil {
return err
}
}
alphaCs := c.getContract(alphabetContract)
for i, acc := range c.Accounts {
alphaCs.Hash = state.CreateContractHash(acc.Contract.ScriptHash(), alphaCs.NEF.Checksum, alphaCs.Manifest.Name)
domain := getAlphabetNNSDomain(i)
if err := c.nnsRegisterDomain(nnsCs.Hash, alphaCs.Hash, domain); err != nil {
return err
}
c.Command.Printf("NNS: Set %s -> %s\n", domain, alphaCs.Hash.StringLE())
}
for _, ctrName := range contractList {
cs := c.getContract(ctrName)
domain := ctrName + ".frostfs"
if err := c.nnsRegisterDomain(nnsCs.Hash, cs.Hash, domain); err != nil {
return err
}
c.Command.Printf("NNS: Set %s -> %s\n", domain, cs.Hash.StringLE())
}
groupKey := c.ContractWallet.Accounts[0].PrivateKey().PublicKey()
err = c.updateNNSGroup(nnsCs.Hash, groupKey)
if err != nil {
return err
}
c.Command.Printf("NNS: Set %s -> %s\n", morphClient.NNSGroupKeyName, hex.EncodeToString(groupKey.Bytes()))
return c.awaitTx()
}
func (c *initializeContext) updateNNSGroup(nnsHash util.Uint160, pub *keys.PublicKey) error {
bw := io.NewBufBinWriter()
keyAlreadyAdded, domainRegCodeEmitted, err := c.emitUpdateNNSGroupScript(bw, nnsHash, pub)
if keyAlreadyAdded || err != nil {
return err
}
script := bw.Bytes()
if domainRegCodeEmitted {
w := io.NewBufBinWriter()
emit.Instruction(w.BinWriter, opcode.INITSSLOT, []byte{1})
wrapRegisterScriptWithPrice(w, nnsHash, script)
script = w.Bytes()
}
return c.sendCommitteeTx(script, true)
}
// emitUpdateNNSGroupScript emits script for updating group key stored in NNS.
// First return value is true iff the key is already there and nothing should be done.
// Second return value is true iff a domain registration code was emitted.
func (c *initializeContext) emitUpdateNNSGroupScript(bw *io.BufBinWriter, nnsHash util.Uint160, pub *keys.PublicKey) (bool, bool, error) {
isAvail, err := nnsIsAvailable(c.Client, nnsHash, morphClient.NNSGroupKeyName)
if err != nil {
return false, false, err
}
if !isAvail {
currentPub, err := nnsResolveKey(c.ReadOnlyInvoker, nnsHash, morphClient.NNSGroupKeyName)
if err != nil {
return false, false, err
}
if pub.Equal(currentPub) {
return true, false, nil
}
}
if isAvail {
emit.AppCall(bw.BinWriter, nnsHash, "register", callflag.All,
morphClient.NNSGroupKeyName, c.CommitteeAcc.Contract.ScriptHash(),
frostfsOpsEmail, int64(3600), int64(600), int64(defaultExpirationTime), int64(3600))
emit.Opcodes(bw.BinWriter, opcode.ASSERT)
}
emit.AppCall(bw.BinWriter, nnsHash, "deleteRecords", callflag.All, "group.frostfs", int64(nns.TXT))
emit.AppCall(bw.BinWriter, nnsHash, "addRecord", callflag.All,
"group.frostfs", int64(nns.TXT), hex.EncodeToString(pub.Bytes()))
return false, isAvail, nil
}
func getAlphabetNNSDomain(i int) string {
return alphabetContract + strconv.FormatUint(uint64(i), 10) + ".frostfs"
}
// wrapRegisterScriptWithPrice wraps a given script with `getPrice`/`setPrice` calls for NNS.
// It is intended to be used for a single transaction, and not as a part of other scripts.
// It is assumed that script already contains static slot initialization code, the first one
// (with index 0) is used to store the price.
func wrapRegisterScriptWithPrice(w *io.BufBinWriter, nnsHash util.Uint160, s []byte) {
if len(s) == 0 {
return
}
emit.AppCall(w.BinWriter, nnsHash, "getPrice", callflag.All)
emit.Opcodes(w.BinWriter, opcode.STSFLD0)
emit.AppCall(w.BinWriter, nnsHash, "setPrice", callflag.All, 1)
w.WriteBytes(s)
emit.Opcodes(w.BinWriter, opcode.LDSFLD0, opcode.PUSH1, opcode.PACK)
emit.AppCallNoArgs(w.BinWriter, nnsHash, "setPrice", callflag.All)
if w.Err != nil {
panic(fmt.Errorf("BUG: can't wrap register script: %w", w.Err))
}
}
func (c *initializeContext) nnsRegisterDomainScript(nnsHash, expectedHash util.Uint160, domain string) ([]byte, bool, error) {
ok, err := nnsIsAvailable(c.Client, nnsHash, domain)
if err != nil {
return nil, false, err
}
if ok {
bw := io.NewBufBinWriter()
emit.AppCall(bw.BinWriter, nnsHash, "register", callflag.All,
domain, c.CommitteeAcc.Contract.ScriptHash(),
frostfsOpsEmail, int64(3600), int64(600), int64(defaultExpirationTime), int64(3600))
emit.Opcodes(bw.BinWriter, opcode.ASSERT)
if bw.Err != nil {
panic(bw.Err)
}
return bw.Bytes(), false, nil
}
s, err := nnsResolveHash(c.ReadOnlyInvoker, nnsHash, domain)
if err != nil {
return nil, false, err
}
return nil, s == expectedHash, nil
}
func (c *initializeContext) nnsRegisterDomain(nnsHash, expectedHash util.Uint160, domain string) error {
script, ok, err := c.nnsRegisterDomainScript(nnsHash, expectedHash, domain)
if ok || err != nil {
return err
}
w := io.NewBufBinWriter()
emit.Instruction(w.BinWriter, opcode.INITSSLOT, []byte{1})
wrapRegisterScriptWithPrice(w, nnsHash, script)
emit.AppCall(w.BinWriter, nnsHash, "deleteRecords", callflag.All, domain, int64(nns.TXT))
emit.AppCall(w.BinWriter, nnsHash, "addRecord", callflag.All,
domain, int64(nns.TXT), expectedHash.StringLE())
emit.AppCall(w.BinWriter, nnsHash, "addRecord", callflag.All,
domain, int64(nns.TXT), address.Uint160ToString(expectedHash))
return c.sendCommitteeTx(w.Bytes(), true)
}
func (c *initializeContext) nnsRootRegistered(nnsHash util.Uint160, zone string) (bool, error) {
res, err := c.CommitteeAct.Call(nnsHash, "isAvailable", "name."+zone)
if err != nil {
return false, err
}
return res.State == vmstate.Halt.String(), nil
}
var errMissingNNSRecord = errors.New("missing NNS record")
// Returns errMissingNNSRecord if invocation fault exception contains "token not found".
func nnsResolveHash(inv *invoker.Invoker, nnsHash util.Uint160, domain string) (util.Uint160, error) {
item, err := nnsResolve(inv, nnsHash, domain)
if err != nil {
return util.Uint160{}, err
}
return parseNNSResolveResult(item)
}
func nnsResolve(inv *invoker.Invoker, nnsHash util.Uint160, domain string) (stackitem.Item, error) {
return unwrap.Item(inv.Call(nnsHash, "resolve", domain, int64(nns.TXT)))
}
func nnsResolveKey(inv *invoker.Invoker, nnsHash util.Uint160, domain string) (*keys.PublicKey, error) {
res, err := nnsResolve(inv, nnsHash, domain)
if err != nil {
return nil, err
}
if _, ok := res.Value().(stackitem.Null); ok {
return nil, errors.New("NNS record is missing")
}
arr, ok := res.Value().([]stackitem.Item)
if !ok {
return nil, errors.New("API of the NNS contract method `resolve` has changed")
}
for i := range arr {
var bs []byte
bs, err = arr[i].TryBytes()
if err != nil {
continue
}
return keys.NewPublicKeyFromString(string(bs))
}
return nil, errors.New("no valid keys are found")
}
// parseNNSResolveResult parses the result of resolving NNS record.
// It works with multiple formats (corresponding to multiple NNS versions).
// If array of hashes is provided, it returns only the first one.
func parseNNSResolveResult(res stackitem.Item) (util.Uint160, error) {
arr, ok := res.Value().([]stackitem.Item)
if !ok {
arr = []stackitem.Item{res}
}
if _, ok := res.Value().(stackitem.Null); ok || len(arr) == 0 {
return util.Uint160{}, errors.New("NNS record is missing")
}
for i := range arr {
bs, err := arr[i].TryBytes()
if err != nil {
continue
}
// We support several formats for hash encoding, this logic should be maintained in sync
// with nnsResolve from pkg/morph/client/nns.go
h, err := util.Uint160DecodeStringLE(string(bs))
if err == nil {
return h, nil
}
h, err = address.StringToUint160(string(bs))
if err == nil {
return h, nil
}
}
return util.Uint160{}, errors.New("no valid hashes are found")
}
func nnsIsAvailable(c Client, nnsHash util.Uint160, name string) (bool, error) {
switch c.(type) {
case *rpcclient.Client:
inv := invoker.New(c, nil)
reader := nnsClient.NewReader(inv, nnsHash)
return reader.IsAvailable(name)
default:
b, err := unwrap.Bool(invokeFunction(c, nnsHash, "isAvailable", []any{name}, nil))
if err != nil {
return false, fmt.Errorf("`isAvailable`: invalid response: %w", err)
}
return b, nil
}
}

View file

@ -1,164 +0,0 @@
package morph
import (
"errors"
"fmt"
"github.com/nspcc-dev/neo-go/pkg/core/native"
"github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/core/transaction"
"github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/neo"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/nspcc-dev/neo-go/pkg/vm/opcode"
)
// initialAlphabetNEOAmount represents the total amount of NEO distributed between alphabet nodes.
const (
initialAlphabetNEOAmount = native.NEOTotalSupply
registerBatchSize = transaction.MaxAttributes - 1
)
func (c *initializeContext) registerCandidateRange(start, end int) error {
regPrice, err := c.getCandidateRegisterPrice()
if err != nil {
return fmt.Errorf("can't fetch registration price: %w", err)
}
w := io.NewBufBinWriter()
emit.AppCall(w.BinWriter, neo.Hash, "setRegisterPrice", callflag.States, 1)
for _, acc := range c.Accounts[start:end] {
emit.AppCall(w.BinWriter, neo.Hash, "registerCandidate", callflag.States, acc.PrivateKey().PublicKey().Bytes())
emit.Opcodes(w.BinWriter, opcode.ASSERT)
}
emit.AppCall(w.BinWriter, neo.Hash, "setRegisterPrice", callflag.States, regPrice)
if w.Err != nil {
panic(fmt.Sprintf("BUG: %v", w.Err))
}
signers := []rpcclient.SignerAccount{{
Signer: c.getSigner(false, c.CommitteeAcc),
Account: c.CommitteeAcc,
}}
for _, acc := range c.Accounts[start:end] {
signers = append(signers, rpcclient.SignerAccount{
Signer: transaction.Signer{
Account: acc.Contract.ScriptHash(),
Scopes: transaction.CustomContracts,
AllowedContracts: []util.Uint160{neo.Hash},
},
Account: acc,
})
}
tx, err := c.Client.CreateTxFromScript(w.Bytes(), c.CommitteeAcc, -1, 0, signers)
if err != nil {
return fmt.Errorf("can't create tx: %w", err)
}
if err := c.multiSign(tx, committeeAccountName); err != nil {
return fmt.Errorf("can't sign a transaction: %w", err)
}
network := c.CommitteeAct.GetNetwork()
for _, acc := range c.Accounts[start:end] {
if err := acc.SignTx(network, tx); err != nil {
return fmt.Errorf("can't sign a transaction: %w", err)
}
}
return c.sendTx(tx, c.Command, true)
}
func (c *initializeContext) registerCandidates() error {
cc, err := unwrap.Array(c.ReadOnlyInvoker.Call(neo.Hash, "getCandidates"))
if err != nil {
return fmt.Errorf("`getCandidates`: %w", err)
}
need := len(c.Accounts)
have := len(cc)
if need == have {
c.Command.Println("Candidates are already registered.")
return nil
}
// Register candidates in batches to stay within the limit on the number of transaction signers.
// See: https://github.com/nspcc-dev/neo-go/blob/master/pkg/core/transaction/transaction.go#L27
for i := 0; i < need; i += registerBatchSize {
start, end := i, i+registerBatchSize
if end > need {
end = need
}
// This check is sound because transactions are accepted/rejected atomically.
if have >= end {
continue
}
if err := c.registerCandidateRange(start, end); err != nil {
return fmt.Errorf("registering candidates %d..%d: %q", start, end-1, err)
}
}
return nil
}
func (c *initializeContext) transferNEOToAlphabetContracts() error {
neoHash := neo.Hash
ok, err := c.transferNEOFinished(neoHash)
if ok || err != nil {
return err
}
cs := c.getContract(alphabetContract)
amount := initialAlphabetNEOAmount / len(c.Wallets)
bw := io.NewBufBinWriter()
for _, acc := range c.Accounts {
h := state.CreateContractHash(acc.Contract.ScriptHash(), cs.NEF.Checksum, cs.Manifest.Name)
emit.AppCall(bw.BinWriter, neoHash, "transfer", callflag.All,
c.CommitteeAcc.Contract.ScriptHash(), h, int64(amount), nil)
emit.Opcodes(bw.BinWriter, opcode.ASSERT)
}
if err := c.sendCommitteeTx(bw.Bytes(), false); err != nil {
return err
}
return c.awaitTx()
}
func (c *initializeContext) transferNEOFinished(neoHash util.Uint160) (bool, error) {
bal, err := c.Client.NEP17BalanceOf(neoHash, c.CommitteeAcc.Contract.ScriptHash())
return bal < native.NEOTotalSupply, err
}
var errGetPriceInvalid = errors.New("`getRegisterPrice`: invalid response")
func (c *initializeContext) getCandidateRegisterPrice() (int64, error) {
switch c.Client.(type) {
case *rpcclient.Client:
inv := invoker.New(c.Client, nil)
reader := neo.NewReader(inv)
return reader.GetRegisterPrice()
default:
neoHash := neo.Hash
res, err := invokeFunction(c.Client, neoHash, "getRegisterPrice", nil, nil)
if err != nil {
return 0, err
}
if len(res.Stack) == 0 {
return 0, errGetPriceInvalid
}
bi, err := res.Stack[0].TryInteger()
if err != nil || !bi.IsInt64() {
return 0, errGetPriceInvalid
}
return bi.Int64(), nil
}
}

View file

@ -1,141 +0,0 @@
package morph
import (
"encoding/hex"
"fmt"
"os"
"path/filepath"
"strconv"
"testing"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring"
"github.com/nspcc-dev/neo-go/pkg/config"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/vm"
"github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/spf13/viper"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
)
const (
contractsPath = "../../../../../../frostfs-contract/frostfs-contract-v0.16.0.tar.gz"
protoFileName = "proto.yml"
)
func TestInitialize(t *testing.T) {
// This test needs frostfs-contract tarball, so it is skipped by default.
// It is here for performing local testing after the changes.
t.Skip()
t.Run("1 nodes", func(t *testing.T) {
testInitialize(t, 1)
})
t.Run("4 nodes", func(t *testing.T) {
testInitialize(t, 4)
})
t.Run("7 nodes", func(t *testing.T) {
testInitialize(t, 7)
})
t.Run("16 nodes", func(t *testing.T) {
testInitialize(t, 16)
})
t.Run("max nodes", func(t *testing.T) {
testInitialize(t, maxAlphabetNodes)
})
t.Run("too many nodes", func(t *testing.T) {
require.ErrorIs(t, generateTestData(t, t.TempDir(), maxAlphabetNodes+1), ErrTooManyAlphabetNodes)
})
}
func testInitialize(t *testing.T, committeeSize int) {
testdataDir := t.TempDir()
v := viper.GetViper()
require.NoError(t, generateTestData(t, testdataDir, committeeSize))
v.Set(protoConfigPath, filepath.Join(testdataDir, protoFileName))
// Set to the path or remove the next statement to download from the network.
require.NoError(t, initCmd.Flags().Set(contractsInitFlag, contractsPath))
v.Set(localDumpFlag, filepath.Join(testdataDir, "out"))
v.Set(alphabetWalletsFlag, testdataDir)
v.Set(epochDurationInitFlag, 1)
v.Set(maxObjectSizeInitFlag, 1024)
setTestCredentials(v, committeeSize)
require.NoError(t, initializeSideChainCmd(initCmd, nil))
t.Run("force-new-epoch", func(t *testing.T) {
require.NoError(t, forceNewEpochCmd(forceNewEpoch, nil))
})
t.Run("set-config", func(t *testing.T) {
require.NoError(t, setConfigCmd(setConfig, []string{"MaintenanceModeAllowed=true"}))
})
t.Run("set-policy", func(t *testing.T) {
require.NoError(t, setPolicyCmd(setPolicy, []string{"ExecFeeFactor=1"}))
})
t.Run("remove-node", func(t *testing.T) {
pk, err := keys.NewPrivateKey()
require.NoError(t, err)
pub := hex.EncodeToString(pk.PublicKey().Bytes())
require.NoError(t, removeNodesCmd(removeNodes, []string{pub}))
})
}
func generateTestData(t *testing.T, dir string, size int) error {
v := viper.GetViper()
v.Set(alphabetWalletsFlag, dir)
sizeStr := strconv.FormatUint(uint64(size), 10)
if err := generateAlphabetCmd.Flags().Set(alphabetSizeFlag, sizeStr); err != nil {
return err
}
setTestCredentials(v, size)
if err := generateAlphabetCreds(generateAlphabetCmd, nil); err != nil {
return err
}
var pubs []string
for i := 0; i < size; i++ {
p := filepath.Join(dir, innerring.GlagoliticLetter(i).String()+".json")
w, err := wallet.NewWalletFromFile(p)
if err != nil {
return fmt.Errorf("wallet doesn't exist: %w", err)
}
for _, acc := range w.Accounts {
if acc.Label == singleAccountName {
pub, ok := vm.ParseSignatureContract(acc.Contract.Script)
if !ok {
return fmt.Errorf("could not parse signature script for %s", acc.Address)
}
pubs = append(pubs, hex.EncodeToString(pub))
continue
}
}
}
cfg := config.Config{}
cfg.ProtocolConfiguration.Magic = 12345
cfg.ProtocolConfiguration.ValidatorsCount = uint32(size)
cfg.ProtocolConfiguration.TimePerBlock = time.Second
cfg.ProtocolConfiguration.StandbyCommittee = pubs // sorted by glagolic letters
cfg.ProtocolConfiguration.P2PSigExtensions = true
cfg.ProtocolConfiguration.VerifyTransactions = true
data, err := yaml.Marshal(cfg)
if err != nil {
return err
}
protoPath := filepath.Join(dir, protoFileName)
return os.WriteFile(protoPath, data, os.ModePerm)
}
func setTestCredentials(v *viper.Viper, size int) {
for i := 0; i < size; i++ {
v.Set("credentials."+innerring.GlagoliticLetter(i).String(), strconv.FormatUint(uint64(i), 10))
}
v.Set("credentials.contract", testContractPassword)
}

View file

@ -1,29 +0,0 @@
package morph
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
func listNetmapCandidatesNodes(cmd *cobra.Command, _ []string) {
c, err := getN3Client(viper.GetViper())
commonCmd.ExitOnErr(cmd, "can't create N3 client: %w", err)
inv := invoker.New(c, nil)
cs, err := c.GetContractStateByID(1)
commonCmd.ExitOnErr(cmd, "can't get NNS contract info: %w", err)
nmHash, err := nnsResolveHash(inv, cs.Hash, netmapContract+".frostfs")
commonCmd.ExitOnErr(cmd, "can't get netmap contract hash: %w", err)
res, err := inv.Call(nmHash, "netmapCandidates")
commonCmd.ExitOnErr(cmd, "can't fetch list of network config keys from the netmap contract", err)
nm, err := netmap.DecodeNetMap(res.Stack)
commonCmd.ExitOnErr(cmd, "unable to decode netmap: %w", err)
commonCmd.PrettyPrintNetMap(cmd, *nm, !viper.GetBool(commonflags.Verbose))
}

View file

@ -1,44 +0,0 @@
package morph
import (
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
"github.com/spf13/viper"
)
func getDefaultNetmapContractConfigMap() map[string]any {
m := make(map[string]any)
m[netmap.EpochDurationConfig] = viper.GetInt64(epochDurationInitFlag)
m[netmap.MaxObjectSizeConfig] = viper.GetInt64(maxObjectSizeInitFlag)
m[netmap.ContainerFeeConfig] = viper.GetInt64(containerFeeInitFlag)
m[netmap.ContainerAliasFeeConfig] = viper.GetInt64(containerAliasFeeInitFlag)
m[netmap.IrCandidateFeeConfig] = viper.GetInt64(candidateFeeInitFlag)
m[netmap.WithdrawFeeConfig] = viper.GetInt64(withdrawFeeInitFlag)
m[netmap.HomomorphicHashingDisabledKey] = viper.GetBool(homomorphicHashDisabledInitFlag)
m[netmap.MaintenanceModeAllowedConfig] = viper.GetBool(maintenanceModeAllowedInitFlag)
return m
}
func parseConfigFromNetmapContract(arr []stackitem.Item) (map[string][]byte, error) {
m := make(map[string][]byte, len(arr))
for _, param := range arr {
tuple, ok := param.Value().([]stackitem.Item)
if !ok || len(tuple) != 2 {
return nil, errors.New("invalid ListConfig response from netmap contract")
}
k, err := tuple[0].TryBytes()
if err != nil {
return nil, errors.New("invalid config key from netmap contract")
}
v, err := tuple[1].TryBytes()
if err != nil {
return nil, invalidConfigValueErr(string(k))
}
m[string(k)] = v
}
return m, nil
}

View file

@ -1,85 +0,0 @@
package modules
import (
"os"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/storagecfg"
"git.frostfs.info/TrueCloudLab/frostfs-node/misc"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/autocomplete"
utilConfig "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/gendoc"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var (
rootCmd = &cobra.Command{
Use: "frostfs-adm",
Short: "FrostFS Administrative Tool",
Long: `FrostFS Administrative Tool provides functions to setup and
manage FrostFS network deployment.`,
RunE: entryPoint,
SilenceUsage: true,
}
)
func init() {
cobra.OnInitialize(func() { initConfig(rootCmd) })
// we need to init viper config to bind viper and cobra configurations for
// rpc endpoint, alphabet wallet dir, key credentials, etc.
// use stdout as default output for cmd.Print()
rootCmd.SetOut(os.Stdout)
rootCmd.PersistentFlags().StringP(commonflags.ConfigFlag, commonflags.ConfigFlagShorthand, "", commonflags.ConfigFlagUsage)
rootCmd.PersistentFlags().String(commonflags.ConfigDirFlag, "", commonflags.ConfigDirFlagUsage)
rootCmd.PersistentFlags().BoolP(commonflags.Verbose, commonflags.VerboseShorthand, false, commonflags.VerboseUsage)
_ = viper.BindPFlag(commonflags.Verbose, rootCmd.PersistentFlags().Lookup(commonflags.Verbose))
rootCmd.Flags().Bool("version", false, "Application version")
rootCmd.AddCommand(config.RootCmd)
rootCmd.AddCommand(morph.RootCmd)
rootCmd.AddCommand(storagecfg.RootCmd)
rootCmd.AddCommand(autocomplete.Command("frostfs-adm"))
rootCmd.AddCommand(gendoc.Command(rootCmd, gendoc.Options{}))
}
func Execute() error {
return rootCmd.Execute()
}
func entryPoint(cmd *cobra.Command, _ []string) error {
printVersion, _ := cmd.Flags().GetBool("version")
if printVersion {
cmd.Print(misc.BuildInfo("FrostFS Adm"))
return nil
}
return cmd.Usage()
}
func initConfig(cmd *cobra.Command) {
configFile, err := cmd.Flags().GetString(commonflags.ConfigFlag)
if err != nil {
return
}
if configFile != "" {
viper.SetConfigType("yml")
viper.SetConfigFile(configFile)
_ = viper.ReadInConfig() // if config file is set but unavailable, ignore it
}
configDir, err := cmd.Flags().GetString(commonflags.ConfigDirFlag)
if err != nil {
return
}
if configDir != "" {
_ = utilConfig.ReadConfigDir(viper.GetViper(), configDir) // if config files cannot be read, ignore it
}
}

View file

@ -1,75 +0,0 @@
# How FrostFS CLI uses session mechanism of the FrostFS
## Overview
FrostFS sessions implement a mechanism for issuing a power of attorney by one
party to another. A trusted party can provide a so-called session token as
proof of the right to act on behalf of another member of the network. Operations
carried out with such a token are attributed to the user who opened the session.
The token contains information that limits the power of attorney, such as the
action context or lifetime.
The client confirms trust in a third party by signing that party's public
(session) key with its own private key. Any operation signed with the private
session key and carrying the session token is treated as performed by the
original client.
## Types
FrostFS CLI supports two ways to execute operation within a session depending on
whether the user of the command application is an original user (1) or a trusted
one (2).
### Dynamic
For case (1), the CLI user can only open dynamic sessions. The protocol call
`SessionService.Create` is used for this purpose. As a result of the call, a
private session key is generated on the server, thus making the remote server
trusted. This type of session is useful when the client needs to transfer part
of the responsibility for forming strict system elements to the trusted server.
At the moment, the approach is applicable only to creating objects.
```shell
$ frostfs-cli session create --rpc-endpoint <server_ip> --out ./blank_token
```
After this example command, the remote node holds the private session key, while
its public part is written into the session token encoded in the output file.
Later this token can be attached to operations which support dynamic sessions,
and the token is then finalized and signed by the CLI itself.
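A minimal sketch of attaching the blank token to object creation (the endpoint and wallet are placeholders; the `--session` flag is the one registered by `commonflags.InitSession`):
```shell
$ frostfs-cli object put --rpc-endpoint <server_ip> --wallet <client_wallet> --cid <CID> --file <FILE> --session ./blank_token
```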
### Static
For case (2), the CLI user acts on behalf of the person who issued the session
token. Unlike (1), the token must be fully prepared on the side of the original
client, and the CLI uses it only for reading. A ready token MUST have:
- correct context (object, container, etc.)
- valid lifetime
- public session key corresponding to the CLI key
- valid client signature
To sign the session token, exec:
```shell
$ frostfs-cli --wallet <client_wallet> util sign session-token --from ./blank_token --to ./token
```
Once the token is signed, it MUST NOT be modified.
## Commands
### Object
Here are the sub-commands of the `object` command which support only dynamic sessions (1):
- `put`
- `delete`
- `lock`
These commands accept a blank token of a dynamically opened session, or open a
session internally if one has not been opened yet.
All other `object` sub-commands support only static sessions (2).
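A signed static token is attached in the same way; for instance (the `--oid` flag name is an assumption here, the other flags mirror the examples above):
```shell
$ frostfs-cli object head --rpc-endpoint <server_ip> --wallet <client_wallet> --cid <CID> --oid <OID> --session ./token
```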
### Container
List of commands supporting sessions (static only):
- `create`
- `delete`
- `set-eacl`

View file

@ -1,33 +0,0 @@
# Extended headers
## Overview
Extended headers are used in requests and responses. They may contain any
user-defined headers to be interpreted at the application level. Each key name
must be a unique valid UTF-8 string, and values can't be empty. Requests or
responses with duplicated header names or empty header values are considered
invalid.
## Existing headers
There are some "well-known" headers starting with `__SYSTEM__` prefix that
affect system behaviour. For backward compatibility, the same set of
"well-known" headers may also use `__NEOFS__` prefix:
* `__SYSTEM__NETMAP_EPOCH` - netmap epoch to use for object placement calculation. The value is a
string-encoded `uint64` in decimal representation. If set to '0' or omitted, only the
current epoch is used.
* `__SYSTEM__NETMAP_LOOKUP_DEPTH` - if an object can't be found using the current epoch's netmap, this header
limits how many past epochs the node can look through. The depth is applied relative to the current epoch or to
the value of the `__SYSTEM__NETMAP_EPOCH` attribute. The value is a string-encoded `uint64` in decimal
representation. If set to '0' or not set, only the current epoch is used.
## `frostfs-cli` commands with `--xhdr`
List of commands with support of extended headers:
* `container list-objects`
* `object delete/get/hash/head/lock/put/range/search`
Example:
```shell
$ frostfs-cli object put -r s01.frostfs.devenv:8080 -w wallet.json --cid CID --file FILE --xhdr "__SYSTEM__NETMAP_EPOCH=777"
```
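A similar call could limit the lookup depth on a read operation; this hypothetical example assumes the object is addressed with an `--oid` flag:
```shell
$ frostfs-cli object head -r s01.frostfs.devenv:8080 -w wallet.json --cid CID --oid OID --xhdr "__SYSTEM__NETMAP_LOOKUP_DEPTH=3"
```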

View file

@ -1,15 +0,0 @@
// Package internal provides functionality for FrostFS CLI application
// communication with FrostFS network.
//
// The base client for accessing remote nodes via FrostFS API is a FrostFS SDK
// Go API client. However, although it encapsulates a useful piece of business
// logic (e.g. the signature mechanism), the FrostFS CLI application does not
// fully use the client's flexible interface.
//
// In this regard, this package provides functions over base API client
// necessary for the application. This allows you to concentrate the entire
// spectrum of the client's use in one place (this will be convenient both when
// updating the base client and for evaluating the UX of the SDK library). So it is
// expected that all application packages will be limited to this package for
// the development of functionality requiring FrostFS API communication.
package internal
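// A hypothetical usage sketch from an application package (the endpoint flag
// name below is an assumption; GetSDKClientByFlag and key.Get are provided by
// this repository):
//
//	pk := key.Get(cmd)                                          // private key from a wallet or binary file
//	cli := internal.GetSDKClientByFlag(cmd, pk, "rpc-endpoint") // exits the command with an error on failure
//	// use cli for FrostFS API calls signed with pk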

View file

@ -1,101 +0,0 @@
package internal
import (
"context"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"errors"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network"
tracing "git.frostfs.info/TrueCloudLab/frostfs-observability/tracing/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"google.golang.org/grpc"
)
var errInvalidEndpoint = errors.New("provided RPC endpoint is incorrect")
// GetSDKClientByFlag returns default frostfs-sdk-go client using the specified flag for the address.
// On error, outputs to stderr of cmd and exits with non-zero code.
func GetSDKClientByFlag(cmd *cobra.Command, key *ecdsa.PrivateKey, endpointFlag string) *client.Client {
cli, err := getSDKClientByFlag(cmd, key, endpointFlag)
if err != nil {
commonCmd.ExitOnErr(cmd, "can't create API client: %w", err)
}
return cli
}
func getSDKClientByFlag(cmd *cobra.Command, key *ecdsa.PrivateKey, endpointFlag string) (*client.Client, error) {
var addr network.Address
err := addr.FromString(viper.GetString(endpointFlag))
if err != nil {
return nil, fmt.Errorf("%v: %w", errInvalidEndpoint, err)
}
return GetSDKClient(cmd.Context(), cmd, key, addr)
}
// GetSDKClient returns default frostfs-sdk-go client.
func GetSDKClient(ctx context.Context, cmd *cobra.Command, key *ecdsa.PrivateKey, addr network.Address) (*client.Client, error) {
var (
c client.Client
prmInit client.PrmInit
prmDial client.PrmDial
)
prmInit.SetDefaultPrivateKey(*key)
prmInit.ResolveFrostFSFailures()
prmDial.SetServerURI(addr.URIAddr())
if timeout := viper.GetDuration(commonflags.Timeout); timeout > 0 {
// In CLI we can only set a timeout for the whole operation.
// By also setting the stream timeout we ensure that no operation hangs
// for too long.
prmDial.SetTimeout(timeout)
prmDial.SetStreamTimeout(timeout)
common.PrintVerbose(cmd, "Set request timeout to %s.", timeout)
}
prmDial.SetGRPCDialOptions(
grpc.WithChainUnaryInterceptor(tracing.NewUnaryClientInteceptor()),
grpc.WithChainStreamInterceptor(tracing.NewStreamClientInterceptor()))
c.Init(prmInit)
if err := c.Dial(ctx, prmDial); err != nil {
return nil, fmt.Errorf("can't init SDK client: %w", err)
}
return &c, nil
}
// GetCurrentEpoch returns current epoch.
func GetCurrentEpoch(ctx context.Context, cmd *cobra.Command, endpoint string) (uint64, error) {
var addr network.Address
if err := addr.FromString(endpoint); err != nil {
return 0, fmt.Errorf("can't parse RPC endpoint: %w", err)
}
key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
return 0, fmt.Errorf("can't generate key to sign query: %w", err)
}
c, err := GetSDKClient(ctx, cmd, key, addr)
if err != nil {
return 0, err
}
ni, err := c.NetworkInfo(ctx, client.PrmNetworkInfo{})
if err != nil {
return 0, err
}
return ni.Info().CurrentEpoch(), nil
}

View file

@ -1,28 +0,0 @@
package common
import (
"fmt"
"strconv"
"github.com/spf13/cobra"
)
// ParseEpoch parses epoch argument. Second return value is true if
// the specified epoch is relative, and false otherwise.
func ParseEpoch(cmd *cobra.Command, flag string) (uint64, bool, error) {
s, _ := cmd.Flags().GetString(flag)
if len(s) == 0 {
return 0, false, nil
}
relative := s[0] == '+'
if relative {
s = s[1:]
}
epoch, err := strconv.ParseUint(s, 10, 64)
if err != nil {
return 0, relative, fmt.Errorf("can't parse epoch for %s argument: %w", flag, err)
}
return epoch, relative, nil
}

View file

@ -1,67 +0,0 @@
package common
import (
"encoding/json"
"errors"
"fmt"
"os"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"github.com/spf13/cobra"
)
// ReadBearerToken reads bearer token from the path provided in a specified flag.
func ReadBearerToken(cmd *cobra.Command, flagname string) *bearer.Token {
path, err := cmd.Flags().GetString(flagname)
commonCmd.ExitOnErr(cmd, "", err)
if len(path) == 0 {
return nil
}
PrintVerbose(cmd, "Reading bearer token from file [%s]...", path)
var tok bearer.Token
err = ReadBinaryOrJSON(cmd, &tok, path)
commonCmd.ExitOnErr(cmd, "invalid bearer token: %v", err)
return &tok
}
// BinaryOrJSON is an interface of entities which provide json.Unmarshaler
// and FrostFS binary decoder.
type BinaryOrJSON interface {
Unmarshal([]byte) error
json.Unmarshaler
}
// ReadBinaryOrJSON reads file data using provided path and decodes
// BinaryOrJSON from the data.
func ReadBinaryOrJSON(cmd *cobra.Command, dst BinaryOrJSON, fPath string) error {
PrintVerbose(cmd, "Reading file [%s]...", fPath)
// try to read session token from file
data, err := os.ReadFile(fPath)
if err != nil {
return fmt.Errorf("read file <%s>: %w", fPath, err)
}
PrintVerbose(cmd, "Trying to decode binary...")
err = dst.Unmarshal(data)
if err != nil {
PrintVerbose(cmd, "Failed to decode binary: %v", err)
PrintVerbose(cmd, "Trying to decode JSON...")
err = dst.UnmarshalJSON(data)
if err != nil {
PrintVerbose(cmd, "Failed to decode JSON: %v", err)
return errors.New("invalid format")
}
}
return nil
}

View file

@ -1,62 +0,0 @@
package common
import (
"context"
"sort"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/misc"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
"github.com/spf13/cobra"
"go.opentelemetry.io/otel/trace"
)
type spanKey struct{}
// StopClientCommandSpan stops tracing span for the command and prints trace ID on the standard output.
func StopClientCommandSpan(cmd *cobra.Command, _ []string) {
span, ok := cmd.Context().Value(spanKey{}).(trace.Span)
if !ok {
return
}
span.End()
// Noop provider cannot fail on flush.
_ = tracing.Shutdown(cmd.Context())
cmd.PrintErrf("Trace ID: %s\n", span.SpanContext().TraceID())
}
// StartClientCommandSpan starts tracing span for the command.
func StartClientCommandSpan(cmd *cobra.Command) {
enableTracing, err := cmd.Flags().GetBool(commonflags.TracingFlag)
if err != nil || !enableTracing {
return
}
_, err = tracing.Setup(cmd.Context(), tracing.Config{
Enabled: true,
Exporter: tracing.NoOpExporter,
Service: "frostfs-cli",
Version: misc.Version,
})
commonCmd.ExitOnErr(cmd, "init tracing: %w", err)
var components sort.StringSlice
for c := cmd; c != nil; c = c.Parent() {
components = append(components, c.Name())
}
for i, j := 0, len(components)-1; i < j; {
components.Swap(i, j)
i++
j--
}
operation := strings.Join(components, ".")
ctx, span := tracing.StartSpanFromContext(cmd.Context(), operation)
ctx = context.WithValue(ctx, spanKey{}, span)
cmd.SetContext(ctx)
}
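
Not part of the diff: a small sketch of how the span operation name is assembled above — command names are collected from leaf to root and reversed before joining with dots (the command names here are invented for the example).

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	// Collected from leaf to root, as StartClientCommandSpan does by
	// walking cmd.Parent(); these names are just an example.
	components := sort.StringSlice{"create", "container", "frostfs-cli"}

	// Reverse in place so the root command comes first.
	for i, j := 0, len(components)-1; i < j; i, j = i+1, j-1 {
		components.Swap(i, j)
	}

	fmt.Println(strings.Join(components, ".")) // frostfs-cli.container.create
}
```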

View file

@ -1,9 +0,0 @@
package commonflags
const (
// ExpireAt is a flag for setting last epoch of an object or a token.
ExpireAt = "expire-at"
// Lifetime is a flag for setting the lifetime of an object or a token,
// starting from the current epoch.
Lifetime = "lifetime"
)

View file

@ -1,19 +0,0 @@
package commonflags
import (
"fmt"
"github.com/spf13/cobra"
)
const SessionToken = "session"
// InitSession registers SessionToken flag representing file path to the token of
// the session with the given name. Supports FrostFS-binary and JSON files.
func InitSession(cmd *cobra.Command, name string) {
cmd.Flags().String(
SessionToken,
"",
fmt.Sprintf("Filepath to a JSON- or binary-encoded token of the %s session", name),
)
}

View file

@ -1,62 +0,0 @@
package key
import (
"crypto/ecdsa"
"errors"
"fmt"
"os"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var errCantGenerateKey = errors.New("can't generate new private key")
// Get returns private key from wallet or binary file.
// Ideally we want to touch the file system only at the last step.
// This function assumes that all flags were bound to viper in a `PersistentPreRun`.
func Get(cmd *cobra.Command) *ecdsa.PrivateKey {
pk, err := get(cmd)
commonCmd.ExitOnErr(cmd, "can't fetch private key: %w", err)
return pk
}
func get(cmd *cobra.Command) (*ecdsa.PrivateKey, error) {
keyDesc := viper.GetString(commonflags.WalletPath)
data, err := os.ReadFile(keyDesc)
if err != nil {
return nil, fmt.Errorf("%w: %v", ErrFs, err)
}
priv, err := keys.NewPrivateKeyFromBytes(data)
if err != nil {
w, err := wallet.NewWalletFromFile(keyDesc)
if err == nil {
return FromWallet(cmd, w, viper.GetString(commonflags.Account))
}
return nil, fmt.Errorf("%w: %v", ErrInvalidKey, err)
}
return &priv.PrivateKey, nil
}
// GetOrGenerate is similar to get but generates a new key if commonflags.GenerateKey is set.
func GetOrGenerate(cmd *cobra.Command) *ecdsa.PrivateKey {
pk, err := getOrGenerate(cmd)
commonCmd.ExitOnErr(cmd, "can't fetch private key: %w", err)
return pk
}
func getOrGenerate(cmd *cobra.Command) (*ecdsa.PrivateKey, error) {
if viper.GetBool(commonflags.GenerateKey) {
priv, err := keys.NewPrivateKey()
if err != nil {
return nil, fmt.Errorf("%w: %v", errCantGenerateKey, err)
}
return &priv.PrivateKey, nil
}
return get(cmd)
}
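
Not part of the diff: a hedged sketch of the key-loading fallback order used above — try the file as a raw private key first, then as a NEP-6 wallet. The wallet branch is simplified and only reports that a wallet was found instead of decrypting an account; the "wallet.key" path is a placeholder.

```go
package main

import (
	"crypto/ecdsa"
	"fmt"
	"os"

	"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
	"github.com/nspcc-dev/neo-go/pkg/wallet"
)

// loadKey mirrors the fallback order of get() above.
func loadKey(path string) (*ecdsa.PrivateKey, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read key file: %w", err)
	}
	priv, err := keys.NewPrivateKeyFromBytes(data)
	if err != nil {
		if _, werr := wallet.NewWalletFromFile(path); werr == nil {
			return nil, fmt.Errorf("file is a wallet; account decryption is not shown in this sketch")
		}
		return nil, fmt.Errorf("invalid key: %w", err)
	}
	return &priv.PrivateKey, nil
}

func main() {
	if _, err := loadKey("wallet.key"); err != nil {
		fmt.Println("error:", err)
	}
}
```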

View file

@ -1,7 +0,0 @@
package main
import cmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules"
func main() {
cmd.Execute()
}

View file

@ -1,28 +0,0 @@
package basic
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
"github.com/spf13/cobra"
)
var printACLCmd = &cobra.Command{
Use: "print",
Short: "Pretty print basic ACL from the HEX representation",
Example: `frostfs-cli acl basic print 0x1C8C8CCC`,
Long: `Pretty print basic ACL from the HEX representation.
A few roles have exclusive default access to a set of operations, even if a particular bit denies it.
Containers have access to the operations of the data replication mechanism:
Get, Head, Put, Search, Hash.
InnerRing members are allowed to perform data audit operations only:
Get, Head, Hash, Search.`,
Run: printACL,
Args: cobra.ExactArgs(1),
}
func printACL(cmd *cobra.Command, args []string) {
var bacl acl.Basic
commonCmd.ExitOnErr(cmd, "unable to parse basic acl: %w", bacl.DecodeString(args[0]))
util.PrettyPrintTableBACL(cmd, &bacl)
}
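
Not part of the diff: a minimal sketch of decoding a HEX basic ACL with the SDK methods used in this CLI (DecodeString and EncodeToString appear in the files above); the full table rendering is done by util.PrettyPrintTableBACL and is not reproduced here.

```go
package main

import (
	"fmt"

	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
)

func main() {
	// Same HEX value as in the command example above.
	var bacl acl.Basic
	if err := bacl.DecodeString("0x1C8C8CCC"); err != nil {
		fmt.Println("unable to parse basic acl:", err)
		return
	}
	fmt.Println("basic ACL:", bacl.EncodeToString())
}
```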

View file

@ -1,14 +0,0 @@
package basic
import (
"github.com/spf13/cobra"
)
var Cmd = &cobra.Command{
Use: "basic",
Short: "Operations with Basic Access Control Lists",
}
func init() {
Cmd.AddCommand(printACLCmd)
}

View file

@ -1,127 +0,0 @@
package extended
import (
"bytes"
"encoding/json"
"os"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"github.com/spf13/cobra"
)
var createCmd = &cobra.Command{
Use: "create",
Short: "Create extended ACL from the text representation",
Long: `Create extended ACL from the text representation.
A rule consists of these blocks: <action> <operation> [<filter1> ...] [<target1> ...]
Action is 'allow' or 'deny'.
Operation is an object service verb: 'get', 'head', 'put', 'search', 'delete', 'getrange', or 'getrangehash'.
Filter consists of <typ>:<key><match><value>
Typ is 'obj' for an object-applied filter or 'req' for a request-applied filter.
Key is a valid unicode string corresponding to an object or request header key.
Well-known system object headers start with the '$Object:' prefix.
User-defined headers have no prefix.
Read more about filter keys at git.frostfs.info.com/TrueCloudLab/frostfs-api/src/branch/master/proto-docs/acl.md#message-eaclrecordfilter
Match is '=' for a matching filter and '!=' for a non-matching one.
Value is a valid unicode string corresponding to an object or request header value.
Target is
'user' for the container owner,
'system' for Storage nodes in the container and Inner Ring nodes,
'others' for all other request senders,
'pubkey:<key1>,<key2>,...' for an exact request sender, where <key> is a hex-encoded 33-byte public key.
When both '--rule' and '--file' arguments are used, '--rule' records will be placed higher in the resulting extended ACL table.
`,
Example: `frostfs-cli acl extended create --cid EutHBsdT1YCzHxjCfQHnLPL1vFrkSyLSio4vkphfnEk -f rules.txt --out table.json
frostfs-cli acl extended create --cid EutHBsdT1YCzHxjCfQHnLPL1vFrkSyLSio4vkphfnEk -r 'allow get obj:Key=Value others' -r 'deny put others'`,
Run: createEACL,
}
func init() {
createCmd.Flags().StringArrayP("rule", "r", nil, "Extended ACL table record to apply")
createCmd.Flags().StringP("file", "f", "", "Read list of extended ACL table records from text file")
createCmd.Flags().StringP("out", "o", "", "Save JSON formatted extended ACL table in file")
createCmd.Flags().StringP(commonflags.CIDFlag, "", "", commonflags.CIDFlagUsage)
_ = cobra.MarkFlagFilename(createCmd.Flags(), "file")
_ = cobra.MarkFlagFilename(createCmd.Flags(), "out")
}
func createEACL(cmd *cobra.Command, _ []string) {
rules, _ := cmd.Flags().GetStringArray("rule")
fileArg, _ := cmd.Flags().GetString("file")
outArg, _ := cmd.Flags().GetString("out")
cidArg, _ := cmd.Flags().GetString(commonflags.CIDFlag)
var containerID cid.ID
if cidArg != "" {
if err := containerID.DecodeString(cidArg); err != nil {
cmd.PrintErrf("invalid container ID: %v\n", err)
os.Exit(1)
}
}
rulesFile, err := getRulesFromFile(fileArg)
if err != nil {
cmd.PrintErrf("can't read rules from file: %v\n", err)
os.Exit(1)
}
rules = append(rules, rulesFile...)
if len(rules) == 0 {
cmd.PrintErrln("no extended ACL rules has been provided")
os.Exit(1)
}
tb := eacl.NewTable()
commonCmd.ExitOnErr(cmd, "unable to parse provided rules: %w", util.ParseEACLRules(tb, rules))
tb.SetCID(containerID)
data, err := tb.MarshalJSON()
if err != nil {
cmd.PrintErrln(err)
os.Exit(1)
}
buf := new(bytes.Buffer)
err = json.Indent(buf, data, "", " ")
if err != nil {
cmd.PrintErrln(err)
os.Exit(1)
}
if len(outArg) == 0 {
cmd.Println(buf)
return
}
err = os.WriteFile(outArg, buf.Bytes(), 0644)
if err != nil {
cmd.PrintErrln(err)
os.Exit(1)
}
}
func getRulesFromFile(filename string) ([]string, error) {
if len(filename) == 0 {
return nil, nil
}
data, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
return strings.Split(strings.TrimSpace(string(data)), "\n"), nil
}
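
Not part of the diff: a small sketch of how rule records from the '--rule' flag and from a file are merged, mirroring getRulesFromFile above; flag records are appended first, which is why they end up higher in the resulting table.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// rulesFromFile mirrors getRulesFromFile: an empty name means no file,
// otherwise the file is split into one rule per line.
func rulesFromFile(name string) ([]string, error) {
	if name == "" {
		return nil, nil
	}
	data, err := os.ReadFile(name)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimSpace(string(data)), "\n"), nil
}

func main() {
	rules := []string{"allow get obj:Key=Value others"} // from --rule
	fromFile, err := rulesFromFile("")                  // "" — no file in this example
	if err != nil {
		fmt.Println("can't read rules from file:", err)
		return
	}
	rules = append(rules, fromFile...)
	fmt.Println(rules)
}
```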

View file

@ -1,38 +0,0 @@
package extended
import (
"os"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"github.com/spf13/cobra"
)
var printEACLCmd = &cobra.Command{
Use: "print",
Short: "Pretty print extended ACL from the file(in text or json format) or for given container.",
Run: printEACL,
}
func init() {
flags := printEACLCmd.Flags()
flags.StringP("file", "f", "",
"Read list of extended ACL table records from text or json file")
_ = printEACLCmd.MarkFlagRequired("file")
}
func printEACL(cmd *cobra.Command, _ []string) {
file, _ := cmd.Flags().GetString("file")
eaclTable := new(eacl.Table)
data, err := os.ReadFile(file)
commonCmd.ExitOnErr(cmd, "can't read file with EACL: %w", err)
if strings.HasSuffix(file, ".json") {
commonCmd.ExitOnErr(cmd, "unable to parse json: %w", eaclTable.UnmarshalJSON(data))
} else {
rules := strings.Split(strings.TrimSpace(string(data)), "\n")
commonCmd.ExitOnErr(cmd, "can't parse file with EACL: %w", util.ParseEACLRules(eaclTable, rules))
}
util.PrettyPrintTableEACL(cmd, eaclTable)
}

View file

@ -1,17 +0,0 @@
package acl
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/acl/basic"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/acl/extended"
"github.com/spf13/cobra"
)
var Cmd = &cobra.Command{
Use: "acl",
Short: "Operations with Access Control Lists",
}
func init() {
Cmd.AddCommand(extended.Cmd)
Cmd.AddCommand(basic.Cmd)
}

View file

@ -1,135 +0,0 @@
package bearer
import (
"context"
"encoding/json"
"fmt"
"os"
"time"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
eaclSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"
)
const (
eaclFlag = "eacl"
issuedAtFlag = "issued-at"
notValidBeforeFlag = "not-valid-before"
ownerFlag = "owner"
outFlag = "out"
jsonFlag = commonflags.JSON
impersonateFlag = "impersonate"
)
var createCmd = &cobra.Command{
Use: "create",
Short: "Create bearer token",
Long: `Create bearer token.
All epoch flags can be specified relative to the current epoch with the +n syntax.
In this case the --` + commonflags.RPC + ` flag must be specified, and the epoch in the bearer token
is set to the current epoch + n.
`,
Run: createToken,
}
func init() {
createCmd.Flags().StringP(eaclFlag, "e", "", "Path to the extended ACL table (mutually exclusive with --impersonate flag)")
createCmd.Flags().StringP(issuedAtFlag, "i", "+0", "Epoch to issue token at")
createCmd.Flags().StringP(notValidBeforeFlag, "n", "+0", "Not valid before epoch")
createCmd.Flags().StringP(commonflags.ExpireAt, "x", "", "The last active epoch for the token")
createCmd.Flags().StringP(ownerFlag, "o", "", "Token owner")
createCmd.Flags().String(outFlag, "", "File to write token to")
createCmd.Flags().Bool(jsonFlag, false, "Output token in JSON")
createCmd.Flags().Bool(impersonateFlag, false, "Mark token as impersonate to consider the token signer as the request owner (mutually exclusive with --eacl flag)")
createCmd.Flags().StringP(commonflags.RPC, commonflags.RPCShorthand, commonflags.RPCDefault, commonflags.RPCUsage)
createCmd.MarkFlagsMutuallyExclusive(eaclFlag, impersonateFlag)
_ = cobra.MarkFlagFilename(createCmd.Flags(), eaclFlag)
_ = cobra.MarkFlagRequired(createCmd.Flags(), commonflags.ExpireAt)
_ = cobra.MarkFlagRequired(createCmd.Flags(), ownerFlag)
_ = cobra.MarkFlagRequired(createCmd.Flags(), outFlag)
}
func createToken(cmd *cobra.Command, _ []string) {
iat, iatRelative, err := common.ParseEpoch(cmd, issuedAtFlag)
commonCmd.ExitOnErr(cmd, "can't parse --"+issuedAtFlag+" flag: %w", err)
exp, expRelative, err := common.ParseEpoch(cmd, commonflags.ExpireAt)
commonCmd.ExitOnErr(cmd, "can't parse --"+commonflags.ExpireAt+" flag: %w", err)
nvb, nvbRelative, err := common.ParseEpoch(cmd, notValidBeforeFlag)
commonCmd.ExitOnErr(cmd, "can't parse --"+notValidBeforeFlag+" flag: %w", err)
if iatRelative || expRelative || nvbRelative {
endpoint, _ := cmd.Flags().GetString(commonflags.RPC)
if len(endpoint) == 0 {
commonCmd.ExitOnErr(cmd, "can't fetch current epoch: %w", fmt.Errorf("'%s' flag value must be specified", commonflags.RPC))
}
ctx, cancel := context.WithTimeout(context.Background(), time.Second*30)
defer cancel()
currEpoch, err := internalclient.GetCurrentEpoch(ctx, cmd, endpoint)
commonCmd.ExitOnErr(cmd, "can't fetch current epoch: %w", err)
if iatRelative {
iat += currEpoch
}
if expRelative {
exp += currEpoch
}
if nvbRelative {
nvb += currEpoch
}
}
if exp < nvb {
commonCmd.ExitOnErr(cmd, "",
fmt.Errorf("expiration epoch is less than not-valid-before epoch: %d < %d", exp, nvb))
}
ownerStr, _ := cmd.Flags().GetString(ownerFlag)
var ownerID user.ID
commonCmd.ExitOnErr(cmd, "can't parse recipient: %w", ownerID.DecodeString(ownerStr))
var b bearer.Token
b.SetExp(exp)
b.SetNbf(nvb)
b.SetIat(iat)
b.ForUser(ownerID)
impersonate, _ := cmd.Flags().GetBool(impersonateFlag)
b.SetImpersonate(impersonate)
eaclPath, _ := cmd.Flags().GetString(eaclFlag)
if eaclPath != "" {
table := eaclSDK.NewTable()
raw, err := os.ReadFile(eaclPath)
commonCmd.ExitOnErr(cmd, "can't read extended ACL file: %w", err)
commonCmd.ExitOnErr(cmd, "can't parse extended ACL: %w", json.Unmarshal(raw, table))
b.SetEACLTable(*table)
}
var data []byte
toJSON, _ := cmd.Flags().GetBool(jsonFlag)
if toJSON {
data, err = json.Marshal(b)
commonCmd.ExitOnErr(cmd, "can't mashal token to JSON: %w", err)
} else {
data = b.Marshal()
}
out, _ := cmd.Flags().GetString(outFlag)
err = os.WriteFile(out, data, 0644)
commonCmd.ExitOnErr(cmd, "can't write token to file: %w", err)
}
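
Not part of the diff: a sketch of the relative-epoch arithmetic and the expiration check performed by createToken above; the current epoch and flag values are assumptions made up for the example.

```go
package main

import "fmt"

// resolve applies the "+n" semantics used by the create command:
// relative values are offsets from the current epoch.
func resolve(value uint64, relative bool, current uint64) uint64 {
	if relative {
		return current + value
	}
	return value
}

func main() {
	const current = 100 // assumed current epoch for the example

	iat := resolve(0, true, current)  // --issued-at "+0"
	nvb := resolve(0, true, current)  // --not-valid-before "+0"
	exp := resolve(50, true, current) // --expire-at "+50"

	// The command rejects tokens that expire before they become valid.
	if exp < nvb {
		fmt.Printf("expiration epoch is less than not-valid-before epoch: %d < %d\n", exp, nvb)
		return
	}
	fmt.Println("iat:", iat, "nvb:", nvb, "exp:", exp)
}
```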

View file

@ -1,9 +0,0 @@
package cmd
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/autocomplete"
)
func init() {
rootCmd.AddCommand(autocomplete.Command("frostfs-cli"))
}

View file

@ -1,222 +0,0 @@
package container
import (
"errors"
"fmt"
"os"
"strings"
"time"
containerApi "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"
)
var (
containerACL string
containerPolicy string
containerAttributes []string
containerAwait bool
containerName string
containerNnsName string
containerNnsZone string
containerNoTimestamp bool
force bool
)
var createContainerCmd = &cobra.Command{
Use: "create",
Short: "Create new container",
Long: `Create new container and register it in the FrostFS.
It will be stored in the sidechain when the Inner Ring accepts it.`,
Run: func(cmd *cobra.Command, args []string) {
placementPolicy, err := parseContainerPolicy(cmd, containerPolicy)
commonCmd.ExitOnErr(cmd, "", err)
key := key.Get(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, key, commonflags.RPC)
if !force {
var prm internalclient.NetMapSnapshotPrm
prm.SetClient(cli)
resmap, err := internalclient.NetMapSnapshot(cmd.Context(), prm)
commonCmd.ExitOnErr(cmd, "unable to get netmap snapshot to validate container placement, "+
"use --force option to skip this check: %w", err)
nodesByRep, err := resmap.NetMap().ContainerNodes(*placementPolicy, nil)
commonCmd.ExitOnErr(cmd, "could not build container nodes based on given placement policy, "+
"use --force option to skip this check: %w", err)
for i, nodes := range nodesByRep {
if placementPolicy.ReplicaNumberByIndex(i) > uint32(len(nodes)) {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf(
"the number of nodes '%d' in selector is not enough for the number of replicas '%d', "+
"use --force option to skip this check",
len(nodes),
placementPolicy.ReplicaNumberByIndex(i),
))
}
}
}
var cnr container.Container
cnr.Init()
err = parseAttributes(&cnr, containerAttributes)
commonCmd.ExitOnErr(cmd, "", err)
var basicACL acl.Basic
commonCmd.ExitOnErr(cmd, "decode basic ACL string: %w", basicACL.DecodeString(containerACL))
tok := getSession(cmd)
if tok != nil {
issuer := tok.Issuer()
cnr.SetOwner(issuer)
} else {
var idOwner user.ID
user.IDFromKey(&idOwner, key.PublicKey)
cnr.SetOwner(idOwner)
}
cnr.SetPlacementPolicy(*placementPolicy)
cnr.SetBasicACL(basicACL)
var syncContainerPrm internalclient.SyncContainerPrm
syncContainerPrm.SetClient(cli)
syncContainerPrm.SetContainer(&cnr)
_, err = internalclient.SyncContainerSettings(cmd.Context(), syncContainerPrm)
commonCmd.ExitOnErr(cmd, "syncing container's settings rpc error: %w", err)
putPrm := internalclient.PutContainerPrm{
Client: cli,
ClientParams: client.PrmContainerPut{
Container: &cnr,
Session: tok,
},
}
res, err := internalclient.PutContainer(cmd.Context(), putPrm)
commonCmd.ExitOnErr(cmd, "put container rpc error: %w", err)
id := res.ID()
cmd.Println("container ID:", id)
if containerAwait {
cmd.Println("awaiting...")
getPrm := internalclient.GetContainerPrm{
Client: cli,
ClientParams: client.PrmContainerGet{
ContainerID: &id,
},
}
for i := 0; i < awaitTimeout; i++ {
time.Sleep(1 * time.Second)
_, err := internalclient.GetContainer(cmd.Context(), getPrm)
if err == nil {
cmd.Println("container has been persisted on sidechain")
return
}
}
commonCmd.ExitOnErr(cmd, "", errCreateTimeout)
}
},
}
func initContainerCreateCmd() {
flags := createContainerCmd.Flags()
// Init common flags
flags.StringP(commonflags.RPC, commonflags.RPCShorthand, commonflags.RPCDefault, commonflags.RPCUsage)
flags.Bool(commonflags.TracingFlag, false, commonflags.TracingFlagUsage)
flags.DurationP(commonflags.Timeout, commonflags.TimeoutShorthand, commonflags.TimeoutDefault, commonflags.TimeoutUsage)
flags.StringP(commonflags.WalletPath, commonflags.WalletPathShorthand, commonflags.WalletPathDefault, commonflags.WalletPathUsage)
flags.StringP(commonflags.Account, commonflags.AccountShorthand, commonflags.AccountDefault, commonflags.AccountUsage)
flags.StringVar(&containerACL, "basic-acl", acl.NamePrivate, fmt.Sprintf("HEX encoded basic ACL value or keywords like '%s', '%s', '%s'",
acl.NamePublicRW, acl.NamePrivate, acl.NamePublicROExtended,
))
flags.StringVarP(&containerPolicy, "policy", "p", "", "QL-encoded or JSON-encoded placement policy or path to file with it")
flags.StringSliceVarP(&containerAttributes, "attributes", "a", nil, "Comma separated pairs of container attributes in form of Key1=Value1,Key2=Value2")
flags.BoolVar(&containerAwait, "await", false, "Block execution until container is persisted")
flags.StringVar(&containerName, "name", "", "Container name attribute")
flags.StringVar(&containerNnsName, "nns-name", "", "Container nns name attribute")
flags.StringVar(&containerNnsZone, "nns-zone", "", "Container nns zone attribute")
flags.BoolVar(&containerNoTimestamp, "disable-timestamp", false, "Disable timestamp container attribute")
flags.BoolVarP(&force, commonflags.ForceFlag, commonflags.ForceFlagShorthand, false,
"Skip placement validity check")
}
func parseContainerPolicy(cmd *cobra.Command, policyString string) (*netmap.PlacementPolicy, error) {
_, err := os.Stat(policyString) // check if `policyString` is a path to file with placement policy
if err == nil {
common.PrintVerbose(cmd, "Reading placement policy from file: %s", policyString)
data, err := os.ReadFile(policyString)
if err != nil {
return nil, fmt.Errorf("can't read file with placement policy: %w", err)
}
policyString = string(data)
}
var result netmap.PlacementPolicy
err = result.DecodeString(policyString)
if err == nil {
common.PrintVerbose(cmd, "Parsed QL encoded policy")
return &result, nil
}
if err := result.UnmarshalJSON([]byte(policyString)); err == nil {
common.PrintVerbose(cmd, "Parsed JSON encoded policy")
return &result, nil
}
return nil, fmt.Errorf("can't parse placement policy: %w", err)
}
func parseAttributes(dst *container.Container, attributes []string) error {
for i := range attributes {
k, v, found := strings.Cut(attributes[i], attributeDelimiter)
if !found {
return errors.New("invalid container attribute")
}
dst.SetAttribute(k, v)
}
if !containerNoTimestamp {
container.SetCreationTime(dst, time.Now())
}
if containerName != "" {
container.SetName(dst, containerName)
}
if containerNnsName != "" {
dst.SetAttribute(containerApi.SysAttributeName, containerNnsName)
}
if containerNnsZone != "" {
dst.SetAttribute(containerApi.SysAttributeZone, containerNnsZone)
}
return nil
}
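
Not part of the diff: a condensed sketch of the policy resolution order used by parseContainerPolicy above — an existing file path is read first, then the string is tried as a QL-encoded policy, and finally as JSON (the "REP 3" policy string is just an example input).

```go
package main

import (
	"fmt"
	"os"

	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
)

// parsePolicy follows the same resolution order as parseContainerPolicy.
func parsePolicy(s string) (*netmap.PlacementPolicy, error) {
	if _, err := os.Stat(s); err == nil {
		data, err := os.ReadFile(s)
		if err != nil {
			return nil, fmt.Errorf("can't read file with placement policy: %w", err)
		}
		s = string(data)
	}
	var p netmap.PlacementPolicy
	if err := p.DecodeString(s); err == nil {
		return &p, nil
	}
	if err := p.UnmarshalJSON([]byte(s)); err == nil {
		return &p, nil
	}
	return nil, fmt.Errorf("can't parse placement policy")
}

func main() {
	p, err := parsePolicy("REP 3")
	fmt.Println(p != nil, err)
}
```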

View file

@ -1,139 +0,0 @@
package container
import (
"fmt"
"time"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"
)
var deleteContainerCmd = &cobra.Command{
Use: "delete",
Short: "Delete existing container",
Long: `Delete existing container.
Only the owner of the container has permission to remove it.`,
Run: func(cmd *cobra.Command, args []string) {
id := parseContainerID(cmd)
tok := getSession(cmd)
pk := key.Get(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
if force, _ := cmd.Flags().GetBool(commonflags.ForceFlag); !force {
common.PrintVerbose(cmd, "Reading the container to check ownership...")
getPrm := internalclient.GetContainerPrm{
Client: cli,
ClientParams: client.PrmContainerGet{
ContainerID: &id,
},
}
resGet, err := internalclient.GetContainer(cmd.Context(), getPrm)
commonCmd.ExitOnErr(cmd, "can't get the container: %w", err)
owner := resGet.Container().Owner()
if tok != nil {
common.PrintVerbose(cmd, "Checking session issuer...")
if !tok.Issuer().Equals(owner) {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("session issuer differs with the container owner: expected %s, has %s", owner, tok.Issuer()))
}
} else {
common.PrintVerbose(cmd, "Checking provided account...")
var acc user.ID
user.IDFromKey(&acc, pk.PublicKey)
if !acc.Equals(owner) {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("provided account differs with the container owner: expected %s, has %s", owner, acc))
}
}
common.PrintVerbose(cmd, "Account matches the container owner.")
if tok != nil {
common.PrintVerbose(cmd, "Skip searching for LOCK objects - session provided.")
} else {
fs := objectSDK.NewSearchFilters()
fs.AddTypeFilter(objectSDK.MatchStringEqual, objectSDK.TypeLock)
var searchPrm internalclient.SearchObjectsPrm
searchPrm.SetClient(cli)
searchPrm.SetContainerID(id)
searchPrm.SetFilters(fs)
searchPrm.SetTTL(2)
common.PrintVerbose(cmd, "Searching for LOCK objects...")
res, err := internalclient.SearchObjects(cmd.Context(), searchPrm)
commonCmd.ExitOnErr(cmd, "can't search for LOCK objects: %w", err)
if len(res.IDList()) != 0 {
commonCmd.ExitOnErr(cmd, "",
fmt.Errorf("container wasn't removed because LOCK objects were found, "+
"use --%s flag to remove anyway", commonflags.ForceFlag))
}
}
}
delPrm := internalclient.DeleteContainerPrm{
Client: cli,
ClientParams: client.PrmContainerDelete{
ContainerID: &id,
Session: tok,
},
}
_, err := internalclient.DeleteContainer(cmd.Context(), delPrm)
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
cmd.Println("container delete method invoked")
if containerAwait {
cmd.Println("awaiting...")
getPrm := internalclient.GetContainerPrm{
Client: cli,
ClientParams: client.PrmContainerGet{
ContainerID: &id,
},
}
for i := 0; i < awaitTimeout; i++ {
time.Sleep(1 * time.Second)
_, err := internalclient.GetContainer(cmd.Context(), getPrm)
if err != nil {
cmd.Println("container has been removed:", containerID)
return
}
}
commonCmd.ExitOnErr(cmd, "", errDeleteTimeout)
}
},
}
func initContainerDeleteCmd() {
flags := deleteContainerCmd.Flags()
flags.StringP(commonflags.WalletPath, commonflags.WalletPathShorthand, commonflags.WalletPathDefault, commonflags.WalletPathUsage)
flags.StringP(commonflags.Account, commonflags.AccountShorthand, commonflags.AccountDefault, commonflags.AccountUsage)
flags.StringP(commonflags.RPC, commonflags.RPCShorthand, commonflags.RPCDefault, commonflags.RPCUsage)
flags.Bool(commonflags.TracingFlag, false, commonflags.TracingFlagUsage)
flags.StringVar(&containerID, commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.BoolVar(&containerAwait, "await", false, "Block execution until container is removed")
flags.BoolP(commonflags.ForceFlag, commonflags.ForceFlagShorthand, false, "Skip validation checks (ownership, presence of LOCK objects)")
}

View file

@ -1,164 +0,0 @@
package container
import (
"crypto/ecdsa"
"os"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/spf13/cobra"
)
const (
fromFlag = "from"
fromFlagUsage = "Path to file with encoded container"
)
var (
containerID string
containerPathFrom string
containerPathTo string
containerJSON bool
)
var getContainerInfoCmd = &cobra.Command{
Use: "get",
Short: "Get container field info",
Long: `Get container field info`,
Run: func(cmd *cobra.Command, args []string) {
cnr, _ := getContainer(cmd)
prettyPrintContainer(cmd, cnr, containerJSON)
if containerPathTo != "" {
var (
data []byte
err error
)
if containerJSON {
data, err = cnr.MarshalJSON()
commonCmd.ExitOnErr(cmd, "can't JSON encode container: %w", err)
} else {
data = cnr.Marshal()
}
err = os.WriteFile(containerPathTo, data, 0644)
commonCmd.ExitOnErr(cmd, "can't write container to file: %w", err)
}
},
}
func initContainerInfoCmd() {
commonflags.Init(getContainerInfoCmd)
flags := getContainerInfoCmd.Flags()
flags.StringVar(&containerID, commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.StringVar(&containerPathTo, "to", "", "Path to dump encoded container")
flags.StringVar(&containerPathFrom, fromFlag, "", fromFlagUsage)
flags.BoolVar(&containerJSON, commonflags.JSON, false, "Print or dump container in JSON format")
}
type stringWriter cobra.Command
func (x *stringWriter) WriteString(s string) (n int, err error) {
(*cobra.Command)(x).Print(s)
return len(s), nil
}
func prettyPrintContainer(cmd *cobra.Command, cnr container.Container, jsonEncoding bool) {
if jsonEncoding {
common.PrettyPrintJSON(cmd, cnr, "container")
return
}
var id cid.ID
container.CalculateID(&id, cnr)
cmd.Println("container ID:", id)
cmd.Println("owner ID:", cnr.Owner())
basicACL := cnr.BasicACL()
prettyPrintBasicACL(cmd, basicACL)
cmd.Println("created:", container.CreatedAt(cnr))
cmd.Println("attributes:")
cnr.IterateAttributes(func(key, val string) {
cmd.Printf("\t%s=%s\n", key, val)
})
cmd.Println("placement policy:")
commonCmd.ExitOnErr(cmd, "write policy: %w", cnr.PlacementPolicy().WriteStringTo((*stringWriter)(cmd)))
cmd.Println()
}
func prettyPrintBasicACL(cmd *cobra.Command, basicACL acl.Basic) {
cmd.Printf("basic ACL: %s", basicACL.EncodeToString())
var prettyName string
switch basicACL {
case acl.Private:
prettyName = acl.NamePrivate
case acl.PrivateExtended:
prettyName = acl.NamePrivateExtended
case acl.PublicRO:
prettyName = acl.NamePublicRO
case acl.PublicROExtended:
prettyName = acl.NamePublicROExtended
case acl.PublicRW:
prettyName = acl.NamePublicRW
case acl.PublicRWExtended:
prettyName = acl.NamePublicRWExtended
case acl.PublicAppend:
prettyName = acl.NamePublicAppend
case acl.PublicAppendExtended:
prettyName = acl.NamePublicAppendExtended
}
if prettyName != "" {
cmd.Printf(" (%s)", prettyName)
}
cmd.Println()
util.PrettyPrintTableBACL(cmd, &basicACL)
}
func getContainer(cmd *cobra.Command) (container.Container, *ecdsa.PrivateKey) {
var cnr container.Container
var pk *ecdsa.PrivateKey
if containerPathFrom != "" {
data, err := os.ReadFile(containerPathFrom)
commonCmd.ExitOnErr(cmd, "can't read file: %w", err)
err = cnr.Unmarshal(data)
commonCmd.ExitOnErr(cmd, "can't unmarshal container: %w", err)
} else {
id := parseContainerID(cmd)
pk = key.GetOrGenerate(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
prm := internalclient.GetContainerPrm{
Client: cli,
ClientParams: client.PrmContainerGet{
ContainerID: &id,
},
}
res, err := internalclient.GetContainer(cmd.Context(), prm)
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
cnr = res.Container()
}
return cnr, pk
}

View file

@ -1,68 +0,0 @@
package container
import (
"os"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"github.com/spf13/cobra"
)
var getExtendedACLCmd = &cobra.Command{
Use: "get-eacl",
Short: "Get extended ACL table of container",
Long: `Get extended ACL table of container`,
Run: func(cmd *cobra.Command, args []string) {
id := parseContainerID(cmd)
pk := key.GetOrGenerate(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
eaclPrm := internalclient.EACLPrm{
Client: cli,
ClientParams: client.PrmContainerEACL{
ContainerID: &id,
},
}
res, err := internalclient.EACL(cmd.Context(), eaclPrm)
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
eaclTable := res.EACL()
if containerPathTo == "" {
cmd.Println("eACL: ")
common.PrettyPrintJSON(cmd, &eaclTable, "eACL")
return
}
var data []byte
if containerJSON {
data, err = eaclTable.MarshalJSON()
commonCmd.ExitOnErr(cmd, "can't encode to JSON: %w", err)
} else {
data, err = eaclTable.Marshal()
commonCmd.ExitOnErr(cmd, "can't encode to binary: %w", err)
}
cmd.Println("dumping data to file:", containerPathTo)
err = os.WriteFile(containerPathTo, data, 0644)
commonCmd.ExitOnErr(cmd, "could not write eACL to file: %w", err)
},
}
func initContainerGetEACLCmd() {
commonflags.Init(getExtendedACLCmd)
flags := getExtendedACLCmd.Flags()
flags.StringVar(&containerID, commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.StringVar(&containerPathTo, "to", "", "Path to dump encoded container (default: binary encoded)")
flags.BoolVar(&containerJSON, commonflags.JSON, false, "Encode EACL table in json format")
}

View file

@ -1,107 +0,0 @@
package container
import (
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
containerSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"
)
// flags of list command.
const (
flagListPrintAttr = "with-attr"
flagListContainerOwner = "owner"
flagListName = "name"
)
// flag vars of list command.
var (
flagVarListPrintAttr bool
flagVarListContainerOwner string
flagVarListName string
)
var listContainersCmd = &cobra.Command{
Use: "list",
Short: "List all created containers",
Long: "List all created containers",
Run: func(cmd *cobra.Command, args []string) {
var idUser user.ID
key := key.GetOrGenerate(cmd)
if flagVarListContainerOwner == "" {
user.IDFromKey(&idUser, key.PublicKey)
} else {
err := idUser.DecodeString(flagVarListContainerOwner)
commonCmd.ExitOnErr(cmd, "invalid user ID: %w", err)
}
cli := internalclient.GetSDKClientByFlag(cmd, key, commonflags.RPC)
var prm internalclient.ListContainersPrm
prm.SetClient(cli)
prm.SetAccount(idUser)
res, err := internalclient.ListContainers(cmd.Context(), prm)
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
prmGet := internalclient.GetContainerPrm{
Client: cli,
}
containerIDs := res.IDList()
for _, cnrID := range containerIDs {
if flagVarListName == "" && !flagVarListPrintAttr {
cmd.Println(cnrID.String())
continue
}
cnrID := cnrID
prmGet.ClientParams.ContainerID = &cnrID
res, err := internalclient.GetContainer(cmd.Context(), prmGet)
if err != nil {
cmd.Printf(" failed to read attributes: %v\n", err)
continue
}
cnr := res.Container()
if cnrName := containerSDK.Name(cnr); flagVarListName != "" && cnrName != flagVarListName {
continue
}
cmd.Println(cnrID.String())
if flagVarListPrintAttr {
cnr.IterateAttributes(func(key, val string) {
if !strings.HasPrefix(key, container.SysAttributePrefix) && !strings.HasPrefix(key, container.SysAttributePrefixNeoFS) {
// FIXME(@cthulhu-rider): https://git.frostfs.info/TrueCloudLab/frostfs-sdk-go/issues/97
// Use dedicated method to skip system attributes.
cmd.Printf(" %s: %s\n", key, val)
}
})
}
}
},
}
func initContainerListContainersCmd() {
commonflags.Init(listContainersCmd)
flags := listContainersCmd.Flags()
flags.StringVar(&flagVarListName, flagListName, "",
"List containers by the attribute name",
)
flags.StringVar(&flagVarListContainerOwner, flagListContainerOwner, "",
"Owner of containers (omit to use owner from private key)",
)
flags.BoolVar(&flagVarListPrintAttr, flagListPrintAttr, false,
"Request and print attributes of each container",
)
}

View file

@ -1,97 +0,0 @@
package container
import (
"strings"
v2object "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
objectCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/object"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
// flags of list-object command.
const (
flagListObjectPrintAttr = "with-attr"
)
// flag vars of list-objects command.
var (
flagVarListObjectsPrintAttr bool
)
var listContainerObjectsCmd = &cobra.Command{
Use: "list-objects",
Short: "List existing objects in container",
Long: `List existing objects in container`,
Run: func(cmd *cobra.Command, args []string) {
id := parseContainerID(cmd)
filters := new(objectSDK.SearchFilters)
filters.AddRootFilter() // search only user created objects
cli := internalclient.GetSDKClientByFlag(cmd, key.GetOrGenerate(cmd), commonflags.RPC)
var prmSearch internalclient.SearchObjectsPrm
var prmHead internalclient.HeadObjectPrm
prmSearch.SetClient(cli)
if flagVarListObjectsPrintAttr {
prmHead.SetClient(cli)
objectCli.Prepare(cmd, &prmSearch, &prmHead)
} else {
objectCli.Prepare(cmd, &prmSearch)
}
prmSearch.SetContainerID(id)
prmSearch.SetFilters(*filters)
res, err := internalclient.SearchObjects(cmd.Context(), prmSearch)
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
objectIDs := res.IDList()
for i := range objectIDs {
cmd.Println(objectIDs[i].String())
if flagVarListObjectsPrintAttr {
var addr oid.Address
addr.SetContainer(id)
addr.SetObject(objectIDs[i])
prmHead.SetAddress(addr)
resHead, err := internalclient.HeadObject(cmd.Context(), prmHead)
if err == nil {
attrs := resHead.Header().Attributes()
for i := range attrs {
attrKey := attrs[i].Key()
if !strings.HasPrefix(attrKey, v2object.SysAttributePrefix) && !strings.HasPrefix(attrKey, v2object.SysAttributePrefixNeoFS) {
// FIXME(@cthulhu-rider): https://git.frostfs.info/TrueCloudLab/frostfs-sdk-go/issues/97
// Use dedicated method to skip system attributes.
cmd.Printf(" %s: %s\n", attrKey, attrs[i].Value())
}
}
} else {
cmd.Printf(" failed to read attributes: %v\n", err)
}
}
}
},
}
func initContainerListObjectsCmd() {
commonflags.Init(listContainerObjectsCmd)
objectCli.InitBearer(listContainerObjectsCmd)
flags := listContainerObjectsCmd.Flags()
flags.StringVar(&containerID, commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.BoolVar(&flagVarListObjectsPrintAttr, flagListObjectPrintAttr, false,
"Request and print user attributes of each object",
)
}

View file

@ -1,64 +0,0 @@
package container
import (
"crypto/sha256"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
containerAPI "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"github.com/spf13/cobra"
)
var short bool
var containerNodesCmd = &cobra.Command{
Use: "nodes",
Short: "Show nodes for container",
Long: "Show nodes taking part in a container at the current epoch.",
Run: func(cmd *cobra.Command, args []string) {
var cnr, pkey = getContainer(cmd)
if pkey == nil {
pkey = key.GetOrGenerate(cmd)
}
cli := internalclient.GetSDKClientByFlag(cmd, pkey, commonflags.RPC)
var prm internalclient.NetMapSnapshotPrm
prm.SetClient(cli)
resmap, err := internalclient.NetMapSnapshot(cmd.Context(), prm)
commonCmd.ExitOnErr(cmd, "unable to get netmap snapshot", err)
var id cid.ID
containerAPI.CalculateID(&id, cnr)
binCnr := make([]byte, sha256.Size)
id.Encode(binCnr)
policy := cnr.PlacementPolicy()
var cnrNodes [][]netmap.NodeInfo
cnrNodes, err = resmap.NetMap().ContainerNodes(policy, binCnr)
commonCmd.ExitOnErr(cmd, "could not build container nodes for given container: %w", err)
for i := range cnrNodes {
cmd.Printf("Descriptor #%d, REP %d:\n", i+1, policy.ReplicaNumberByIndex(i))
for j := range cnrNodes[i] {
commonCmd.PrettyPrintNodeInfo(cmd, cnrNodes[i][j], j, "\t", short)
}
}
},
}
func initContainerNodesCmd() {
commonflags.Init(containerNodesCmd)
flags := containerNodesCmd.Flags()
flags.StringVar(&containerID, commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.StringVar(&containerPathFrom, fromFlag, "", fromFlagUsage)
flags.BoolVar(&short, "short", false, "Shortens output of node info")
}

View file

@ -1,233 +0,0 @@
package container
import (
"bufio"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"os"
"strings"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
type policyPlaygroundREPL struct {
cmd *cobra.Command
args []string
nodes map[string]netmap.NodeInfo
}
func newPolicyPlaygroundREPL(cmd *cobra.Command, args []string) (*policyPlaygroundREPL, error) {
return &policyPlaygroundREPL{
cmd: cmd,
args: args,
nodes: map[string]netmap.NodeInfo{},
}, nil
}
func (repl *policyPlaygroundREPL) handleLs(args []string) error {
if len(args) > 0 {
return fmt.Errorf("too many arguments for command 'ls': got %d, want 0", len(args))
}
i := 1
for id, node := range repl.nodes {
var attrs []string
node.IterateAttributes(func(k, v string) {
attrs = append(attrs, fmt.Sprintf("%s:%q", k, v))
})
fmt.Printf("\t%2d: id=%s attrs={%v}\n", i, id, strings.Join(attrs, " "))
i++
}
return nil
}
func (repl *policyPlaygroundREPL) handleAdd(args []string) error {
if len(args) == 0 {
return fmt.Errorf("too few arguments for command 'add': got %d, want >0", len(args))
}
id := args[0]
key, err := hex.DecodeString(id)
if err != nil {
return fmt.Errorf("node id must be a hex string: got %q: %v", id, err)
}
node := repl.nodes[id]
node.SetPublicKey(key)
for _, attr := range args[1:] {
kv := strings.Split(attr, ":")
if len(kv) != 2 {
return fmt.Errorf("node attributes must be in the format 'KEY:VALUE': got %q", attr)
}
node.SetAttribute(kv[0], kv[1])
}
repl.nodes[id] = node
return nil
}
func (repl *policyPlaygroundREPL) handleLoad(args []string) error {
if len(args) != 1 {
return fmt.Errorf("too few arguments for command 'add': got %d, want 1", len(args))
}
jsonNetmap := map[string]map[string]string{}
b, err := os.ReadFile(args[0])
if err != nil {
return fmt.Errorf("reading netmap file %q: %v", args[0], err)
}
if err := json.Unmarshal(b, &jsonNetmap); err != nil {
return fmt.Errorf("decoding json netmap: %v", err)
}
repl.nodes = make(map[string]netmap.NodeInfo)
for id, attrs := range jsonNetmap {
key, err := hex.DecodeString(id)
if err != nil {
return fmt.Errorf("node id must be a hex string: got %q: %v", id, err)
}
node := repl.nodes[id]
node.SetPublicKey(key)
for k, v := range attrs {
node.SetAttribute(k, v)
}
repl.nodes[id] = node
}
return nil
}
func (repl *policyPlaygroundREPL) handleRemove(args []string) error {
if len(args) == 0 {
return fmt.Errorf("too few arguments for command 'remove': got %d, want >0", len(args))
}
id := args[0]
if _, exists := repl.nodes[id]; exists {
delete(repl.nodes, id)
return nil
}
return fmt.Errorf("node not found: id=%q", id)
}
func (repl *policyPlaygroundREPL) handleEval(args []string) error {
policyStr := strings.TrimSpace(strings.Join(args, " "))
var nodes [][]netmap.NodeInfo
nm := repl.netMap()
if strings.HasPrefix(policyStr, "CBF") || strings.HasPrefix(policyStr, "SELECT") || strings.HasPrefix(policyStr, "FILTER") {
// Assume that the input is a partial SELECT-FILTER expression.
// Full inline policies always start with UNIQUE or REP keywords,
// or with other prefixes when the policy comes from an external file.
sfExpr, err := netmap.DecodeSelectFilterString(policyStr)
if err != nil {
return fmt.Errorf("parsing select-filter expression: %v", err)
}
nodes, err = nm.SelectFilterNodes(sfExpr)
if err != nil {
return fmt.Errorf("building select-filter nodes: %v", err)
}
} else {
// Assume that the input is a full policy or input file otherwise.
placementPolicy, err := parseContainerPolicy(repl.cmd, policyStr)
if err != nil {
return fmt.Errorf("parsing placement policy: %v", err)
}
nodes, err = nm.ContainerNodes(*placementPolicy, nil)
if err != nil {
return fmt.Errorf("building container nodes: %v", err)
}
}
for i, ns := range nodes {
var ids []string
for _, node := range ns {
ids = append(ids, hex.EncodeToString(node.PublicKey()))
}
fmt.Printf("\t%2d: %v\n", i+1, ids)
}
return nil
}
func (repl *policyPlaygroundREPL) netMap() netmap.NetMap {
var nm netmap.NetMap
var nodes []netmap.NodeInfo
for _, node := range repl.nodes {
nodes = append(nodes, node)
}
nm.SetNodes(nodes)
return nm
}
func (repl *policyPlaygroundREPL) run() error {
if len(viper.GetString(commonflags.RPC)) > 0 {
key := key.GetOrGenerate(repl.cmd)
cli := internalclient.GetSDKClientByFlag(repl.cmd, key, commonflags.RPC)
var prm internalclient.NetMapSnapshotPrm
prm.SetClient(cli)
resp, err := internalclient.NetMapSnapshot(repl.cmd.Context(), prm)
commonCmd.ExitOnErr(repl.cmd, "unable to get netmap snapshot to populate initial netmap: %w", err)
for _, node := range resp.NetMap().Nodes() {
id := hex.EncodeToString(node.PublicKey())
repl.nodes[id] = node
}
}
cmdHandlers := map[string]func([]string) error{
"list": repl.handleLs,
"ls": repl.handleLs,
"add": repl.handleAdd,
"load": repl.handleLoad,
"remove": repl.handleRemove,
"rm": repl.handleRemove,
"eval": repl.handleEval,
}
for reader := bufio.NewReader(os.Stdin); ; {
fmt.Print("> ")
line, err := reader.ReadString('\n')
if err != nil {
if err == io.EOF {
return nil
}
return fmt.Errorf("reading line: %v", err)
}
parts := strings.Fields(line)
if len(parts) == 0 {
continue
}
cmd := parts[0]
handler, exists := cmdHandlers[cmd]
if exists {
if err := handler(parts[1:]); err != nil {
fmt.Printf("error: %v\n", err)
}
} else {
fmt.Printf("error: unknown command %q\n", cmd)
}
}
}
var policyPlaygroundCmd = &cobra.Command{
Use: "policy-playground",
Short: "A REPL for testing placement policies",
Long: `A REPL for testing placement policies.
If a wallet and endpoint are provided, the initial netmap data will be loaded from the snapshot of the node. Otherwise, an empty playground is created.`,
Run: func(cmd *cobra.Command, args []string) {
repl, err := newPolicyPlaygroundREPL(cmd, args)
commonCmd.ExitOnErr(cmd, "could not create policy playground: %w", err)
commonCmd.ExitOnErr(cmd, "policy playground failed: %w", repl.run())
},
}
func initContainerPolicyPlaygroundCmd() {
commonflags.Init(policyPlaygroundCmd)
}
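
Not part of the diff: a tiny sketch of the heuristic handleEval uses above to tell a partial SELECT-FILTER expression from a full placement policy (the sample strings are made up for the example).

```go
package main

import (
	"fmt"
	"strings"
)

// isSelectFilter reproduces the prefix check from handleEval: partial
// SELECT-FILTER expressions start with CBF, SELECT or FILTER, while
// full inline policies start with UNIQUE or REP (or are file paths).
func isSelectFilter(policy string) bool {
	policy = strings.TrimSpace(policy)
	return strings.HasPrefix(policy, "CBF") ||
		strings.HasPrefix(policy, "SELECT") ||
		strings.HasPrefix(policy, "FILTER")
}

func main() {
	fmt.Println(isSelectFilter("SELECT 2 FROM *")) // true
	fmt.Println(isSelectFilter("REP 3"))           // false
}
```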

View file

@ -1,108 +0,0 @@
package container
import (
"bytes"
"errors"
"time"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"github.com/spf13/cobra"
)
var flagVarsSetEACL struct {
noPreCheck bool
srcPath string
}
var setExtendedACLCmd = &cobra.Command{
Use: "set-eacl",
Short: "Set new extended ACL table for container",
Long: `Set new extended ACL table for container.
The container ID in the EACL table will be substituted with the ID from the CLI.`,
Run: func(cmd *cobra.Command, args []string) {
id := parseContainerID(cmd)
eaclTable := common.ReadEACL(cmd, flagVarsSetEACL.srcPath)
tok := getSession(cmd)
eaclTable.SetCID(id)
pk := key.GetOrGenerate(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
if !flagVarsSetEACL.noPreCheck {
cmd.Println("Checking the ability to modify access rights in the container...")
extendable, err := internalclient.IsACLExtendable(cmd.Context(), cli, id)
commonCmd.ExitOnErr(cmd, "Extensibility check failure: %w", err)
if !extendable {
commonCmd.ExitOnErr(cmd, "", errors.New("container ACL is immutable"))
}
cmd.Println("ACL extension is enabled in the container, continue processing.")
}
setEACLPrm := internalclient.SetEACLPrm{
Client: cli,
ClientParams: client.PrmContainerSetEACL{
Table: eaclTable,
Session: tok,
},
}
_, err := internalclient.SetEACL(cmd.Context(), setEACLPrm)
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
if containerAwait {
exp, err := eaclTable.Marshal()
commonCmd.ExitOnErr(cmd, "broken EACL table: %w", err)
cmd.Println("awaiting...")
getEACLPrm := internalclient.EACLPrm{
Client: cli,
ClientParams: client.PrmContainerEACL{
ContainerID: &id,
},
}
for i := 0; i < awaitTimeout; i++ {
time.Sleep(1 * time.Second)
res, err := internalclient.EACL(cmd.Context(), getEACLPrm)
if err == nil {
// compare binary values because EACL could have been set already
table := res.EACL()
got, err := table.Marshal()
if err != nil {
continue
}
if bytes.Equal(exp, got) {
cmd.Println("EACL has been persisted on sidechain")
return
}
}
}
commonCmd.ExitOnErr(cmd, "", errSetEACLTimeout)
}
},
}
func initContainerSetEACLCmd() {
commonflags.Init(setExtendedACLCmd)
flags := setExtendedACLCmd.Flags()
flags.StringVar(&containerID, commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.StringVar(&flagVarsSetEACL.srcPath, "table", "", "path to file with JSON or binary encoded EACL table")
flags.BoolVar(&containerAwait, "await", false, "block execution until EACL is persisted")
flags.BoolVar(&flagVarsSetEACL.noPreCheck, "no-precheck", false, "do not pre-check the extensibility of the container ACL")
}
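
Not part of the diff: a self-contained sketch of the await loop used by set-eacl above, which polls until the marshaled EACL returned by the node matches the expected bytes; the fetch function here is a stand-in for the EACL RPC, and the interval is shortened for the example.

```go
package main

import (
	"bytes"
	"fmt"
	"time"
)

// awaitPersisted polls fetch until the returned encoding matches the
// expected one, comparing binary values like the --await branch does.
func awaitPersisted(expected []byte, fetch func() ([]byte, error), attempts int) bool {
	for i := 0; i < attempts; i++ {
		time.Sleep(10 * time.Millisecond) // shortened from 1s for the example
		got, err := fetch()
		if err != nil {
			continue
		}
		if bytes.Equal(expected, got) {
			return true
		}
	}
	return false
}

func main() {
	expected := []byte("eacl-table")
	calls := 0
	fetch := func() ([]byte, error) {
		calls++
		if calls < 3 {
			return []byte("old-table"), nil
		}
		return expected, nil
	}
	fmt.Println(awaitPersisted(expected, fetch, 120))
}
```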

View file

@ -1,58 +0,0 @@
package container
import (
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"github.com/spf13/cobra"
)
const (
attributeDelimiter = "="
awaitTimeout = 120 // in seconds
)
var (
errCreateTimeout = errors.New("timeout: container has not been persisted on sidechain")
errDeleteTimeout = errors.New("timeout: container has not been removed from sidechain")
errSetEACLTimeout = errors.New("timeout: EACL has not been persisted on sidechain")
)
func parseContainerID(cmd *cobra.Command) cid.ID {
if containerID == "" {
commonCmd.ExitOnErr(cmd, "", errors.New("container ID is not set"))
}
var id cid.ID
err := id.DecodeString(containerID)
commonCmd.ExitOnErr(cmd, "can't decode container ID value: %w", err)
return id
}
// decodes session.Container from the file by path provided in
// commonflags.SessionToken flag. Returns nil if the path is not specified.
func getSession(cmd *cobra.Command) *session.Container {
common.PrintVerbose(cmd, "Reading container session...")
path, _ := cmd.Flags().GetString(commonflags.SessionToken)
if path == "" {
common.PrintVerbose(cmd, "Session not provided.")
return nil
}
common.PrintVerbose(cmd, "Reading container session from the file [%s]...", path)
var res session.Container
err := common.ReadBinaryOrJSON(cmd, &res, path)
commonCmd.ExitOnErr(cmd, "read container session: %v", err)
common.PrintVerbose(cmd, "Session successfully read.")
return &res
}

View file

@ -1,53 +0,0 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"github.com/spf13/cobra"
)
const (
concurrencyFlag = "concurrency"
removeDuplicatesFlag = "remove-duplicates"
)
var doctorCmd = &cobra.Command{
Use: "doctor",
Short: "Restructure node's storage",
Long: "Restructure node's storage",
Run: doctor,
}
func doctor(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
req := &control.DoctorRequest{Body: new(control.DoctorRequest_Body)}
req.Body.Concurrency, _ = cmd.Flags().GetUint32(concurrencyFlag)
req.Body.RemoveDuplicates, _ = cmd.Flags().GetBool(removeDuplicatesFlag)
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.DoctorResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.Doctor(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Operation has finished.")
}
func initControlDoctorCmd() {
initControlFlags(doctorCmd)
ff := doctorCmd.Flags()
ff.Uint32(concurrencyFlag, 0, "Number of parallel threads to use")
ff.Bool(removeDuplicatesFlag, false, "Remove duplicate objects")
}

View file

@ -1,56 +0,0 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"github.com/spf13/cobra"
)
const ignoreErrorsFlag = "no-errors"
var evacuateShardCmd = &cobra.Command{
Use: "evacuate",
Short: "Evacuate objects from shard",
Long: "Evacuate objects from shard to other shards",
Run: evacuateShard,
Deprecated: "use frostfs-cli control shards evacuation start",
}
func evacuateShard(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
req := &control.EvacuateShardRequest{Body: new(control.EvacuateShardRequest_Body)}
req.Body.Shard_ID = getShardIDList(cmd)
req.Body.IgnoreErrors, _ = cmd.Flags().GetBool(ignoreErrorsFlag)
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.EvacuateShardResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.EvacuateShard(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
cmd.Printf("Objects moved: %d\n", resp.GetBody().GetCount())
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Shard has successfully been evacuated.")
}
func initControlEvacuateShardCmd() {
initControlFlags(evacuateShardCmd)
flags := evacuateShardCmd.Flags()
flags.StringSlice(shardIDFlag, nil, "List of shard IDs in base58 encoding")
flags.Bool(shardAllFlag, false, "Process all shards")
flags.Bool(ignoreErrorsFlag, false, "Skip invalid/unreadable objects")
evacuateShardCmd.MarkFlagsMutuallyExclusive(shardIDFlag, shardAllFlag)
}

View file

@ -1,315 +0,0 @@
package control
import (
"crypto/ecdsa"
"fmt"
"strings"
"sync/atomic"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
clientSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"github.com/spf13/cobra"
)
const (
awaitFlag = "await"
noProgressFlag = "no-progress"
)
var evacuationShardCmd = &cobra.Command{
Use: "evacuation",
Short: "Objects evacuation from shard",
Long: "Objects evacuation from shard to other shards",
}
var startEvacuationShardCmd = &cobra.Command{
Use: "start",
Short: "Start evacuate objects from shard",
Long: "Start evacuate objects from shard to other shards",
Run: startEvacuateShard,
}
var getEvacuationShardStatusCmd = &cobra.Command{
Use: "status",
Short: "Get evacuate objects from shard status",
Long: "Get evacuate objects from shard to other shards status",
Run: getEvacuateShardStatus,
}
var stopEvacuationShardCmd = &cobra.Command{
Use: "stop",
Short: "Stop running evacuate process",
Long: "Stop running evacuate process from shard to other shards",
Run: stopEvacuateShardStatus,
}
func startEvacuateShard(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
ignoreErrors, _ := cmd.Flags().GetBool(ignoreErrorsFlag)
req := &control.StartShardEvacuationRequest{
Body: &control.StartShardEvacuationRequest_Body{
Shard_ID: getShardIDList(cmd),
IgnoreErrors: ignoreErrors,
},
}
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.StartShardEvacuationResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.StartShardEvacuation(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "Start evacuate shards failed, rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Shard evacuation has been successfully started.")
if awaitCompletion, _ := cmd.Flags().GetBool(awaitFlag); awaitCompletion {
noProgress, _ := cmd.Flags().GetBool(noProgressFlag)
waitEvacuateCompletion(cmd, pk, cli, !noProgress, true)
}
}
func getEvacuateShardStatus(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
req := &control.GetShardEvacuationStatusRequest{
Body: &control.GetShardEvacuationStatusRequest_Body{},
}
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.GetShardEvacuationStatusResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.GetShardEvacuationStatus(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "Get evacuate shards status failed, rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
printStatus(cmd, resp)
}
func stopEvacuateShardStatus(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
req := &control.StopShardEvacuationRequest{
Body: &control.StopShardEvacuationRequest_Body{},
}
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.StopShardEvacuationResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.StopShardEvacuation(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "Stop evacuate shards failed, rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
waitEvacuateCompletion(cmd, pk, cli, false, false)
cmd.Println("Evacuation stopped.")
}
func waitEvacuateCompletion(cmd *cobra.Command, pk *ecdsa.PrivateKey, cli *clientSDK.Client, printProgress, printCompleted bool) {
const statusPollingInterval = 1 * time.Second
const reportIntervalSeconds = 5
var resp *control.GetShardEvacuationStatusResponse
reportResponse := new(atomic.Pointer[control.GetShardEvacuationStatusResponse])
pollingCompleted := make(chan struct{})
progressReportCompleted := make(chan struct{})
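// Two goroutines cooperate here: the loop below polls the evacuation status
// every second and publishes the latest response via an atomic pointer, while
// the reporter goroutine prints that response every reportIntervalSeconds
// until polling completes.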
go func() {
defer close(progressReportCompleted)
if !printProgress {
return
}
cmd.Printf("Progress will be reported every %d seconds.\n", reportIntervalSeconds)
for {
select {
case <-pollingCompleted:
return
case <-time.After(reportIntervalSeconds * time.Second):
r := reportResponse.Load()
if r == nil || r.GetBody().GetStatus() == control.GetShardEvacuationStatusResponse_Body_COMPLETED {
continue
}
printStatus(cmd, r)
}
}
}()
for {
req := &control.GetShardEvacuationStatusRequest{
Body: &control.GetShardEvacuationStatusRequest_Body{},
}
signRequest(cmd, pk, req)
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.GetShardEvacuationStatus(client, req)
return err
})
reportResponse.Store(resp)
if err != nil {
commonCmd.ExitOnErr(cmd, "Failed to get evacuate status, rpc error: %w", err)
return
}
if resp.GetBody().GetStatus() != control.GetShardEvacuationStatusResponse_Body_RUNNING {
break
}
time.Sleep(statusPollingInterval)
}
close(pollingCompleted)
<-progressReportCompleted
if printCompleted {
printCompletedStatusMessage(cmd, resp)
}
}
func printCompletedStatusMessage(cmd *cobra.Command, resp *control.GetShardEvacuationStatusResponse) {
cmd.Println("Shard evacuation has been completed.")
sb := &strings.Builder{}
appendShardIDs(sb, resp)
appendCounts(sb, resp)
appendError(sb, resp)
appendStartedAt(sb, resp)
appendDuration(sb, resp)
cmd.Println(sb.String())
}
func printStatus(cmd *cobra.Command, resp *control.GetShardEvacuationStatusResponse) {
if resp.GetBody().GetStatus() == control.GetShardEvacuationStatusResponse_Body_EVACUATE_SHARD_STATUS_UNDEFINED {
cmd.Println("There is no running or completed evacuation.")
return
}
sb := &strings.Builder{}
appendShardIDs(sb, resp)
appendStatus(sb, resp)
appendCounts(sb, resp)
appendError(sb, resp)
appendStartedAt(sb, resp)
appendDuration(sb, resp)
appendEstimation(sb, resp)
cmd.Println(sb.String())
}
func appendEstimation(sb *strings.Builder, resp *control.GetShardEvacuationStatusResponse) {
if resp.GetBody().GetStatus() != control.GetShardEvacuationStatusResponse_Body_RUNNING ||
resp.GetBody().GetDuration() == nil ||
resp.GetBody().GetTotal() == 0 ||
resp.GetBody().GetEvacuated()+resp.GetBody().GetFailed() == 0 {
return
}
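// Simple linear estimate: average seconds spent per processed object
// (evacuated + failed) multiplied by the number of objects left, reported in
// whole minutes. E.g. 500 s for 1000 processed objects with 3000 objects
// remaining gives ~1500 s, printed as 25 minutes.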
durationSeconds := float64(resp.GetBody().GetDuration().GetSeconds())
evacuated := float64(resp.GetBody().GetEvacuated() + resp.GetBody().GetFailed())
avgObjEvacuationTimeSeconds := durationSeconds / evacuated
objectsLeft := float64(resp.GetBody().GetTotal()) - evacuated
leftSeconds := avgObjEvacuationTimeSeconds * objectsLeft
leftMinutes := int(leftSeconds / 60)
sb.WriteString(fmt.Sprintf(" Estimated time left: %d minutes.", leftMinutes))
}
func appendDuration(sb *strings.Builder, resp *control.GetShardEvacuationStatusResponse) {
if resp.GetBody().GetDuration() != nil {
duration := time.Second * time.Duration(resp.GetBody().GetDuration().GetSeconds())
hour := int(duration.Seconds() / 3600)
minute := int(duration.Seconds()/60) % 60
second := int(duration.Seconds()) % 60
sb.WriteString(fmt.Sprintf(" Duration: %02d:%02d:%02d.", hour, minute, second))
}
}
func appendStartedAt(sb *strings.Builder, resp *control.GetShardEvacuationStatusResponse) {
if resp.GetBody().GetStartedAt() != nil {
startedAt := time.Unix(resp.GetBody().GetStartedAt().GetValue(), 0).UTC()
sb.WriteString(fmt.Sprintf(" Started at: %s UTC.", startedAt.Format(time.RFC3339)))
}
}
func appendError(sb *strings.Builder, resp *control.GetShardEvacuationStatusResponse) {
if len(resp.Body.GetErrorMessage()) > 0 {
sb.WriteString(fmt.Sprintf(" Error: %s.", resp.Body.GetErrorMessage()))
}
}
func appendStatus(sb *strings.Builder, resp *control.GetShardEvacuationStatusResponse) {
var status string
switch resp.GetBody().GetStatus() {
case control.GetShardEvacuationStatusResponse_Body_COMPLETED:
status = "completed"
case control.GetShardEvacuationStatusResponse_Body_RUNNING:
status = "running"
default:
status = "undefined"
}
sb.WriteString(fmt.Sprintf(" Status: %s.", status))
}
func appendShardIDs(sb *strings.Builder, resp *control.GetShardEvacuationStatusResponse) {
sb.WriteString("Shard IDs: ")
for idx, shardID := range resp.GetBody().GetShard_ID() {
shardIDStr := shard.NewIDFromBytes(shardID).String()
if idx > 0 {
sb.WriteString(", ")
}
sb.WriteString(shardIDStr)
if idx == len(resp.GetBody().GetShard_ID())-1 {
sb.WriteString(".")
}
}
}
func appendCounts(sb *strings.Builder, resp *control.GetShardEvacuationStatusResponse) {
sb.WriteString(fmt.Sprintf(" Evacuated %d object out of %d, failed to evacuate %d objects.",
resp.GetBody().GetEvacuated(),
resp.Body.GetTotal(),
resp.Body.GetFailed()))
}
func initControlEvacuationShardCmd() {
evacuationShardCmd.AddCommand(startEvacuationShardCmd)
evacuationShardCmd.AddCommand(getEvacuationShardStatusCmd)
evacuationShardCmd.AddCommand(stopEvacuationShardCmd)
initControlStartEvacuationShardCmd()
initControlFlags(getEvacuationShardStatusCmd)
initControlFlags(stopEvacuationShardCmd)
}
func initControlStartEvacuationShardCmd() {
initControlFlags(startEvacuationShardCmd)
flags := startEvacuationShardCmd.Flags()
flags.StringSlice(shardIDFlag, nil, "List of shard IDs in base58 encoding")
flags.Bool(shardAllFlag, false, "Process all shards")
flags.Bool(ignoreErrorsFlag, true, "Skip invalid/unreadable objects")
flags.Bool(awaitFlag, false, "Block execution until evacuation is completed")
flags.Bool(noProgressFlag, false, fmt.Sprintf("Print progress if %s provided", awaitFlag))
startEvacuationShardCmd.MarkFlagsMutuallyExclusive(shardIDFlag, shardAllFlag)
}


@@ -1,49 +0,0 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"github.com/spf13/cobra"
)
var flushCacheCmd = &cobra.Command{
Use: "flush-cache",
Short: "Flush objects from the write-cache to the main storage",
Long: "Flush objects from the write-cache to the main storage",
Run: flushCache,
}
func flushCache(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
req := &control.FlushCacheRequest{Body: new(control.FlushCacheRequest_Body)}
req.Body.Shard_ID = getShardIDList(cmd)
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.FlushCacheResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.FlushCache(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Write-cache has been flushed.")
}
func initControlFlushCacheCmd() {
initControlFlags(flushCacheCmd)
ff := flushCacheCmd.Flags()
ff.StringSlice(shardIDFlag, nil, "List of shard IDs in base58 encoding")
ff.Bool(shardAllFlag, false, "Process all shards")
flushCacheCmd.MarkFlagsMutuallyExclusive(shardIDFlag, shardAllFlag)
}


@@ -1,56 +0,0 @@
package control
import (
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"github.com/spf13/cobra"
)
const (
healthcheckIRFlag = "ir"
)
var healthCheckCmd = &cobra.Command{
Use: "healthcheck",
Short: "Health check for FrostFS storage nodes",
Long: "Health check for FrostFS storage nodes.",
Run: healthCheck,
}
func initControlHealthCheckCmd() {
initControlFlags(healthCheckCmd)
flags := healthCheckCmd.Flags()
flags.Bool(healthcheckIRFlag, false, "Communicate with IR node")
_ = flags.MarkDeprecated(healthcheckIRFlag, "for health check of inner ring nodes, use the 'control ir healthcheck' command instead.")
}
func healthCheck(cmd *cobra.Command, args []string) {
if isIR, _ := cmd.Flags().GetBool(healthcheckIRFlag); isIR {
irHealthCheck(cmd, args)
return
}
pk := key.Get(cmd)
cli := getClient(cmd, pk)
req := new(control.HealthCheckRequest)
req.SetBody(new(control.HealthCheckRequest_Body))
signRequest(cmd, pk, req)
var resp *control.HealthCheckResponse
var err error
err = cli.ExecRaw(func(client *rawclient.Client) error {
resp, err = control.HealthCheck(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Printf("Network status: %s\n", resp.GetBody().GetNetmapStatus())
cmd.Printf("Health status: %s\n", resp.GetBody().GetHealthStatus())
}


@@ -1,19 +0,0 @@
package control
import "github.com/spf13/cobra"
var irCmd = &cobra.Command{
Use: "ir",
Short: "Operations with inner ring nodes",
Long: "Operations with inner ring nodes",
}
func initControlIRCmd() {
irCmd.AddCommand(tickEpochCmd)
irCmd.AddCommand(removeNodeCmd)
irCmd.AddCommand(irHealthCheckCmd)
initControlIRTickEpochCmd()
initControlIRRemoveNodeCmd()
initControlIRHealthCheckCmd()
}


@@ -1,44 +0,0 @@
package control
import (
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
ircontrol "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
ircontrolsrv "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir/server"
"github.com/spf13/cobra"
)
var irHealthCheckCmd = &cobra.Command{
Use: "healthcheck",
Short: "Health check for FrostFS inner ring nodes",
Long: "Health check for FrostFS inner ring nodes.",
Run: irHealthCheck,
}
func initControlIRHealthCheckCmd() {
initControlFlags(irHealthCheckCmd)
}
func irHealthCheck(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
cli := getClient(cmd, pk)
req := new(ircontrol.HealthCheckRequest)
req.SetBody(new(ircontrol.HealthCheckRequest_Body))
err := ircontrolsrv.SignMessage(pk, req)
commonCmd.ExitOnErr(cmd, "could not sign request: %w", err)
var resp *ircontrol.HealthCheckResponse
err = cli.ExecRaw(func(client *rawclient.Client) error {
resp, err = ircontrol.HealthCheck(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Printf("Health status: %s\n", resp.GetBody().GetHealthStatus())
}


@@ -1,58 +0,0 @@
package control
import (
"encoding/hex"
"errors"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
ircontrol "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
ircontrolsrv "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir/server"
"github.com/spf13/cobra"
)
var removeNodeCmd = &cobra.Command{
Use: "remove-node",
Short: "Forces a node removal from netmap",
Long: "Forces a node removal from netmap via a notary request. It should be executed on other IR nodes as well.",
Run: removeNode,
}
func initControlIRRemoveNodeCmd() {
initControlFlags(removeNodeCmd)
flags := removeNodeCmd.Flags()
flags.String("node", "", "Node public key as a hex string")
_ = removeNodeCmd.MarkFlagRequired("node")
}
func removeNode(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
c := getClient(cmd, pk)
nodeKeyStr, _ := cmd.Flags().GetString("node")
if len(nodeKeyStr) == 0 {
commonCmd.ExitOnErr(cmd, "parsing node public key: ", errors.New("key cannot be empty"))
}
nodeKey, err := hex.DecodeString(nodeKeyStr)
commonCmd.ExitOnErr(cmd, "can't decode node public key: %w", err)
req := new(ircontrol.RemoveNodeRequest)
req.SetBody(&ircontrol.RemoveNodeRequest_Body{
Key: nodeKey,
})
commonCmd.ExitOnErr(cmd, "could not sign request: %w", ircontrolsrv.SignMessage(pk, req))
var resp *ircontrol.RemoveNodeResponse
err = c.ExecRaw(func(client *rawclient.Client) error {
resp, err = ircontrol.RemoveNode(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Node removed")
}


@@ -1,43 +0,0 @@
package control
import (
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
ircontrol "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
ircontrolsrv "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir/server"
"github.com/spf13/cobra"
)
var tickEpochCmd = &cobra.Command{
Use: "tick-epoch",
Short: "Forces a new epoch",
Long: "Forces a new epoch via a notary request. It should be executed on other IR nodes as well.",
Run: tickEpoch,
}
func initControlIRTickEpochCmd() {
initControlFlags(tickEpochCmd)
}
func tickEpoch(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
c := getClient(cmd, pk)
req := new(ircontrol.TickEpochRequest)
req.SetBody(new(ircontrol.TickEpochRequest_Body))
err := ircontrolsrv.SignMessage(pk, req)
commonCmd.ExitOnErr(cmd, "could not sign request: %w", err)
var resp *ircontrol.TickEpochResponse
err = c.ExecRaw(func(client *rawclient.Client) error {
resp, err = ircontrol.TickEpoch(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Epoch tick requested")
}


@@ -1,95 +0,0 @@
package control
import (
"fmt"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"github.com/spf13/cobra"
)
const (
netmapStatusFlag = "status"
netmapStatusOnline = "online"
netmapStatusOffline = "offline"
netmapStatusMaintenance = "maintenance"
)
var setNetmapStatusCmd = &cobra.Command{
Use: "set-status",
Short: "Set status of the storage node in FrostFS network map",
Long: "Set status of the storage node in FrostFS network map",
Run: setNetmapStatus,
}
func initControlSetNetmapStatusCmd() {
initControlFlags(setNetmapStatusCmd)
flags := setNetmapStatusCmd.Flags()
flags.String(netmapStatusFlag, "",
fmt.Sprintf("New netmap status keyword ('%s', '%s', '%s')",
netmapStatusOnline,
netmapStatusOffline,
netmapStatusMaintenance,
),
)
_ = setNetmapStatusCmd.MarkFlagRequired(netmapStatusFlag)
flags.BoolP(commonflags.ForceFlag, commonflags.ForceFlagShorthand, false,
"Force turning to local maintenance")
}
func setNetmapStatus(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
body := new(control.SetNetmapStatusRequest_Body)
force, _ := cmd.Flags().GetBool(commonflags.ForceFlag)
printIgnoreForce := func(st control.NetmapStatus) {
if force {
common.PrintVerbose(cmd, "Ignore --%s flag for %s state.", commonflags.ForceFlag, st)
}
}
switch st, _ := cmd.Flags().GetString(netmapStatusFlag); st {
default:
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("unsupported status %s", st))
case netmapStatusOnline:
body.SetStatus(control.NetmapStatus_ONLINE)
printIgnoreForce(control.NetmapStatus_ONLINE)
case netmapStatusOffline:
body.SetStatus(control.NetmapStatus_OFFLINE)
printIgnoreForce(control.NetmapStatus_OFFLINE)
case netmapStatusMaintenance:
body.SetStatus(control.NetmapStatus_MAINTENANCE)
if force {
body.SetForceMaintenance()
common.PrintVerbose(cmd, "Local maintenance will be forced.")
}
}
req := new(control.SetNetmapStatusRequest)
req.SetBody(body)
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.SetNetmapStatusResponse
var err error
err = cli.ExecRaw(func(client *rawclient.Client) error {
resp, err = control.SetNetmapStatus(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Network status update request successfully sent.")
}


@@ -1,171 +0,0 @@
package control
import (
"fmt"
"strings"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"github.com/mr-tron/base58"
"github.com/spf13/cobra"
)
const (
shardModeFlag = "mode"
shardIDFlag = "id"
shardAllFlag = "all"
shardClearErrorsFlag = "clear-errors"
)
// maps string command input to control.ShardMode. To support a new mode, it is
// enough to add a map entry. Modes are automatically printed in command help
// messages.
var mShardModes = map[string]struct {
val control.ShardMode
// unsafe marks modes which are supported but hidden from the help message:
// they are not expected to be set by regular users and exist for developers
// only.
unsafe bool
}{
"read-only": {val: control.ShardMode_READ_ONLY},
"read-write": {val: control.ShardMode_READ_WRITE},
"degraded-read-write": {val: control.ShardMode_DEGRADED, unsafe: true},
"degraded-read-only": {val: control.ShardMode_DEGRADED_READ_ONLY},
}
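// For illustration: supporting a new mode would be a single extra entry here,
// e.g. "some-new-mode": {val: control.ShardMode_SOME_NEW_MODE} (hypothetical
// value); iterateSafeShardModes then includes it in the --mode help text
// automatically unless the entry is marked unsafe.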
// iterates over string representations of safe supported shard modes. Safe means
// modes which are expected to be used by any user. All other supported modes
// are for developers only.
func iterateSafeShardModes(f func(string)) {
for strMode, mode := range mShardModes {
if !mode.unsafe {
f(strMode)
}
}
}
// looks up the supported control.ShardMode represented by the given string.
// Returns false if no corresponding mode exists.
func lookUpShardModeFromString(str string) (control.ShardMode, bool) {
mode, ok := mShardModes[str]
if !ok {
return control.ShardMode_SHARD_MODE_UNDEFINED, false
}
return mode.val, true
}
// looks up the string representation of a supported shard mode. Returns false
// if mode is not supported.
func lookUpShardModeString(m control.ShardMode) (string, bool) {
for strMode, mode := range mShardModes {
if mode.val == m {
return strMode, true
}
}
return "", false
}
var setShardModeCmd = &cobra.Command{
Use: "set-mode",
Short: "Set work mode of the shard",
Long: "Set work mode of the shard",
Run: setShardMode,
}
func initControlSetShardModeCmd() {
initControlFlags(setShardModeCmd)
flags := setShardModeCmd.Flags()
flags.StringSlice(shardIDFlag, nil, "List of shard IDs in base58 encoding")
flags.Bool(shardAllFlag, false, "Process all shards")
modes := make([]string, 0)
iterateSafeShardModes(func(strMode string) {
modes = append(modes, "'"+strMode+"'")
})
flags.String(shardModeFlag, "",
fmt.Sprintf("New shard mode (%s)", strings.Join(modes, ", ")),
)
_ = setShardModeCmd.MarkFlagRequired(shardModeFlag)
flags.Bool(shardClearErrorsFlag, false, "Set shard error count to 0")
setShardModeCmd.MarkFlagsMutuallyExclusive(shardIDFlag, shardAllFlag)
}
func setShardMode(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
strMode, _ := cmd.Flags().GetString(shardModeFlag)
mode, ok := lookUpShardModeFromString(strMode)
if !ok {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("unsupported mode %s", strMode))
}
req := new(control.SetShardModeRequest)
body := new(control.SetShardModeRequest_Body)
req.SetBody(body)
body.SetMode(mode)
body.SetShardIDList(getShardIDList(cmd))
reset, _ := cmd.Flags().GetBool(shardClearErrorsFlag)
body.ClearErrorCounter(reset)
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.SetShardModeResponse
var err error
err = cli.ExecRaw(func(client *rawclient.Client) error {
resp, err = control.SetShardMode(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Shard mode update request successfully sent.")
}
func getShardIDList(cmd *cobra.Command) [][]byte {
all, _ := cmd.Flags().GetBool(shardAllFlag)
if all {
return nil
}
sidList, _ := cmd.Flags().GetStringSlice(shardIDFlag)
if len(sidList) == 0 {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("either --%s or --%s flag must be provided", shardIDFlag, shardAllFlag))
}
// We can sort the ID list and perform this check without additional allocations,
// but preserving the user order is a nice thing to have.
// Also, this is a CLI, we don't care too much about this.
seen := make(map[string]struct{})
for i := range sidList {
if _, ok := seen[sidList[i]]; ok {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("duplicated shard IDs: %s", sidList[i]))
}
seen[sidList[i]] = struct{}{}
}
res := make([][]byte, 0, len(sidList))
for i := range sidList {
raw, err := base58.Decode(sidList[i])
commonCmd.ExitOnErr(cmd, "incorrect shard ID encoding: %w", err)
res = append(res, raw)
}
return res
}


@@ -1,59 +0,0 @@
package control
import (
"crypto/ecdsa"
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
controlSvc "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/server"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
frostfscrypto "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/crypto"
"github.com/spf13/cobra"
)
func initControlFlags(cmd *cobra.Command) {
ff := cmd.Flags()
ff.StringP(commonflags.WalletPath, commonflags.WalletPathShorthand, commonflags.WalletPathDefault, commonflags.WalletPathUsage)
ff.StringP(commonflags.Account, commonflags.AccountShorthand, commonflags.AccountDefault, commonflags.AccountUsage)
ff.String(controlRPC, controlRPCDefault, controlRPCUsage)
ff.DurationP(commonflags.Timeout, commonflags.TimeoutShorthand, commonflags.TimeoutDefault, commonflags.TimeoutUsage)
}
func signRequest(cmd *cobra.Command, pk *ecdsa.PrivateKey, req controlSvc.SignedMessage) {
err := controlSvc.SignMessage(pk, req)
commonCmd.ExitOnErr(cmd, "could not sign request: %w", err)
}
func verifyResponse(cmd *cobra.Command,
sigControl interface {
GetKey() []byte
GetSign() []byte
},
body interface {
StableMarshal([]byte) []byte
},
) {
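// The control service signs the stable binary encoding of the response body;
// to verify, the key/sign pair is converted into a refs.Signature and checked
// against StableMarshal(nil) of the body.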
if sigControl == nil {
commonCmd.ExitOnErr(cmd, "", errors.New("missing response signature"))
}
// TODO(@cthulhu-rider): #468 use Signature message from FrostFS API to avoid conversion
var sigV2 refs.Signature
sigV2.SetScheme(refs.ECDSA_SHA512)
sigV2.SetKey(sigControl.GetKey())
sigV2.SetSign(sigControl.GetSign())
var sig frostfscrypto.Signature
commonCmd.ExitOnErr(cmd, "can't read signature: %w", sig.ReadFromV2(sigV2))
if !sig.Verify(body.StableMarshal(nil)) {
commonCmd.ExitOnErr(cmd, "", errors.New("invalid response signature"))
}
}
func getClient(cmd *cobra.Command, pk *ecdsa.PrivateKey) *client.Client {
return internalclient.GetSDKClientByFlag(cmd, pk, controlRPC)
}


@@ -1,35 +0,0 @@
package netmap
import (
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/spf13/cobra"
)
var getEpochCmd = &cobra.Command{
Use: "epoch",
Short: "Get current epoch number",
Long: "Get current epoch number",
Run: func(cmd *cobra.Command, args []string) {
p := key.GetOrGenerate(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, p, commonflags.RPC)
prm := internalclient.NetworkInfoPrm{
Client: cli,
}
res, err := internalclient.NetworkInfo(cmd.Context(), prm)
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
netInfo := res.NetworkInfo()
cmd.Println(netInfo.CurrentEpoch())
},
}
func initGetEpochCmd() {
commonflags.Init(getEpochCmd)
commonflags.InitAPI(getEpochCmd)
}


@@ -1,32 +0,0 @@
package netmap
import (
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/spf13/cobra"
)
var snapshotCmd = &cobra.Command{
Use: "snapshot",
Short: "Request current local snapshot of the network map",
Long: `Request current local snapshot of the network map`,
Run: func(cmd *cobra.Command, args []string) {
p := key.GetOrGenerate(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, p, commonflags.RPC)
var prm internalclient.NetMapSnapshotPrm
prm.SetClient(cli)
res, err := internalclient.NetMapSnapshot(cmd.Context(), prm)
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
commonCmd.PrettyPrintNetMap(cmd, res.NetMap(), false)
},
}
func initSnapshotCmd() {
commonflags.Init(snapshotCmd)
commonflags.InitAPI(snapshotCmd)
}


@@ -1,75 +0,0 @@
package object
import (
"fmt"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
var objectDelCmd = &cobra.Command{
Use: "delete",
Aliases: []string{"del"},
Short: "Delete object from FrostFS",
Long: "Delete object from FrostFS",
Run: deleteObject,
}
func initObjectDeleteCmd() {
commonflags.Init(objectDelCmd)
initFlagSession(objectDelCmd, "DELETE")
flags := objectDelCmd.Flags()
flags.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.String(commonflags.OIDFlag, "", commonflags.OIDFlagUsage)
flags.Bool(binaryFlag, false, "Deserialize object structure from given file.")
flags.String(fileFlag, "", "File with object payload")
}
func deleteObject(cmd *cobra.Command, _ []string) {
var cnr cid.ID
var obj oid.ID
var objAddr oid.Address
binary, _ := cmd.Flags().GetBool(binaryFlag)
if binary {
filename, _ := cmd.Flags().GetString(fileFlag)
if filename == "" {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("required flag \"%s\" not set", fileFlag))
}
objAddr = readObjectAddressBin(cmd, &cnr, &obj, filename)
} else {
cidVal, _ := cmd.Flags().GetString(commonflags.CIDFlag)
if cidVal == "" {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("required flag \"%s\" not set", commonflags.CIDFlag))
}
oidVal, _ := cmd.Flags().GetString(commonflags.OIDFlag)
if oidVal == "" {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("required flag \"%s\" not set", commonflags.OIDFlag))
}
objAddr = readObjectAddress(cmd, &cnr, &obj)
}
pk := key.GetOrGenerate(cmd)
var prm internalclient.DeleteObjectPrm
ReadOrOpenSession(cmd, &prm, pk, cnr, &obj)
Prepare(cmd, &prm)
prm.SetAddress(objAddr)
res, err := internalclient.DeleteObject(cmd.Context(), prm)
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
tomb := res.Tombstone()
cmd.Println("Object removed successfully.")
cmd.Printf(" ID: %s\n CID: %s\n", tomb, cnr)
}


@@ -1,152 +0,0 @@
package object
import (
"bytes"
"fmt"
"io"
"os"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/cheggaaa/pb"
"github.com/spf13/cobra"
)
var objectGetCmd = &cobra.Command{
Use: "get",
Short: "Get object from FrostFS",
Long: "Get object from FrostFS",
Run: getObject,
}
func initObjectGetCmd() {
commonflags.Init(objectGetCmd)
initFlagSession(objectGetCmd, "GET")
flags := objectGetCmd.Flags()
flags.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
_ = objectGetCmd.MarkFlagRequired(commonflags.CIDFlag)
flags.String(commonflags.OIDFlag, "", commonflags.OIDFlagUsage)
_ = objectGetCmd.MarkFlagRequired(commonflags.OIDFlag)
flags.String(fileFlag, "", "File to write object payload to(with -b together with signature and header). Default: stdout.")
flags.Bool(rawFlag, false, rawFlagDesc)
flags.Bool(noProgressFlag, false, "Do not show progress bar")
flags.Bool(binaryFlag, false, "Serialize whole object structure into given file(id + signature + header + payload).")
}
func getObject(cmd *cobra.Command, _ []string) {
var cnr cid.ID
var obj oid.ID
objAddr := readObjectAddress(cmd, &cnr, &obj)
filename := cmd.Flag(fileFlag).Value.String()
out, closer := createOutWriter(cmd, filename)
defer closer()
pk := key.GetOrGenerate(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
var prm internalclient.GetObjectPrm
prm.SetClient(cli)
Prepare(cmd, &prm)
readSession(cmd, &prm, pk, cnr, obj)
raw, _ := cmd.Flags().GetBool(rawFlag)
prm.SetRawFlag(raw)
prm.SetAddress(objAddr)
var p *pb.ProgressBar
noProgress, _ := cmd.Flags().GetBool(noProgressFlag)
var payloadWriter io.Writer
var payloadBuffer *bytes.Buffer
binary, _ := cmd.Flags().GetBool(binaryFlag)
if binary {
payloadBuffer = new(bytes.Buffer)
payloadWriter = payloadBuffer
} else {
payloadWriter = out
}
if filename == "" || noProgress {
prm.SetPayloadWriter(payloadWriter)
} else {
p = pb.New64(0)
p.Output = cmd.OutOrStdout()
prm.SetPayloadWriter(p.NewProxyWriter(payloadWriter))
prm.SetHeaderCallback(func(o *objectSDK.Object) {
p.SetTotal64(int64(o.PayloadSize()))
p.Start()
})
}
res, err := internalclient.GetObject(cmd.Context(), prm)
if p != nil {
p.Finish()
}
if err != nil {
if ok := printSplitInfoErr(cmd, err); ok {
return
}
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
}
processResult(cmd, res, binary, payloadBuffer, out, filename)
}
func processResult(cmd *cobra.Command, res *internalclient.GetObjectRes, binary bool, payloadBuffer *bytes.Buffer, out io.Writer, filename string) {
if binary {
objToStore := res.Header()
// TODO(@acid-ant): #1932 Use streams to marshal/unmarshal payload
objToStore.SetPayload(payloadBuffer.Bytes())
objBytes, err := objToStore.Marshal()
commonCmd.ExitOnErr(cmd, "", err)
_, err = out.Write(objBytes)
commonCmd.ExitOnErr(cmd, "unable to write binary object in out: %w ", err)
}
if filename != "" && !strictOutput(cmd) {
cmd.Printf("[%s] Object successfully saved\n", filename)
}
// Print header only if file is not streamed to stdout.
if filename != "" {
err := printHeader(cmd, res.Header())
commonCmd.ExitOnErr(cmd, "", err)
}
}
func createOutWriter(cmd *cobra.Command, filename string) (out io.Writer, closer func()) {
if filename == "" {
out = os.Stdout
closer = func() {}
} else {
f, err := os.OpenFile(filename, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)
if err != nil {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("can't open file '%s': %w", filename, err))
}
out = f
closer = func() {
f.Close()
}
}
return
}
func strictOutput(cmd *cobra.Command) bool {
toJSON, _ := cmd.Flags().GetBool(commonflags.JSON)
toProto, _ := cmd.Flags().GetBool("proto")
return toJSON || toProto
}


@@ -1,131 +0,0 @@
package object
import (
"context"
"errors"
"fmt"
"strconv"
"time"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"
)
// object lock command.
var objectLockCmd = &cobra.Command{
Use: "lock",
Short: "Lock object in container",
Long: "Lock object in container",
Run: func(cmd *cobra.Command, _ []string) {
cidRaw, _ := cmd.Flags().GetString(commonflags.CIDFlag)
var cnr cid.ID
err := cnr.DecodeString(cidRaw)
commonCmd.ExitOnErr(cmd, "Incorrect container arg: %v", err)
key := key.GetOrGenerate(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, key, commonflags.RPC)
oidsRaw, _ := cmd.Flags().GetStringSlice(commonflags.OIDFlag)
lockList := make([]oid.ID, 0, len(oidsRaw))
oidM := make(map[oid.ID]struct{})
for i, oidRaw := range oidsRaw {
var oID oid.ID
err = oID.DecodeString(oidRaw)
commonCmd.ExitOnErr(cmd, fmt.Sprintf("Incorrect object arg #%d: %%v", i+1), err)
if _, ok := oidM[oID]; ok {
continue
}
lockList = append(lockList, oID)
oidM[oID] = struct{}{}
for _, relative := range collectObjectRelatives(cmd, cli, cnr, oID) {
if _, ok := oidM[relative]; ok {
continue
}
lockList = append(lockList, relative)
oidM[relative] = struct{}{}
}
}
var idOwner user.ID
user.IDFromKey(&idOwner, key.PublicKey)
var lock objectSDK.Lock
lock.WriteMembers(lockList)
exp, _ := cmd.Flags().GetUint64(commonflags.ExpireAt)
lifetime, _ := cmd.Flags().GetUint64(commonflags.Lifetime)
if exp == 0 && lifetime == 0 { // mutual exclusion is ensured by cobra
commonCmd.ExitOnErr(cmd, "", errors.New("either expiration epoch of a lifetime is required"))
}
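// A relative --lifetime is converted into an absolute expiration epoch by
// asking the network for the current epoch, e.g. current epoch 100 with
// lifetime 5 yields expiration epoch 105.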
if lifetime != 0 {
ctx, cancel := context.WithTimeout(context.Background(), time.Second*30)
defer cancel()
endpoint, _ := cmd.Flags().GetString(commonflags.RPC)
currEpoch, err := internalclient.GetCurrentEpoch(ctx, cmd, endpoint)
commonCmd.ExitOnErr(cmd, "Request current epoch: %w", err)
exp = currEpoch + lifetime
}
common.PrintVerbose(cmd, "Lock object will expire after %d epoch", exp)
var expirationAttr objectSDK.Attribute
expirationAttr.SetKey(objectV2.SysAttributeExpEpoch)
expirationAttr.SetValue(strconv.FormatUint(exp, 10))
obj := objectSDK.New()
obj.SetContainerID(cnr)
obj.SetOwnerID(&idOwner)
obj.SetType(objectSDK.TypeLock)
obj.SetAttributes(expirationAttr)
obj.SetPayload(lock.Marshal())
var prm internalclient.PutObjectPrm
ReadOrOpenSessionViaClient(cmd, &prm, cli, key, cnr, nil)
Prepare(cmd, &prm)
prm.SetHeader(obj)
res, err := internalclient.PutObject(cmd.Context(), prm)
commonCmd.ExitOnErr(cmd, "Store lock object in FrostFS: %w", err)
cmd.Printf("Lock object ID: %s\n", res.ID())
cmd.Println("Objects successfully locked.")
},
}
func initCommandObjectLock() {
commonflags.Init(objectLockCmd)
initFlagSession(objectLockCmd, "PUT")
ff := objectLockCmd.Flags()
ff.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
_ = objectLockCmd.MarkFlagRequired(commonflags.CIDFlag)
ff.StringSlice(commonflags.OIDFlag, nil, commonflags.OIDFlagUsage)
_ = objectLockCmd.MarkFlagRequired(commonflags.OIDFlag)
ff.Uint64P(commonflags.ExpireAt, "e", 0, "The last active epoch for the lock")
ff.Uint64(commonflags.Lifetime, 0, "Lock lifetime")
objectLockCmd.MarkFlagsMutuallyExclusive(commonflags.ExpireAt, commonflags.Lifetime)
}


@@ -1,351 +0,0 @@
package object
import (
"context"
"crypto/ecdsa"
"encoding/hex"
"errors"
"fmt"
"strconv"
"sync"
"text/tabwriter"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object_manager/placement"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
netmapSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
const (
verifyPresenceAllFlag = "verify-presence-all"
)
type objectNodesInfo struct {
containerID cid.ID
objectID oid.ID
relatedObjectIDs []oid.ID
isLock bool
}
type boolError struct {
value bool
err error
}
var objectNodesCmd = &cobra.Command{
Use: "nodes",
Short: "List of nodes where the object is stored",
Long: `List of nodes where the object should be stored and where it is actually stored.
Lock objects must exist on all nodes of the container.
For complex objects, a node is considered to store an object if the node stores at least one part of the complex object.
By default, the actual storage of the object is checked only on the nodes that should store the object. To check all nodes, use the flag --verify-presence-all.`,
Run: objectNodes,
}
func initObjectNodesCmd() {
commonflags.Init(objectNodesCmd)
flags := objectNodesCmd.Flags()
flags.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
_ = objectNodesCmd.MarkFlagRequired(commonflags.CIDFlag)
flags.String(commonflags.OIDFlag, "", commonflags.OIDFlagUsage)
_ = objectNodesCmd.MarkFlagRequired(commonflags.OIDFlag)
flags.Bool(verifyPresenceAllFlag, false, "Verify the actual presence of the object on all netmap nodes")
}
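// objectNodes resolves the object (collecting the IDs of its parts if it is a
// complex object), computes the set of nodes required to store it according to
// the container placement policy, checks which nodes actually store it, and
// prints both as a table.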
func objectNodes(cmd *cobra.Command, _ []string) {
var cnrID cid.ID
var objID oid.ID
readObjectAddress(cmd, &cnrID, &objID)
pk := key.GetOrGenerate(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
objectInfo := getObjectInfo(cmd, cnrID, objID, cli, pk)
placementPolicy, netmap := getPlacementPolicyAndNetmap(cmd, cnrID, cli)
requiredPlacement := getRequiredPlacement(cmd, objectInfo, placementPolicy, netmap)
actualPlacement := getActualPlacement(cmd, netmap, requiredPlacement, pk, objectInfo)
printPlacement(cmd, netmap, requiredPlacement, actualPlacement)
}
func getObjectInfo(cmd *cobra.Command, cnrID cid.ID, objID oid.ID, cli *client.Client, pk *ecdsa.PrivateKey) *objectNodesInfo {
var addrObj oid.Address
addrObj.SetContainer(cnrID)
addrObj.SetObject(objID)
var prmHead internalclient.HeadObjectPrm
prmHead.SetClient(cli)
prmHead.SetAddress(addrObj)
prmHead.SetRawFlag(true)
Prepare(cmd, &prmHead)
readSession(cmd, &prmHead, pk, cnrID, objID)
res, err := internalclient.HeadObject(cmd.Context(), prmHead)
if err == nil {
return &objectNodesInfo{
containerID: cnrID,
objectID: objID,
isLock: res.Header().Type() == objectSDK.TypeLock,
}
}
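// A SplitInfoError means this is a complex (split) object: collect the IDs of
// its parts so that their placement is checked as well. Three strategies are
// tried in order: via the linking object, via the split ID, and by restoring
// the chain in reverse from the split info.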
var errSplitInfo *objectSDK.SplitInfoError
if !errors.As(err, &errSplitInfo) {
commonCmd.ExitOnErr(cmd, "failed to get object info: %w", err)
return nil
}
splitInfo := errSplitInfo.SplitInfo()
if members, ok := tryGetSplitMembersByLinkingObject(cmd, splitInfo, prmHead, cnrID); ok {
return &objectNodesInfo{
containerID: cnrID,
objectID: objID,
relatedObjectIDs: members,
}
}
if members, ok := tryGetSplitMembersBySplitID(cmd, splitInfo, cli, cnrID); ok {
return &objectNodesInfo{
containerID: cnrID,
objectID: objID,
relatedObjectIDs: members,
}
}
members := tryRestoreChainInReverse(cmd, splitInfo, prmHead, cli, cnrID, objID)
return &objectNodesInfo{
containerID: cnrID,
objectID: objID,
relatedObjectIDs: members,
}
}
func getPlacementPolicyAndNetmap(cmd *cobra.Command, cnrID cid.ID, cli *client.Client) (placementPolicy netmapSDK.PlacementPolicy, netmap *netmapSDK.NetMap) {
eg, egCtx := errgroup.WithContext(cmd.Context())
eg.Go(func() (e error) {
placementPolicy, e = getPlacementPolicy(egCtx, cnrID, cli)
return
})
eg.Go(func() (e error) {
netmap, e = getNetMap(egCtx, cli)
return
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", eg.Wait())
return
}
func getPlacementPolicy(ctx context.Context, cnrID cid.ID, cli *client.Client) (netmapSDK.PlacementPolicy, error) {
prm := internalclient.GetContainerPrm{
Client: cli,
ClientParams: client.PrmContainerGet{
ContainerID: &cnrID,
},
}
res, err := internalclient.GetContainer(ctx, prm)
if err != nil {
return netmapSDK.PlacementPolicy{}, err
}
return res.Container().PlacementPolicy(), nil
}
func getNetMap(ctx context.Context, cli *client.Client) (*netmapSDK.NetMap, error) {
var prm internalclient.NetMapSnapshotPrm
prm.SetClient(cli)
res, err := internalclient.NetMapSnapshot(ctx, prm)
if err != nil {
return nil, err
}
nm := res.NetMap()
return &nm, nil
}
func getRequiredPlacement(cmd *cobra.Command, objInfo *objectNodesInfo, placementPolicy netmapSDK.PlacementPolicy, netmap *netmapSDK.NetMap) map[uint64]netmapSDK.NodeInfo {
nodes := make(map[uint64]netmapSDK.NodeInfo)
placementBuilder := placement.NewNetworkMapBuilder(netmap)
placement, err := placementBuilder.BuildPlacement(objInfo.containerID, &objInfo.objectID, placementPolicy)
commonCmd.ExitOnErr(cmd, "failed to get required placement: %w", err)
for repIdx, rep := range placement {
numOfReplicas := placementPolicy.ReplicaNumberByIndex(repIdx)
var nodeIdx uint32
for _, n := range rep {
if !objInfo.isLock && nodeIdx == numOfReplicas { // a lock object must be stored on all container nodes
break
}
nodes[n.Hash()] = n
nodeIdx++
}
}
for _, relatedObjID := range objInfo.relatedObjectIDs {
placement, err = placementBuilder.BuildPlacement(objInfo.containerID, &relatedObjID, placementPolicy)
commonCmd.ExitOnErr(cmd, "failed to get required placement for related object: %w", err)
for _, rep := range placement {
for _, n := range rep {
nodes[n.Hash()] = n
}
}
}
return nodes
}
func getActualPlacement(cmd *cobra.Command, netmap *netmapSDK.NetMap, requiredPlacement map[uint64]netmapSDK.NodeInfo,
pk *ecdsa.PrivateKey, objInfo *objectNodesInfo) map[uint64]boolError {
result := make(map[uint64]boolError)
resultMtx := &sync.Mutex{}
var candidates []netmapSDK.NodeInfo
checkAllNodes, _ := cmd.Flags().GetBool(verifyPresenceAllFlag)
if checkAllNodes {
candidates = netmap.Nodes()
} else {
for _, n := range requiredPlacement {
candidates = append(candidates, n)
}
}
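// Checks for the object and all of its related parts run concurrently per
// candidate node; results are merged so that a positive result or a recorded
// error is never overwritten by a later negative one.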
eg, egCtx := errgroup.WithContext(cmd.Context())
for _, cand := range candidates {
cand := cand
eg.Go(func() error {
cli, err := createClient(egCtx, cmd, cand, pk)
if err != nil {
resultMtx.Lock()
defer resultMtx.Unlock()
result[cand.Hash()] = boolError{err: err}
return nil
}
eg.Go(func() error {
var v boolError
v.value, v.err = isObjectStoredOnNode(egCtx, cmd, objInfo.containerID, objInfo.objectID, cli, pk)
resultMtx.Lock()
defer resultMtx.Unlock()
if prev, exists := result[cand.Hash()]; exists && (prev.err != nil || prev.value) {
return nil
}
result[cand.Hash()] = v
return nil
})
for _, rObjID := range objInfo.relatedObjectIDs {
rObjID := rObjID
eg.Go(func() error {
var v boolError
v.value, v.err = isObjectStoredOnNode(egCtx, cmd, objInfo.containerID, rObjID, cli, pk)
resultMtx.Lock()
defer resultMtx.Unlock()
if prev, exists := result[cand.Hash()]; exists && (prev.err != nil || prev.value) {
return nil
}
result[cand.Hash()] = v
return nil
})
}
return nil
})
}
commonCmd.ExitOnErr(cmd, "failed to get actual placement: %w", eg.Wait())
return result
}
func createClient(ctx context.Context, cmd *cobra.Command, candidate netmapSDK.NodeInfo, pk *ecdsa.PrivateKey) (*client.Client, error) {
var cli *client.Client
var addresses []string
candidate.IterateNetworkEndpoints(func(s string) bool {
addresses = append(addresses, s)
return false
})
addresses = append(addresses, candidate.ExternalAddresses()...)
var lastErr error
for _, address := range addresses {
var networkAddr network.Address
lastErr = networkAddr.FromString(address)
if lastErr != nil {
continue
}
cli, lastErr = internalclient.GetSDKClient(ctx, cmd, pk, networkAddr)
if lastErr == nil {
break
}
}
if lastErr != nil {
return nil, lastErr
}
if cli == nil {
return nil, fmt.Errorf("failed to create client: no available endpoint")
}
return cli, nil
}
func isObjectStoredOnNode(ctx context.Context, cmd *cobra.Command, cnrID cid.ID, objID oid.ID, cli *client.Client, pk *ecdsa.PrivateKey) (bool, error) {
var addrObj oid.Address
addrObj.SetContainer(cnrID)
addrObj.SetObject(objID)
var prmHead internalclient.HeadObjectPrm
prmHead.SetClient(cli)
prmHead.SetAddress(addrObj)
Prepare(cmd, &prmHead)
prmHead.SetTTL(1)
readSession(cmd, &prmHead, pk, cnrID, objID)
res, err := internalclient.HeadObject(ctx, prmHead)
if err == nil && res != nil {
return true, nil
}
var notFound *apistatus.ObjectNotFound
var removed *apistatus.ObjectAlreadyRemoved
if errors.As(err, &notFound) || errors.As(err, &removed) {
return false, nil
}
return false, err
}
func printPlacement(cmd *cobra.Command, netmap *netmapSDK.NetMap, requiredPlacement map[uint64]netmapSDK.NodeInfo, actualPlacement map[uint64]boolError) {
w := tabwriter.NewWriter(cmd.OutOrStdout(), 0, 0, 1, ' ', tabwriter.AlignRight|tabwriter.Debug)
defer func() {
commonCmd.ExitOnErr(cmd, "failed to print placement info: %w", w.Flush())
}()
fmt.Fprintln(w, "Node ID\tShould contain object\tActually contains object\t")
for _, n := range netmap.Nodes() {
nodeID := hex.EncodeToString(n.PublicKey())
_, required := requiredPlacement[n.Hash()]
actual, actualExists := actualPlacement[n.Hash()]
actualStr := ""
if actualExists {
if actual.err != nil {
actualStr = fmt.Sprintf("error: %v", actual.err)
} else {
actualStr = strconv.FormatBool(actual.value)
}
}
fmt.Fprintf(w, "%s\t%s\t%s\t\n", nodeID, strconv.FormatBool(required), actualStr)
}
}


@@ -1,288 +0,0 @@
package object
import (
"bytes"
"fmt"
"io"
"os"
"path/filepath"
"strconv"
"strings"
"time"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/cheggaaa/pb"
"github.com/spf13/cobra"
)
const (
noProgressFlag = "no-progress"
notificationFlag = "notify"
copiesNumberFlag = "copies-number"
prepareLocallyFlag = "prepare-locally"
)
var putExpiredOn uint64
var objectPutCmd = &cobra.Command{
Use: "put",
Short: "Put object to FrostFS",
Long: "Put object to FrostFS",
Run: putObject,
}
func initObjectPutCmd() {
commonflags.Init(objectPutCmd)
initFlagSession(objectPutCmd, "PUT")
flags := objectPutCmd.Flags()
flags.String(fileFlag, "", "File with object payload")
_ = objectPutCmd.MarkFlagFilename(fileFlag)
_ = objectPutCmd.MarkFlagRequired(fileFlag)
flags.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.String("attributes", "", "User attributes in form of Key1=Value1,Key2=Value2")
flags.Bool("disable-filename", false, "Do not set well-known filename attribute")
flags.Bool("disable-timestamp", false, "Do not set well-known timestamp attribute")
flags.Uint64VarP(&putExpiredOn, commonflags.ExpireAt, "e", 0, "The last active epoch in the life of the object")
flags.Bool(noProgressFlag, false, "Do not show progress bar")
flags.Bool(prepareLocallyFlag, false, "Generate object header on the client side (for big object - split locally too)")
flags.String(notificationFlag, "", "Object notification in the form of *epoch*:*topic*; '-' topic means using default")
flags.Bool(binaryFlag, false, "Deserialize object structure from given file.")
flags.String(copiesNumberFlag, "", "Number of copies of the object to store within the RPC call")
}
func putObject(cmd *cobra.Command, _ []string) {
binary, _ := cmd.Flags().GetBool(binaryFlag)
cidVal, _ := cmd.Flags().GetString(commonflags.CIDFlag)
if !binary && cidVal == "" {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("required flag \"%s\" not set", commonflags.CIDFlag))
}
pk := key.GetOrGenerate(cmd)
var ownerID user.ID
var cnr cid.ID
filename, _ := cmd.Flags().GetString(fileFlag)
f, err := os.OpenFile(filename, os.O_RDONLY, os.ModePerm)
if err != nil {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("can't open file '%s': %w", filename, err))
}
var payloadReader io.Reader = f
obj := objectSDK.New()
if binary {
payloadReader, cnr, ownerID = readFilePayload(filename, cmd)
} else {
readCID(cmd, &cnr)
user.IDFromKey(&ownerID, pk.PublicKey)
}
attrs := getAllObjectAttributes(cmd)
obj.SetContainerID(cnr)
obj.SetOwnerID(&ownerID)
obj.SetAttributes(attrs...)
notificationInfo, err := parseObjectNotifications(cmd)
commonCmd.ExitOnErr(cmd, "can't parse object notification information: %w", err)
if notificationInfo != nil {
obj.SetNotification(*notificationInfo)
}
var prm internalclient.PutObjectPrm
if prepareLocally, _ := cmd.Flags().GetBool(prepareLocallyFlag); prepareLocally {
prm.SetClient(internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC))
prm.PrepareLocally()
} else {
ReadOrOpenSession(cmd, &prm, pk, cnr, nil)
}
Prepare(cmd, &prm)
prm.SetHeader(obj)
var p *pb.ProgressBar
noProgress, _ := cmd.Flags().GetBool(noProgressFlag)
if noProgress {
prm.SetPayloadReader(payloadReader)
} else {
if binary {
p = setBinaryPayloadReader(cmd, obj, &prm, payloadReader)
} else {
p = setFilePayloadReader(cmd, f, &prm)
}
}
copyNum, err := cmd.Flags().GetString(copiesNumberFlag)
commonCmd.ExitOnErr(cmd, "can't parse object copies numbers information: %w", err)
prm.SetCopiesNumberByVectors(parseCopyNumber(cmd, copyNum))
res, err := internalclient.PutObject(cmd.Context(), prm)
if p != nil {
p.Finish()
}
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
cmd.Printf("[%s] Object successfully stored\n", filename)
cmd.Printf(" OID: %s\n CID: %s\n", res.ID(), cnr)
}
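// parseCopyNumber turns the --copies-number value into a slice of uint32,
// one entry per placement vector (see SetCopiesNumberByVectors), e.g. "2,1,3"
// becomes []uint32{2, 1, 3}.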
func parseCopyNumber(cmd *cobra.Command, copyNum string) []uint32 {
var cn []uint32
if len(copyNum) > 0 {
for _, num := range strings.Split(copyNum, ",") {
val, err := strconv.ParseUint(num, 10, 32)
commonCmd.ExitOnErr(cmd, "can't parse object copies numbers information: %w", err)
cn = append(cn, uint32(val))
}
}
return cn
}
func readFilePayload(filename string, cmd *cobra.Command) (io.Reader, cid.ID, user.ID) {
buf, err := os.ReadFile(filename)
commonCmd.ExitOnErr(cmd, "unable to read given file: %w", err)
objTemp := objectSDK.New()
// TODO(@acid-ant): #1932 Use streams to marshal/unmarshal payload
commonCmd.ExitOnErr(cmd, "can't unmarshal object from given file: %w", objTemp.Unmarshal(buf))
payloadReader := bytes.NewReader(objTemp.Payload())
cnr, _ := objTemp.ContainerID()
ownerID := *objTemp.OwnerID()
return payloadReader, cnr, ownerID
}
func setFilePayloadReader(cmd *cobra.Command, f *os.File, prm *internalclient.PutObjectPrm) *pb.ProgressBar {
fi, err := f.Stat()
if err != nil {
cmd.PrintErrf("Failed to get file size, progress bar is disabled: %v\n", err)
prm.SetPayloadReader(f)
return nil
}
p := pb.New64(fi.Size())
p.Output = cmd.OutOrStdout()
prm.SetPayloadReader(p.NewProxyReader(f))
prm.SetHeaderCallback(func(o *objectSDK.Object) { p.Start() })
return p
}
func setBinaryPayloadReader(cmd *cobra.Command, obj *objectSDK.Object, prm *internalclient.PutObjectPrm, payloadReader io.Reader) *pb.ProgressBar {
p := pb.New(len(obj.Payload()))
p.Output = cmd.OutOrStdout()
prm.SetPayloadReader(p.NewProxyReader(payloadReader))
prm.SetHeaderCallback(func(o *objectSDK.Object) { p.Start() })
return p
}
func getAllObjectAttributes(cmd *cobra.Command) []objectSDK.Attribute {
attrs, err := parseObjectAttrs(cmd)
commonCmd.ExitOnErr(cmd, "can't parse object attributes: %w", err)
expiresOn, _ := cmd.Flags().GetUint64(commonflags.ExpireAt)
if expiresOn > 0 {
var expAttrFound bool
expAttrValue := strconv.FormatUint(expiresOn, 10)
for i := range attrs {
if attrs[i].Key() == objectV2.SysAttributeExpEpoch {
attrs[i].SetValue(expAttrValue)
expAttrFound = true
break
}
}
if !expAttrFound {
index := len(attrs)
attrs = append(attrs, objectSDK.Attribute{})
attrs[index].SetKey(objectV2.SysAttributeExpEpoch)
attrs[index].SetValue(expAttrValue)
}
}
return attrs
}
func parseObjectAttrs(cmd *cobra.Command) ([]objectSDK.Attribute, error) {
var rawAttrs []string
raw := cmd.Flag("attributes").Value.String()
if len(raw) != 0 {
rawAttrs = strings.Split(raw, ",")
}
attrs := make([]objectSDK.Attribute, len(rawAttrs), len(rawAttrs)+2) // name + timestamp attributes
for i := range rawAttrs {
k, v, found := strings.Cut(rawAttrs[i], "=")
if !found {
return nil, fmt.Errorf("invalid attribute format: %s", rawAttrs[i])
}
attrs[i].SetKey(k)
attrs[i].SetValue(v)
}
disableFilename, _ := cmd.Flags().GetBool("disable-filename")
if !disableFilename {
filename := filepath.Base(cmd.Flag(fileFlag).Value.String())
index := len(attrs)
attrs = append(attrs, objectSDK.Attribute{})
attrs[index].SetKey(objectSDK.AttributeFileName)
attrs[index].SetValue(filename)
}
disableTime, _ := cmd.Flags().GetBool("disable-timestamp")
if !disableTime {
index := len(attrs)
attrs = append(attrs, objectSDK.Attribute{})
attrs[index].SetKey(objectSDK.AttributeTimestamp)
attrs[index].SetValue(strconv.FormatInt(time.Now().Unix(), 10))
}
return attrs, nil
}
func parseObjectNotifications(cmd *cobra.Command) (*objectSDK.NotificationInfo, error) {
const (
separator = ":"
useDefaultTopic = "-"
)
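// Accepted forms: "8:my-topic" sets epoch 8 and topic "my-topic", while
// "8:-" sets epoch 8 and keeps the default topic.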
raw := cmd.Flag(notificationFlag).Value.String()
if raw == "" {
return nil, nil
}
before, after, found := strings.Cut(raw, separator)
if !found {
return nil, fmt.Errorf("notification must be in the form of: *epoch*%s*topic*, got %s", separator, raw)
}
ni := new(objectSDK.NotificationInfo)
epoch, err := strconv.ParseUint(before, 10, 64)
if err != nil {
return nil, fmt.Errorf("could not parse notification epoch %s: %w", before, err)
}
ni.SetEpoch(epoch)
if after == "" {
return nil, fmt.Errorf("incorrect empty topic: use %s to force using default topic", useDefaultTopic)
}
if after != useDefaultTopic {
ni.SetTopic(after)
}
return ni, nil
}


@@ -1,506 +0,0 @@
package object
import (
"crypto/ecdsa"
"errors"
"fmt"
"os"
"strings"
internal "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
sessionCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/session"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
frostfsecdsa "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/crypto/ecdsa"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
const (
bearerTokenFlag = "bearer"
rawFlag = "raw"
rawFlagDesc = "Set raw request option"
fileFlag = "file"
binaryFlag = "binary"
)
type RPCParameters interface {
SetBearerToken(prm *bearer.Token)
SetTTL(uint32)
SetXHeaders([]string)
}
// InitBearer adds bearer token flag to a command.
func InitBearer(cmd *cobra.Command) {
flags := cmd.Flags()
flags.String(bearerTokenFlag, "", "File with signed JSON or binary encoded bearer token")
}
// Prepare prepares object-related parameters for a command.
func Prepare(cmd *cobra.Command, prms ...RPCParameters) {
ttl := viper.GetUint32(commonflags.TTL)
common.PrintVerbose(cmd, "TTL: %d", ttl)
for i := range prms {
btok := common.ReadBearerToken(cmd, bearerTokenFlag)
prms[i].SetBearerToken(btok)
prms[i].SetTTL(ttl)
prms[i].SetXHeaders(parseXHeaders(cmd))
}
}
func parseXHeaders(cmd *cobra.Command) []string {
xHeaders, _ := cmd.Flags().GetStringSlice(commonflags.XHeadersKey)
xs := make([]string, 0, 2*len(xHeaders))
for i := range xHeaders {
k, v, found := strings.Cut(xHeaders[i], "=")
if !found {
panic(fmt.Errorf("invalid X-Header format: %s", xHeaders[i]))
}
xs = append(xs, k, v)
}
return xs
}
func readObjectAddress(cmd *cobra.Command, cnr *cid.ID, obj *oid.ID) oid.Address {
readCID(cmd, cnr)
readOID(cmd, obj)
var addr oid.Address
addr.SetContainer(*cnr)
addr.SetObject(*obj)
return addr
}
func readObjectAddressBin(cmd *cobra.Command, cnr *cid.ID, obj *oid.ID, filename string) oid.Address {
buf, err := os.ReadFile(filename)
commonCmd.ExitOnErr(cmd, "unable to read given file: %w", err)
objTemp := objectSDK.New()
commonCmd.ExitOnErr(cmd, "can't unmarshal object from given file: %w", objTemp.Unmarshal(buf))
var addr oid.Address
*cnr, _ = objTemp.ContainerID()
*obj, _ = objTemp.ID()
addr.SetContainer(*cnr)
addr.SetObject(*obj)
return addr
}
func readCID(cmd *cobra.Command, id *cid.ID) {
err := id.DecodeString(cmd.Flag(commonflags.CIDFlag).Value.String())
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", err)
}
func readOID(cmd *cobra.Command, id *oid.ID) {
err := id.DecodeString(cmd.Flag(commonflags.OIDFlag).Value.String())
commonCmd.ExitOnErr(cmd, "decode object ID string: %w", err)
}
// SessionPrm is a common interface of object operation's input which supports
// sessions.
type SessionPrm interface {
SetSessionToken(*session.Object)
SetClient(*client.Client)
}
// forwards all parameters to _readVerifiedSession, passing nil for the object.
func readSessionGlobal(cmd *cobra.Command, dst SessionPrm, key *ecdsa.PrivateKey, cnr cid.ID) {
_readVerifiedSession(cmd, dst, key, cnr, nil)
}
// forwards all parameters to _readVerifiedSession.
func readSession(cmd *cobra.Command, dst SessionPrm, key *ecdsa.PrivateKey, cnr cid.ID, obj oid.ID) {
_readVerifiedSession(cmd, dst, key, cnr, &obj)
}
// decodes session.Object from the file by path specified in the
// commonflags.SessionToken flag. Returns nil if flag is not set.
func getSession(cmd *cobra.Command) *session.Object {
common.PrintVerbose(cmd, "Trying to read session from the file...")
path, _ := cmd.Flags().GetString(commonflags.SessionToken)
if path == "" {
common.PrintVerbose(cmd, "File with session token is not provided.")
return nil
}
common.PrintVerbose(cmd, "Reading session from the file [%s]...", path)
var tok session.Object
err := common.ReadBinaryOrJSON(cmd, &tok, path)
commonCmd.ExitOnErr(cmd, "read session: %v", err)
return &tok
}
// reads an object session from the file specified by the commonflags.SessionToken
// flag (if provided), verifies it and writes the result into the provided SessionPrm.
// Does nothing if the flag is not set. Verification checks:
//
// - if session verb corresponds to given SessionPrm according to its type
// - relation to the given container
// - relation to the given object if non-nil
// - relation to the given private key used within the command
// - session signature
//
// SessionPrm MUST be one of:
//
// *internal.GetObjectPrm
// *internal.HeadObjectPrm
// *internal.SearchObjectsPrm
// *internal.PayloadRangePrm
// *internal.HashPayloadRangesPrm
func _readVerifiedSession(cmd *cobra.Command, dst SessionPrm, key *ecdsa.PrivateKey, cnr cid.ID, obj *oid.ID) {
var cmdVerb session.ObjectVerb
switch dst.(type) {
default:
panic(fmt.Sprintf("unsupported op parameters %T", dst))
case *internal.GetObjectPrm:
cmdVerb = session.VerbObjectGet
case *internal.HeadObjectPrm:
cmdVerb = session.VerbObjectHead
case *internal.SearchObjectsPrm:
cmdVerb = session.VerbObjectSearch
case *internal.PayloadRangePrm:
cmdVerb = session.VerbObjectRange
case *internal.HashPayloadRangesPrm:
cmdVerb = session.VerbObjectRangeHash
}
tok := getSession(cmd)
if tok == nil {
return
}
common.PrintVerbose(cmd, "Checking session correctness...")
switch false {
case tok.AssertContainer(cnr):
commonCmd.ExitOnErr(cmd, "", errors.New("unrelated container in the session"))
case obj == nil || tok.AssertObject(*obj):
commonCmd.ExitOnErr(cmd, "", errors.New("unrelated object in the session"))
case tok.AssertVerb(cmdVerb):
commonCmd.ExitOnErr(cmd, "", errors.New("wrong verb of the session"))
case tok.AssertAuthKey((*frostfsecdsa.PublicKey)(&key.PublicKey)):
commonCmd.ExitOnErr(cmd, "", errors.New("unrelated key in the session"))
case tok.VerifySignature():
commonCmd.ExitOnErr(cmd, "", errors.New("invalid signature of the session data"))
}
common.PrintVerbose(cmd, "Session is correct.")
dst.SetSessionToken(tok)
}
// ReadOrOpenSession opens client connection and calls ReadOrOpenSessionViaClient with it.
func ReadOrOpenSession(cmd *cobra.Command, dst SessionPrm, key *ecdsa.PrivateKey, cnr cid.ID, obj *oid.ID) {
cli := internal.GetSDKClientByFlag(cmd, key, commonflags.RPC)
ReadOrOpenSessionViaClient(cmd, dst, cli, key, cnr, obj)
}
// ReadOrOpenSessionViaClient tries to read a session from the file specified in
// the commonflags.SessionToken flag, finalizes the structure of the decoded token
// and writes the result into the provided SessionPrm. If the file is missing,
// ReadOrOpenSessionViaClient calls OpenSessionViaClient.
func ReadOrOpenSessionViaClient(cmd *cobra.Command, dst SessionPrm, cli *client.Client, key *ecdsa.PrivateKey, cnr cid.ID, obj *oid.ID) {
tok := getSession(cmd)
if tok == nil {
OpenSessionViaClient(cmd, dst, cli, key, cnr, obj)
return
}
var objs []oid.ID
if obj != nil {
objs = []oid.ID{*obj}
if _, ok := dst.(*internal.DeleteObjectPrm); ok {
common.PrintVerbose(cmd, "Collecting relatives of the removal object...")
objs = append(objs, collectObjectRelatives(cmd, cli, cnr, *obj)...)
}
}
finalizeSession(cmd, dst, tok, key, cnr, objs...)
dst.SetClient(cli)
}
// OpenSession opens client connection and calls OpenSessionViaClient with it.
func OpenSession(cmd *cobra.Command, dst SessionPrm, key *ecdsa.PrivateKey, cnr cid.ID, obj *oid.ID) {
cli := internal.GetSDKClientByFlag(cmd, key, commonflags.RPC)
OpenSessionViaClient(cmd, dst, cli, key, cnr, obj)
}
// OpenSessionViaClient opens an object session with the remote node, finalizes
// the structure of the session token and writes the result into the provided
// SessionPrm. Also writes the provided client connection to the SessionPrm.
//
// SessionPrm MUST be one of:
//
// *internal.PutObjectPrm
// *internal.DeleteObjectPrm
//
// If provided SessionPrm is of type internal.DeleteObjectPrm, OpenSessionViaClient
// spreads the session to all object's relatives.
func OpenSessionViaClient(cmd *cobra.Command, dst SessionPrm, cli *client.Client, key *ecdsa.PrivateKey, cnr cid.ID, obj *oid.ID) {
var objs []oid.ID
if obj != nil {
if _, ok := dst.(*internal.DeleteObjectPrm); ok {
common.PrintVerbose(cmd, "Collecting relatives of the removal object...")
rels := collectObjectRelatives(cmd, cli, cnr, *obj)
if len(rels) == 0 {
objs = []oid.ID{*obj}
} else {
objs = append(rels, *obj)
}
}
}
var tok session.Object
const sessionLifetime = 10 // in FrostFS epochs
common.PrintVerbose(cmd, "Opening remote session with the node...")
err := sessionCli.CreateSession(cmd.Context(), &tok, cli, sessionLifetime)
commonCmd.ExitOnErr(cmd, "open remote session: %w", err)
common.PrintVerbose(cmd, "Session successfully opened.")
finalizeSession(cmd, dst, &tok, key, cnr, objs...)
dst.SetClient(cli)
}
// specifies the session verb, binds the session to the given container and limits
// the session to the given objects (if specified). After all data is written,
// signs the session using the provided private key and writes the session into the
// given SessionPrm.
//
// SessionPrm MUST be one of:
//
// *internal.PutObjectPrm
// *internal.DeleteObjectPrm
func finalizeSession(cmd *cobra.Command, dst SessionPrm, tok *session.Object, key *ecdsa.PrivateKey, cnr cid.ID, objs ...oid.ID) {
common.PrintVerbose(cmd, "Finalizing session token...")
switch dst.(type) {
default:
panic(fmt.Sprintf("unsupported op parameters %T", dst))
case *internal.PutObjectPrm:
common.PrintVerbose(cmd, "Binding session to object PUT...")
tok.ForVerb(session.VerbObjectPut)
case *internal.DeleteObjectPrm:
common.PrintVerbose(cmd, "Binding session to object DELETE...")
tok.ForVerb(session.VerbObjectDelete)
}
common.PrintVerbose(cmd, "Binding session to container %s...", cnr)
tok.BindContainer(cnr)
if len(objs) > 0 {
common.PrintVerbose(cmd, "Limiting session by the objects %v...", objs)
tok.LimitByObjects(objs...)
}
common.PrintVerbose(cmd, "Signing session...")
err := tok.Sign(*key)
commonCmd.ExitOnErr(cmd, "sign session: %w", err)
common.PrintVerbose(cmd, "Session token successfully formed and attached to the request.")
dst.SetSessionToken(tok)
}
// calls commonflags.InitSession with "object <verb>" name.
func initFlagSession(cmd *cobra.Command, verb string) {
commonflags.InitSession(cmd, "object "+verb)
}
// collects and returns all relatives of the given object stored in the specified
// container. An empty result without an error means the object has no relatives
// in the container.
//
// The object itself is not included in the result.
func collectObjectRelatives(cmd *cobra.Command, cli *client.Client, cnr cid.ID, obj oid.ID) []oid.ID {
common.PrintVerbose(cmd, "Fetching raw object header...")
// request raw header first
var addrObj oid.Address
addrObj.SetContainer(cnr)
addrObj.SetObject(obj)
var prmHead internal.HeadObjectPrm
prmHead.SetClient(cli)
prmHead.SetAddress(addrObj)
prmHead.SetRawFlag(true)
Prepare(cmd, &prmHead)
_, err := internal.HeadObject(cmd.Context(), prmHead)
var errSplit *objectSDK.SplitInfoError
switch {
default:
commonCmd.ExitOnErr(cmd, "failed to get raw object header: %w", err)
case err == nil:
common.PrintVerbose(cmd, "Raw header received - object is singular.")
return nil
case errors.As(err, &errSplit):
common.PrintVerbose(cmd, "Split information received - object is virtual.")
}
splitInfo := errSplit.SplitInfo()
if members, ok := tryGetSplitMembersByLinkingObject(cmd, splitInfo, prmHead, cnr); ok {
return members
}
if members, ok := tryGetSplitMembersBySplitID(cmd, splitInfo, cli, cnr); ok {
return members
}
return tryRestoreChainInReverse(cmd, splitInfo, prmHead, cli, cnr, obj)
}
func tryGetSplitMembersByLinkingObject(cmd *cobra.Command, splitInfo *objectSDK.SplitInfo, prmHead internal.HeadObjectPrm, cnr cid.ID) ([]oid.ID, bool) {
// Split members are collected by the approaches below in descending order of
// expected ease (evaluated heuristically): linking object, then split ID search,
// then reverse traversal of the chain; if one approach yields nothing, the next is tried.
if idLinking, ok := splitInfo.Link(); ok {
common.PrintVerbose(cmd, "Collecting split members using linking object %s...", idLinking)
var addrObj oid.Address
addrObj.SetContainer(cnr)
addrObj.SetObject(idLinking)
prmHead.SetAddress(addrObj)
prmHead.SetRawFlag(false)
// client is already set
res, err := internal.HeadObject(cmd.Context(), prmHead)
if err == nil {
children := res.Header().Children()
common.PrintVerbose(cmd, "Received split members from the linking object: %v", children)
// include linking object
return append(children, idLinking), true
}
// the linking object is not strictly required for collecting
// the members, so fall back to the other approaches
common.PrintVerbose(cmd, "failed to get linking object's header: %v", err)
}
return nil, false
}
func tryGetSplitMembersBySplitID(cmd *cobra.Command, splitInfo *objectSDK.SplitInfo, cli *client.Client, cnr cid.ID) ([]oid.ID, bool) {
if idSplit := splitInfo.SplitID(); idSplit != nil {
common.PrintVerbose(cmd, "Collecting split members by split ID...")
var query objectSDK.SearchFilters
query.AddSplitIDFilter(objectSDK.MatchStringEqual, idSplit)
var prm internal.SearchObjectsPrm
prm.SetContainerID(cnr)
prm.SetClient(cli)
prm.SetFilters(query)
res, err := internal.SearchObjects(cmd.Context(), prm)
commonCmd.ExitOnErr(cmd, "failed to search objects by split ID: %w", err)
parts := res.IDList()
common.PrintVerbose(cmd, "Found objects by split ID: %v", res.IDList())
return parts, true
}
return nil, false
}
func tryRestoreChainInReverse(cmd *cobra.Command, splitInfo *objectSDK.SplitInfo, prmHead internal.HeadObjectPrm, cli *client.Client, cnr cid.ID, obj oid.ID) []oid.ID {
var addrObj oid.Address
addrObj.SetContainer(cnr)
idMember, ok := splitInfo.LastPart()
if !ok {
commonCmd.ExitOnErr(cmd, "", errors.New("missing any data in received object split information"))
}
common.PrintVerbose(cmd, "Traverse the object split chain in reverse...", idMember)
var res *internal.HeadObjectRes
var err error
chain := []oid.ID{idMember}
chainSet := map[oid.ID]struct{}{idMember: {}}
prmHead.SetRawFlag(false)
// split members are almost definitely singular, but don't get hung up on it
for {
common.PrintVerbose(cmd, "Reading previous element of the split chain member %s...", idMember)
addrObj.SetObject(idMember)
prmHead.SetAddress(addrObj)
res, err = internal.HeadObject(cmd.Context(), prmHead)
commonCmd.ExitOnErr(cmd, "failed to read split chain member's header: %w", err)
idMember, ok = res.Header().PreviousID()
if !ok {
common.PrintVerbose(cmd, "Chain ended.")
break
}
if _, ok = chainSet[idMember]; ok {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("duplicated member in the split chain %s", idMember))
}
chain = append(chain, idMember)
chainSet[idMember] = struct{}{}
}
common.PrintVerbose(cmd, "Looking for a linking object...")
var query objectSDK.SearchFilters
query.AddParentIDFilter(objectSDK.MatchStringEqual, obj)
var prmSearch internal.SearchObjectsPrm
prmSearch.SetClient(cli)
prmSearch.SetContainerID(cnr)
prmSearch.SetFilters(query)
resSearch, err := internal.SearchObjects(cmd.Context(), prmSearch)
commonCmd.ExitOnErr(cmd, "failed to find object children: %w", err)
list := resSearch.IDList()
for i := range list {
if _, ok = chainSet[list[i]]; !ok {
common.PrintVerbose(cmd, "Found one more related object %s.", list[i])
chain = append(chain, list[i])
}
}
return chain
}
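To make the relationships between these helpers concrete, here is a rough usage sketch for a head-style command (illustrative only, not code from this diff: the command name runHead, the key.GetOrGenerate helper import and the final printing are assumptions; the remaining calls are the helpers shown above):

func runHead(cmd *cobra.Command, _ []string) {
	pk := key.GetOrGenerate(cmd) // assumed: same key helper the tree commands later in this diff use

	var cnr cid.ID
	var obj oid.ID
	addr := readObjectAddress(cmd, &cnr, &obj)

	var prm internal.HeadObjectPrm
	readSession(cmd, &prm, pk, cnr, obj) // verifies and attaches the --session token, if any
	Prepare(cmd, &prm)                   // TTL, bearer token, X-Headers
	prm.SetClient(internal.GetSDKClientByFlag(cmd, pk, commonflags.RPC))
	prm.SetAddress(addr)

	res, err := internal.HeadObject(cmd.Context(), prm)
	commonCmd.ExitOnErr(cmd, "head object: %w", err)
	_ = res.Header() // print the received header fields here
}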

@ -1,131 +0,0 @@
package cmd
import (
"os"
"path/filepath"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
accountingCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/accounting"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/acl"
bearerCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/bearer"
containerCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/container"
controlCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/control"
netmapCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/netmap"
objectCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/object"
sessionCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/session"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/tree"
utilCli "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/misc"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/gendoc"
"github.com/mitchellh/go-homedir"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
const (
envPrefix = "FROSTFS_CLI"
)
// Global scope flags.
var (
cfgFile string
cfgDir string
)
// rootCmd represents the base command when called without any subcommands.
var rootCmd = &cobra.Command{
Use: "frostfs-cli",
Short: "Command Line Tool to work with FrostFS",
Long: `FrostFS CLI provides all basic interactions with FrostFS and its services.
It contains commands for interaction with FrostFS nodes using different versions
of frostfs-api and some useful utilities for compiling ACL rules from JSON
notation, managing container access through protocol gates, querying network map
and much more!`,
Run: entryPoint,
}
// Execute adds all child commands to the root command and sets flags appropriately.
// This is called by main.main(). It only needs to happen once to the rootCmd.
func Execute() {
err := rootCmd.Execute()
commonCmd.ExitOnErr(rootCmd, "", err)
}
func init() {
cobra.OnInitialize(initConfig)
// use stdout as default output for cmd.Print()
rootCmd.SetOut(os.Stdout)
// Here you will define your flags and configuration settings.
// Cobra supports persistent flags, which, if defined here,
// will be global for your application.
rootCmd.PersistentFlags().StringVarP(&cfgFile, "config", "c", "", "Config file (default is $HOME/.config/frostfs-cli/config.yaml)")
rootCmd.PersistentFlags().StringVar(&cfgDir, "config-dir", "", "Config directory")
rootCmd.PersistentFlags().BoolP(commonflags.Verbose, commonflags.VerboseShorthand,
false, commonflags.VerboseUsage)
_ = viper.BindPFlag(commonflags.Verbose, rootCmd.PersistentFlags().Lookup(commonflags.Verbose))
// Cobra also supports local flags, which will only run
// when this action is called directly.
rootCmd.Flags().Bool("version", false, "Application version and FrostFS API compatibility")
rootCmd.AddCommand(acl.Cmd)
rootCmd.AddCommand(bearerCli.Cmd)
rootCmd.AddCommand(sessionCli.Cmd)
rootCmd.AddCommand(accountingCli.Cmd)
rootCmd.AddCommand(controlCli.Cmd)
rootCmd.AddCommand(utilCli.Cmd)
rootCmd.AddCommand(netmapCli.Cmd)
rootCmd.AddCommand(objectCli.Cmd)
rootCmd.AddCommand(containerCli.Cmd)
rootCmd.AddCommand(tree.Cmd)
rootCmd.AddCommand(gendoc.Command(rootCmd, gendoc.Options{}))
}
func entryPoint(cmd *cobra.Command, _ []string) {
printVersion, _ := cmd.Flags().GetBool("version")
if printVersion {
cmd.Print(misc.BuildInfo("FrostFS CLI"))
return
}
_ = cmd.Usage()
}
// initConfig reads in config file and ENV variables if set.
func initConfig() {
if cfgFile != "" {
// Use config file from the flag.
viper.SetConfigFile(cfgFile)
} else {
// Find home directory.
home, err := homedir.Dir()
commonCmd.ExitOnErr(rootCmd, "", err)
// Search config in `$HOME/.config/frostfs-cli/` with name "config.yaml"
viper.AddConfigPath(filepath.Join(home, ".config", "frostfs-cli"))
viper.SetConfigName("config")
viper.SetConfigType("yaml")
}
viper.SetEnvPrefix(envPrefix)
viper.AutomaticEnv() // read in environment variables that match
// If a config file is found, read it in.
if err := viper.ReadInConfig(); err == nil {
common.PrintVerbose(rootCmd, "Using config file: %s", viper.ConfigFileUsed())
}
if cfgDir != "" {
if err := config.ReadConfigDir(viper.GetViper(), cfgDir); err != nil {
commonCmd.ExitOnErr(rootCmd, "failed to read config dir: %w", err)
}
}
}

@ -1,101 +0,0 @@
package tree
import (
"crypto/sha256"
"fmt"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tree"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/spf13/cobra"
)
var addCmd = &cobra.Command{
Use: "add",
Short: "Add a node to the tree service",
Run: add,
PersistentPreRun: func(cmd *cobra.Command, _ []string) {
commonflags.Bind(cmd)
},
}
func initAddCmd() {
commonflags.Init(addCmd)
initCTID(addCmd)
ff := addCmd.Flags()
ff.StringSlice(metaFlagKey, nil, "Meta pairs in the form of Key1=[0x]Value1,Key2=[0x]Value2")
ff.Uint64(parentIDFlagKey, 0, "Parent node ID")
_ = cobra.MarkFlagRequired(ff, commonflags.RPC)
}
func add(cmd *cobra.Command, _ []string) {
pk := key.GetOrGenerate(cmd)
var cnr cid.ID
err := cnr.DecodeString(cmd.Flag(commonflags.CIDFlag).Value.String())
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", err)
tid, _ := cmd.Flags().GetString(treeIDFlagKey)
pid, _ := cmd.Flags().GetUint64(parentIDFlagKey)
meta, err := parseMeta(cmd)
commonCmd.ExitOnErr(cmd, "meta data parsing: %w", err)
ctx := cmd.Context()
cli, err := _client(ctx)
commonCmd.ExitOnErr(cmd, "failed to create client: %w", err)
rawCID := make([]byte, sha256.Size)
cnr.Encode(rawCID)
var bt []byte
if t := common.ReadBearerToken(cmd, bearerFlagKey); t != nil {
bt = t.Marshal()
}
req := new(tree.AddRequest)
req.Body = &tree.AddRequest_Body{
ContainerId: rawCID,
TreeId: tid,
ParentId: pid,
Meta: meta,
BearerToken: bt,
}
commonCmd.ExitOnErr(cmd, "signing message: %w", tree.SignMessage(req, pk))
resp, err := cli.Add(ctx, req)
commonCmd.ExitOnErr(cmd, "failed to cal add: %w", err)
cmd.Println("Node ID: ", resp.Body.NodeId)
}
func parseMeta(cmd *cobra.Command) ([]*tree.KeyValue, error) {
raws, _ := cmd.Flags().GetStringSlice(metaFlagKey)
if len(raws) == 0 {
return nil, nil
}
pairs := make([]*tree.KeyValue, 0, len(raws))
for i := range raws {
k, v, found := strings.Cut(raws[i], "=")
if !found {
return nil, fmt.Errorf("invalid meta pair format: %s", raws[i])
}
var pair tree.KeyValue
pair.Key = k
pair.Value = []byte(v)
pairs = append(pairs, &pair)
}
return pairs, nil
}
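The meta flag takes comma-separated pairs: cobra's StringSlice flag handles the comma splitting, and parseMeta then cuts each pair on the first "=". A small illustrative sketch with made-up values (not part of this diff):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// What --meta FileName=cat.jpg,Kind=picture looks like after the
	// comma splitting, and how each pair is cut on the first "=".
	for _, raw := range []string{"FileName=cat.jpg", "Kind=picture", "broken-pair"} {
		k, v, found := strings.Cut(raw, "=")
		if !found {
			fmt.Printf("%-18q -> rejected: invalid meta pair format\n", raw)
			continue
		}
		// As parseMeta above is written, the value is stored verbatim as bytes.
		fmt.Printf("%-18q -> Key=%q Value=%q\n", raw, k, []byte(v))
	}
}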

@ -1,99 +0,0 @@
package tree
import (
"crypto/sha256"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tree"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"github.com/spf13/cobra"
)
var addByPathCmd = &cobra.Command{
Use: "add-by-path",
Short: "Add a node by the path",
Run: addByPath,
PersistentPreRun: func(cmd *cobra.Command, _ []string) {
commonflags.Bind(cmd)
},
}
func initAddByPathCmd() {
commonflags.Init(addByPathCmd)
initCTID(addByPathCmd)
ff := addByPathCmd.Flags()
// tree service does not allow any attribute except
// the 'FileName' but that's a limitation of the
// current implementation, not the rule
// ff.String(pathAttributeFlagKey, "", "Path attribute")
ff.String(pathFlagKey, "", "Path to a node")
ff.StringSlice(metaFlagKey, nil, "Meta pairs in the form of Key1=[0x]Value1,Key2=[0x]Value2")
_ = cobra.MarkFlagRequired(ff, commonflags.RPC)
_ = cobra.MarkFlagRequired(ff, pathFlagKey)
}
func addByPath(cmd *cobra.Command, _ []string) {
pk := key.GetOrGenerate(cmd)
cidRaw, _ := cmd.Flags().GetString(commonflags.CIDFlag)
var cnr cid.ID
err := cnr.DecodeString(cidRaw)
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", err)
tid, _ := cmd.Flags().GetString(treeIDFlagKey)
ctx := cmd.Context()
cli, err := _client(ctx)
commonCmd.ExitOnErr(cmd, "failed to create client: %w", err)
rawCID := make([]byte, sha256.Size)
cnr.Encode(rawCID)
meta, err := parseMeta(cmd)
commonCmd.ExitOnErr(cmd, "meta data parsing: %w", err)
path, _ := cmd.Flags().GetString(pathFlagKey)
var bt []byte
if t := common.ReadBearerToken(cmd, bearerFlagKey); t != nil {
bt = t.Marshal()
}
req := new(tree.AddByPathRequest)
req.Body = &tree.AddByPathRequest_Body{
ContainerId: rawCID,
TreeId: tid,
PathAttribute: objectSDK.AttributeFileName,
// PathAttribute: pAttr,
Path: strings.Split(path, "/"),
Meta: meta,
BearerToken: bt,
}
commonCmd.ExitOnErr(cmd, "signing message: %w", tree.SignMessage(req, pk))
resp, err := cli.AddByPath(ctx, req)
commonCmd.ExitOnErr(cmd, "failed to addByPath %w", err)
cmd.Printf("Parent ID: %d\n", resp.GetBody().GetParentId())
nn := resp.GetBody().GetNodes()
if len(nn) == 0 {
common.PrintVerbose(cmd, "No new nodes were created")
return
}
cmd.Println("Created nodes:")
for _, node := range resp.GetBody().GetNodes() {
cmd.Printf("\t%d\n", node)
}
}

@ -1,51 +0,0 @@
package tree
import (
"context"
"strings"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tree"
metrics "git.frostfs.info/TrueCloudLab/frostfs-observability/metrics/grpc"
tracing "git.frostfs.info/TrueCloudLab/frostfs-observability/tracing/grpc"
"github.com/spf13/viper"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
)
// _client returns a gRPC Tree service client. Should be removed
// after the Tree API is made public.
func _client(ctx context.Context) (tree.TreeServiceClient, error) {
var netAddr network.Address
err := netAddr.FromString(viper.GetString(commonflags.RPC))
if err != nil {
return nil, err
}
opts := []grpc.DialOption{
grpc.WithBlock(),
grpc.WithChainUnaryInterceptor(
metrics.NewUnaryClientInterceptor(),
tracing.NewUnaryClientInteceptor(),
),
grpc.WithChainStreamInterceptor(
metrics.NewStreamClientInterceptor(),
tracing.NewStreamClientInterceptor(),
),
}
if !strings.HasPrefix(netAddr.URIAddr(), "grpcs:") {
opts = append(opts, grpc.WithTransportCredentials(insecure.NewCredentials()))
}
// a default connection establishing timeout
const defaultClientConnectTimeout = time.Second * 2
ctx, cancel := context.WithTimeout(ctx, defaultClientConnectTimeout)
cc, err := grpc.DialContext(ctx, netAddr.URIAddr(), opts...)
cancel()
return tree.NewTreeServiceClient(cc), err
}

@ -1,103 +0,0 @@
package tree
import (
"crypto/sha256"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tree"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"github.com/spf13/cobra"
)
var getByPathCmd = &cobra.Command{
Use: "get-by-path",
Short: "Get a node by its path",
Run: getByPath,
PersistentPreRun: func(cmd *cobra.Command, _ []string) {
commonflags.Bind(cmd)
},
}
func initGetByPathCmd() {
commonflags.Init(getByPathCmd)
initCTID(getByPathCmd)
ff := getByPathCmd.Flags()
// tree service does not allow any attribute except
// the 'FileName' but that's a limitation of the
// current implementation, not the rule
// ff.String(pathAttributeFlagKey, "", "Path attribute")
ff.String(pathFlagKey, "", "Path to a node")
ff.Bool(latestOnlyFlagKey, false, "Look only for the latest version of a node")
_ = cobra.MarkFlagRequired(ff, commonflags.RPC)
}
func getByPath(cmd *cobra.Command, _ []string) {
pk := key.GetOrGenerate(cmd)
cidRaw, _ := cmd.Flags().GetString(commonflags.CIDFlag)
var cnr cid.ID
err := cnr.DecodeString(cidRaw)
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", err)
tid, _ := cmd.Flags().GetString(treeIDFlagKey)
ctx := cmd.Context()
cli, err := _client(ctx)
commonCmd.ExitOnErr(cmd, "failed to create client: %w", err)
rawCID := make([]byte, sha256.Size)
cnr.Encode(rawCID)
latestOnly, _ := cmd.Flags().GetBool(latestOnlyFlagKey)
path, _ := cmd.Flags().GetString(pathFlagKey)
var bt []byte
if t := common.ReadBearerToken(cmd, bearerFlagKey); t != nil {
bt = t.Marshal()
}
req := new(tree.GetNodeByPathRequest)
req.Body = &tree.GetNodeByPathRequest_Body{
ContainerId: rawCID,
TreeId: tid,
PathAttribute: objectSDK.AttributeFileName,
// PathAttribute: pAttr,
Path: strings.Split(path, "/"),
LatestOnly: latestOnly,
AllAttributes: true,
BearerToken: bt,
}
commonCmd.ExitOnErr(cmd, "signing message: %w", tree.SignMessage(req, pk))
resp, err := cli.GetNodeByPath(ctx, req)
commonCmd.ExitOnErr(cmd, "failed to call getNodeByPath: %w", err)
nn := resp.GetBody().GetNodes()
if len(nn) == 0 {
common.PrintVerbose(cmd, "The node is not found")
return
}
for _, n := range nn {
cmd.Printf("%d:\n", n.GetNodeId())
cmd.Println("\tParent ID: ", n.GetParentId())
cmd.Println("\tTimestamp: ", n.GetTimestamp())
cmd.Println("\tMeta pairs: ")
for _, kv := range n.GetMeta() {
cmd.Printf("\t\t%s: %s\n", kv.GetKey(), string(kv.GetValue()))
}
}
}
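Since the path is split naively on "/", the exact form of the --path value matters. A tiny illustrative sketch (hypothetical paths, not part of this diff):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// How the --path flag value becomes the Path field of the request body.
	for _, p := range []string{"docs/cats/cat.jpg", "/docs/cats/cat.jpg", "docs"} {
		fmt.Printf("%-22q -> %q\n", p, strings.Split(p, "/"))
	}
	// A leading "/" produces an empty first path element, so paths are
	// usually passed without it.
}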

@ -1,92 +0,0 @@
package tree
import (
"crypto/sha256"
"errors"
"io"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tree"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/spf13/cobra"
)
var getOpLogCmd = &cobra.Command{
Use: "get-op-log",
Short: "Get logged operations starting with some height",
Run: getOpLog,
PersistentPreRun: func(cmd *cobra.Command, _ []string) {
commonflags.Bind(cmd)
},
}
func initGetOpLogCmd() {
commonflags.Init(getOpLogCmd)
initCTID(getOpLogCmd)
ff := getOpLogCmd.Flags()
ff.Uint64(heightFlagKey, 0, "Height to start with")
ff.Uint64(countFlagKey, 10, "Logged operations count")
_ = cobra.MarkFlagRequired(ff, commonflags.RPC)
}
func getOpLog(cmd *cobra.Command, _ []string) {
pk := key.GetOrGenerate(cmd)
cidRaw, _ := cmd.Flags().GetString(commonflags.CIDFlag)
var cnr cid.ID
err := cnr.DecodeString(cidRaw)
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", err)
tid, _ := cmd.Flags().GetString(treeIDFlagKey)
ctx := cmd.Context()
cli, err := _client(ctx)
commonCmd.ExitOnErr(cmd, "failed to create client: %w", err)
rawCID := make([]byte, sha256.Size)
cnr.Encode(rawCID)
height, _ := cmd.Flags().GetUint64(heightFlagKey)
count, _ := cmd.Flags().GetUint64(countFlagKey)
req := &tree.GetOpLogRequest{
Body: &tree.GetOpLogRequest_Body{
ContainerId: rawCID,
TreeId: tid,
Height: height,
Count: count,
},
}
commonCmd.ExitOnErr(cmd, "signing message: %w", tree.SignMessage(req, pk))
resp, err := cli.GetOpLog(ctx, req)
commonCmd.ExitOnErr(cmd, "get op log: %w", err)
opLogResp, err := resp.Recv()
for ; err == nil; opLogResp, err = resp.Recv() {
o := opLogResp.GetBody().GetOperation()
cmd.Println("Parent ID: ", o.GetParentId())
cmd.Println("\tChild ID: ", o.GetChildId())
m := &pilorama.Meta{}
err = m.FromBytes(o.GetMeta())
commonCmd.ExitOnErr(cmd, "could not unmarshal meta: %w", err)
cmd.Printf("\tMeta:\n")
cmd.Printf("\t\tTime: %d\n", m.Time)
for _, item := range m.Items {
cmd.Printf("\t\t%s: %s\n", item.Key, item.Value)
}
}
if !errors.Is(err, io.EOF) {
commonCmd.ExitOnErr(cmd, "get op log response stream: %w", err)
}
}

@ -1,43 +0,0 @@
package tree
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tree"
"github.com/spf13/cobra"
)
var healthcheckCmd = &cobra.Command{
Use: "healthcheck",
Short: "Check tree service availability",
Run: healthcheck,
PersistentPreRun: func(cmd *cobra.Command, _ []string) {
commonflags.Bind(cmd)
},
}
func initHealthcheckCmd() {
commonflags.Init(healthcheckCmd)
ff := healthcheckCmd.Flags()
_ = cobra.MarkFlagRequired(ff, commonflags.RPC)
}
func healthcheck(cmd *cobra.Command, _ []string) {
pk := key.GetOrGenerate(cmd)
ctx := cmd.Context()
cli, err := _client(ctx)
commonCmd.ExitOnErr(cmd, "failed to create client: %w", err)
req := &tree.HealthcheckRequest{
Body: &tree.HealthcheckRequest_Body{},
}
commonCmd.ExitOnErr(cmd, "signing message: %w", tree.SignMessage(req, pk))
_, err = cli.Healthcheck(ctx, req)
commonCmd.ExitOnErr(cmd, "failed to call healthcheck: %w", err)
common.PrintVerbose(cmd, "Successful healthcheck invocation.")
}
