Compare commits


7 commits

Author SHA1 Message Date
9551f34f00 [#163] Support JSON bearer token
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2025-01-09 11:26:37 +03:00
a4e3767d4b [#175] Adopt 1.6.* aio versions in integration tests
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-12-24 08:01:33 +00:00
d32ac4b537 Release v0.32.0
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-12-20 15:23:02 +03:00
a658f3adc0 [#181] index_page: Ignore deleted objects in versioned buckets
Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-12-17 13:06:57 +00:00
a945a947ac [#183] Unlink API.md from README file
This is useful for auto-generated documentation tools that parse the docs directory.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-12-17 13:03:02 +00:00
1be92fa4be [#166] Fix getting s3 object with the FrostFS OID name
For GET and HEAD requests, prioritize getting the S3 object whose key equals a valid FrostFS OID over getting a non-existent object by that OID via the native protocol (a sketch of the resulting routing follows this commit entry).

Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-12-17 10:32:22 +03:00
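A minimal sketch of the routing priority described above, using illustrative names rather than the gateway's actual types (the real implementation is `DownloadByAddressOrBucketName` and `HeadByAddressOrBucketName` in the handler diff further below):

```go
package main

import "fmt"

// routeRequest models the decision only; both flags are assumptions about the
// incoming request, not real handler fields.
func routeRequest(bucketSettingsNodeExists, segmentIsValidOID bool) string {
	switch {
	case bucketSettingsNodeExists:
		// The container is an S3 bucket: resolve the path segment as an S3
		// key via the tree service, even if it also parses as an OID.
		return "resolve as S3 key"
	case segmentIsValidOID:
		return "resolve as native FrostFS OID"
	default:
		return "not found / browse index"
	}
}

func main() {
	// A key that happens to look like an OID is still served through the S3
	// path, which is the point of this change.
	fmt.Println(routeRequest(true, true))
}
```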
dc100f03a6 [#174] Add fallback path to search
A fallback search path is needed because some software keeps the FileName attribute and ignores the FilePath attribute during file upload. Therefore, when this feature is enabled, a search by the FileName attribute is performed under certain conditions (for details, see gate-configuration.md; a sketch follows this commit list).

Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-12-16 10:43:34 +00:00
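The fallback flow itself, as a minimal standalone sketch (illustrative names only; the real logic is `findObjectByAttribute` in the handler diff further below, and the enabling condition is documented in docs/gate-configuration.md):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("object not found")

// searchFn stands in for the gateway's attribute search.
type searchFn func(attrKey, attrVal string) (string, error)

// findObject looks an object up by FilePath first and, if the fallback is
// enabled and nothing was found, repeats the search by FileName.
func findObject(search searchFn, fallbackEnabled bool, val string) (string, error) {
	id, err := search("FilePath", val)
	if errors.Is(err, errNotFound) && fallbackEnabled {
		return search("FileName", val)
	}
	return id, err
}

func main() {
	// The uploader kept FileName but dropped FilePath, so only the second
	// search succeeds.
	search := func(key, _ string) (string, error) {
		if key == "FileName" {
			return "found-object-id", nil
		}
		return "", errNotFound
	}
	fmt.Println(findObject(search, true, "cat.png"))
}
```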
24 changed files with 649 additions and 341 deletions


@ -4,6 +4,19 @@ This document outlines major changes between releases.
## [Unreleased]
## [0.32.0] - Khumbu - 2024-12-20
### Fixed
- Getting S3 object with FrostFS Object ID-like key (#166)
- Ignore delete marked objects in versioned bucket in index page (#181)
### Added
- Metric of dropped logs by log sampler (#150)
- Fallback FileName attribute search during FilePath attribute search (#174)
### Changed
- Updated tree service pool without api-go dependency (#178)
## [0.31.0] - Rongbuk - 2024-11-20
### Fixed
@ -170,4 +183,5 @@ To see CHANGELOG for older versions, refer to https://github.com/nspcc-dev/neofs
[0.30.2]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.1...v0.30.2
[0.30.3]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.2...v0.30.3
[0.31.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.3...v0.31.0
[Unreleased]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.31.0...master
[0.32.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.31.0...v0.32.0
[Unreleased]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.32.0...master

README.md

@ -217,41 +217,8 @@ Also, in case of downloading, you need to have a file inside a container.
### NNS
In all download/upload routes you can use container name instead of its id (`$CID`).
Read more about it in [docs/nns.md](./docs/nns.md).
Steps to start using name resolving:
1. Enable NNS resolving in config (`rpc_endpoint` must be a valid neo rpc node, see [configs](./config) for other examples):
```yaml
rpc_endpoint: http://morph-chain.frostfs.devenv:30333
resolve_order:
- nns
```
2. Make sure your container is registered in NNS contract. If you use [frostfs-dev-env](https://git.frostfs.info/TrueCloudLab/frostfs-dev-env)
you can check if your container (e.g. with `container-name` name) is registered in NNS:
```shell
$ curl -s --data '{"id":1,"jsonrpc":"2.0","method":"getcontractstate","params":[1]}' \
http://morph-chain.frostfs.devenv:30333 | jq -r '.result.hash'
0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667
$ docker exec -it morph_chain neo-go \
contract testinvokefunction \
-r http://morph-chain.frostfs.devenv:30333 0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667 \
resolve string:container-name.container int:16 \
| jq -r '.stack[0].value | if type=="array" then .[0].value else . end' \
| base64 -d && echo
7f3vvkw4iTiS5ZZbu5BQXEmJtETWbi3uUjLNaSs29xrL
```
3. Use container name instead of its `$CID`. For example:
```shell
$ curl http://localhost:8082/get_by_attribute/container-name/FileName/object-name
```
#### Create a container
@ -462,109 +429,7 @@ object ID, like this:
#### Authentication
You can always upload files to public containers (open for anyone to put
objects into), but for restricted containers you need to explicitly allow PUT
operations for a request signed with your HTTP Gateway keys.
If you don't want to manage gateway's secret keys and adjust policies when
gateway configuration changes (new gate, key rotation, etc) or you plan to use
public services, there is an option to let your application backend (or you) to
issue Bearer Tokens and pass them from the client via gate down to FrostFS level
to grant access.
FrostFS Bearer Token basically is a container owner-signed policy (refer to FrostFS
documentation for more details). There are two options to pass them to gateway:
* "Authorization" header with "Bearer" type and base64-encoded token in
credentials field
* "Bearer" cookie with base64-encoded token contents
For example, you have a mobile application frontend with a backend part storing
data in FrostFS. When a user authorizes in the mobile app, the backend issues a FrostFS
Bearer token and provides it to the frontend. Then, the mobile app may generate
some data and upload it via any available FrostFS HTTP Gateway by adding
the corresponding header to the upload request. Accessing policy protected data
works the same way.
##### Example
In order to generate a bearer token, you need to have wallet (which will be used to sign the token)
1. Suppose you have a container with private policy for wallet key
```
$ frostfs-cli container create -r <endpoint> --wallet <wallet> -policy <policy> --basic-acl 0 --await
CID: 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z
$ frostfs-cli ape-manager add -r <endpoint> --wallet <wallet> \
--target-type container --target-name 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z \
--rule "allow Object.* RequestCondition:"\$Actor:publicKey"=03b09baabff3f6107c7e9acb8721a6fc5618d45b50247a314d82e548702cce8cd5 *" \
--chain-id <chainID>
```
2. Form a Bearer token (10000 is lifetime expiration in epoch) to impersonate
HTTP Gateway request as wallet signed request and save it to **bearer.json**:
```
{
"body": {
"allowImpersonate": true,
"lifetime": {
"exp": "10000",
"nbf": "0",
"iat": "0"
}
},
"signature": null
}
```
3. Sign it with the wallet:
```
$ frostfs-cli util sign bearer-token --from bearer.json --to signed.json -w <wallet>
```
4. Encode to base64 to use in header:
```
$ base64 -w 0 signed.json
# output: Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==
```
After that, the Bearer token can be used:
```
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' -H "Authorization: Bearer Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==" \
http://localhost:8082/upload/BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
# output:
# {
# "object_id": "DhfES9nVrFksxGDD2jQLunGADfrXExxNwqXbDafyBn9X",
# "container_id": "BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K"
# }
```
##### Note: Bearer Token owner
You can specify exact key who can use Bearer Token (gateway wallet address).
To do this, encode wallet address in base64 format
```
$ echo 'NhVtreTTCoqsMQV5Wp55fqnriiUCpEaKm3' | base58 --decode | base64
# output: NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg==
```
Then specify this value in Bearer Token Json
```
{
"body": {
"ownerID": {
"value": "NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg=="
},
...
```
##### Note: Policy override
Instead of impersonation, you can define the set of policies that will be applied
to the request sender. This allows to restrict access to specific operation and
specific objects without giving full impersonation control to the token user.
Read more about request authentication in [docs/authentication.md](./docs/authentication.md)
### Metrics and Pprof


@ -1 +1 @@
v0.31.0
v0.32.0


@ -95,21 +95,22 @@ type (
dialerSource *internalnet.DialerSource
workerPoolSize int
mu sync.RWMutex
defaultTimestamp bool
zipCompression bool
clientCut bool
returnIndexPage bool
indexPageTemplate string
bufferMaxSizeForPut uint64
namespaceHeader string
defaultNamespaces []string
corsAllowOrigin string
corsAllowMethods []string
corsAllowHeaders []string
corsExposeHeaders []string
corsAllowCredentials bool
corsMaxAge int
mu sync.RWMutex
defaultTimestamp bool
zipCompression bool
clientCut bool
returnIndexPage bool
indexPageTemplate string
bufferMaxSizeForPut uint64
namespaceHeader string
defaultNamespaces []string
corsAllowOrigin string
corsAllowMethods []string
corsAllowHeaders []string
corsExposeHeaders []string
corsAllowCredentials bool
corsMaxAge int
enableFilepathFallback bool
}
CORS struct {
@ -189,6 +190,7 @@ func (s *appSettings) update(v *viper.Viper, l *zap.Logger) {
corsExposeHeaders := v.GetStringSlice(cfgCORSExposeHeaders)
corsAllowCredentials := v.GetBool(cfgCORSAllowCredentials)
corsMaxAge := fetchCORSMaxAge(v)
enableFilepathFallback := v.GetBool(cfgFeaturesEnableFilepathFallback)
s.mu.Lock()
defer s.mu.Unlock()
@ -208,6 +210,7 @@ func (s *appSettings) update(v *viper.Viper, l *zap.Logger) {
s.corsExposeHeaders = corsExposeHeaders
s.corsAllowCredentials = corsAllowCredentials
s.corsMaxAge = corsMaxAge
s.enableFilepathFallback = enableFilepathFallback
}
func (s *loggerSettings) DroppedLogsInc() {
@ -305,6 +308,12 @@ func (s *appSettings) FormContainerZone(ns string) (zone string, isDefault bool)
return ns + ".ns", false
}
func (s *appSettings) EnableFilepathFallback() bool {
s.mu.RLock()
defer s.mu.RUnlock()
return s.enableFilepathFallback
}
func (a *app) initResolver() {
var err error
a.resolver, err = resolver.NewContainerResolver(a.getResolverConfig())
@ -499,10 +508,10 @@ func (a *app) Serve() {
close(a.webDone)
}()
handler := handler.New(a.AppParams(), a.settings, tree.NewTree(frostfs.NewPoolWrapper(a.treePool)), workerPool)
handle := handler.New(a.AppParams(), a.settings, tree.NewTree(frostfs.NewPoolWrapper(a.treePool)), workerPool)
// Configure router.
a.configureRouter(handler)
a.configureRouter(handle)
a.startServices()
a.initServers(a.ctx)


@ -14,6 +14,7 @@ import (
"net/http"
"os"
"sort"
"strings"
"testing"
"time"
@ -54,6 +55,7 @@ func TestIntegration(t *testing.T) {
"1.2.7",
"1.3.0",
"1.5.0",
"1.6.5",
}
key, err := keys.NewPrivateKeyFromHex("1dd37fba80fec4e6a6f13fd708d8dcb3b29def768017052f6c930fa1c5d90bbb")
require.NoError(t, err)
@ -70,23 +72,28 @@ func TestIntegration(t *testing.T) {
ctx, cancel2 := context.WithCancel(rootCtx)
aioContainer := createDockerContainer(ctx, t, aioImage+version)
if strings.HasPrefix(version, "1.6") {
registerUser(t, ctx, aioContainer, file.Name())
}
// See the logs from the command execution.
server, cancel := runServer(file.Name())
clientPool := getPool(ctx, t, key)
CID, err := createContainer(ctx, t, clientPool, ownerID, version)
CID, err := createContainer(ctx, t, clientPool, ownerID)
require.NoError(t, err, version)
jsonToken, binaryToken := makeBearerTokens(t, key, ownerID, version)
t.Run("simple put "+version, func(t *testing.T) { simplePut(ctx, t, clientPool, CID, version) })
t.Run("simple put "+version, func(t *testing.T) { simplePut(ctx, t, clientPool, CID) })
t.Run("put with json bearer token in header"+version, func(t *testing.T) { putWithBearerTokenInHeader(ctx, t, clientPool, CID, jsonToken) })
t.Run("put with json bearer token in cookie"+version, func(t *testing.T) { putWithBearerTokenInCookie(ctx, t, clientPool, CID, jsonToken) })
t.Run("put with binary bearer token in header"+version, func(t *testing.T) { putWithBearerTokenInHeader(ctx, t, clientPool, CID, binaryToken) })
t.Run("put with binary bearer token in cookie"+version, func(t *testing.T) { putWithBearerTokenInCookie(ctx, t, clientPool, CID, binaryToken) })
t.Run("put with duplicate keys "+version, func(t *testing.T) { putWithDuplicateKeys(t, CID) })
t.Run("simple get "+version, func(t *testing.T) { simpleGet(ctx, t, clientPool, ownerID, CID, version) })
t.Run("get by attribute "+version, func(t *testing.T) { getByAttr(ctx, t, clientPool, ownerID, CID, version) })
t.Run("get zip "+version, func(t *testing.T) { getZip(ctx, t, clientPool, ownerID, CID, version) })
t.Run("test namespaces "+version, func(t *testing.T) { checkNamespaces(ctx, t, clientPool, ownerID, CID, version) })
t.Run("simple get "+version, func(t *testing.T) { simpleGet(ctx, t, clientPool, ownerID, CID) })
t.Run("get by attribute "+version, func(t *testing.T) { getByAttr(ctx, t, clientPool, ownerID, CID) })
t.Run("get zip "+version, func(t *testing.T) { getZip(ctx, t, clientPool, ownerID, CID) })
t.Run("test namespaces "+version, func(t *testing.T) { checkNamespaces(ctx, t, clientPool, ownerID, CID) })
cancel()
server.Wait()
@ -109,7 +116,7 @@ func runServer(pathToWallet string) (App, context.CancelFunc) {
return application, cancel
}
func simplePut(ctx context.Context, t *testing.T, p *pool.Pool, CID cid.ID, version string) {
func simplePut(ctx context.Context, t *testing.T, p *pool.Pool, CID cid.ID) {
url := testHost + "/upload/" + CID.String()
makePutRequestAndCheck(ctx, t, p, CID, url)
@ -257,7 +264,7 @@ func putWithDuplicateKeys(t *testing.T, CID cid.ID) {
require.Equal(t, http.StatusBadRequest, resp.StatusCode)
}
func simpleGet(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
func simpleGet(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
content := "content of file"
attributes := map[string]string{
"some-attr": "some-get-value",
@ -304,7 +311,7 @@ func checkGetByAttrResponse(t *testing.T, resp *http.Response, content string, a
}
}
func getByAttr(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
func getByAttr(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
keyAttr, valAttr := "some-attr", "some-get-by-attr-value"
content := "content of file"
attributes := map[string]string{keyAttr: valAttr}
@ -326,7 +333,7 @@ func getByAttr(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID
checkGetByAttrResponse(t, resp, content, expectedAttr)
}
func getZip(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
func getZip(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
names := []string{"zipfolder/dir/name1.txt", "zipfolder/name2.txt"}
contents := []string{"content of file1", "content of file2"}
attributes1 := map[string]string{object.AttributeFilePath: names[0]}
@ -391,7 +398,7 @@ func checkZip(t *testing.T, data []byte, length int64, names, contents []string)
}
}
func checkNamespaces(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
func checkNamespaces(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
content := "content of file"
attributes := map[string]string{
"some-attr": "some-get-value",
@ -428,7 +435,7 @@ func checkNamespaces(ctx context.Context, t *testing.T, clientPool *pool.Pool, o
func createDockerContainer(ctx context.Context, t *testing.T, image string) testcontainers.Container {
req := testcontainers.ContainerRequest{
Image: image,
WaitingFor: wait.NewLogStrategy("aio container started").WithStartupTimeout(30 * time.Second),
WaitingFor: wait.NewLogStrategy("aio container started").WithStartupTimeout(2 * time.Minute),
Name: "aio",
Hostname: "aio",
NetworkMode: "host",
@ -468,7 +475,7 @@ func getPool(ctx context.Context, t *testing.T, key *keys.PrivateKey) *pool.Pool
return clientPool
}
func createContainer(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, version string) (cid.ID, error) {
func createContainer(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID) (cid.ID, error) {
var policy netmap.PlacementPolicy
err := policy.DecodeString("REP 1")
require.NoError(t, err)
@ -528,6 +535,18 @@ func putObject(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID
return id.ObjectID
}
func registerUser(t *testing.T, ctx context.Context, aioContainer testcontainers.Container, pathToWallet string) {
err := aioContainer.CopyFileToContainer(ctx, pathToWallet, "/usr/wallet.json", 644)
require.NoError(t, err)
_, err = aioContainer.Exec(ctx, []string{
"/usr/bin/frostfs-s3-authmate", "register-user",
"--wallet", "/usr/wallet.json",
"--rpc-endpoint", "http://localhost:30333",
"--contract-wallet", "/config/s3-gw-wallet.json"})
require.NoError(t, err)
}
func makeBearerTokens(t *testing.T, key *keys.PrivateKey, ownerID user.ID, version string) (jsonTokenBase64, binaryTokenBase64 string) {
tkn := new(bearer.Token)
tkn.ForUser(ownerID)


@ -164,6 +164,9 @@ const (
cfgMultinetFallbackDelay = "multinet.fallback_delay"
cfgMultinetSubnets = "multinet.subnets"
// Feature.
cfgFeaturesEnableFilepathFallback = "features.enable_filepath_fallback"
// Command line args.
cmdHelp = "help"
cmdVersion = "version"


@ -158,4 +158,7 @@ HTTP_GW_WORKER_POOL_SIZE=1000
# Enable index page support
HTTP_GW_INDEX_PAGE_ENABLED=false
# Index page template path
HTTP_GW_INDEX_PAGE_TEMPLATE_PATH=internal/handler/templates/index.gotmpl
HTTP_GW_INDEX_PAGE_TEMPLATE_PATH=internal/handler/templates/index.gotmpl
# Enable using fallback path to search for an object by attribute
HTTP_GW_FEATURES_ENABLE_FILEPATH_FALLBACK=false


@ -172,3 +172,7 @@ multinet:
source_ips:
- 1.2.3.4
- 1.2.3.5
features:
# Enable using fallback path to search for an object by attribute
enable_filepath_fallback: false


@ -8,7 +8,7 @@
| `/zip/{cid}/{prefix}` | [Download objects in archive](#download-zip) |
**Note:** `cid` parameter can be base58 encoded container ID or container name
(the name must be registered in NNS, see appropriate section in [README](../README.md#nns)).
(the name must be registered in NNS, see appropriate section in [nns.md](./nns.md)).
Route parameters can be:
@ -18,7 +18,7 @@ Route parameters can be:
### Bearer token
All routes can accept [bearer token](../README.md#authentication) from:
All routes can accept [bearer token](./authentication.md) from:
* `Authorization` header with `Bearer` type and base64-encoded token in
credentials field

docs/authentication.md (new file)

@ -0,0 +1,108 @@
# Request authentication
HTTP Gateway does not authorize requests. The gateway converts an HTTP request into a FrostFS request and signs it with its own private key.
You can always upload files to public containers (open for anyone to put
objects into), but for restricted containers you need to explicitly allow PUT
operations for a request signed with your HTTP Gateway keys.
If you don't want to manage the gateway's secret keys and adjust policies whenever the gateway configuration changes (a new gate, key rotation, etc.), or you plan to use public services, you can let your application backend (or you) issue Bearer Tokens and pass them from the client through the gate down to the FrostFS level to grant access.
A FrostFS Bearer Token is essentially a container owner-signed policy (refer to the FrostFS documentation for more details). There are two ways to pass it to the gateway:
* "Authorization" header with "Bearer" type and base64-encoded token in
credentials field
* "Bearer" cookie with base64-encoded token contents
For example, you have a mobile application frontend with a backend part storing
data in FrostFS. When a user authorizes in the mobile app, the backend issues a FrostFS
Bearer token and provides it to the frontend. Then, the mobile app may generate
some data and upload it via any available FrostFS HTTP Gateway by adding
the corresponding header to the upload request. Accessing policy protected data
works the same way.
##### Example
In order to generate a bearer token, you need to have a wallet (which will be used to sign the token).
1. Suppose you have a container with a private policy for the wallet key
```
$ frostfs-cli container create -r <endpoint> --wallet <wallet> --policy <policy> --basic-acl 0 --await
CID: 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z
$ frostfs-cli ape-manager add -r <endpoint> --wallet <wallet> \
--target-type container --target-name 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z \
--rule "allow Object.* RequestCondition:"\$Actor:publicKey"=03b09baabff3f6107c7e9acb8721a6fc5618d45b50247a314d82e548702cce8cd5 *" \
--chain-id <chainID>
```
2. Form a Bearer token (`10000` is the expiration epoch) that allows the HTTP Gateway request to impersonate a wallet-signed request, and save it to **bearer.json**:
```
{
"body": {
"allowImpersonate": true,
"lifetime": {
"exp": "10000",
"nbf": "0",
"iat": "0"
}
},
"signature": null
}
```
3. Sign it with the wallet:
```
$ frostfs-cli util sign bearer-token --from bearer.json --to signed.json -w <wallet>
```
4. Encode to base64 to use in header:
```
$ base64 -w 0 signed.json
# output: Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==
```
After that, the Bearer token can be used:
```
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' -H "Authorization: Bearer Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==" \
http://localhost:8082/upload/BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
# output:
# {
# "object_id": "DhfES9nVrFksxGDD2jQLunGADfrXExxNwqXbDafyBn9X",
# "container_id": "BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K"
# }
```
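Both transports can also be set from code. A minimal Go sketch, assuming the gateway's `/get/{cid}/{oid}` download route and reusing the container and object IDs from the example output above; the token value is a placeholder for the base64 output of step 4, and only one of the two options is required per request:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Placeholder for the output of `base64 -w 0 signed.json`.
	const token = "Ck4KKgoECAIQBhIiCiCZ...base64-encoded-signed.json"

	req, err := http.NewRequest(http.MethodGet,
		"http://localhost:8082/get/BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K/DhfES9nVrFksxGDD2jQLunGADfrXExxNwqXbDafyBn9X", nil)
	if err != nil {
		panic(err)
	}

	// Option 1: "Authorization" header with the "Bearer" type.
	req.Header.Set("Authorization", "Bearer "+token)

	// Option 2: "Bearer" cookie with the same base64 token contents.
	req.AddCookie(&http.Cookie{Name: "Bearer", Value: token})

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```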
##### Note: Bearer Token owner
You can specify the exact key that may use the Bearer Token (the gateway wallet address).
To do this, encode the wallet address in base64 format:
```
$ echo 'NhVtreTTCoqsMQV5Wp55fqnriiUCpEaKm3' | base58 --decode | base64
# output: NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg==
```
Then specify this value in the Bearer Token JSON:
```
{
"body": {
"ownerID": {
"value": "NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg=="
},
...
```
##### Note: Policy override
Instead of impersonation, you can define the set of policies that will be applied
to the request sender. This allows restricting access to specific operations and
specific objects without giving the token user full impersonation control.


@ -59,7 +59,7 @@ $ cat http.log
| `resolve_bucket` | [Bucket name resolving configuration](#resolve_bucket-section) |
| `index_page` | [Index page configuration](#index_page-section) |
| `multinet` | [Multinet configuration](#multinet-section) |
| `features` | [Features configuration](#features-section) |
# General section
@ -457,3 +457,16 @@ multinet:
|--------------|------------|---------------|---------------|----------------------------------------------------------------------|
| `mask` | `string` | yes | | Destination subnet. |
| `source_ips` | `[]string` | yes | | Array of source IP addresses to use when dialing destination subnet. |
# `features` section
Contains parameters for enabling features.
```yaml
features:
enable_filepath_fallback: true
```
| Parameter | Type | SIGHUP reload | Default value | Description |
| ----------------------------------- | ------ | ------------- | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `features.enable_filepath_fallback` | `bool` | yes | `false` | Enable using a fallback path to search for an object by attribute. If the value of the `FilePath` attribute in the request contains no `/` symbols or a single leading `/` symbol and the object was not found, then an attempt is made to search for the object by the `FileName` attribute. |
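A standalone sketch of this rule; the gateway's own check is `needSearchByFileName` in the handler changes below:

```go
package main

import (
	"fmt"
	"strings"
)

// needFileNameFallback mirrors the documented condition: the fallback applies
// only when the feature is enabled, the request searched by FilePath, and the
// value contains no '/' or only a single leading '/'.
func needFileNameFallback(enabled bool, attrKey, attrVal string) bool {
	if !enabled || attrKey != "FilePath" {
		return false
	}
	noSlash := !strings.Contains(attrVal, "/")
	singleLeadingSlash := strings.HasPrefix(attrVal, "/") && strings.Count(attrVal, "/") == 1
	return noSlash || singleLeadingSlash
}

func main() {
	fmt.Println(needFileNameFallback(true, "FilePath", "cat.png"))      // true
	fmt.Println(needFileNameFallback(true, "FilePath", "/cat.png"))     // true
	fmt.Println(needFileNameFallback(true, "FilePath", "cats/cat.png")) // false
	fmt.Println(needFileNameFallback(false, "FilePath", "cat.png"))     // false
}
```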

docs/nns.md (new file)

@ -0,0 +1,36 @@
# Nicename Resolving with NNS
Steps to start using name resolving:
1. Enable NNS resolving in config (`rpc_endpoint` must be a valid neo rpc node, see [configs](./config) for other examples):
```yaml
rpc_endpoint: http://morph-chain.frostfs.devenv:30333
resolve_order:
- nns
```
2. Make sure your container is registered in NNS contract. If you use [frostfs-dev-env](https://git.frostfs.info/TrueCloudLab/frostfs-dev-env)
you can check if your container (e.g. with `container-name` name) is registered in NNS:
```shell
$ curl -s --data '{"id":1,"jsonrpc":"2.0","method":"getcontractstate","params":[1]}' \
http://morph-chain.frostfs.devenv:30333 | jq -r '.result.hash'
0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667
$ docker exec -it morph_chain neo-go \
contract testinvokefunction \
-r http://morph-chain.frostfs.devenv:30333 0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667 \
resolve string:container-name.container int:16 \
| jq -r '.stack[0].value | if type=="array" then .[0].value else . end' \
| base64 -d && echo
7f3vvkw4iTiS5ZZbu5BQXEmJtETWbi3uUjLNaSs29xrL
```
3. Use container name instead of its `$CID`. For example:
```shell
$ curl http://localhost:8082/get_by_attribute/container-name/FileName/object-name
```


@ -1,4 +1,4 @@
package api
package data
import (
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -7,12 +7,21 @@ import (
// NodeVersion represent node from tree service.
type NodeVersion struct {
BaseNodeVersion
DeleteMarker bool
IsPrefixNode bool
}
// BaseNodeVersion is minimal node info from tree service.
// Basically used for "system" object.
type BaseNodeVersion struct {
OID oid.ID
ID uint64
OID oid.ID
IsDeleteMarker bool
}
type NodeInfo struct {
Meta []NodeMeta
}
type NodeMeta interface {
GetKey() string
GetValue() []byte
}


@ -22,11 +22,13 @@ import (
)
const (
dateFormat = "02-01-2006 15:04"
attrOID = "OID"
attrCreated = "Created"
attrFileName = "FileName"
attrSize = "Size"
dateFormat = "02-01-2006 15:04"
attrOID = "OID"
attrCreated = "Created"
attrFileName = "FileName"
attrFilePath = "FilePath"
attrSize = "Size"
attrDeleteMarker = "IsDeleteMarker"
)
type (
@ -38,23 +40,25 @@ type (
Objects []ResponseObject
}
ResponseObject struct {
OID string
Created string
FileName string
FilePath string
Size string
IsDir bool
GetURL string
OID string
Created string
FileName string
FilePath string
Size string
IsDir bool
GetURL string
IsDeleteMarker bool
}
)
func newListObjectsResponseS3(attrs map[string]string) ResponseObject {
return ResponseObject{
Created: formatTimestamp(attrs[attrCreated]),
OID: attrs[attrOID],
FileName: attrs[attrFileName],
Size: attrs[attrSize],
IsDir: attrs[attrOID] == "",
Created: formatTimestamp(attrs[attrCreated]),
OID: attrs[attrOID],
FileName: attrs[attrFileName],
Size: attrs[attrSize],
IsDir: attrs[attrOID] == "",
IsDeleteMarker: attrs[attrDeleteMarker] == "true",
}
}
@ -169,7 +173,7 @@ func (h *Handler) getDirObjectsS3(ctx context.Context, bucketInfo *data.BucketIn
objects: make([]ResponseObject, 0, len(nodes)),
}
for _, node := range nodes {
meta := node.GetMeta()
meta := node.Meta
if meta == nil {
continue
}
@ -178,6 +182,9 @@ func (h *Handler) getDirObjectsS3(ctx context.Context, bucketInfo *data.BucketIn
attrs[m.GetKey()] = string(m.GetValue())
}
obj := newListObjectsResponseS3(attrs)
if obj.IsDeleteMarker {
continue
}
obj.FilePath = prefix + obj.FileName
obj.GetURL = "/get/" + bucketInfo.Name + urlencode(obj.FilePath)
result.objects = append(result.objects, obj)


@ -4,12 +4,14 @@ import (
"archive/zip"
"bufio"
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/response"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
@ -23,21 +25,46 @@ import (
// DownloadByAddressOrBucketName handles download requests using simple cid/oid or bucketname/key format.
func (h *Handler) DownloadByAddressOrBucketName(c *fasthttp.RequestCtx) {
oidURLParam := c.UserValue("oid").(string)
downloadQueryParam := c.QueryArgs().GetBool("download")
cidParam := c.UserValue("cid").(string)
oidParam := c.UserValue("oid").(string)
downloadParam := c.QueryArgs().GetBool("download")
switch {
case isObjectID(oidURLParam):
h.byNativeAddress(c, h.receiveFile)
case !isContainerRoot(oidURLParam) && (downloadQueryParam || !isDir(oidURLParam)):
h.byS3Path(c, h.receiveFile)
default:
h.browseIndex(c)
ctx := utils.GetContextFromRequest(c)
log := utils.GetReqLogOrDefault(ctx, h.log).With(
zap.String("cid", cidParam),
zap.String("oid", oidParam),
)
bktInfo, err := h.getBucketInfo(ctx, cidParam, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
checkS3Err := h.tree.CheckSettingsNodeExists(ctx, bktInfo)
if checkS3Err != nil && !errors.Is(checkS3Err, layer.ErrNodeNotFound) {
logAndSendBucketError(c, log, checkS3Err)
return
}
req := h.newRequest(c, log)
var objID oid.ID
if checkS3Err == nil && shouldDownload(oidParam, downloadParam) {
h.byS3Path(ctx, req, bktInfo.CID, oidParam, h.receiveFile)
} else if err = objID.DecodeString(oidParam); err == nil {
h.byNativeAddress(ctx, req, bktInfo.CID, objID, h.receiveFile)
} else {
h.browseIndex(c, checkS3Err != nil)
}
}
func (h *Handler) newRequest(ctx *fasthttp.RequestCtx, log *zap.Logger) *request {
return &request{
func shouldDownload(oidParam string, downloadParam bool) bool {
return !isDir(oidParam) || downloadParam
}
func (h *Handler) newRequest(ctx *fasthttp.RequestCtx, log *zap.Logger) request {
return request{
RequestCtx: ctx,
log: log,
}


@ -11,9 +11,9 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/cache"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/response"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tree"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
@ -35,6 +35,7 @@ type Config interface {
IndexPageTemplate() string
BufferMaxSizeForPut() uint64
NamespaceHeader() string
EnableFilepathFallback() bool
}
// PrmContainer groups parameters of FrostFS.Container operation.
@ -164,7 +165,7 @@ type Handler struct {
ownerID *user.ID
config Config
containerResolver ContainerResolver
tree *tree.Tree
tree layer.TreeService
cache *cache.BucketCache
workerPool *ants.Pool
}
@ -177,7 +178,7 @@ type AppParams struct {
Cache *cache.BucketCache
}
func New(params *AppParams, config Config, tree *tree.Tree, workerPool *ants.Pool) *Handler {
func New(params *AppParams, config Config, tree layer.TreeService, workerPool *ants.Pool) *Handler {
return &Handler{
log: params.Logger,
frostfs: params.FrostFS,
@ -192,77 +193,34 @@ func New(params *AppParams, config Config, tree *tree.Tree, workerPool *ants.Poo
// byNativeAddress is a wrapper for function (e.g. request.headObject, request.receiveFile) that
// prepares request and object address to it.
func (h *Handler) byNativeAddress(c *fasthttp.RequestCtx, f func(context.Context, request, oid.Address)) {
idCnr, _ := c.UserValue("cid").(string)
idObj, _ := url.PathUnescape(c.UserValue("oid").(string))
ctx := utils.GetContextFromRequest(c)
reqLog := utils.GetReqLogOrDefault(ctx, h.log)
log := reqLog.With(zap.String("cid", idCnr), zap.String("oid", idObj))
bktInfo, err := h.getBucketInfo(ctx, idCnr, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
objID := new(oid.ID)
if err = objID.DecodeString(idObj); err != nil {
log.Error(logs.WrongObjectID, zap.Error(err))
response.Error(c, "wrong object id", fasthttp.StatusBadRequest)
return
}
addr := newAddress(bktInfo.CID, *objID)
f(ctx, *h.newRequest(c, log), addr)
func (h *Handler) byNativeAddress(ctx context.Context, req request, cnrID cid.ID, objID oid.ID, handler func(context.Context, request, oid.Address)) {
addr := newAddress(cnrID, objID)
handler(ctx, req, addr)
}
// byS3Path is a wrapper for function (e.g. request.headObject, request.receiveFile) that
// resolves object address from S3-like path <bucket name>/<object key>.
func (h *Handler) byS3Path(c *fasthttp.RequestCtx, f func(context.Context, request, oid.Address)) {
bucketname := c.UserValue("cid").(string)
key := c.UserValue("oid").(string)
func (h *Handler) byS3Path(ctx context.Context, req request, cnrID cid.ID, path string, handler func(context.Context, request, oid.Address)) {
c, log := req.RequestCtx, req.log
ctx := utils.GetContextFromRequest(c)
reqLog := utils.GetReqLogOrDefault(ctx, h.log)
log := reqLog.With(zap.String("bucketname", bucketname), zap.String("key", key))
unescapedKey, err := url.QueryUnescape(key)
foundOID, err := h.tree.GetLatestVersion(ctx, &cnrID, path)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
bktInfo, err := h.getBucketInfo(ctx, bucketname, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
foundOid, err := h.tree.GetLatestVersion(ctx, &bktInfo.CID, unescapedKey)
if err != nil {
if errors.Is(err, tree.ErrNodeAccessDenied) {
response.Error(c, "Access Denied", fasthttp.StatusForbidden)
} else {
response.Error(c, "object wasn't found", fasthttp.StatusNotFound)
log.Error(logs.GetLatestObjectVersion, zap.Error(err))
}
return
}
if foundOid.DeleteMarker {
if foundOID.IsDeleteMarker {
log.Error(logs.ObjectWasDeleted)
response.Error(c, "object deleted", fasthttp.StatusNotFound)
return
}
addr := newAddress(bktInfo.CID, foundOid.OID)
f(ctx, *h.newRequest(c, log), addr)
addr := newAddress(cnrID, foundOID.OID)
handler(ctx, h.newRequest(c, log), addr)
}
// byAttribute is a wrapper similar to byNativeAddress.
func (h *Handler) byAttribute(c *fasthttp.RequestCtx, f func(context.Context, request, oid.Address)) {
scid, _ := c.UserValue("cid").(string)
func (h *Handler) byAttribute(c *fasthttp.RequestCtx, handler func(context.Context, request, oid.Address)) {
cidParam, _ := c.UserValue("cid").(string)
key, _ := c.UserValue("attr_key").(string)
val, _ := c.UserValue("attr_val").(string)
@ -271,55 +229,78 @@ func (h *Handler) byAttribute(c *fasthttp.RequestCtx, f func(context.Context, re
key, err := url.QueryUnescape(key)
if err != nil {
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", scid), zap.String("attr_key", key), zap.Error(err))
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", cidParam), zap.String("attr_key", key), zap.Error(err))
response.Error(c, "could not unescape attr_key: "+err.Error(), fasthttp.StatusBadRequest)
return
}
val, err = url.QueryUnescape(val)
if err != nil {
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", scid), zap.String("attr_val", val), zap.Error(err))
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", cidParam), zap.String("attr_val", val), zap.Error(err))
response.Error(c, "could not unescape attr_val: "+err.Error(), fasthttp.StatusBadRequest)
return
}
log = log.With(zap.String("cid", scid), zap.String("attr_key", key), zap.String("attr_val", val))
log = log.With(zap.String("cid", cidParam), zap.String("attr_key", key), zap.String("attr_val", val))
bktInfo, err := h.getBucketInfo(ctx, scid, log)
bktInfo, err := h.getBucketInfo(ctx, cidParam, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
res, err := h.search(ctx, bktInfo.CID, key, val, object.MatchStringEqual)
objID, err := h.findObjectByAttribute(ctx, log, bktInfo.CID, key, val)
if err != nil {
log.Error(logs.CouldNotSearchForObjects, zap.Error(err))
response.Error(c, "could not search for objects: "+err.Error(), fasthttp.StatusBadRequest)
if errors.Is(err, io.EOF) {
response.Error(c, err.Error(), fasthttp.StatusNotFound)
return
}
response.Error(c, err.Error(), fasthttp.StatusBadRequest)
return
}
var addr oid.Address
addr.SetContainer(bktInfo.CID)
addr.SetObject(objID)
handler(ctx, h.newRequest(c, log), addr)
}
func (h *Handler) findObjectByAttribute(ctx context.Context, log *zap.Logger, cnrID cid.ID, attrKey, attrVal string) (oid.ID, error) {
res, err := h.search(ctx, cnrID, attrKey, attrVal, object.MatchStringEqual)
if err != nil {
log.Error(logs.CouldNotSearchForObjects, zap.Error(err))
return oid.ID{}, fmt.Errorf("could not search for objects: %w", err)
}
defer res.Close()
buf := make([]oid.ID, 1)
n, err := res.Read(buf)
if n == 0 {
if errors.Is(err, io.EOF) {
switch {
case errors.Is(err, io.EOF) && h.needSearchByFileName(attrKey, attrVal):
log.Debug(logs.ObjectNotFoundByFilePathTrySearchByFileName)
return h.findObjectByAttribute(ctx, log, cnrID, attrFileName, attrVal)
case errors.Is(err, io.EOF):
log.Error(logs.ObjectNotFound, zap.Error(err))
response.Error(c, "object not found", fasthttp.StatusNotFound)
return
return oid.ID{}, fmt.Errorf("object not found: %w", err)
default:
log.Error(logs.ReadObjectListFailed, zap.Error(err))
return oid.ID{}, fmt.Errorf("read object list failed: %w", err)
}
log.Error(logs.ReadObjectListFailed, zap.Error(err))
response.Error(c, "read object list failed: "+err.Error(), fasthttp.StatusBadRequest)
return
}
var addrObj oid.Address
addrObj.SetContainer(bktInfo.CID)
addrObj.SetObject(buf[0])
return buf[0], nil
}
f(ctx, *h.newRequest(c, log), addrObj)
func (h *Handler) needSearchByFileName(key, val string) bool {
if key != attrFilePath || !h.config.EnableFilepathFallback() {
return false
}
return strings.HasPrefix(val, "/") && strings.Count(val, "/") == 1 || !strings.Contains(val, "/")
}
// resolveContainer decode container id, if it's not a valid container id
@ -388,7 +369,7 @@ func (h *Handler) readContainer(ctx context.Context, cnrID cid.ID) (*data.Bucket
return bktInfo, err
}
func (h *Handler) browseIndex(c *fasthttp.RequestCtx) {
func (h *Handler) browseIndex(c *fasthttp.RequestCtx, isNativeList bool) {
if !h.config.IndexPageEnabled() {
c.SetStatusCode(fasthttp.StatusNotFound)
return
@ -414,18 +395,9 @@ func (h *Handler) browseIndex(c *fasthttp.RequestCtx) {
}
listFunc := h.getDirObjectsS3
isNativeList := false
err = h.tree.CheckSettingsNodeExist(ctx, bktInfo)
if err != nil {
if errors.Is(err, tree.ErrNodeNotFound) {
// tree probe failed, try to use native
listFunc = h.getDirObjectsNative
isNativeList = true
} else {
logAndSendBucketError(c, log, err)
return
}
if isNativeList {
// tree probe failed, trying to use native
listFunc = h.getDirObjectsNative
}
h.browseObjects(c, browseParams{


@ -14,8 +14,8 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/cache"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tree"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
@ -32,18 +32,34 @@ import (
"go.uber.org/zap"
)
type treeClientMock struct {
type treeServiceMock struct {
system map[string]map[string]*data.BaseNodeVersion
}
func (t *treeClientMock) GetNodes(context.Context, *tree.GetNodesParams) ([]tree.NodeResponse, error) {
return nil, nil
func newTreeService() *treeServiceMock {
return &treeServiceMock{
system: make(map[string]map[string]*data.BaseNodeVersion),
}
}
func (t *treeClientMock) GetSubTree(context.Context, *data.BucketInfo, string, []uint64, uint32, bool) ([]tree.NodeResponse, error) {
func (t *treeServiceMock) CheckSettingsNodeExists(context.Context, *data.BucketInfo) error {
_, ok := t.system["bucket-settings"]
if !ok {
return layer.ErrNodeNotFound
}
return nil
}
func (t *treeServiceMock) GetSubTreeByPrefix(context.Context, *data.BucketInfo, string, bool) ([]data.NodeInfo, string, error) {
return nil, "", nil
}
func (t *treeServiceMock) GetLatestVersion(context.Context, *cid.ID, string) (*data.NodeVersion, error) {
return nil, nil
}
type configMock struct {
additionalSearch bool
}
func (c *configMock) DefaultTimestamp() bool {
@ -78,13 +94,17 @@ func (c *configMock) NamespaceHeader() string {
return ""
}
func (c *configMock) EnableFilepathFallback() bool {
return c.additionalSearch
}
type handlerContext struct {
key *keys.PrivateKey
owner user.ID
h *Handler
frostfs *TestFrostFS
tree *treeClientMock
tree *treeServiceMock
cfg *configMock
}
@ -125,14 +145,14 @@ func prepareHandlerContext() (*handlerContext, error) {
}),
}
treeMock := &treeClientMock{}
treeMock := newTreeService()
cfgMock := &configMock{}
workerPool, err := ants.NewPool(1000)
workerPool, err := ants.NewPool(1)
if err != nil {
return nil, err
}
handler := New(params, cfgMock, tree.NewTree(treeMock), workerPool)
handler := New(params, cfgMock, treeMock, workerPool)
return &handlerContext{
key: key,
@ -199,10 +219,8 @@ func TestBasic(t *testing.T) {
require.NoError(t, err)
obj := hc.frostfs.objects[putRes.ContainerID+"/"+putRes.ObjectID]
attr := object.NewAttribute()
attr.SetKey(object.AttributeFilePath)
attr.SetValue(objFileName)
obj.SetAttributes(append(obj.Attributes(), *attr)...)
attr := prepareObjectAttributes(object.AttributeFilePath, objFileName)
obj.SetAttributes(append(obj.Attributes(), attr)...)
t.Run("get", func(t *testing.T) {
r = prepareGetRequest(ctx, cnrID.EncodeToString(), putRes.ObjectID)
@ -251,6 +269,159 @@ func TestBasic(t *testing.T) {
})
}
func TestFindObjectByAttribute(t *testing.T) {
hc, err := prepareHandlerContext()
require.NoError(t, err)
hc.cfg.additionalSearch = true
bktName := "bucket"
cnrID, cnr, err := hc.prepareContainer(bktName, acl.PublicRWExtended)
require.NoError(t, err)
hc.frostfs.SetContainer(cnrID, cnr)
ctx := context.Background()
ctx = middleware.SetNamespace(ctx, "")
content := "hello"
r, err := prepareUploadRequest(ctx, cnrID.EncodeToString(), content)
require.NoError(t, err)
hc.Handler().Upload(r)
require.Equal(t, r.Response.StatusCode(), http.StatusOK)
var putRes putResponse
err = json.Unmarshal(r.Response.Body(), &putRes)
require.NoError(t, err)
testAttrVal1 := "test-attr-val1"
testAttrVal2 := "test-attr-val2"
testAttrVal3 := "test-attr-val3"
for _, tc := range []struct {
name string
firstAttr object.Attribute
secondAttr object.Attribute
reqAttrKey string
reqAttrValue string
err string
additionalSearch bool
}{
{
name: "success search by FileName",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFileName,
reqAttrValue: testAttrVal2,
additionalSearch: false,
},
{
name: "failed search by FileName",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFileName,
reqAttrValue: testAttrVal3,
err: "not found",
additionalSearch: false,
},
{
name: "success search by FilePath (with additional search)",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFilePath,
reqAttrValue: testAttrVal2,
additionalSearch: true,
},
{
name: "failed by FilePath (with additional search)",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFilePath,
reqAttrValue: testAttrVal3,
err: "not found",
additionalSearch: true,
},
} {
t.Run(tc.name, func(t *testing.T) {
obj := hc.frostfs.objects[putRes.ContainerID+"/"+putRes.ObjectID]
obj.SetAttributes(tc.firstAttr, tc.secondAttr)
hc.cfg.additionalSearch = tc.additionalSearch
objID, err := hc.Handler().findObjectByAttribute(ctx, hc.Handler().log, cnrID, tc.reqAttrKey, tc.reqAttrValue)
if tc.err != "" {
require.Error(t, err)
require.Contains(t, err.Error(), tc.err)
return
}
require.NoError(t, err)
require.Equal(t, putRes.ObjectID, objID.EncodeToString())
})
}
}
func TestNeedSearchByFileName(t *testing.T) {
hc, err := prepareHandlerContext()
require.NoError(t, err)
for _, tc := range []struct {
name string
attrKey string
attrVal string
additionalSearch bool
expected bool
}{
{
name: "need search - not contains slash",
attrKey: attrFilePath,
attrVal: "cat.png",
additionalSearch: true,
expected: true,
},
{
name: "need search - single lead slash",
attrKey: attrFilePath,
attrVal: "/cat.png",
additionalSearch: true,
expected: true,
},
{
name: "don't need search - single slash but not lead",
attrKey: attrFilePath,
attrVal: "cats/cat.png",
additionalSearch: true,
expected: false,
},
{
name: "don't need search - more one slash",
attrKey: attrFilePath,
attrVal: "/cats/cat.png",
additionalSearch: true,
expected: false,
},
{
name: "don't need search - incorrect attribute key",
attrKey: attrFileName,
attrVal: "cat.png",
additionalSearch: true,
expected: false,
},
{
name: "don't need search - additional search disabled",
attrKey: attrFilePath,
attrVal: "cat.png",
additionalSearch: false,
expected: false,
},
} {
t.Run(tc.name, func(t *testing.T) {
hc.cfg.additionalSearch = tc.additionalSearch
res := hc.h.needSearchByFileName(tc.attrKey, tc.attrVal)
require.Equal(t, tc.expected, res)
})
}
}
func prepareUploadRequest(ctx context.Context, bucket, content string) (*fasthttp.RequestCtx, error) {
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
@ -283,6 +454,13 @@ func prepareGetZipped(ctx context.Context, bucket, prefix string) *fasthttp.Requ
return r
}
func prepareObjectAttributes(attrKey, attrValue string) object.Attribute {
attr := object.NewAttribute()
attr.SetKey(attrKey)
attr.SetValue(attrValue)
return *attr
}
const (
keyAttr = "User-Attribute"
valAttr = "user value"


@ -2,11 +2,13 @@ package handler
import (
"context"
"errors"
"io"
"net/http"
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
@ -102,14 +104,36 @@ func idsToResponse(resp *fasthttp.Response, obj *object.Object) {
// HeadByAddressOrBucketName handles head requests using simple cid/oid or bucketname/key format.
func (h *Handler) HeadByAddressOrBucketName(c *fasthttp.RequestCtx) {
test, _ := c.UserValue("oid").(string)
var id oid.ID
cidParam, _ := c.UserValue("cid").(string)
oidParam, _ := c.UserValue("oid").(string)
err := id.DecodeString(test)
ctx := utils.GetContextFromRequest(c)
log := utils.GetReqLogOrDefault(ctx, h.log).With(
zap.String("cid", cidParam),
zap.String("oid", oidParam),
)
bktInfo, err := h.getBucketInfo(ctx, cidParam, log)
if err != nil {
h.byS3Path(c, h.headObject)
logAndSendBucketError(c, log, err)
return
}
checkS3Err := h.tree.CheckSettingsNodeExists(ctx, bktInfo)
if checkS3Err != nil && !errors.Is(checkS3Err, layer.ErrNodeNotFound) {
logAndSendBucketError(c, log, checkS3Err)
return
}
req := h.newRequest(c, log)
var objID oid.ID
if checkS3Err == nil {
h.byS3Path(ctx, req, bktInfo.CID, oidParam, h.headObject)
} else if err = objID.DecodeString(oidParam); err == nil {
h.byNativeAddress(ctx, req, bktInfo.CID, objID, h.headObject)
} else {
h.byNativeAddress(c, h.headObject)
logAndSendBucketError(c, log, checkS3Err)
return
}
}


@ -42,16 +42,7 @@ func bearerToken(ctx context.Context) *bearer.Token {
}
func isDir(name string) bool {
return strings.HasSuffix(name, "/")
}
func isObjectID(s string) bool {
var objID oid.ID
return objID.DecodeString(s) == nil
}
func isContainerRoot(key string) bool {
return key == ""
return name == "" || strings.HasSuffix(name, "/")
}
func loadAttributes(attrs []object.Attribute) map[string]string {


@ -4,13 +4,15 @@ import (
"context"
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/api"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
)
// TreeService provide interface to interact with tree service using s3 data models.
type TreeService interface {
GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*api.NodeVersion, error)
GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*data.NodeVersion, error)
GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo, prefix string, latestOnly bool) ([]data.NodeInfo, string, error)
CheckSettingsNodeExists(ctx context.Context, bktInfo *data.BucketInfo) error
}
var (


@ -79,6 +79,11 @@ const (
InvalidLifetimeUsingDefaultValue = "invalid lifetime, using default value (in seconds)" // Error in ../../cmd/http-gw/settings.go
InvalidCacheSizeUsingDefaultValue = "invalid cache size, using default value" // Error in ../../cmd/http-gw/settings.go
FailedToUnescapeQuery = "failed to unescape query"
FailedToParseAddressInTreeNode = "failed to parse object addr in tree node"
SettingsNodeInvalidOwnerKey = "settings node: invalid owner key"
SystemNodeHasMultipleIDs = "system node has multiple ids"
FailedToRemoveOldSystemNode = "failed to remove old system node"
BucketSettingsNodeHasMultipleIDs = "bucket settings node has multiple ids"
ServerReconnecting = "reconnecting server..."
ServerReconnectedSuccessfully = "server reconnected successfully"
ServerReconnectFailed = "failed to reconnect server"
@ -87,4 +92,5 @@ const (
MultinetDialFail = "multinet dial failed"
FailedToLoadMultinetConfig = "failed to load multinet config"
MultinetConfigWontBeUpdated = "multinet config won't be updated"
ObjectNotFoundByFilePathTrySearchByFileName = "object not found by filePath attribute, try search by fileName"
)


@ -6,9 +6,8 @@ import (
"fmt"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/api"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
)
@ -118,7 +117,7 @@ func (n *treeNode) FileName() (string, bool) {
return value, ok
}
func newNodeVersion(node NodeResponse) (*api.NodeVersion, error) {
func newNodeVersion(node NodeResponse) (*data.NodeVersion, error) {
tNode, err := newTreeNode(node)
if err != nil {
return nil, fmt.Errorf("invalid tree node: %w", err)
@ -127,20 +126,30 @@ func newNodeVersion(node NodeResponse) (*api.NodeVersion, error) {
return newNodeVersionFromTreeNode(tNode), nil
}
func newNodeVersionFromTreeNode(treeNode *treeNode) *api.NodeVersion {
func newNodeVersionFromTreeNode(treeNode *treeNode) *data.NodeVersion {
_, isDeleteMarker := treeNode.Get(isDeleteMarkerKV)
size, _ := treeNode.Get(sizeKV)
version := &api.NodeVersion{
BaseNodeVersion: api.BaseNodeVersion{
OID: treeNode.ObjID,
version := &data.NodeVersion{
BaseNodeVersion: data.BaseNodeVersion{
OID: treeNode.ObjID,
IsDeleteMarker: isDeleteMarker,
},
DeleteMarker: isDeleteMarker,
IsPrefixNode: size == "",
}
return version
}
func newNodeInfo(node NodeResponse) data.NodeInfo {
nodeMeta := node.GetMeta()
nodeInfo := data.NodeInfo{
Meta: make([]data.NodeMeta, 0, len(nodeMeta)),
}
for _, meta := range nodeMeta {
nodeInfo.Meta = append(nodeInfo.Meta, meta)
}
return nodeInfo
}
func newMultiNode(nodes []NodeResponse) (*multiSystemNode, error) {
var (
err error
@ -180,7 +189,7 @@ func (m *multiSystemNode) Old() []*treeNode {
return m.nodes[1:]
}
func (c *Tree) GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*api.NodeVersion, error) {
func (c *Tree) GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*data.NodeVersion, error) {
nodes, err := c.GetVersions(ctx, cnrID, objectName)
if err != nil {
return nil, err
@ -210,7 +219,7 @@ func (c *Tree) GetVersions(ctx context.Context, cnrID *cid.ID, objectName string
return c.service.GetNodes(ctx, p)
}
func (c *Tree) CheckSettingsNodeExist(ctx context.Context, bktInfo *data.BucketInfo) error {
func (c *Tree) CheckSettingsNodeExists(ctx context.Context, bktInfo *data.BucketInfo) error {
_, err := c.getSystemNode(ctx, bktInfo, settingsFileName)
if err != nil {
return err
@ -236,7 +245,7 @@ func (c *Tree) getSystemNode(ctx context.Context, bktInfo *data.BucketInfo, name
nodes = filterMultipartNodes(nodes)
if len(nodes) == 0 {
return nil, ErrNodeNotFound
return nil, layer.ErrNodeNotFound
}
return newMultiNode(nodes)
@ -298,14 +307,14 @@ func pathFromName(objectName string) []string {
return strings.Split(objectName, separator)
}
func (c *Tree) GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo, prefix string, latestOnly bool) ([]NodeResponse, string, error) {
func (c *Tree) GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo, prefix string, latestOnly bool) ([]data.NodeInfo, string, error) {
rootID, tailPrefix, err := c.determinePrefixNode(ctx, bktInfo, versionTree, prefix)
if err != nil {
return nil, "", err
}
subTree, err := c.service.GetSubTree(ctx, bktInfo, versionTree, rootID, 2, false)
if err != nil {
if errors.Is(err, layer.ErrNodeNotFound) {
if errors.Is(err, ErrNodeNotFound) {
return nil, "", nil
}
return nil, "", err
@ -340,14 +349,23 @@ func (c *Tree) GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo,
nodesMap[fileName] = nodes
}
result := make([]NodeResponse, 0, len(subTree))
result := make([]data.NodeInfo, 0, len(subTree))
for _, nodes := range nodesMap {
result = append(result, nodes...)
result = append(result, nodeResponseToNodeInfo(nodes)...)
}
return result, strings.TrimSuffix(prefix, tailPrefix), nil
}
func nodeResponseToNodeInfo(nodes []NodeResponse) []data.NodeInfo {
nodesInfo := make([]data.NodeInfo, 0, len(nodes))
for _, node := range nodes {
nodesInfo = append(nodesInfo, newNodeInfo(node))
}
return nodesInfo
}
func (c *Tree) determinePrefixNode(ctx context.Context, bktInfo *data.BucketInfo, treeID, prefix string) ([]uint64, string, error) {
rootID := []uint64{0}
path := strings.Split(prefix, separator)