Compare commits


2 commits

Author SHA1 Message Date
876f7b7dcb [#174] Add kludge additional search
Advanced search is needed because some
software may keep FileName attribute
and ignore FilePath attribute during
file upload.

Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-12-12 09:28:33 +03:00
71fc6c85d5 [#163] Support JSON bearer token
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-12-11 16:49:50 +03:00
27 changed files with 463 additions and 606 deletions


@ -4,19 +4,6 @@ This document outlines major changes between releases.
## [Unreleased]
## [0.32.0] - Khumbu - 2024-12-20
### Fixed
- Getting S3 object with FrostFS Object ID-like key (#166)
- Ignore delete marked objects in versioned bucket in index page (#181)
### Added
- Metric of dropped logs by log sampler (#150)
- Fallback FileName attribute search during FilePath attribute search (#174)
### Changed
- Updated tree service pool without api-go dependency (#178)
## [0.31.0] - Rongbuk - 2024-11-20
### Fixed
@ -183,5 +170,4 @@ To see CHANGELOG for older versions, refer to https://github.com/nspcc-dev/neofs
[0.30.2]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.1...v0.30.2
[0.30.3]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.2...v0.30.3
[0.31.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.3...v0.31.0
[0.32.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.31.0...v0.32.0
[Unreleased]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.32.0...master
[Unreleased]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.31.0...master


@ -1,3 +1 @@
.* @TrueCloudLab/storage-services-developers @TrueCloudLab/storage-services-committers
.forgejo/.* @potyarkin
Makefile @potyarkin
.* @alexvanin @dkirillov

README.md

@ -217,8 +217,41 @@ Also, in case of downloading, you need to have a file inside a container.
### NNS
In all download/upload routes you can use container name instead of its id (`$CID`).
Read more about it in [docs/nns.md](./docs/nns.md).
Steps to start using name resolving:
1. Enable NNS resolving in config (`rpc_endpoint` must be a valid neo rpc node, see [configs](./config) for other examples):
```yaml
rpc_endpoint: http://morph-chain.frostfs.devenv:30333
resolve_order:
- nns
```
2. Make sure your container is registered in NNS contract. If you use [frostfs-dev-env](https://git.frostfs.info/TrueCloudLab/frostfs-dev-env)
you can check whether your container (e.g. one named `container-name`) is registered in NNS:
```shell
$ curl -s --data '{"id":1,"jsonrpc":"2.0","method":"getcontractstate","params":[1]}' \
http://morph-chain.frostfs.devenv:30333 | jq -r '.result.hash'
0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667
$ docker exec -it morph_chain neo-go \
contract testinvokefunction \
-r http://morph-chain.frostfs.devenv:30333 0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667 \
resolve string:container-name.container int:16 \
| jq -r '.stack[0].value | if type=="array" then .[0].value else . end' \
| base64 -d && echo
7f3vvkw4iTiS5ZZbu5BQXEmJtETWbi3uUjLNaSs29xrL
```
3. Use container name instead of its `$CID`. For example:
```shell
$ curl http://localhost:8082/get_by_attribute/container-name/FileName/object-name
```
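Upload routes accept the container name in the same way. A minimal sketch, assuming the gateway from the examples above listens on `localhost:8082` and `container-name` is registered in NNS:
```shell
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' http://localhost:8082/upload/container-name
```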
#### Create a container
@ -429,7 +462,109 @@ object ID, like this:
#### Authentication
Read more about request authentication in [docs/authentication.md](./docs/authentication.md)
You can always upload files to public containers (open for anyone to put
objects into), but for restricted containers you need to explicitly allow PUT
operations for a request signed with your HTTP Gateway keys.
If you don't want to manage the gateway's secret keys and adjust policies whenever
the gateway configuration changes (a new gate, key rotation, etc.), or you plan to use
public services, there is an option to let your application backend (or you)
issue Bearer Tokens and pass them from the client via the gate down to the FrostFS level
to grant access.
A FrostFS Bearer Token is basically a container owner-signed policy (refer to the FrostFS
documentation for more details). There are two options to pass it to the gateway:
* "Authorization" header with "Bearer" type and base64-encoded token in
credentials field
* "Bearer" cookie with base64-encoded token contents
For example, suppose you have a mobile application frontend with a backend part storing
data in FrostFS. When a user authorizes in the mobile app, the backend issues a FrostFS
Bearer Token and provides it to the frontend. The mobile app may then generate
some data and upload it via any available FrostFS HTTP Gateway by adding
the corresponding header to the upload request. Accessing policy-protected data
works the same way.
##### Example
In order to generate a bearer token, you need to have a wallet (which will be used to sign the token).
1. Suppose you have a container with a private policy for your wallet key:
```
$ frostfs-cli container create -r <endpoint> --wallet <wallet> --policy <policy> --basic-acl 0 --await
CID: 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z
$ frostfs-cli ape-manager add -r <endpoint> --wallet <wallet> \
--target-type container --target-name 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z \
--rule "allow Object.* RequestCondition:"\$Actor:publicKey"=03b09baabff3f6107c7e9acb8721a6fc5618d45b50247a314d82e548702cce8cd5 *" \
--chain-id <chainID>
```
2. Form a Bearer Token (here 10000 is the expiration epoch) that allows the HTTP Gateway
request to impersonate a wallet-signed request, and save it to **bearer.json**:
```
{
"body": {
"allowImpersonate": true,
"lifetime": {
"exp": "10000",
"nbf": "0",
"iat": "0"
}
},
"signature": null
}
```
3. Sign it with the wallet:
```
$ frostfs-cli util sign bearer-token --from bearer.json --to signed.json -w <wallet>
```
4. Encode to base64 to use in header:
```
$ base64 -w 0 signed.json
# output: Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==
```
After that, the Bearer token can be used:
```
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' -H "Authorization: Bearer Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==" \
http://localhost:8082/upload/BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
# output:
# {
# "object_id": "DhfES9nVrFksxGDD2jQLunGADfrXExxNwqXbDafyBn9X",
# "container_id": "BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K"
# }
```
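The same token can also be passed in the `Bearer` cookie instead of the `Authorization` header, as described above. A sketch reusing the base64 value from the previous step:
```
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' \
    --cookie "Bearer=Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==" \
    http://localhost:8082/upload/BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
```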
##### Note: Bearer Token owner
You can specify the exact key (the gateway wallet address) that is allowed to use the Bearer Token.
To do this, encode the wallet address in base64:
```
$ echo 'NhVtreTTCoqsMQV5Wp55fqnriiUCpEaKm3' | base58 --decode | base64
# output: NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg==
```
Then specify this value in the Bearer Token JSON:
```
{
"body": {
"ownerID": {
"value": "NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg=="
},
...
```
##### Note: Policy override
Instead of impersonation, you can define a set of policies that will be applied
to the request sender. This allows restricting access to specific operations and
specific objects without giving full impersonation control to the token user.
### Metrics and Pprof


@ -1 +1 @@
v0.32.0
v0.31.0


@ -95,22 +95,22 @@ type (
dialerSource *internalnet.DialerSource
workerPoolSize int
mu sync.RWMutex
defaultTimestamp bool
zipCompression bool
clientCut bool
returnIndexPage bool
indexPageTemplate string
bufferMaxSizeForPut uint64
namespaceHeader string
defaultNamespaces []string
corsAllowOrigin string
corsAllowMethods []string
corsAllowHeaders []string
corsExposeHeaders []string
corsAllowCredentials bool
corsMaxAge int
enableFilepathFallback bool
mu sync.RWMutex
defaultTimestamp bool
zipCompression bool
clientCut bool
returnIndexPage bool
indexPageTemplate string
bufferMaxSizeForPut uint64
namespaceHeader string
defaultNamespaces []string
corsAllowOrigin string
corsAllowMethods []string
corsAllowHeaders []string
corsExposeHeaders []string
corsAllowCredentials bool
corsMaxAge int
additionalSearch bool
}
CORS struct {
@ -190,7 +190,7 @@ func (s *appSettings) update(v *viper.Viper, l *zap.Logger) {
corsExposeHeaders := v.GetStringSlice(cfgCORSExposeHeaders)
corsAllowCredentials := v.GetBool(cfgCORSAllowCredentials)
corsMaxAge := fetchCORSMaxAge(v)
enableFilepathFallback := v.GetBool(cfgFeaturesEnableFilepathFallback)
additionalSearch := v.GetBool(cfgKludgeAdditionalSearch)
s.mu.Lock()
defer s.mu.Unlock()
@ -210,7 +210,7 @@ func (s *appSettings) update(v *viper.Viper, l *zap.Logger) {
s.corsExposeHeaders = corsExposeHeaders
s.corsAllowCredentials = corsAllowCredentials
s.corsMaxAge = corsMaxAge
s.enableFilepathFallback = enableFilepathFallback
s.additionalSearch = additionalSearch
}
func (s *loggerSettings) DroppedLogsInc() {
@ -308,10 +308,10 @@ func (s *appSettings) FormContainerZone(ns string) (zone string, isDefault bool)
return ns + ".ns", false
}
func (s *appSettings) EnableFilepathFallback() bool {
func (s *appSettings) AdditionalSearch() bool {
s.mu.RLock()
defer s.mu.RUnlock()
return s.enableFilepathFallback
return s.additionalSearch
}
func (a *app) initResolver() {
@ -508,10 +508,10 @@ func (a *app) Serve() {
close(a.webDone)
}()
handle := handler.New(a.AppParams(), a.settings, tree.NewTree(frostfs.NewPoolWrapper(a.treePool)), workerPool)
handler := handler.New(a.AppParams(), a.settings, tree.NewTree(frostfs.NewPoolWrapper(a.treePool)), workerPool)
// Configure router.
a.configureRouter(handle)
a.configureRouter(handler)
a.startServices()
a.initServers(a.ctx)


@ -14,7 +14,6 @@ import (
"net/http"
"os"
"sort"
"strings"
"testing"
"time"
@ -55,7 +54,6 @@ func TestIntegration(t *testing.T) {
"1.2.7",
"1.3.0",
"1.5.0",
"1.6.5",
}
key, err := keys.NewPrivateKeyFromHex("1dd37fba80fec4e6a6f13fd708d8dcb3b29def768017052f6c930fa1c5d90bbb")
require.NoError(t, err)
@ -72,26 +70,23 @@ func TestIntegration(t *testing.T) {
ctx, cancel2 := context.WithCancel(rootCtx)
aioContainer := createDockerContainer(ctx, t, aioImage+version)
if strings.HasPrefix(version, "1.6") {
registerUser(t, ctx, aioContainer, file.Name())
}
// See the logs from the command execution.
server, cancel := runServer(file.Name())
clientPool := getPool(ctx, t, key)
CID, err := createContainer(ctx, t, clientPool, ownerID)
CID, err := createContainer(ctx, t, clientPool, ownerID, version)
require.NoError(t, err, version)
token := makeBearerToken(t, key, ownerID, version)
jsonToken, binaryToken := makeBearerTokens(t, key, ownerID, version)
t.Run("simple put "+version, func(t *testing.T) { simplePut(ctx, t, clientPool, CID) })
t.Run("put with bearer token in header"+version, func(t *testing.T) { putWithBearerTokenInHeader(ctx, t, clientPool, CID, token) })
t.Run("put with bearer token in cookie"+version, func(t *testing.T) { putWithBearerTokenInCookie(ctx, t, clientPool, CID, token) })
t.Run("simple put "+version, func(t *testing.T) { simplePut(ctx, t, clientPool, CID, version) })
t.Run("put with json bearer token in header"+version, func(t *testing.T) { putWithBearerTokenInHeader(ctx, t, clientPool, CID, jsonToken) })
t.Run("put with json bearer token in cookie"+version, func(t *testing.T) { putWithBearerTokenInCookie(ctx, t, clientPool, CID, jsonToken) })
t.Run("put with binary bearer token in header"+version, func(t *testing.T) { putWithBearerTokenInHeader(ctx, t, clientPool, CID, binaryToken) })
t.Run("put with binary bearer token in cookie"+version, func(t *testing.T) { putWithBearerTokenInCookie(ctx, t, clientPool, CID, binaryToken) })
t.Run("put with duplicate keys "+version, func(t *testing.T) { putWithDuplicateKeys(t, CID) })
t.Run("simple get "+version, func(t *testing.T) { simpleGet(ctx, t, clientPool, ownerID, CID) })
t.Run("get by attribute "+version, func(t *testing.T) { getByAttr(ctx, t, clientPool, ownerID, CID) })
t.Run("get zip "+version, func(t *testing.T) { getZip(ctx, t, clientPool, ownerID, CID) })
t.Run("test namespaces "+version, func(t *testing.T) { checkNamespaces(ctx, t, clientPool, ownerID, CID) })
t.Run("simple get "+version, func(t *testing.T) { simpleGet(ctx, t, clientPool, ownerID, CID, version) })
t.Run("get by attribute "+version, func(t *testing.T) { getByAttr(ctx, t, clientPool, ownerID, CID, version) })
t.Run("get zip "+version, func(t *testing.T) { getZip(ctx, t, clientPool, ownerID, CID, version) })
t.Run("test namespaces "+version, func(t *testing.T) { checkNamespaces(ctx, t, clientPool, ownerID, CID, version) })
cancel()
server.Wait()
@ -114,7 +109,7 @@ func runServer(pathToWallet string) (App, context.CancelFunc) {
return application, cancel
}
func simplePut(ctx context.Context, t *testing.T, p *pool.Pool, CID cid.ID) {
func simplePut(ctx context.Context, t *testing.T, p *pool.Pool, CID cid.ID, version string) {
url := testHost + "/upload/" + CID.String()
makePutRequestAndCheck(ctx, t, p, CID, url)
@ -262,7 +257,7 @@ func putWithDuplicateKeys(t *testing.T, CID cid.ID) {
require.Equal(t, http.StatusBadRequest, resp.StatusCode)
}
func simpleGet(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
func simpleGet(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
content := "content of file"
attributes := map[string]string{
"some-attr": "some-get-value",
@ -309,7 +304,7 @@ func checkGetByAttrResponse(t *testing.T, resp *http.Response, content string, a
}
}
func getByAttr(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
func getByAttr(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
keyAttr, valAttr := "some-attr", "some-get-by-attr-value"
content := "content of file"
attributes := map[string]string{keyAttr: valAttr}
@ -331,7 +326,7 @@ func getByAttr(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID
checkGetByAttrResponse(t, resp, content, expectedAttr)
}
func getZip(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
func getZip(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
names := []string{"zipfolder/dir/name1.txt", "zipfolder/name2.txt"}
contents := []string{"content of file1", "content of file2"}
attributes1 := map[string]string{object.AttributeFilePath: names[0]}
@ -396,7 +391,7 @@ func checkZip(t *testing.T, data []byte, length int64, names, contents []string)
}
}
func checkNamespaces(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
func checkNamespaces(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
content := "content of file"
attributes := map[string]string{
"some-attr": "some-get-value",
@ -433,7 +428,7 @@ func checkNamespaces(ctx context.Context, t *testing.T, clientPool *pool.Pool, o
func createDockerContainer(ctx context.Context, t *testing.T, image string) testcontainers.Container {
req := testcontainers.ContainerRequest{
Image: image,
WaitingFor: wait.NewLogStrategy("aio container started").WithStartupTimeout(2 * time.Minute),
WaitingFor: wait.NewLogStrategy("aio container started").WithStartupTimeout(30 * time.Second),
Name: "aio",
Hostname: "aio",
NetworkMode: "host",
@ -473,7 +468,7 @@ func getPool(ctx context.Context, t *testing.T, key *keys.PrivateKey) *pool.Pool
return clientPool
}
func createContainer(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID) (cid.ID, error) {
func createContainer(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, version string) (cid.ID, error) {
var policy netmap.PlacementPolicy
err := policy.DecodeString("REP 1")
require.NoError(t, err)
@ -533,19 +528,7 @@ func putObject(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID
return id.ObjectID
}
func registerUser(t *testing.T, ctx context.Context, aioContainer testcontainers.Container, pathToWallet string) {
err := aioContainer.CopyFileToContainer(ctx, pathToWallet, "/usr/wallet.json", 644)
require.NoError(t, err)
_, err = aioContainer.Exec(ctx, []string{
"/usr/bin/frostfs-s3-authmate", "register-user",
"--wallet", "/usr/wallet.json",
"--rpc-endpoint", "http://localhost:30333",
"--contract-wallet", "/config/s3-gw-wallet.json"})
require.NoError(t, err)
}
func makeBearerToken(t *testing.T, key *keys.PrivateKey, ownerID user.ID, version string) string {
func makeBearerTokens(t *testing.T, key *keys.PrivateKey, ownerID user.ID, version string) (jsonTokenBase64, binaryTokenBase64 string) {
tkn := new(bearer.Token)
tkn.ForUser(ownerID)
tkn.SetExp(10000)
@ -559,10 +542,16 @@ func makeBearerToken(t *testing.T, key *keys.PrivateKey, ownerID user.ID, versio
err := tkn.Sign(key.PrivateKey)
require.NoError(t, err)
t64 := base64.StdEncoding.EncodeToString(tkn.Marshal())
require.NotEmpty(t, t64)
jsonToken, err := tkn.MarshalJSON()
require.NoError(t, err)
return t64
jsonTokenBase64 = base64.StdEncoding.EncodeToString(jsonToken)
binaryTokenBase64 = base64.StdEncoding.EncodeToString(tkn.Marshal())
require.NotEmpty(t, jsonTokenBase64)
require.NotEmpty(t, binaryTokenBase64)
return
}
func makeTempWallet(t *testing.T, key *keys.PrivateKey, path string) {


@ -164,8 +164,8 @@ const (
cfgMultinetFallbackDelay = "multinet.fallback_delay"
cfgMultinetSubnets = "multinet.subnets"
// Feature.
cfgFeaturesEnableFilepathFallback = "features.enable_filepath_fallback"
// Kludge.
cfgKludgeAdditionalSearch = "kludge.additional_search"
// Command line args.
cmdHelp = "help"


@ -160,5 +160,5 @@ HTTP_GW_INDEX_PAGE_ENABLED=false
# Index page template path
HTTP_GW_INDEX_PAGE_TEMPLATE_PATH=internal/handler/templates/index.gotmpl
# Enable using fallback path to search for an object by attribute
HTTP_GW_FEATURES_ENABLE_FILEPATH_FALLBACK=false
# Enable using additional search by attribute
HTTP_GW_KLUDGE_ADDITIONAL_SEARCH=false


@ -173,6 +173,6 @@ multinet:
- 1.2.3.4
- 1.2.3.5
features:
# Enable using fallback path to search for an object by attribute
enable_filepath_fallback: false
kludge:
# Enable using additional search by attribute
additional_search: false


@ -8,7 +8,7 @@
| `/zip/{cid}/{prefix}` | [Download objects in archive](#download-zip) |
**Note:** `cid` parameter can be base58 encoded container ID or container name
(the name must be registered in NNS, see appropriate section in [nns.md](./nns.md)).
(the name must be registered in NNS, see appropriate section in [README](../README.md#nns)).
Route parameters can be:
@ -18,7 +18,7 @@ Route parameters can be:
### Bearer token
All routes can accept [bearer token](./authentication.md) from:
All routes can accept [bearer token](../README.md#authentication) from:
* `Authorization` header with `Bearer` type and base64-encoded token in
credentials field


@ -1,108 +0,0 @@
# Request authentication
HTTP Gateway does not authorize requests. Gateway converts HTTP request to a
FrostFS request and signs it with its own private key.
You can always upload files to public containers (open for anyone to put
objects into), but for restricted containers you need to explicitly allow PUT
operations for a request signed with your HTTP Gateway keys.
If you don't want to manage gateway's secret keys and adjust policies when
gateway configuration changes (new gate, key rotation, etc) or you plan to use
public services, there is an option to let your application backend (or you) to
issue Bearer Tokens and pass them from the client via gate down to FrostFS level
to grant access.
FrostFS Bearer Token basically is a container owner-signed policy (refer to FrostFS
documentation for more details). There are two options to pass them to gateway:
* "Authorization" header with "Bearer" type and base64-encoded token in
credentials field
* "Bearer" cookie with base64-encoded token contents
For example, you have a mobile application frontend with a backend part storing
data in FrostFS. When a user authorizes in the mobile app, the backend issues a FrostFS
Bearer token and provides it to the frontend. Then, the mobile app may generate
some data and upload it via any available FrostFS HTTP Gateway by adding
the corresponding header to the upload request. Accessing policy protected data
works the same way.
##### Example
In order to generate a bearer token, you need to have wallet (which will be used to sign the token)
1. Suppose you have a container with private policy for wallet key
```
$ frostfs-cli container create -r <endpoint> --wallet <wallet> -policy <policy> --basic-acl 0 --await
CID: 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z
$ frostfs-cli ape-manager add -r <endpoint> --wallet <wallet> \
--target-type container --target-name 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z \
--rule "allow Object.* RequestCondition:"\$Actor:publicKey"=03b09baabff3f6107c7e9acb8721a6fc5618d45b50247a314d82e548702cce8cd5 *" \
--chain-id <chainID>
```
2. Form a Bearer token (10000 is lifetime expiration in epoch) to impersonate
HTTP Gateway request as wallet signed request and save it to **bearer.json**:
```
{
"body": {
"allowImpersonate": true,
"lifetime": {
"exp": "10000",
"nbf": "0",
"iat": "0"
}
},
"signature": null
}
```
3. Sign it with the wallet:
```
$ frostfs-cli util sign bearer-token --from bearer.json --to signed.json -w <wallet>
```
4. Encode to base64 to use in header:
```
$ base64 -w 0 signed.json
# output: Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==
```
After that, the Bearer token can be used:
```
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' -H "Authorization: Bearer Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==" \
http://localhost:8082/upload/BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
# output:
# {
# "object_id": "DhfES9nVrFksxGDD2jQLunGADfrXExxNwqXbDafyBn9X",
# "container_id": "BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K"
# }
```
##### Note: Bearer Token owner
You can specify exact key who can use Bearer Token (gateway wallet address).
To do this, encode wallet address in base64 format
```
$ echo 'NhVtreTTCoqsMQV5Wp55fqnriiUCpEaKm3' | base58 --decode | base64
# output: NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg==
```
Then specify this value in Bearer Token Json
```
{
"body": {
"ownerID": {
"value": "NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg=="
},
...
```
##### Note: Policy override
Instead of impersonation, you can define the set of policies that will be applied
to the request sender. This allows to restrict access to specific operation and
specific objects without giving full impersonation control to the token user.


@ -59,7 +59,7 @@ $ cat http.log
| `resolve_bucket` | [Bucket name resolving configuration](#resolve_bucket-section) |
| `index_page` | [Index page configuration](#index_page-section) |
| `multinet` | [Multinet configuration](#multinet-section) |
| `features` | [Features configuration](#features-section) |
| `kludge` | [Kludge configuration](#kludge-section) |
# General section
@ -458,15 +458,15 @@ multinet:
| `mask` | `string` | yes | | Destination subnet. |
| `source_ips` | `[]string` | yes | | Array of source IP addresses to use when dialing destination subnet. |
# `features` section
# `kludge` section
Contains parameters for enabling features.
Workarounds for non-standard use cases.
```yaml
features:
enable_filepath_fallback: true
kludge:
additional_search: true
```
| Parameter | Type | SIGHUP reload | Default value | Description |
| ----------------------------------- | ------ | ------------- | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `features.enable_filepath_fallback` | `bool` | yes | `false` | Enable using fallback path to search for an object by attribute. If the value of the `FilePath` attribute in the request contains no `/` symbols or a single leading `/` symbol and the object was not found, then an attempt is made to search for the object by the `FileName` attribute. |
| Parameter | Type | SIGHUP reload | Default value | Description |
|----------------------------|--------|---------------|---------------| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `kludge.additional_search` | `bool` | yes | `false` | Enable using additional search by attribute. If the value of the `FilePath` attribute in the request contains no `/` symbols or a single leading `/` symbol and the object was not found, then an attempt is made to search for the object by the `FileName` attribute. |
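For example, with `kludge.additional_search: true`, a lookup by `FilePath` that finds nothing is transparently retried by `FileName`. A sketch, assuming a gateway on `localhost:8082` and an object that carries only the `FileName` attribute:
```shell
# The object was uploaded with FileName=cat.png and no FilePath attribute.
# The first search by FilePath finds nothing, so the gateway falls back
# to searching by FileName and returns the object.
$ curl http://localhost:8082/get_by_attribute/$CID/FilePath/cat.png
```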


@ -1,36 +0,0 @@
# Nicename Resolving with NNS
Steps to start using name resolving:
1. Enable NNS resolving in config (`rpc_endpoint` must be a valid neo rpc node, see [configs](./config) for other examples):
```yaml
rpc_endpoint: http://morph-chain.frostfs.devenv:30333
resolve_order:
- nns
```
2. Make sure your container is registered in NNS contract. If you use [frostfs-dev-env](https://git.frostfs.info/TrueCloudLab/frostfs-dev-env)
you can check if your container (e.g. with `container-name` name) is registered in NNS:
```shell
$ curl -s --data '{"id":1,"jsonrpc":"2.0","method":"getcontractstate","params":[1]}' \
http://morph-chain.frostfs.devenv:30333 | jq -r '.result.hash'
0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667
$ docker exec -it morph_chain neo-go \
contract testinvokefunction \
-r http://morph-chain.frostfs.devenv:30333 0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667 \
resolve string:container-name.container int:16 \
| jq -r '.stack[0].value | if type=="array" then .[0].value else . end' \
| base64 -d && echo
7f3vvkw4iTiS5ZZbu5BQXEmJtETWbi3uUjLNaSs29xrL
```
3. Use container name instead of its `$CID`. For example:
```shell
$ curl http://localhost:8082/get_by_attribute/container-name/FileName/object-name
```


@ -4,15 +4,13 @@ import (
"context"
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/api"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
)
// TreeService provide interface to interact with tree service using s3 data models.
type TreeService interface {
GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*data.NodeVersion, error)
GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo, prefix string, latestOnly bool) ([]data.NodeInfo, string, error)
CheckSettingsNodeExists(ctx context.Context, bktInfo *data.BucketInfo) error
GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*api.NodeVersion, error)
}
var (


@ -1,4 +1,4 @@
package data
package api
import (
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -7,21 +7,12 @@ import (
// NodeVersion represent node from tree service.
type NodeVersion struct {
BaseNodeVersion
DeleteMarker bool
IsPrefixNode bool
}
// BaseNodeVersion is minimal node info from tree service.
// Basically used for "system" object.
type BaseNodeVersion struct {
ID uint64
OID oid.ID
IsDeleteMarker bool
}
type NodeInfo struct {
Meta []NodeMeta
}
type NodeMeta interface {
GetKey() string
GetValue() []byte
OID oid.ID
}


@ -22,13 +22,12 @@ import (
)
const (
dateFormat = "02-01-2006 15:04"
attrOID = "OID"
attrCreated = "Created"
attrFileName = "FileName"
attrFilePath = "FilePath"
attrSize = "Size"
attrDeleteMarker = "IsDeleteMarker"
dateFormat = "02-01-2006 15:04"
attrOID = "OID"
attrCreated = "Created"
attrFileName = "FileName"
attrFilePath = "FilePath"
attrSize = "Size"
)
type (
@ -40,25 +39,23 @@ type (
Objects []ResponseObject
}
ResponseObject struct {
OID string
Created string
FileName string
FilePath string
Size string
IsDir bool
GetURL string
IsDeleteMarker bool
OID string
Created string
FileName string
FilePath string
Size string
IsDir bool
GetURL string
}
)
func newListObjectsResponseS3(attrs map[string]string) ResponseObject {
return ResponseObject{
Created: formatTimestamp(attrs[attrCreated]),
OID: attrs[attrOID],
FileName: attrs[attrFileName],
Size: attrs[attrSize],
IsDir: attrs[attrOID] == "",
IsDeleteMarker: attrs[attrDeleteMarker] == "true",
Created: formatTimestamp(attrs[attrCreated]),
OID: attrs[attrOID],
FileName: attrs[attrFileName],
Size: attrs[attrSize],
IsDir: attrs[attrOID] == "",
}
}
@ -173,7 +170,7 @@ func (h *Handler) getDirObjectsS3(ctx context.Context, bucketInfo *data.BucketIn
objects: make([]ResponseObject, 0, len(nodes)),
}
for _, node := range nodes {
meta := node.Meta
meta := node.GetMeta()
if meta == nil {
continue
}
@ -182,9 +179,6 @@ func (h *Handler) getDirObjectsS3(ctx context.Context, bucketInfo *data.BucketIn
attrs[m.GetKey()] = string(m.GetValue())
}
obj := newListObjectsResponseS3(attrs)
if obj.IsDeleteMarker {
continue
}
obj.FilePath = prefix + obj.FileName
obj.GetURL = "/get/" + bucketInfo.Name + urlencode(obj.FilePath)
result.objects = append(result.objects, obj)


@ -4,14 +4,12 @@ import (
"archive/zip"
"bufio"
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/response"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
@ -25,46 +23,21 @@ import (
// DownloadByAddressOrBucketName handles download requests using simple cid/oid or bucketname/key format.
func (h *Handler) DownloadByAddressOrBucketName(c *fasthttp.RequestCtx) {
cidParam := c.UserValue("cid").(string)
oidParam := c.UserValue("oid").(string)
downloadParam := c.QueryArgs().GetBool("download")
oidURLParam := c.UserValue("oid").(string)
downloadQueryParam := c.QueryArgs().GetBool("download")
ctx := utils.GetContextFromRequest(c)
log := utils.GetReqLogOrDefault(ctx, h.log).With(
zap.String("cid", cidParam),
zap.String("oid", oidParam),
)
bktInfo, err := h.getBucketInfo(ctx, cidParam, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
checkS3Err := h.tree.CheckSettingsNodeExists(ctx, bktInfo)
if checkS3Err != nil && !errors.Is(checkS3Err, layer.ErrNodeNotFound) {
logAndSendBucketError(c, log, checkS3Err)
return
}
req := h.newRequest(c, log)
var objID oid.ID
if checkS3Err == nil && shouldDownload(oidParam, downloadParam) {
h.byS3Path(ctx, req, bktInfo.CID, oidParam, h.receiveFile)
} else if err = objID.DecodeString(oidParam); err == nil {
h.byNativeAddress(ctx, req, bktInfo.CID, objID, h.receiveFile)
} else {
h.browseIndex(c, checkS3Err != nil)
switch {
case isObjectID(oidURLParam):
h.byNativeAddress(c, h.receiveFile)
case !isContainerRoot(oidURLParam) && (downloadQueryParam || !isDir(oidURLParam)):
h.byS3Path(c, h.receiveFile)
default:
h.browseIndex(c)
}
}
func shouldDownload(oidParam string, downloadParam bool) bool {
return !isDir(oidParam) || downloadParam
}
func (h *Handler) newRequest(ctx *fasthttp.RequestCtx, log *zap.Logger) request {
return request{
func (h *Handler) newRequest(ctx *fasthttp.RequestCtx, log *zap.Logger) *request {
return &request{
RequestCtx: ctx,
log: log,
}


@ -11,9 +11,9 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/cache"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/response"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tree"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
@ -35,7 +35,7 @@ type Config interface {
IndexPageTemplate() string
BufferMaxSizeForPut() uint64
NamespaceHeader() string
EnableFilepathFallback() bool
AdditionalSearch() bool
}
// PrmContainer groups parameters of FrostFS.Container operation.
@ -165,7 +165,7 @@ type Handler struct {
ownerID *user.ID
config Config
containerResolver ContainerResolver
tree layer.TreeService
tree *tree.Tree
cache *cache.BucketCache
workerPool *ants.Pool
}
@ -178,7 +178,7 @@ type AppParams struct {
Cache *cache.BucketCache
}
func New(params *AppParams, config Config, tree layer.TreeService, workerPool *ants.Pool) *Handler {
func New(params *AppParams, config Config, tree *tree.Tree, workerPool *ants.Pool) *Handler {
return &Handler{
log: params.Logger,
frostfs: params.FrostFS,
@ -193,34 +193,77 @@ func New(params *AppParams, config Config, tree layer.TreeService, workerPool *a
// byNativeAddress is a wrapper for function (e.g. request.headObject, request.receiveFile) that
// prepares request and object address to it.
func (h *Handler) byNativeAddress(ctx context.Context, req request, cnrID cid.ID, objID oid.ID, handler func(context.Context, request, oid.Address)) {
addr := newAddress(cnrID, objID)
handler(ctx, req, addr)
}
func (h *Handler) byNativeAddress(c *fasthttp.RequestCtx, f func(context.Context, request, oid.Address)) {
idCnr, _ := c.UserValue("cid").(string)
idObj, _ := url.PathUnescape(c.UserValue("oid").(string))
// byS3Path is a wrapper for function (e.g. request.headObject, request.receiveFile) that
// resolves object address from S3-like path <bucket name>/<object key>.
func (h *Handler) byS3Path(ctx context.Context, req request, cnrID cid.ID, path string, handler func(context.Context, request, oid.Address)) {
c, log := req.RequestCtx, req.log
ctx := utils.GetContextFromRequest(c)
reqLog := utils.GetReqLogOrDefault(ctx, h.log)
log := reqLog.With(zap.String("cid", idCnr), zap.String("oid", idObj))
foundOID, err := h.tree.GetLatestVersion(ctx, &cnrID, path)
bktInfo, err := h.getBucketInfo(ctx, idCnr, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
if foundOID.IsDeleteMarker {
objID := new(oid.ID)
if err = objID.DecodeString(idObj); err != nil {
log.Error(logs.WrongObjectID, zap.Error(err))
response.Error(c, "wrong object id", fasthttp.StatusBadRequest)
return
}
addr := newAddress(bktInfo.CID, *objID)
f(ctx, *h.newRequest(c, log), addr)
}
// byS3Path is a wrapper for function (e.g. request.headObject, request.receiveFile) that
// resolves object address from S3-like path <bucket name>/<object key>.
func (h *Handler) byS3Path(c *fasthttp.RequestCtx, f func(context.Context, request, oid.Address)) {
bucketname := c.UserValue("cid").(string)
key := c.UserValue("oid").(string)
ctx := utils.GetContextFromRequest(c)
reqLog := utils.GetReqLogOrDefault(ctx, h.log)
log := reqLog.With(zap.String("bucketname", bucketname), zap.String("key", key))
unescapedKey, err := url.QueryUnescape(key)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
bktInfo, err := h.getBucketInfo(ctx, bucketname, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
foundOid, err := h.tree.GetLatestVersion(ctx, &bktInfo.CID, unescapedKey)
if err != nil {
if errors.Is(err, tree.ErrNodeAccessDenied) {
response.Error(c, "Access Denied", fasthttp.StatusForbidden)
} else {
response.Error(c, "object wasn't found", fasthttp.StatusNotFound)
log.Error(logs.GetLatestObjectVersion, zap.Error(err))
}
return
}
if foundOid.DeleteMarker {
log.Error(logs.ObjectWasDeleted)
response.Error(c, "object deleted", fasthttp.StatusNotFound)
return
}
addr := newAddress(bktInfo.CID, foundOid.OID)
addr := newAddress(cnrID, foundOID.OID)
handler(ctx, h.newRequest(c, log), addr)
f(ctx, *h.newRequest(c, log), addr)
}
// byAttribute is a wrapper similar to byNativeAddress.
func (h *Handler) byAttribute(c *fasthttp.RequestCtx, handler func(context.Context, request, oid.Address)) {
cidParam, _ := c.UserValue("cid").(string)
func (h *Handler) byAttribute(c *fasthttp.RequestCtx, f func(context.Context, request, oid.Address)) {
scid, _ := c.UserValue("cid").(string)
key, _ := c.UserValue("attr_key").(string)
val, _ := c.UserValue("attr_val").(string)
@ -229,21 +272,21 @@ func (h *Handler) byAttribute(c *fasthttp.RequestCtx, handler func(context.Conte
key, err := url.QueryUnescape(key)
if err != nil {
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", cidParam), zap.String("attr_key", key), zap.Error(err))
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", scid), zap.String("attr_key", key), zap.Error(err))
response.Error(c, "could not unescape attr_key: "+err.Error(), fasthttp.StatusBadRequest)
return
}
val, err = url.QueryUnescape(val)
if err != nil {
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", cidParam), zap.String("attr_val", val), zap.Error(err))
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", scid), zap.String("attr_val", val), zap.Error(err))
response.Error(c, "could not unescape attr_val: "+err.Error(), fasthttp.StatusBadRequest)
return
}
log = log.With(zap.String("cid", cidParam), zap.String("attr_key", key), zap.String("attr_val", val))
log = log.With(zap.String("cid", scid), zap.String("attr_key", key), zap.String("attr_val", val))
bktInfo, err := h.getBucketInfo(ctx, cidParam, log)
bktInfo, err := h.getBucketInfo(ctx, scid, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
@ -260,11 +303,11 @@ func (h *Handler) byAttribute(c *fasthttp.RequestCtx, handler func(context.Conte
return
}
var addr oid.Address
addr.SetContainer(bktInfo.CID)
addr.SetObject(objID)
var addrObj oid.Address
addrObj.SetContainer(bktInfo.CID)
addrObj.SetObject(objID)
handler(ctx, h.newRequest(c, log), addr)
f(ctx, *h.newRequest(c, log), addrObj)
}
func (h *Handler) findObjectByAttribute(ctx context.Context, log *zap.Logger, cnrID cid.ID, attrKey, attrVal string) (oid.ID, error) {
@ -281,7 +324,7 @@ func (h *Handler) findObjectByAttribute(ctx context.Context, log *zap.Logger, cn
if n == 0 {
switch {
case errors.Is(err, io.EOF) && h.needSearchByFileName(attrKey, attrVal):
log.Debug(logs.ObjectNotFoundByFilePathTrySearchByFileName)
log.Warn(logs.WarnObjectNotFoundByFilePathTrySearchByFileName)
return h.findObjectByAttribute(ctx, log, cnrID, attrFileName, attrVal)
case errors.Is(err, io.EOF):
log.Error(logs.ObjectNotFound, zap.Error(err))
@ -296,11 +339,11 @@ func (h *Handler) findObjectByAttribute(ctx context.Context, log *zap.Logger, cn
}
func (h *Handler) needSearchByFileName(key, val string) bool {
if key != attrFilePath || !h.config.EnableFilepathFallback() {
if key != attrFilePath || !h.config.AdditionalSearch() {
return false
}
return strings.HasPrefix(val, "/") && strings.Count(val, "/") == 1 || !strings.Contains(val, "/")
return strings.HasPrefix(val, "/") && strings.Count(val, "/") == 1 || !strings.ContainsRune(val, '/')
}
// resolveContainer decode container id, if it's not a valid container id
@ -369,7 +412,7 @@ func (h *Handler) readContainer(ctx context.Context, cnrID cid.ID) (*data.Bucket
return bktInfo, err
}
func (h *Handler) browseIndex(c *fasthttp.RequestCtx, isNativeList bool) {
func (h *Handler) browseIndex(c *fasthttp.RequestCtx) {
if !h.config.IndexPageEnabled() {
c.SetStatusCode(fasthttp.StatusNotFound)
return
@ -395,9 +438,18 @@ func (h *Handler) browseIndex(c *fasthttp.RequestCtx, isNativeList bool) {
}
listFunc := h.getDirObjectsS3
if isNativeList {
// tree probe failed, trying to use native
listFunc = h.getDirObjectsNative
isNativeList := false
err = h.tree.CheckSettingsNodeExist(ctx, bktInfo)
if err != nil {
if errors.Is(err, tree.ErrNodeNotFound) {
// tree probe failed, try to use native
listFunc = h.getDirObjectsNative
isNativeList = true
} else {
logAndSendBucketError(c, log, err)
return
}
}
h.browseObjects(c, browseParams{


@ -14,8 +14,8 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/cache"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tree"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
@ -32,29 +32,14 @@ import (
"go.uber.org/zap"
)
type treeServiceMock struct {
system map[string]map[string]*data.BaseNodeVersion
type treeClientMock struct {
}
func newTreeService() *treeServiceMock {
return &treeServiceMock{
system: make(map[string]map[string]*data.BaseNodeVersion),
}
func (t *treeClientMock) GetNodes(context.Context, *tree.GetNodesParams) ([]tree.NodeResponse, error) {
return nil, nil
}
func (t *treeServiceMock) CheckSettingsNodeExists(context.Context, *data.BucketInfo) error {
_, ok := t.system["bucket-settings"]
if !ok {
return layer.ErrNodeNotFound
}
return nil
}
func (t *treeServiceMock) GetSubTreeByPrefix(context.Context, *data.BucketInfo, string, bool) ([]data.NodeInfo, string, error) {
return nil, "", nil
}
func (t *treeServiceMock) GetLatestVersion(context.Context, *cid.ID, string) (*data.NodeVersion, error) {
func (t *treeClientMock) GetSubTree(context.Context, *data.BucketInfo, string, []uint64, uint32, bool) ([]tree.NodeResponse, error) {
return nil, nil
}
@ -94,7 +79,7 @@ func (c *configMock) NamespaceHeader() string {
return ""
}
func (c *configMock) EnableFilepathFallback() bool {
func (c *configMock) AdditionalSearch() bool {
return c.additionalSearch
}
@ -104,7 +89,7 @@ type handlerContext struct {
h *Handler
frostfs *TestFrostFS
tree *treeServiceMock
tree *treeClientMock
cfg *configMock
}
@ -145,14 +130,14 @@ func prepareHandlerContext() (*handlerContext, error) {
}),
}
treeMock := newTreeService()
treeMock := &treeClientMock{}
cfgMock := &configMock{}
workerPool, err := ants.NewPool(1)
workerPool, err := ants.NewPool(1000)
if err != nil {
return nil, err
}
handler := New(params, cfgMock, treeMock, workerPool)
handler := New(params, cfgMock, tree.NewTree(treeMock), workerPool)
return &handlerContext{
key: key,
@ -219,8 +204,10 @@ func TestBasic(t *testing.T) {
require.NoError(t, err)
obj := hc.frostfs.objects[putRes.ContainerID+"/"+putRes.ObjectID]
attr := prepareObjectAttributes(object.AttributeFilePath, objFileName)
obj.SetAttributes(append(obj.Attributes(), attr)...)
attr := object.NewAttribute()
attr.SetKey(object.AttributeFilePath)
attr.SetValue(objFileName)
obj.SetAttributes(append(obj.Attributes(), *attr)...)
t.Run("get", func(t *testing.T) {
r = prepareGetRequest(ctx, cnrID.EncodeToString(), putRes.ObjectID)
@ -268,154 +255,61 @@ func TestBasic(t *testing.T) {
require.Equal(t, content, string(data))
})
}
func TestFindObjectByAttribute(t *testing.T) {
func TestNeedSearchByFileName(t *testing.T) {
hc, err := prepareHandlerContext()
require.NoError(t, err)
hc.cfg.additionalSearch = true
bktName := "bucket"
cnrID, cnr, err := hc.prepareContainer(bktName, acl.PublicRWExtended)
require.NoError(t, err)
hc.frostfs.SetContainer(cnrID, cnr)
ctx := context.Background()
ctx = middleware.SetNamespace(ctx, "")
content := "hello"
r, err := prepareUploadRequest(ctx, cnrID.EncodeToString(), content)
require.NoError(t, err)
hc.Handler().Upload(r)
require.Equal(t, r.Response.StatusCode(), http.StatusOK)
var putRes putResponse
err = json.Unmarshal(r.Response.Body(), &putRes)
require.NoError(t, err)
testAttrVal1 := "test-attr-val1"
testAttrVal2 := "test-attr-val2"
testAttrVal3 := "test-attr-val3"
for _, tc := range []struct {
name string
firstAttr object.Attribute
secondAttr object.Attribute
reqAttrKey string
reqAttrValue string
err string
additionalSearch bool
name string
attrKey string
attrVal string
additionalSearchDisabled bool
expected bool
}{
{
name: "success search by FileName",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFileName,
reqAttrValue: testAttrVal2,
additionalSearch: false,
name: "need search - not contains slash",
attrKey: attrFilePath,
attrVal: "cat.png",
expected: true,
},
{
name: "failed search by FileName",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFileName,
reqAttrValue: testAttrVal3,
err: "not found",
additionalSearch: false,
name: "need search - single lead slash",
attrKey: attrFilePath,
attrVal: "/cat.png",
expected: true,
},
{
name: "success search by FilePath (with additional search)",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFilePath,
reqAttrValue: testAttrVal2,
additionalSearch: true,
name: "don't need search - single slash but not lead",
attrKey: attrFilePath,
attrVal: "cats/cat.png",
expected: false,
},
{
name: "failed by FilePath (with additional search)",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFilePath,
reqAttrValue: testAttrVal3,
err: "not found",
additionalSearch: true,
name: "don't need search - more one slash",
attrKey: attrFilePath,
attrVal: "/cats/cat.png",
expected: false,
},
{
name: "don't need search - incorrect attribute key",
attrKey: attrFileName,
attrVal: "cat.png",
expected: false,
},
{
name: "don't need search - additional search disabled",
attrKey: attrFilePath,
attrVal: "cat.png",
additionalSearchDisabled: true,
expected: false,
},
} {
t.Run(tc.name, func(t *testing.T) {
obj := hc.frostfs.objects[putRes.ContainerID+"/"+putRes.ObjectID]
obj.SetAttributes(tc.firstAttr, tc.secondAttr)
hc.cfg.additionalSearch = tc.additionalSearch
objID, err := hc.Handler().findObjectByAttribute(ctx, hc.Handler().log, cnrID, tc.reqAttrKey, tc.reqAttrValue)
if tc.err != "" {
require.Error(t, err)
require.Contains(t, err.Error(), tc.err)
return
if tc.additionalSearchDisabled {
hc.cfg.additionalSearch = false
}
require.NoError(t, err)
require.Equal(t, putRes.ObjectID, objID.EncodeToString())
})
}
}
func TestNeedSearchByFileName(t *testing.T) {
hc, err := prepareHandlerContext()
require.NoError(t, err)
for _, tc := range []struct {
name string
attrKey string
attrVal string
additionalSearch bool
expected bool
}{
{
name: "need search - not contains slash",
attrKey: attrFilePath,
attrVal: "cat.png",
additionalSearch: true,
expected: true,
},
{
name: "need search - single lead slash",
attrKey: attrFilePath,
attrVal: "/cat.png",
additionalSearch: true,
expected: true,
},
{
name: "don't need search - single slash but not lead",
attrKey: attrFilePath,
attrVal: "cats/cat.png",
additionalSearch: true,
expected: false,
},
{
name: "don't need search - more one slash",
attrKey: attrFilePath,
attrVal: "/cats/cat.png",
additionalSearch: true,
expected: false,
},
{
name: "don't need search - incorrect attribute key",
attrKey: attrFileName,
attrVal: "cat.png",
additionalSearch: true,
expected: false,
},
{
name: "don't need search - additional search disabled",
attrKey: attrFilePath,
attrVal: "cat.png",
additionalSearch: false,
expected: false,
},
} {
t.Run(tc.name, func(t *testing.T) {
hc.cfg.additionalSearch = tc.additionalSearch
res := hc.h.needSearchByFileName(tc.attrKey, tc.attrVal)
require.Equal(t, tc.expected, res)
})
@ -454,13 +348,6 @@ func prepareGetZipped(ctx context.Context, bucket, prefix string) *fasthttp.Requ
return r
}
func prepareObjectAttributes(attrKey, attrValue string) object.Attribute {
attr := object.NewAttribute()
attr.SetKey(attrKey)
attr.SetValue(attrValue)
return *attr
}
const (
keyAttr = "User-Attribute"
valAttr = "user value"


@ -2,13 +2,11 @@ package handler
import (
"context"
"errors"
"io"
"net/http"
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
@ -104,36 +102,14 @@ func idsToResponse(resp *fasthttp.Response, obj *object.Object) {
// HeadByAddressOrBucketName handles head requests using simple cid/oid or bucketname/key format.
func (h *Handler) HeadByAddressOrBucketName(c *fasthttp.RequestCtx) {
cidParam, _ := c.UserValue("cid").(string)
oidParam, _ := c.UserValue("oid").(string)
test, _ := c.UserValue("oid").(string)
var id oid.ID
ctx := utils.GetContextFromRequest(c)
log := utils.GetReqLogOrDefault(ctx, h.log).With(
zap.String("cid", cidParam),
zap.String("oid", oidParam),
)
bktInfo, err := h.getBucketInfo(ctx, cidParam, log)
err := id.DecodeString(test)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
checkS3Err := h.tree.CheckSettingsNodeExists(ctx, bktInfo)
if checkS3Err != nil && !errors.Is(checkS3Err, layer.ErrNodeNotFound) {
logAndSendBucketError(c, log, checkS3Err)
return
}
req := h.newRequest(c, log)
var objID oid.ID
if checkS3Err == nil {
h.byS3Path(ctx, req, bktInfo.CID, oidParam, h.headObject)
} else if err = objID.DecodeString(oidParam); err == nil {
h.byNativeAddress(ctx, req, bktInfo.CID, objID, h.headObject)
h.byS3Path(c, h.headObject)
} else {
logAndSendBucketError(c, log, checkS3Err)
return
h.byNativeAddress(c, h.headObject)
}
}


@ -42,7 +42,16 @@ func bearerToken(ctx context.Context) *bearer.Token {
}
func isDir(name string) bool {
return name == "" || strings.HasSuffix(name, "/")
return strings.HasSuffix(name, "/")
}
func isObjectID(s string) bool {
var objID oid.ID
return objID.DecodeString(s) == nil
}
func isContainerRoot(key string) bool {
return key == ""
}
func loadAttributes(attrs []object.Attribute) map[string]string {


@ -79,11 +79,6 @@ const (
InvalidLifetimeUsingDefaultValue = "invalid lifetime, using default value (in seconds)" // Error in ../../cmd/http-gw/settings.go
InvalidCacheSizeUsingDefaultValue = "invalid cache size, using default value" // Error in ../../cmd/http-gw/settings.go
FailedToUnescapeQuery = "failed to unescape query"
FailedToParseAddressInTreeNode = "failed to parse object addr in tree node"
SettingsNodeInvalidOwnerKey = "settings node: invalid owner key"
SystemNodeHasMultipleIDs = "system node has multiple ids"
FailedToRemoveOldSystemNode = "failed to remove old system node"
BucketSettingsNodeHasMultipleIDs = "bucket settings node has multiple ids"
ServerReconnecting = "reconnecting server..."
ServerReconnectedSuccessfully = "server reconnected successfully"
ServerReconnectFailed = "failed to reconnect server"
@ -92,5 +87,5 @@ const (
MultinetDialFail = "multinet dial failed"
FailedToLoadMultinetConfig = "failed to load multinet config"
MultinetConfigWontBeUpdated = "multinet config won't be updated"
ObjectNotFoundByFilePathTrySearchByFileName = "object not found by filePath attribute, try search by fileName"
WarnObjectNotFoundByFilePathTrySearchByFileName = "object not found by filePath attribute, try search by fileName"
)


@ -82,14 +82,22 @@ func fetchBearerToken(ctx *fasthttp.RequestCtx) (*bearer.Token, error) {
tkn = new(bearer.Token)
)
for _, parse := range []fromHandler{BearerTokenFromHeader, BearerTokenFromCookie} {
if buf = parse(&ctx.Request.Header); buf == nil {
buf = parse(&ctx.Request.Header)
if buf == nil {
continue
} else if data, err := base64.StdEncoding.DecodeString(string(buf)); err != nil {
}
data, err := base64.StdEncoding.DecodeString(string(buf))
if err != nil {
lastErr = fmt.Errorf("can't base64-decode bearer token: %w", err)
continue
} else if err = tkn.Unmarshal(data); err != nil {
lastErr = fmt.Errorf("can't unmarshal bearer token: %w", err)
continue
}
if err = tkn.Unmarshal(data); err != nil {
if err = tkn.UnmarshalJSON(data); err != nil {
lastErr = fmt.Errorf("can't unmarshal bearer token: %w", err)
continue
}
}
return tkn, nil


@ -98,8 +98,14 @@ func TestFetchBearerToken(t *testing.T) {
tkn := new(bearer.Token)
tkn.ForUser(uid)
t64 := base64.StdEncoding.EncodeToString(tkn.Marshal())
require.NotEmpty(t, t64)
jsonToken, err := tkn.MarshalJSON()
require.NoError(t, err)
jsonTokenBase64 := base64.StdEncoding.EncodeToString(jsonToken)
binaryTokenBase64 := base64.StdEncoding.EncodeToString(tkn.Marshal())
require.NotEmpty(t, jsonTokenBase64)
require.NotEmpty(t, binaryTokenBase64)
cases := []struct {
name string
@ -143,25 +149,47 @@ func TestFetchBearerToken(t *testing.T) {
error: "can't unmarshal bearer token",
},
{
name: "bad header, but good cookie",
name: "bad header, but good cookie with binary token",
header: "dGVzdAo=",
cookie: t64,
cookie: binaryTokenBase64,
expect: tkn,
},
{
name: "bad cookie, but good header",
header: t64,
name: "bad cookie, but good header with binary token",
header: binaryTokenBase64,
cookie: "dGVzdAo=",
expect: tkn,
},
{
name: "ok for header",
header: t64,
name: "bad header, but good cookie with json token",
header: "dGVzdAo=",
cookie: jsonTokenBase64,
expect: tkn,
},
{
name: "ok for cookie",
cookie: t64,
name: "bad cookie, but good header with json token",
header: jsonTokenBase64,
cookie: "dGVzdAo=",
expect: tkn,
},
{
name: "ok for header with binary token",
header: binaryTokenBase64,
expect: tkn,
},
{
name: "ok for cookie with binary token",
cookie: binaryTokenBase64,
expect: tkn,
},
{
name: "ok for header with json token",
header: jsonTokenBase64,
expect: tkn,
},
{
name: "ok for cookie with json token",
cookie: jsonTokenBase64,
expect: tkn,
},
}


@ -6,8 +6,9 @@ import (
"fmt"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/api"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
)
@ -117,7 +118,7 @@ func (n *treeNode) FileName() (string, bool) {
return value, ok
}
func newNodeVersion(node NodeResponse) (*data.NodeVersion, error) {
func newNodeVersion(node NodeResponse) (*api.NodeVersion, error) {
tNode, err := newTreeNode(node)
if err != nil {
return nil, fmt.Errorf("invalid tree node: %w", err)
@ -126,30 +127,20 @@ func newNodeVersion(node NodeResponse) (*data.NodeVersion, error) {
return newNodeVersionFromTreeNode(tNode), nil
}
func newNodeVersionFromTreeNode(treeNode *treeNode) *data.NodeVersion {
func newNodeVersionFromTreeNode(treeNode *treeNode) *api.NodeVersion {
_, isDeleteMarker := treeNode.Get(isDeleteMarkerKV)
version := &data.NodeVersion{
BaseNodeVersion: data.BaseNodeVersion{
OID: treeNode.ObjID,
IsDeleteMarker: isDeleteMarker,
size, _ := treeNode.Get(sizeKV)
version := &api.NodeVersion{
BaseNodeVersion: api.BaseNodeVersion{
OID: treeNode.ObjID,
},
DeleteMarker: isDeleteMarker,
IsPrefixNode: size == "",
}
return version
}
func newNodeInfo(node NodeResponse) data.NodeInfo {
nodeMeta := node.GetMeta()
nodeInfo := data.NodeInfo{
Meta: make([]data.NodeMeta, 0, len(nodeMeta)),
}
for _, meta := range nodeMeta {
nodeInfo.Meta = append(nodeInfo.Meta, meta)
}
return nodeInfo
}
func newMultiNode(nodes []NodeResponse) (*multiSystemNode, error) {
var (
err error
@ -189,7 +180,7 @@ func (m *multiSystemNode) Old() []*treeNode {
return m.nodes[1:]
}
func (c *Tree) GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*data.NodeVersion, error) {
func (c *Tree) GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*api.NodeVersion, error) {
nodes, err := c.GetVersions(ctx, cnrID, objectName)
if err != nil {
return nil, err
@ -219,7 +210,7 @@ func (c *Tree) GetVersions(ctx context.Context, cnrID *cid.ID, objectName string
return c.service.GetNodes(ctx, p)
}
func (c *Tree) CheckSettingsNodeExists(ctx context.Context, bktInfo *data.BucketInfo) error {
func (c *Tree) CheckSettingsNodeExist(ctx context.Context, bktInfo *data.BucketInfo) error {
_, err := c.getSystemNode(ctx, bktInfo, settingsFileName)
if err != nil {
return err
@ -245,7 +236,7 @@ func (c *Tree) getSystemNode(ctx context.Context, bktInfo *data.BucketInfo, name
nodes = filterMultipartNodes(nodes)
if len(nodes) == 0 {
return nil, layer.ErrNodeNotFound
return nil, ErrNodeNotFound
}
return newMultiNode(nodes)
@ -307,14 +298,14 @@ func pathFromName(objectName string) []string {
return strings.Split(objectName, separator)
}
func (c *Tree) GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo, prefix string, latestOnly bool) ([]data.NodeInfo, string, error) {
func (c *Tree) GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo, prefix string, latestOnly bool) ([]NodeResponse, string, error) {
rootID, tailPrefix, err := c.determinePrefixNode(ctx, bktInfo, versionTree, prefix)
if err != nil {
return nil, "", err
}
subTree, err := c.service.GetSubTree(ctx, bktInfo, versionTree, rootID, 2, false)
if err != nil {
if errors.Is(err, ErrNodeNotFound) {
if errors.Is(err, layer.ErrNodeNotFound) {
return nil, "", nil
}
return nil, "", err
@ -349,23 +340,14 @@ func (c *Tree) GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo,
nodesMap[fileName] = nodes
}
result := make([]data.NodeInfo, 0, len(subTree))
result := make([]NodeResponse, 0, len(subTree))
for _, nodes := range nodesMap {
result = append(result, nodeResponseToNodeInfo(nodes)...)
result = append(result, nodes...)
}
return result, strings.TrimSuffix(prefix, tailPrefix), nil
}
func nodeResponseToNodeInfo(nodes []NodeResponse) []data.NodeInfo {
nodesInfo := make([]data.NodeInfo, 0, len(nodes))
for _, node := range nodes {
nodesInfo = append(nodesInfo, newNodeInfo(node))
}
return nodesInfo
}
func (c *Tree) determinePrefixNode(ctx context.Context, bktInfo *data.BucketInfo, treeID, prefix string) ([]uint64, string, error) {
rootID := []uint64{0}
path := strings.Split(prefix, separator)