Compare commits

104 commits

Author SHA1 Message Date
1779593f46 [#203] Port changelog from support branch
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2025-02-03 14:26:58 +00:00
7e48ca626e [#202] Bump SDK version to the latest master
Contains fixes:
- memory leak in gRPC client,
- panic and deadlock in tree pool.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2025-02-03 14:26:58 +00:00
72e5d645b9 [#194] Fix updateServers finding logic
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2025-02-03 10:49:57 +03:00
8362cd696e [#199] Port release v0.32.1 changelog
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2025-01-29 13:10:35 +00:00
8de06e23a0 [#199] Use default value if config param is unset after SIGHUP
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2025-01-29 13:10:35 +00:00
a6fdaf9456 [#199] Clear app services list
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2025-01-29 13:10:35 +00:00
526da379ad [#199] Fix SIGHUP panic
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2025-01-29 13:10:35 +00:00
87ace4f8f7 [#201] govulncheck: Use patch release with security fixes
https://go.dev/doc/devel/release#go1.23.minor

Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-01-28 18:02:43 +03:00
36bd3e2d43 [#170] logs: Remove comments
Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2025-01-23 17:16:23 +03:00
1e897aa3c3 [#170] Updated docs and configuration of archive section
Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2025-01-23 17:16:23 +03:00
1e7309684b [#170] Support .tar/.tgz unpacking during upload
During upload, if the X-Explode-Archive header is set, the gate tries to read the archive and create an object for each file. Each object acquires a FilePath attribute, calculated relative to the archive root. The archive may be gzip-compressed if the "Content-Encoding: gzip" header is specified; a usage sketch follows this entry.

Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2025-01-23 17:16:12 +03:00
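
A usage sketch for the commit above (the multipart form shape follows the README's upload examples; the header value "true" is an assumption, since the commit only states that X-Explode-Archive must be set):

```shell
# Upload a tar archive and ask the gate to unpack it into one object per
# file; each object gets a FilePath attribute relative to the archive root.
$ curl -F 'file=@backup.tar;filename=backup.tar' \
    -H "X-Explode-Archive: true" \
    http://localhost:8082/upload/$CID

# For a gzip-compressed archive, declare the compression explicitly:
$ curl -F 'file=@backup.tgz;filename=backup.tgz' \
    -H "X-Explode-Archive: true" \
    -H "Content-Encoding: gzip" \
    http://localhost:8082/upload/$CID
```
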
7901d00924 [#170] Support tar.gz downloading
Split the DownloadZip handler into separate methods and add a DownloadTar handler for downloading tar.gz archives, making the methods generic enough to be used by both implementations. A usage sketch follows this entry.

Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2025-01-23 15:42:22 +03:00
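
A download sketch for the commit above. The zip route registered in app.go is `/zip/{cid}/{prefix}`; the `/tar/...` path used here for the tar.gz counterpart is an assumption, not taken from this change:

```shell
# Download all objects under a prefix as a single tar.gz archive
# (route name assumed; see the gateway docs for the real path):
$ curl -o objects.tar.gz http://localhost:8082/tar/$CID/some/prefix
```
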
a7617514d3 [#193] Use selfhosted image registry instead of Docker Hub
Existing AIO image tags referenced from our integration tests were
manually synced to git.frostfs.info prior to this change.

Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-01-21 12:59:25 +03:00
856e0ecf40 [#193] Update testcontainers to v0.35.0
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-01-21 11:43:00 +03:00
1e82f64dfd [#193] Enable integration tests in Forgejo Actions
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-01-21 11:07:00 +03:00
4b782cf124 [#187] Add handling quota limit reached error
The Access Denied status may be received from APE due to exceeding a quota. In this situation, the gateway needs to return the appropriate status code.

Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2025-01-21 06:59:47 +00:00
f0c999d9a2 [#188] Improve content-type detector
Signed-off-by: Aleksey Kravchenko <al.kravchenko@yadro.com>
2025-01-21 06:52:37 +00:00
1db62f9d95 [#185] Update SDK to support new tree/pool version
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2025-01-21 06:47:52 +00:00
e1b670a727 [#192] Build and host OCI images on our own infra
Similar to TrueCloudLab/frostfs-s3-gw#587
this PR introduces a CI pipeline that builds Docker images and pushes them
to our selfhosted registry.

Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-01-21 06:42:25 +00:00
9551f34f00 [#163] Support JSON bearer token
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2025-01-09 11:26:37 +03:00
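
A sketch based on the README's bearer-token example: the JSON token (e.g. the signed.json produced by `frostfs-cli util sign bearer-token`) is base64-encoded and passed either in the Authorization header or in the Bearer cookie; the integration tests in this range exercise both variants for JSON and binary tokens:

```shell
# JSON bearer token in the Authorization header:
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' \
    -H "Authorization: Bearer $(base64 -w 0 signed.json)" \
    http://localhost:8082/upload/$CID

# The same token passed via the Bearer cookie:
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' \
    --cookie "Bearer=$(base64 -w 0 signed.json)" \
    http://localhost:8082/upload/$CID
```
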
a4e3767d4b [#175] Adopt 1.6.* aio versions in integration tests
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-12-24 08:01:33 +00:00
d32ac4b537 Release v0.32.0
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-12-20 15:23:02 +03:00
a658f3adc0 [#181] index_page: Ignore deleted objects in versioned buckets
Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-12-17 13:06:57 +00:00
a945a947ac [#183] Unlink API.md from README file
This is useful for auto-generated documentation tools that parse the docs dir.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-12-17 13:03:02 +00:00
1be92fa4be [#166] Fix getting s3 object with the FrostFS OID name
For GET and HEAD requests, prioritize getting the s3 object whose key equals a valid FrostFS OID, rather than fetching a non-existent object by that OID via the native protocol.

Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-12-17 10:32:22 +03:00
dc100f03a6 [#174] Add fallback path to search
A fallback search path is needed because some software may keep the FileName attribute and ignore the FilePath attribute during file upload. Therefore, if this feature is enabled, then under certain conditions (for more information, see gate-configuration.md) a search is performed on the FileName attribute. A config sketch follows this entry.

Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-12-16 10:43:34 +00:00
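
A minimal config sketch, assuming a feature flag named along the lines of `enable_filepath_fallback` (the authoritative key and the conditions under which it applies are described in docs/gate-configuration.md):

```yaml
# Assumed option name, for illustration only: when enabled, a miss on the
# FilePath attribute search triggers a second search on FileName.
features:
  enable_filepath_fallback: true
```
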
bbc7c7367d [#179] Refine CODEOWNERS settings
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2024-12-10 16:18:08 +03:00
b9e44c603d [#178] Update frostfs-sdk-go with new tree service client
Add the tree service's GetBucketSettings and use it to check which protocol (S3 or native) to use. Also add mock implementations for this method and GetLatestVersion.

Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-12-09 15:09:08 +03:00
e81f01c2ab [#150] Add dropped logs metric
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-12-04 15:49:25 +03:00
a2f8cb6735 Release v0.31.0
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-11-20 11:09:31 +03:00
43764772aa [#151] index page: Add browse via native protocol
Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-11-19 17:33:21 +03:00
9c0b499ea6 [#164] Add tracing attributes
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-11-18 12:48:04 +00:00
22d905e51e [#165] Execute CI on push to master
Discussion:
    TrueCloudLab/frostfs-s3-gw#550

Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2024-11-15 11:41:13 +00:00
d5b92446bd [#162] Stop using obsolete .github directory
This commit is a part of multi-repo cleanup effort:
TrueCloudLab/frostfs-infra#136

Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2024-11-06 15:19:54 +03:00
679731ee52 [#161] Update SDK
Needed to pull in the fix for TrueCloudLab/frostfs-sdk-go#282

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-11-05 17:51:35 +03:00
821f8c2248 [#160] Add documentation for multinet settings
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-10-31 11:38:54 +03:00
8bc64ce5e9 [#160] Use source dialer for gRPC connection to storage
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-10-31 11:38:49 +03:00
69b7761bd6 [#160] Add internal/net package with multinet dialer source
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-10-31 11:38:41 +03:00
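
A sketch of what the multinet section could look like, modeled on the multinet settings used in other FrostFS components; the keys below are assumptions, and the documentation added in #160 is authoritative:

```yaml
# Assumed layout: route outgoing gRPC connections to the storage through
# specific source IPs depending on the destination subnet.
multinet:
  enabled: true
  balancer: roundrobin
  restrict: false
  fallback_delay: 300ms
  subnets:
    - mask: 192.168.130.0/24
      source_ips:
        - 192.168.130.100
```
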
46c63edd67 [#158] Support CORS
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-10-25 09:31:59 +03:00
901b8ff95b [#158] Fix integration test compilation error
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-10-25 09:30:58 +03:00
8dc5272965 [#158] Rework app settings
Update settings on SIGHUP using one lock/unlock operation

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-10-25 09:30:53 +03:00
70846fdaec [#157] Support the continuous use of interceptors
We can always add interceptors to the gRPC connection to the storage, since their actual use is controlled by the configuration from the frostfs-observability library.

Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-10-22 14:24:26 +00:00
fc86ab3511 [#148] Add trace_id to logs
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-10-17 11:00:43 +00:00
495f745535 [#142] Fix multipart-objects download
Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-10-15 17:17:29 +03:00
8fe8f2dcc2 [#137] Add index page support
Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-10-04 14:23:16 +03:00
77eb474581 [#147] Add sampling configuration
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-09-26 14:49:13 +00:00
c8473498ae [#146] Fix SIGHUP tracing docs
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-09-25 14:31:00 +03:00
a4233b006c [#144] Update frostfs-sdk-go
Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-09-24 18:17:21 +03:00
7e80f0cce6 [#139] Add root ca cert for telemetry configuration
Signed-off-by: Aleksey Savaitan <a.savaitan@yadro.com>
2024-09-17 11:06:10 +03:00
843708a558 [#134] Support percent-encoding
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-09-03 12:00:13 +00:00
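
An illustration of why this matters for the attribute route registered in app.go (`/get_by_attribute/{cid}/{attr_key}/{attr_val}`): attribute values containing spaces or non-ASCII characters can now be requested percent-encoded (the file name here is illustrative):

```shell
# "cat picture.jpeg" percent-encoded in the FileName attribute value:
$ curl http://localhost:8082/get_by_attribute/$CID/FileName/cat%20picture.jpeg
```
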
77ffde58e9 [#123] Add SECURITY.md
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-09-03 11:46:00 +00:00
ca426fff4d [#135] Add fuzzing tests for handlers
Signed-off-by: Roman Ognev <r.ognev@yadro.com>
2024-09-02 16:02:47 +03:00
151e5bc1c8 [#132] Update Go version
Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
2024-08-29 10:42:20 +03:00
5ee09790f0 [#126] Fix docker warnings
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-08-16 12:56:38 +03:00
fcf99d9a59 [#127] Split FrostFS ReadObject to separate methods
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-07-23 16:59:12 +03:00
f20ea67b46 Release v0.30.0
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-07-22 15:07:39 +03:00
9e2d1208cb [#129] Remove resolver duplicate
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-07-19 18:01:02 +03:00
418767c8ec [#129] Update FrostFS API and remove unused code
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-07-19 18:00:49 +03:00
16545bd3b0 [#124] Update SDK version
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-07-08 12:06:08 +03:00
d9cbd302b1 [#121] Add canonicalizer
Some headers, such as the 'Authorization' header, might be passed in a non-canonical way by proxy servers. The server does not normalize headers, so they could be mistaken for custom object attributes. Therefore, the app has to normalize all non-object-attribute headers by itself.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-06-26 11:21:21 +03:00
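
A sketch of the case being fixed: the web server disables fasthttp's own header-name normalization (see `DisableHeaderNamesNormalizing` in app.go), so a non-canonical spelling forwarded by a proxy must be canonicalized by the app itself:

```shell
# Lowercase spelling, as a proxy might forward it; after this change it is
# recognized as the Authorization header rather than being treated as a
# custom object attribute ($TOKEN is a placeholder for a bearer token):
$ curl -H "authorization: Bearer $TOKEN" http://localhost:8082/get/$CID/$OID
```
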
1737f1d95f [#117] Update tests
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-06-25 15:12:21 +00:00
0f22ca43c1 [#117] Fix FrostFS interface usage
The HTTP Gateway expects an io.Reader to work with the payload; however, the `WithPayload` flag reads the whole payload into the header object.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-06-25 15:12:21 +00:00
27478995b5 [#118] Replace ACLs with policies in readme
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-06-24 16:54:55 +03:00
3741e3b003 [#117] Add mocked handler for tests
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-18 12:04:14 +03:00
826dd0cdbe [#117] Fix integration test after updating dependencies
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-17 17:58:24 +03:00
23ed3ab86e [#114] Update frostfs-sdk-go version with support EC
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-06-05 15:41:36 +03:00
5a87ee7625 [#115] Fix ci build go version
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-05 15:28:06 +03:00
b73a4a25b3 [#115] go.mod: Update vulnerable dependencies
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-05 12:33:49 +03:00
5b7b872dcd [#112] Update net to v0.23.0
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-05-08 09:57:54 +03:00
c851c0529c [#112] Add integration test with bearer token
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-05-08 09:57:35 +03:00
16d6e6c34e [#112] tokens: Extend test coverage
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-05-06 21:01:53 +03:00
11965deb41 [#100] Server auto re-binding
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-04-04 14:19:33 +03:00
a95dc6c8c7 [#110] Update CHANGELOG
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-03-27 19:26:37 +03:00
f39b3aa93a [#110] Add "h2" as next proto to allow HTTP/2 requests in http.Serve
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-03-27 19:25:45 +03:00
6695ebe5a0 [#110] Test HTTP/2 requests
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-03-27 19:25:34 +03:00
c6383fc135 [#107] Update CHANGELOG.md
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 12:52:52 +03:00
5ded105c09 [#107] Check query unescape errors
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 12:50:56 +03:00
88e32ddd7f [#107] Add return on error in tokenizer middleware
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 12:30:33 +03:00
007d278caa [#107] Close server listener on error
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 12:14:37 +03:00
7ec9b34d33 [#105] logger: Fix logging level changing for journald
Signed-off-by: Artem Tataurov <a.tataurov@yadro.com>
2024-02-16 17:50:46 +03:00
5470916361 [#104] journald update
We want to have fewer useless fields in logs

Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-01-29 16:04:25 +03:00
c038957649 [#103] .forgejo: Check only PR commits in dco-go
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-01-26 15:12:13 +00:00
ce4ec032f9 [#103] .forgejo: Update dco-go to v3
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-01-26 15:12:13 +00:00
4049255eed [#102] Port release v0.28.1 changelog
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-01-24 17:50:59 +03:00
2c95250f72 [#99] Fix possibility of panic during SIGHUP
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-01-09 11:00:48 +03:00
5ae75eb9d8 [#94] Update api-go to fix stable marshal of empty structs
The newer version of api-go does not ignore non-nil empty structures in protobuf messages, so compatibility with the previous version is preserved.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2023-12-07 16:57:28 +03:00
627294bf70 [#92] Support configuring max tree request attempts
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-12-07 16:57:28 +03:00
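
The 0.28.1 changelog below names this parameter `frostfs.tree_pool_max_attempts`; a config sketch with an illustrative value:

```yaml
frostfs:
  # Maximum number of attempts for a tree service request before giving up.
  tree_pool_max_attempts: 2
```
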
0ef3e18ee1 [#92] Set tree request id
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-12-07 16:56:16 +03:00
2e28b2ac85 Release v0.28.0
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2023-12-07 16:28:12 +03:00
a375af7d98 [#91] Add namespaces support
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-12-01 10:12:55 +00:00
dc8d0d4ab3 [#95] Add dirty version check
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2023-11-22 11:58:21 +03:00
7fa973b261 [#89] Add support for zapjournald logger configuration
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-11-09 16:21:29 +03:00
1ced82a714 [#70] Fix log messages (move to constants)
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-12 12:08:20 +00:00
49d6a27562 [#70] Adjust status codes
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-12 12:08:20 +00:00
9a5a2239bd [#70] Support bucket/container caching
Mainly it was added because we need to know whether TZ hashing is disabled for a container

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-12 12:08:20 +00:00
8bc246f8f9 [#70] Support configuring buffer size for put
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-12 12:08:20 +00:00
9b34413e17 [#70] Support client cut
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-12 12:08:20 +00:00
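
Both knobs are named in the 0.29.0 changelog below (`frostfs.client_cut` and `frostfs.buffer_max_size_for_put`); a combined sketch with illustrative values:

```yaml
frostfs:
  # Slice large objects on the gateway side instead of on the storage node.
  client_cut: true
  # Buffer size limit for PUT payloads, in bytes (value illustrative).
  buffer_max_size_for_put: 1048576
```
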
e61b4867c9 [#70] Update SDK to support client cut
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-12 12:08:20 +00:00
84eb57475b [#85] Fix get latest version node
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-10-09 09:59:52 +03:00
e26577e753 [#74] Replace atomics with mutex for reloadable params
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-09-21 16:25:28 +03:00
d219943542 [#73] Uploader, downloader structures refactoring
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-09-05 18:18:04 +03:00
add07a21ed [#71] Add log constants linter
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-09-05 13:15:12 +00:00
40568590c7 [#72] Support soft memory limit setting
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-09-05 13:14:30 +00:00
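
The changelog entry for this change names the setting `runtime.soft_memory_limit`; a sketch (the value and unit syntax are illustrative, see gate-configuration.md):

```yaml
runtime:
  # Soft memory limit handed to the Go runtime (debug.SetMemoryLimit).
  soft_memory_limit: 1gb
```
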
834d5b93e5 [#69] Fix postinstall script
The post-install script changes rights for the user dir. After the user (home) dir was changed, this dir isn't created anymore, so the post-install script fails. This commit adds the useradd flag `-m` to create the user dir.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2023-09-01 14:19:26 +03:00
93 changed files with 9416 additions and 3702 deletions

Dockerfile

@@ -1,9 +1,9 @@
FROM golang:1.21-alpine as basebuilder
FROM golang:1.22-alpine AS basebuilder
RUN apk add --update make bash ca-certificates
FROM basebuilder as builder
ENV GOGC off
ENV CGO_ENABLED 0
FROM basebuilder AS builder
ENV GOGC=off
ENV CGO_ENABLED=0
ARG BUILD=now
ARG VERSION=dev
ARG REPO=repository

Image file changed (5.5 KiB before and after; binary diff not shown)


@@ -1,4 +1,8 @@
on: [pull_request]
on:
pull_request:
push:
branches:
- master
jobs:
builds:
@@ -6,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.20', '1.21' ]
go_versions: [ '1.22', '1.23' ]
fail-fast: false
steps:
- uses: actions/checkout@v3
@@ -18,3 +22,6 @@ jobs:
- name: Build binary
run: make
- name: Check dirty suffix
run: if [[ $(make version) == *"dirty"* ]]; then echo "Version has dirty suffix" && exit 1; fi


@@ -12,9 +12,9 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
go-version: '1.23'
- name: Run commit format checker
uses: https://git.frostfs.info/TrueCloudLab/dco-go@v1
uses: https://git.frostfs.info/TrueCloudLab/dco-go@v3
with:
from: adb95642d
from: 'origin/${{ github.event.pull_request.base.ref }}'


@@ -0,0 +1,27 @@
on:
pull_request:
push:
workflow_dispatch:
jobs:
image:
name: OCI image
runs-on: docker
container: git.frostfs.info/truecloudlab/env:oci-image-builder-bookworm
steps:
- name: Clone git repo
uses: actions/checkout@v3
- name: Build OCI image
run: make image
- name: Push image to OCI registry
run: |
echo "$REGISTRY_PASSWORD" \
| docker login --username truecloudlab --password-stdin git.frostfs.info
make image-push
if: >-
startsWith(github.ref, 'refs/tags/v') &&
(github.event_name == 'workflow_dispatch' || github.event_name == 'push')
env:
REGISTRY_PASSWORD: ${{secrets.FORGEJO_OCI_REGISTRY_PUSH_TOKEN}}


@@ -1,4 +1,8 @@
on: [pull_request]
on:
pull_request:
push:
branches:
- master
jobs:
lint:
@@ -7,17 +11,24 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: golangci-lint
uses: https://github.com/golangci/golangci-lint-action@v2
- name: Set up Go
uses: actions/setup-go@v3
with:
version: latest
go-version: '1.23'
cache: true
- name: Install linters
run: make lint-install
- name: Run linters
run: make lint
tests:
name: Tests
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.20', '1.21' ]
go_versions: [ '1.22', '1.23' ]
fail-fast: false
steps:
- uses: actions/checkout@v3
@@ -31,4 +42,20 @@ jobs:
run: make dep
- name: Run tests
run: make test
run: make test
integration:
name: Integration tests
runs-on: oci-runner
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.23'
- name: Run integration tests
run: |-
podman-service.sh
make integration-test


@@ -1,4 +1,8 @@
on: [pull_request]
on:
pull_request:
push:
branches:
- master
jobs:
vulncheck:
@@ -12,7 +16,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
go-version: '1.22.11'
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest

.github/CODEOWNERS (deleted)

@@ -1 +0,0 @@
* @alexvanin @dkirillov

.golangci.yml

@@ -12,7 +12,8 @@ run:
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
format: tab
formats:
- format: tab
# all available settings of specific linters
linters-settings:
@@ -24,6 +25,16 @@ linters-settings:
govet:
# report about shadowed variables
check-shadowing: false
custom:
truecloudlab-linters:
path: bin/external_linters.so
original-url: git.frostfs.info/TrueCloudLab/linters.git
settings:
noliteral:
enable: true
target-methods: ["Fatal"]
disable-packages: ["req", "r"]
constants-package: "git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
linters:
enable:
@@ -45,6 +56,7 @@ linters:
- gofmt
- whitespace
- goimports
- truecloudlab-linters
disable-all: true
fast: false

.pre-commit-config.yaml

@@ -30,16 +30,23 @@ repos:
hooks:
- id: shellcheck
- repo: https://github.com/golangci/golangci-lint
rev: v1.51.2
hooks:
- id: golangci-lint
- repo: local
hooks:
- id: go-unit-tests
name: go unit tests
entry: make test
pass_filenames: false
types: [go]
language: system
- id: make-lint-install
name: install linters
entry: make lint-install
language: system
pass_filenames: false
- id: make-lint
name: run linters
entry: make lint
language: system
pass_filenames: false
- id: go-unit-tests
name: go unit tests
entry: make test
pass_filenames: false
types: [go]
language: system

CHANGELOG.md

@@ -4,20 +4,147 @@ This document outlines major changes between releases.
## [Unreleased]
### Added
- Add handling quota limit reached error (#187)
## [0.32.2] - 2025-02-03
### Fixed
- Possible memory leak in gRPC client (#202)
## [0.32.1] - 2025-01-27
### Fixed
- SIGHUP panic (#198)
## [0.32.0] - Khumbu - 2024-12-20
### Fixed
- Getting S3 object with FrostFS Object ID-like key (#166)
- Ignore delete marked objects in versioned bucket in index page (#181)
### Added
- Metric of dropped logs by log sampler (#150)
- Fallback FileName attribute search during FilePath attribute search (#174)
### Changed
- Updated tree service pool without api-go dependency (#178)
## [0.31.0] - Rongbuk - 2024-11-20
### Fixed
- Docker warnings during image build (#126)
- `trace_id` parameter in logs (#148)
- SIGHUP support for `tracing.enabled` config parameter (#157)
### Added
- Vulnerability report document (#123)
- Root CA configuration for tracing (#139)
- Log sampling policy configuration (#147)
- Index page support for buckets and containers (#137, #151)
- CORS support (#158)
- Source IP binding configuration for FrostFS requests (#160)
- Tracing attributes (#164)
### Changed
- Updated Go version to 1.22 (#132)
### Removed
- Duplicated NNS Resolver code (#129)
## [0.30.3] - 2024-10-18
### Fixed
- Get response on S3 multipart object (#142)
### Added
- Support percent-encoding for GET queries (#134)
### Changed
- Split `FrostFS` interface into separate read methods (#127)
## [0.30.2] - 2024-09-03
### Added
- Fuzzing tests (#135)
## [0.30.1] - 2024-08-20
### Fixed
- Error counting in pool component before connection switch (#131)
### Added
- Log of endpoint address during tree pool errors (#131)
## [0.30.0] - Kangshung - 2024-07-22
### Fixed
- Handle query unescape and invalid bearer token errors (#107)
- Fix HTTP/2 requests (#110)
### Added
- Add new `reconnect_interval` config param (#100)
- Erasure coding support in placement policy (#114)
- HTTP Header canonicalizer for well-known headers (#121)
### Changed
- Improve test coverage (#112, #117)
- Bumped vulnerable dependencies (#115)
- Replace extended ACL examples with policies in README (#118)
### Removed
## [0.29.0] - Zemu - 2024-05-27
### Fixed
- Fix possibility of panic during SIGHUP (#99)
- Handle query unescape and invalid bearer token errors (#108)
- Fix log-level change on SIGHUP (#105)
### Added
- Support client side object cut (#70)
- Add `frostfs.client_cut` config param
- Add `frostfs.buffer_max_size_for_put` config param
- Add bucket/container caching
- Disable homomorphic hash for PUT if it's disabled in container itself
- Add new `logger.destination` config param with journald support (#89, #104)
- Add support namespaces (#91)
### Changed
- Replace atomics with mutex for reloadable params (#74)
## [0.28.1] - 2024-01-24
### Added
- Tree pool traversal limit (#92)
### Update from 0.28.0
See new `frostfs.tree_pool_max_attempts` config parameter.
## [0.28.0] - Academy of Sciences - 2023-12-07
### Fixed
- `grpc` schemas in tree configuration (#62)
- `GetSubTree` failures (#67)
- Debian packaging (#69, #90)
- Get latest version of tree node (#85)
### Added
- Support dump metrics descriptions (#29)
- Support impersonate bearer token (#40, #45)
- Tracing support (#20, #44, #60)
- Object name resolving with tree service (#30)
- Metrics for current endpoint status (#77)
- Soft memory limit with `runtime.soft_memory_limit` (#72)
- Add selection of the node of the latest version of the object (#85)
### Changed
- Update prometheus to v1.15.0 (#35)
- Update go version to 1.19 (#50)
- Finish rebranding (#2)
- Use gate key to form object owner (#66)
- Move log messages to constants (#36)
- Uploader and downloader refactor (#73)
### Removed
- Drop `tree.service` param (now endpoints from `peers` section are used) (#59)
@@ -61,4 +188,15 @@ This project is a fork of [NeoFS HTTP Gateway](https://github.com/nspcc-dev/neof
To see CHANGELOG for older versions, refer to https://github.com/nspcc-dev/neofs-http-gw/blob/master/CHANGELOG.md.
[0.27.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/72734ab4...v0.27.0
[Unreleased]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.27.0...master
[0.28.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.27.0...v0.28.0
[0.28.1]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.28.0...v0.28.1
[0.29.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.28.1...v0.29.0
[0.30.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.29.0...v0.30.0
[0.30.1]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.0...v0.30.1
[0.30.2]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.1...v0.30.2
[0.30.3]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.2...v0.30.3
[0.31.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.30.3...v0.31.0
[0.32.0]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.31.0...v0.32.0
[0.32.1]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.32.0...v0.32.1
[0.32.2]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.32.1...v0.32.2
[Unreleased]: https://git.frostfs.info/TrueCloudLab/frostfs-http-gw/compare/v0.32.2...master

CODEOWNERS (new file)

@@ -0,0 +1,3 @@
.* @TrueCloudLab/storage-services-developers @TrueCloudLab/storage-services-committers
.forgejo/.* @potyarkin
Makefile @potyarkin

Makefile

@@ -2,19 +2,24 @@
REPO ?= $(shell go list -m)
VERSION ?= $(shell git describe --tags --match "v*" --dirty --always --abbrev=8 2>/dev/null || cat VERSION 2>/dev/null || echo "develop")
GO_VERSION ?= 1.20
LINT_VERSION ?= 1.49.0
GO_VERSION ?= 1.22
LINT_VERSION ?= 1.60.3
TRUECLOUDLAB_LINT_VERSION ?= 0.0.6
BUILD ?= $(shell date -u --iso=seconds)
HUB_IMAGE ?= truecloudlab/frostfs-http-gw
HUB_IMAGE ?= git.frostfs.info/truecloudlab/frostfs-http-gw
HUB_TAG ?= "$(shell echo ${VERSION} | sed 's/^v//')"
METRICS_DUMP_OUT ?= ./metrics-dump.json
OUTPUT_LINT_DIR ?= $(shell pwd)/bin
LINT_DIR = $(OUTPUT_LINT_DIR)/golangci-lint-$(LINT_VERSION)-v$(TRUECLOUDLAB_LINT_VERSION)
TMP_DIR := .cache
# List of binaries to build. For now just one.
BINDIR = bin
DIRS = $(BINDIR)
BINS = $(BINDIR)/frostfs-http-gw
CMDS = $(addprefix frostfs-, $(notdir $(wildcard cmd/*)))
BINS = $(addprefix $(BINDIR)/, $(CMDS))
.PHONY: all $(BINS) $(DIRS) dep docker/ test cover fmt image image-push dirty-image lint docker/lint pre-commit unpre-commit version clean
@@ -25,6 +30,11 @@ PKG_VERSION ?= $(shell echo $(VERSION) | sed "s/^v//" | \
sed "s/-/~/")-${OS_RELEASE}
.PHONY: debpackage debclean
FUZZ_NGFUZZ_DIR ?= ""
FUZZ_TIMEOUT ?= 30
FUZZ_FUNCTIONS ?= "all"
FUZZ_AUX ?= ""
# Make all binaries
all: $(BINS)
$(BINS): $(DIRS) dep
@@ -32,7 +42,7 @@ $(BINS): $(DIRS) dep
CGO_ENABLED=0 \
go build -v -trimpath \
-ldflags "-X main.Version=$(VERSION)" \
-o $@ ./
-o $@ ./cmd/$(subst frostfs-,,$(notdir $@))
$(DIRS):
@echo "⇒ Ensure dir: $@"
@@ -73,6 +83,35 @@ cover:
@go test -v -race ./... -coverprofile=coverage.txt -covermode=atomic
@go tool cover -html=coverage.txt -o coverage.html
# Run fuzzing
CLANG := $(shell which clang-17 2>/dev/null)
.PHONY: check-clang all
check-clang:
ifeq ($(CLANG),)
@echo "clang-17 is not installed. Please install it before proceeding - https://apt.llvm.org/llvm.sh "
@exit 1
endif
.PHONY: check-ngfuzz all
check-ngfuzz:
@if [ -z "$(FUZZ_NGFUZZ_DIR)" ]; then \
echo "Please set a variable FUZZ_NGFUZZ_DIR to specify path to the ngfuzz"; \
exit 1; \
fi
.PHONY: install-fuzzing-deps
install-fuzzing-deps: check-clang check-ngfuzz
.PHONY: fuzz
fuzz: install-fuzzing-deps
@START_PATH=$$(pwd); \
ROOT_PATH=$$(realpath --relative-to=$(FUZZ_NGFUZZ_DIR) $$START_PATH) ; \
cd $(FUZZ_NGFUZZ_DIR) && \
./ngfuzz -clean && \
./ngfuzz -fuzz $(FUZZ_FUNCTIONS) -rootdir $$ROOT_PATH -timeout $(FUZZ_TIMEOUT) $(FUZZ_AUX) && \
./ngfuzz -report
# Reformat code
fmt:
@echo "⇒ Processing gofmt check"
@@ -85,7 +124,7 @@ image:
--build-arg REPO=$(REPO) \
--build-arg VERSION=$(VERSION) \
--rm \
-f Dockerfile \
-f .docker/Dockerfile \
-t $(HUB_IMAGE):$(HUB_TAG) .
# Push Docker image to the hub
@@ -100,12 +139,26 @@ dirty-image:
--build-arg REPO=$(REPO) \
--build-arg VERSION=$(VERSION) \
--rm \
-f Dockerfile.dirty \
-f .docker/Dockerfile.dirty \
-t $(HUB_IMAGE)-dirty:$(HUB_TAG) .
# Install linters
lint-install:
@mkdir -p $(TMP_DIR)
@rm -rf $(TMP_DIR)/linters
@git -c advice.detachedHead=false clone --branch v$(TRUECLOUDLAB_LINT_VERSION) https://git.frostfs.info/TrueCloudLab/linters.git $(TMP_DIR)/linters
@@make -C $(TMP_DIR)/linters lib CGO_ENABLED=1 OUT_DIR=$(OUTPUT_LINT_DIR)
@rm -rf $(TMP_DIR)/linters
@rmdir $(TMP_DIR) 2>/dev/null || true
@CGO_ENABLED=1 GOBIN=$(LINT_DIR) go install github.com/golangci/golangci-lint/cmd/golangci-lint@v$(LINT_VERSION)
# Run linters
lint:
@golangci-lint --timeout=5m run
@if [ ! -d "$(LINT_DIR)" ]; then \
echo "Run make lint-install"; \
exit 1; \
fi
$(LINT_DIR)/golangci-lint --timeout=5m run
# Run linters in Docker
docker/lint:
@@ -130,7 +183,7 @@ version:
# Clean up
clean:
rm -rf vendor
rm -rf $(BINDIR)
rm -rf $(BINDIR)
# Package for Debian
debpackage:

README.md

@@ -1,5 +1,5 @@
<p align="center">
<img src="./.github/logo.svg" width="500px" alt="FrostFS logo">
<img src="./.forgejo/logo.svg" width="500px" alt="FrostFS logo">
</p>
<p align="center">
<a href="https://frostfs.info">FrostFS</a> is a decentralized distributed object storage integrated with the <a href="https://neo.org">NEO Blockchain</a>.
@@ -38,7 +38,7 @@ version Show current version
```
Or you can also use a [Docker
image](https://hub.docker.com/r/truecloudlab/frostfs-http-gw) provided for the released
image](https://git.frostfs.info/TrueCloudLab/-/packages/container/frostfs-http-gw) provided for the released
(and occasionally unreleased) versions of the gateway (`:latest` points to the
latest stable release).
@@ -217,41 +217,8 @@ Also, in case of downloading, you need to have a file inside a container.
### NNS
In all download/upload routes you can use container name instead of its id (`$CID`).
Read more about it in [docs/nns.md](./docs/nns.md).
Steps to start using name resolving:
1. Enable NNS resolving in config (`rpc_endpoint` must be a valid neo rpc node, see [configs](./config) for other examples):
```yaml
rpc_endpoint: http://morph-chain.frostfs.devenv:30333
resolve_order:
- nns
```
2. Make sure your container is registered in NNS contract. If you use [frostfs-dev-env](https://git.frostfs.info/TrueCloudLab/frostfs-dev-env)
you can check if your container (e.g. with `container-name` name) is registered in NNS:
```shell
$ curl -s --data '{"id":1,"jsonrpc":"2.0","method":"getcontractstate","params":[1]}' \
http://morph-chain.frostfs.devenv:30333 | jq -r '.result.hash'
0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667
$ docker exec -it morph_chain neo-go \
contract testinvokefunction \
-r http://morph-chain.frostfs.devenv:30333 0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667 \
resolve string:container-name.container int:16 \
| jq -r '.stack[0].value | if type=="array" then .[0].value else . end' \
| base64 -d && echo
7f3vvkw4iTiS5ZZbu5BQXEmJtETWbi3uUjLNaSs29xrL
```
3. Use container name instead of its `$CID`. For example:
```shell
$ curl http://localhost:8082/get_by_attribute/container-name/FileName/object-name
```
#### Create a container
@@ -462,126 +429,7 @@ object ID, like this:
#### Authentication
You can always upload files to public containers (open for anyone to put
objects into), but for restricted containers you need to explicitly allow PUT
operations for a request signed with your HTTP Gateway keys.
If you don't want to manage the gateway's secret keys and adjust eACL rules whenever the gateway configuration changes (new gate, key rotation, etc.), or you plan to use public services, there is an option to let your application backend (or you) issue Bearer Tokens and pass them from the client via the gate down to the FrostFS level to grant access.
A FrostFS Bearer Token is basically container owner-signed ACL data (refer to the FrostFS documentation for more details). There are two options to pass it to the gateway:
* "Authorization" header with "Bearer" type and base64-encoded token in
credentials field
* "Bearer" cookie with base64-encoded token contents
For example, you have a mobile application frontend with a backend part storing
data in FrostFS. When a user authorizes in the mobile app, the backend issues a FrostFS
Bearer token and provides it to the frontend. Then, the mobile app may generate
some data and upload it via any available FrostFS HTTP Gateway by adding
the corresponding header to the upload request. Accessing the ACL protected data
works the same way.
##### Example
In order to generate a bearer token, you need to have a wallet (which will be used to sign the token) and the address of the sender who will make the request to FrostFS (in our case, it's a gateway wallet address).
Suppose we have:
* **NhVtreTTCoqsMQV5Wp55fqnriiUCpEaKm3** (token owner (gateway address))
First, we need to encode the container id and the sender address to base64 (they are currently in base58). So use the **base58** and **base64** utils.
1. Encoding token owner id:
```
$ echo 'NhVtreTTCoqsMQV5Wp55fqnriiUCpEaKm3' | base58 --decode | base64
# output: NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg==
```
2. Form a Bearer token (10000 is lifetime expiration in epoch) and save it to **bearer.json**:
```
{
"body": {
"allowImpersonate": true,
"ownerID": {
"value": "NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg=="
},
"lifetime": {
"exp": "10000",
"nbf": "0",
"iat": "0"
}
},
"signature": null
}
```
3. Sign it with the wallet:
```
$ frostfs-cli util sign bearer-token --from bearer.json --to signed.json -w ./wallet.json
```
4. Encode to base64 to use in header:
```
$ base64 -w 0 signed.json
# output: Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==
```
After that, the Bearer token can be used:
```
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' -H "Authorization: Bearer Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==" \
http://localhost:8082/upload/BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
# output:
# {
# "object_id": "DhfES9nVrFksxGDD2jQLunGADfrXExxNwqXbDafyBn9X",
# "container_id": "BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K"
# }
```
##### Note
For the token to work correctly, you need to create a container with a basic ACL that:
1. Allows the PUT operation for others
2. Doesn't set the "final" bit
For example:
```
$ frostfs-cli -w ./wallet.json --basic-acl 0x0FFFCFFF -r 192.168.130.72:8080 container create --policy "REP 3" --await
```
To deny access to a container without a token, set the eACL rules:
```
$ frostfs-cli -w ./wallet.json -r 192.168.130.72:8080 container set-eacl --table eacl.json --await --cid BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
```
File **eacl.json**:
```
{
"version": {
"major": 0,
"minor": 0
},
"containerID": {
"value": "mRnZWzewzxjzIPa7Fqlfqdl3TM1KpJ0YnsXsEhafJJg="
},
"records": [
{
"operation": "PUT",
"action": "DENY",
"filters": [],
"targets": [
{
"role": "OTHERS",
"keys": []
}
]
}
]
}
```
Read more about request authentication in [docs/authentication.md](./docs/authemtnication.md)
### Metrics and Pprof
@@ -592,3 +440,26 @@ See [configuration](./docs/gate-configuration.md).
## Credits
Please see [CREDITS](CREDITS.md) for details.
## Fuzzing
To run fuzzing tests use the following command:
```shell
$ make fuzz
```
This command will install dependencies for the fuzzing process and run existing fuzzing tests.
You can also use the following arguments:
```
FUZZ_TIMEOUT - time to run each fuzzing test (default 30)
FUZZ_FUNCTIONS - fuzzing tests that will be started (default "all")
FUZZ_AUX - additional parameters for the fuzzer (for example, "-debug")
FUZZ_NGFUZZ_DIR - path to ngfuzz tool
```
## Credits
Please see [CREDITS](CREDITS.md) for details.

SECURITY.md (new file)

@@ -0,0 +1,26 @@
# Security Policy
## How To Report a Vulnerability
If you think you have found a vulnerability in this repository, please report it to us through coordinated disclosure.
**Please do not report security vulnerabilities through public issues, discussions, or change requests.**
Instead, you can report it using one of the following ways:
* Contact the [TrueCloudLab Security Team](mailto:security@frostfs.info) via email
Please include as much of the information listed below as you can to help us better understand and resolve the issue:
* The type of issue (e.g., buffer overflow, or cross-site scripting)
* Affected version(s)
* Impact of the issue, including how an attacker might exploit the issue
* Step-by-step instructions to reproduce the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Full paths of source file(s) related to the manifestation of the issue
* Any special configuration required to reproduce the issue
* Any log files that are related to this issue (if possible)
* Proof-of-concept or exploit code (if possible)
This information will help us triage your report more quickly.

VERSION

@@ -1 +1 @@
v0.27.0
v0.32.2

app.go (deleted)

@@ -1,598 +0,0 @@
package main
import (
"context"
"fmt"
"net/http"
"os"
"os/signal"
"sync"
"syscall"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/downloader"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/frostfs/services"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/response"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tree"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/uploader"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
treepool "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool/tree"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/fasthttp/router"
"github.com/nspcc-dev/neo-go/cli/flags"
"github.com/nspcc-dev/neo-go/cli/input"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/spf13/viper"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
type (
app struct {
ctx context.Context
log *zap.Logger
logLevel zap.AtomicLevel
pool *pool.Pool
treePool *treepool.Pool
key *keys.PrivateKey
owner *user.ID
cfg *viper.Viper
webServer *fasthttp.Server
webDone chan struct{}
resolver *resolver.ContainerResolver
metrics *gateMetrics
services []*metrics.Service
settings *appSettings
servers []Server
}
appSettings struct {
Uploader *uploader.Settings
Downloader *downloader.Settings
}
// App is an interface for the main gateway function.
App interface {
Wait()
Serve()
}
// Option is an application option.
Option func(a *app)
gateMetrics struct {
logger *zap.Logger
provider *metrics.GateMetrics
mu sync.RWMutex
enabled bool
}
)
// WithLogger returns Option to set a specific logger.
func WithLogger(l *zap.Logger, lvl zap.AtomicLevel) Option {
return func(a *app) {
if l == nil {
return
}
a.log = l
a.logLevel = lvl
}
}
// WithConfig returns Option to use specific Viper configuration.
func WithConfig(c *viper.Viper) Option {
return func(a *app) {
if c == nil {
return
}
a.cfg = c
}
}
func newApp(ctx context.Context, opt ...Option) App {
a := &app{
ctx: ctx,
log: zap.L(),
cfg: viper.GetViper(),
webServer: new(fasthttp.Server),
webDone: make(chan struct{}),
}
for i := range opt {
opt[i](a)
}
// -- setup FastHTTP server --
a.webServer.Name = "frost-http-gw"
a.webServer.ReadBufferSize = a.cfg.GetInt(cfgWebReadBufferSize)
a.webServer.WriteBufferSize = a.cfg.GetInt(cfgWebWriteBufferSize)
a.webServer.ReadTimeout = a.cfg.GetDuration(cfgWebReadTimeout)
a.webServer.WriteTimeout = a.cfg.GetDuration(cfgWebWriteTimeout)
a.webServer.DisableHeaderNamesNormalizing = true
a.webServer.NoDefaultServerHeader = true
a.webServer.NoDefaultContentType = true
a.webServer.MaxRequestBodySize = a.cfg.GetInt(cfgWebMaxRequestBodySize)
a.webServer.DisablePreParseMultipartForm = true
a.webServer.StreamRequestBody = a.cfg.GetBool(cfgWebStreamRequestBody)
// -- -- -- -- -- -- -- -- -- -- -- -- -- --
a.pool, a.treePool, a.key = getPools(ctx, a.log, a.cfg)
var owner user.ID
user.IDFromKey(&owner, a.key.PrivateKey.PublicKey)
a.owner = &owner
a.initAppSettings()
a.initResolver()
a.initMetrics()
a.initTracing(ctx)
return a
}
func (a *app) initAppSettings() {
a.settings = &appSettings{
Uploader: &uploader.Settings{},
Downloader: &downloader.Settings{},
}
a.updateSettings()
}
func (a *app) initResolver() {
var err error
a.resolver, err = resolver.NewContainerResolver(a.getResolverConfig())
if err != nil {
a.log.Fatal(logs.FailedToCreateResolver, zap.Error(err))
}
}
func (a *app) getResolverConfig() ([]string, *resolver.Config) {
resolveCfg := &resolver.Config{
FrostFS: resolver.NewFrostFSResolver(a.pool),
RPCAddress: a.cfg.GetString(cfgRPCEndpoint),
}
order := a.cfg.GetStringSlice(cfgResolveOrder)
if resolveCfg.RPCAddress == "" {
order = remove(order, resolver.NNSResolver)
a.log.Warn(logs.ResolverNNSWontBeUsedSinceRPCEndpointIsntProvided)
}
if len(order) == 0 {
a.log.Info(logs.ContainerResolverWillBeDisabledBecauseOfResolversResolverOrderIsEmpty)
}
return order, resolveCfg
}
func (a *app) initMetrics() {
gateMetricsProvider := metrics.NewGateMetrics(a.pool)
a.metrics = newGateMetrics(a.log, gateMetricsProvider, a.cfg.GetBool(cfgPrometheusEnabled))
a.metrics.SetHealth(metrics.HealthStatusStarting)
}
func newGateMetrics(logger *zap.Logger, provider *metrics.GateMetrics, enabled bool) *gateMetrics {
if !enabled {
logger.Warn(logs.MetricsAreDisabled)
}
return &gateMetrics{
logger: logger,
provider: provider,
enabled: enabled,
}
}
func (m *gateMetrics) isEnabled() bool {
m.mu.RLock()
defer m.mu.RUnlock()
return m.enabled
}
func (m *gateMetrics) SetEnabled(enabled bool) {
if !enabled {
m.logger.Warn(logs.MetricsAreDisabled)
}
m.mu.Lock()
m.enabled = enabled
m.mu.Unlock()
}
func (m *gateMetrics) SetHealth(status metrics.HealthStatus) {
if !m.isEnabled() {
return
}
m.provider.SetHealth(status)
}
func (m *gateMetrics) SetVersion(ver string) {
if !m.isEnabled() {
return
}
m.provider.SetVersion(ver)
}
func (m *gateMetrics) Shutdown() {
m.mu.Lock()
if m.enabled {
m.provider.SetHealth(metrics.HealthStatusShuttingDown)
m.enabled = false
}
m.provider.Unregister()
m.mu.Unlock()
}
func (m *gateMetrics) MarkHealthy(endpoint string) {
if !m.isEnabled() {
return
}
m.provider.MarkHealthy(endpoint)
}
func (m *gateMetrics) MarkUnhealthy(endpoint string) {
if !m.isEnabled() {
return
}
m.provider.MarkUnhealthy(endpoint)
}
func remove(list []string, element string) []string {
for i, item := range list {
if item == element {
return append(list[:i], list[i+1:]...)
}
}
return list
}
func getFrostFSKey(cfg *viper.Viper, log *zap.Logger) (*keys.PrivateKey, error) {
walletPath := cfg.GetString(cfgWalletPath)
if len(walletPath) == 0 {
log.Info(logs.NoWalletPathSpecifiedCreatingEphemeralKeyAutomaticallyForThisRun)
key, err := keys.NewPrivateKey()
if err != nil {
return nil, err
}
return key, nil
}
w, err := wallet.NewWalletFromFile(walletPath)
if err != nil {
return nil, err
}
var password *string
if cfg.IsSet(cfgWalletPassphrase) {
pwd := cfg.GetString(cfgWalletPassphrase)
password = &pwd
}
address := cfg.GetString(cfgWalletAddress)
return getKeyFromWallet(w, address, password)
}
func getKeyFromWallet(w *wallet.Wallet, addrStr string, password *string) (*keys.PrivateKey, error) {
var addr util.Uint160
var err error
if addrStr == "" {
addr = w.GetChangeAddress()
} else {
addr, err = flags.ParseAddress(addrStr)
if err != nil {
return nil, fmt.Errorf("invalid address")
}
}
acc := w.GetAccount(addr)
if acc == nil {
return nil, fmt.Errorf("couldn't find wallet account for %s", addrStr)
}
if password == nil {
pwd, err := input.ReadPassword("Enter password > ")
if err != nil {
return nil, fmt.Errorf("couldn't read password")
}
password = &pwd
}
if err := acc.Decrypt(*password, w.Scrypt); err != nil {
return nil, fmt.Errorf("couldn't decrypt account: %w", err)
}
return acc.PrivateKey(), nil
}
func (a *app) Wait() {
a.log.Info(logs.StartingApplication, zap.String("app_name", "frostfs-http-gw"), zap.String("version", Version))
a.metrics.SetVersion(Version)
a.setHealthStatus()
<-a.webDone // wait for web-server to be stopped
}
func (a *app) setHealthStatus() {
a.metrics.SetHealth(metrics.HealthStatusReady)
}
func (a *app) Serve() {
uploadRoutes := uploader.New(a.AppParams(), a.settings.Uploader)
downloadRoutes := downloader.New(a.AppParams(), a.settings.Downloader, tree.NewTree(services.NewPoolWrapper(a.treePool)))
// Configure router.
a.configureRouter(uploadRoutes, downloadRoutes)
a.startServices()
a.initServers(a.ctx)
for i := range a.servers {
go func(i int) {
a.log.Info(logs.StartingServer, zap.String("address", a.servers[i].Address()))
if err := a.webServer.Serve(a.servers[i].Listener()); err != nil && err != http.ErrServerClosed {
a.metrics.MarkUnhealthy(a.servers[i].Address())
a.log.Fatal(logs.ListenAndServe, zap.Error(err))
}
}(i)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGHUP)
LOOP:
for {
select {
case <-a.ctx.Done():
break LOOP
case <-sigs:
a.configReload(a.ctx)
}
}
a.log.Info(logs.ShuttingDownWebServer, zap.Error(a.webServer.Shutdown()))
a.metrics.Shutdown()
a.stopServices()
a.shutdownTracing()
close(a.webDone)
}
func (a *app) shutdownTracing() {
const tracingShutdownTimeout = 5 * time.Second
shdnCtx, cancel := context.WithTimeout(context.Background(), tracingShutdownTimeout)
defer cancel()
if err := tracing.Shutdown(shdnCtx); err != nil {
a.log.Warn(logs.FailedToShutdownTracing, zap.Error(err))
}
}
func (a *app) configReload(ctx context.Context) {
a.log.Info(logs.SIGHUPConfigReloadStarted)
if !a.cfg.IsSet(cmdConfig) && !a.cfg.IsSet(cmdConfigDir) {
a.log.Warn(logs.FailedToReloadConfigBecauseItsMissed)
return
}
if err := readInConfig(a.cfg); err != nil {
a.log.Warn(logs.FailedToReloadConfig, zap.Error(err))
return
}
if lvl, err := getLogLevel(a.cfg); err != nil {
a.log.Warn(logs.LogLevelWontBeUpdated, zap.Error(err))
} else {
a.logLevel.SetLevel(lvl)
}
if err := a.resolver.UpdateResolvers(a.getResolverConfig()); err != nil {
a.log.Warn(logs.FailedToUpdateResolvers, zap.Error(err))
}
if err := a.updateServers(); err != nil {
a.log.Warn(logs.FailedToReloadServerParameters, zap.Error(err))
}
a.stopServices()
a.startServices()
a.updateSettings()
a.metrics.SetEnabled(a.cfg.GetBool(cfgPrometheusEnabled))
a.initTracing(ctx)
a.setHealthStatus()
a.log.Info(logs.SIGHUPConfigReloadCompleted)
}
func (a *app) updateSettings() {
a.settings.Uploader.SetDefaultTimestamp(a.cfg.GetBool(cfgUploaderHeaderEnableDefaultTimestamp))
a.settings.Downloader.SetZipCompression(a.cfg.GetBool(cfgZipCompression))
}
func (a *app) startServices() {
pprofConfig := metrics.Config{Enabled: a.cfg.GetBool(cfgPprofEnabled), Address: a.cfg.GetString(cfgPprofAddress)}
pprofService := metrics.NewPprofService(a.log, pprofConfig)
a.services = append(a.services, pprofService)
go pprofService.Start()
prometheusConfig := metrics.Config{Enabled: a.cfg.GetBool(cfgPrometheusEnabled), Address: a.cfg.GetString(cfgPrometheusAddress)}
prometheusService := metrics.NewPrometheusService(a.log, prometheusConfig)
a.services = append(a.services, prometheusService)
go prometheusService.Start()
}
func (a *app) stopServices() {
ctx, cancel := context.WithTimeout(context.Background(), defaultShutdownTimeout)
defer cancel()
for _, svc := range a.services {
svc.ShutDown(ctx)
}
}
func (a *app) configureRouter(uploadRoutes *uploader.Uploader, downloadRoutes *downloader.Downloader) {
r := router.New()
r.RedirectTrailingSlash = true
r.NotFound = func(r *fasthttp.RequestCtx) {
response.Error(r, "Not found", fasthttp.StatusNotFound)
}
r.MethodNotAllowed = func(r *fasthttp.RequestCtx) {
response.Error(r, "Method Not Allowed", fasthttp.StatusMethodNotAllowed)
}
r.POST("/upload/{cid}", a.logger(a.tokenizer(a.tracer(uploadRoutes.Upload))))
a.log.Info(logs.AddedPathUploadCid)
r.GET("/get/{cid}/{oid:*}", a.logger(a.tokenizer(a.tracer(downloadRoutes.DownloadByAddressOrBucketName))))
r.HEAD("/get/{cid}/{oid:*}", a.logger(a.tokenizer(a.tracer(downloadRoutes.HeadByAddressOrBucketName))))
a.log.Info(logs.AddedPathGetCidOid)
r.GET("/get_by_attribute/{cid}/{attr_key}/{attr_val:*}", a.logger(a.tokenizer(a.tracer(downloadRoutes.DownloadByAttribute))))
r.HEAD("/get_by_attribute/{cid}/{attr_key}/{attr_val:*}", a.logger(a.tokenizer(a.tracer(downloadRoutes.HeadByAttribute))))
a.log.Info(logs.AddedPathGetByAttributeCidAttrKeyAttrVal)
r.GET("/zip/{cid}/{prefix:*}", a.logger(a.tokenizer(a.tracer(downloadRoutes.DownloadZipped))))
a.log.Info(logs.AddedPathZipCidPrefix)
a.webServer.Handler = r.Handler
}
func (a *app) logger(h fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(req *fasthttp.RequestCtx) {
a.log.Info(logs.Request, zap.String("remote", req.RemoteAddr().String()),
zap.ByteString("method", req.Method()),
zap.ByteString("path", req.Path()),
zap.ByteString("query", req.QueryArgs().QueryString()),
zap.Uint64("id", req.ID()))
h(req)
}
}
func (a *app) tokenizer(h fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(req *fasthttp.RequestCtx) {
appCtx, err := tokens.StoreBearerTokenAppCtx(a.ctx, req)
if err != nil {
a.log.Error(logs.CouldNotFetchAndStoreBearerToken, zap.Error(err))
response.Error(req, "could not fetch and store bearer token: "+err.Error(), fasthttp.StatusBadRequest)
}
utils.SetContextToRequest(appCtx, req)
h(req)
}
}
func (a *app) tracer(h fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(req *fasthttp.RequestCtx) {
appCtx := utils.GetContextFromRequest(req)
appCtx, span := utils.StartHTTPServerSpan(appCtx, req, "REQUEST")
defer func() {
utils.SetHTTPTraceInfo(appCtx, span, req)
span.End()
}()
utils.SetContextToRequest(appCtx, req)
h(req)
}
}
func (a *app) AppParams() *utils.AppParams {
return &utils.AppParams{
Logger: a.log,
Pool: a.pool,
Owner: a.owner,
Resolver: a.resolver,
}
}
func (a *app) initServers(ctx context.Context) {
serversInfo := fetchServers(a.cfg)
a.servers = make([]Server, 0, len(serversInfo))
for _, serverInfo := range serversInfo {
fields := []zap.Field{
zap.String("address", serverInfo.Address), zap.Bool("tls enabled", serverInfo.TLS.Enabled),
zap.String("tls cert", serverInfo.TLS.CertFile), zap.String("tls key", serverInfo.TLS.KeyFile),
}
srv, err := newServer(ctx, serverInfo)
if err != nil {
a.metrics.MarkUnhealthy(serverInfo.Address)
a.log.Warn(logs.FailedToAddServer, append(fields, zap.Error(err))...)
continue
}
a.metrics.MarkHealthy(serverInfo.Address)
a.servers = append(a.servers, srv)
a.log.Info(logs.AddServer, fields...)
}
if len(a.servers) == 0 {
a.log.Fatal(logs.NoHealthyServers)
}
}
func (a *app) updateServers() error {
serversInfo := fetchServers(a.cfg)
var found bool
for _, serverInfo := range serversInfo {
index := a.serverIndex(serverInfo.Address)
if index == -1 {
continue
}
if serverInfo.TLS.Enabled {
if err := a.servers[index].UpdateCert(serverInfo.TLS.CertFile, serverInfo.TLS.KeyFile); err != nil {
return fmt.Errorf("failed to update tls certs: %w", err)
}
}
found = true
}
if !found {
return fmt.Errorf("invalid servers configuration: no known server found")
}
return nil
}
func (a *app) serverIndex(address string) int {
for i := range a.servers {
if a.servers[i].Address() == address {
return i
}
}
return -1
}
func (a *app) initTracing(ctx context.Context) {
instanceID := ""
if len(a.servers) > 0 {
instanceID = a.servers[0].Address()
}
cfg := tracing.Config{
Enabled: a.cfg.GetBool(cfgTracingEnabled),
Exporter: tracing.Exporter(a.cfg.GetString(cfgTracingExporter)),
Endpoint: a.cfg.GetString(cfgTracingEndpoint),
Service: "frostfs-http-gw",
InstanceID: instanceID,
Version: Version,
}
updated, err := tracing.Setup(ctx, cfg)
if err != nil {
a.log.Warn(logs.FailedToInitializeTracing, zap.Error(err))
}
if updated {
a.log.Info(logs.TracingConfigUpdated)
}
}

cmd/http-gw/app.go (new file; diff suppressed because it is too large)


@@ -6,26 +6,32 @@ import (
"archive/zip"
"bytes"
"context"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"mime/multipart"
"net/http"
"os"
"sort"
"strings"
"testing"
"time"
containerv2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container"
containerv2 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
docker "github.com/docker/docker/api/types/container"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/spf13/viper"
"github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/stretchr/testify/require"
"github.com/testcontainers/testcontainers-go"
"github.com/testcontainers/testcontainers-go/wait"
@@ -44,13 +50,21 @@ const (
func TestIntegration(t *testing.T) {
rootCtx := context.Background()
aioImage := "truecloudlab/frostfs-aio:"
aioImage := "git.frostfs.info/truecloudlab/frostfs-aio:"
versions := []string{
"1.2.7", // frostfs-storage v0.36.0 RC
"1.2.7",
"1.3.0",
"1.5.0",
"1.6.5",
}
key, err := keys.NewPrivateKeyFromHex("1dd37fba80fec4e6a6f13fd708d8dcb3b29def768017052f6c930fa1c5d90bbb")
require.NoError(t, err)
file, err := os.CreateTemp("", "wallet")
require.NoError(t, err)
defer os.Remove(file.Name())
makeTempWallet(t, key, file.Name())
var ownerID user.ID
user.IDFromKey(&ownerID, key.PrivateKey.PublicKey)
@@ -58,16 +72,28 @@ func TestIntegration(t *testing.T) {
ctx, cancel2 := context.WithCancel(rootCtx)
aioContainer := createDockerContainer(ctx, t, aioImage+version)
server, cancel := runServer()
if strings.HasPrefix(version, "1.6") {
registerUser(t, ctx, aioContainer, file.Name())
}
// See the logs from the command execution.
server, cancel := runServer(file.Name())
clientPool := getPool(ctx, t, key)
CID, err := createContainer(ctx, t, clientPool, ownerID, version)
CID, err := createContainer(ctx, t, clientPool, ownerID)
require.NoError(t, err, version)
t.Run("simple put "+version, func(t *testing.T) { simplePut(ctx, t, clientPool, CID, version) })
jsonToken, binaryToken := makeBearerTokens(t, key, ownerID, version)
t.Run("simple put "+version, func(t *testing.T) { simplePut(ctx, t, clientPool, CID) })
t.Run("put with json bearer token in header"+version, func(t *testing.T) { putWithBearerTokenInHeader(ctx, t, clientPool, CID, jsonToken) })
t.Run("put with json bearer token in cookie"+version, func(t *testing.T) { putWithBearerTokenInCookie(ctx, t, clientPool, CID, jsonToken) })
t.Run("put with binary bearer token in header"+version, func(t *testing.T) { putWithBearerTokenInHeader(ctx, t, clientPool, CID, binaryToken) })
t.Run("put with binary bearer token in cookie"+version, func(t *testing.T) { putWithBearerTokenInCookie(ctx, t, clientPool, CID, binaryToken) })
t.Run("put with duplicate keys "+version, func(t *testing.T) { putWithDuplicateKeys(t, CID) })
t.Run("simple get "+version, func(t *testing.T) { simpleGet(ctx, t, clientPool, ownerID, CID, version) })
t.Run("get by attribute "+version, func(t *testing.T) { getByAttr(ctx, t, clientPool, ownerID, CID, version) })
t.Run("get zip "+version, func(t *testing.T) { getZip(ctx, t, clientPool, ownerID, CID, version) })
t.Run("simple get "+version, func(t *testing.T) { simpleGet(ctx, t, clientPool, ownerID, CID) })
t.Run("get by attribute "+version, func(t *testing.T) { getByAttr(ctx, t, clientPool, ownerID, CID) })
t.Run("get zip "+version, func(t *testing.T) { getZip(ctx, t, clientPool, ownerID, CID) })
t.Run("test namespaces "+version, func(t *testing.T) { checkNamespaces(ctx, t, clientPool, ownerID, CID) })
cancel()
server.Wait()
@@ -77,18 +103,20 @@ func TestIntegration(t *testing.T) {
}
}
func runServer() (App, context.CancelFunc) {
func runServer(pathToWallet string) (App, context.CancelFunc) {
cancelCtx, cancel := context.WithCancel(context.Background())
v := getDefaultConfig()
l, lvl := newLogger(v)
application := newApp(cancelCtx, WithConfig(v), WithLogger(l, lvl))
go application.Serve(cancelCtx)
v.config().Set(cfgWalletPath, pathToWallet)
v.config().Set(cfgWalletPassphrase, "")
application := newApp(cancelCtx, v)
go application.Serve()
return application, cancel
}
func simplePut(ctx context.Context, t *testing.T, p *pool.Pool, CID cid.ID, version string) {
func simplePut(ctx context.Context, t *testing.T, p *pool.Pool, CID cid.ID) {
url := testHost + "/upload/" + CID.String()
makePutRequestAndCheck(ctx, t, p, CID, url)
@@ -96,7 +124,38 @@ func simplePut(ctx context.Context, t *testing.T, p *pool.Pool, CID cid.ID, vers
makePutRequestAndCheck(ctx, t, p, CID, url)
}
func putWithBearerTokenInHeader(ctx context.Context, t *testing.T, p *pool.Pool, CID cid.ID, token string) {
url := testHost + "/upload/" + CID.String()
request, content, attributes := makePutRequest(t, url)
request.Header.Set("Authorization", "Bearer "+token)
resp, err := http.DefaultClient.Do(request)
require.NoError(t, err)
checkPutResponse(ctx, t, p, CID, resp, content, attributes)
}
func putWithBearerTokenInCookie(ctx context.Context, t *testing.T, p *pool.Pool, CID cid.ID, token string) {
url := testHost + "/upload/" + CID.String()
request, content, attributes := makePutRequest(t, url)
request.AddCookie(&http.Cookie{Name: "Bearer", Value: token})
resp, err := http.DefaultClient.Do(request)
require.NoError(t, err)
checkPutResponse(ctx, t, p, CID, resp, content, attributes)
}
func makePutRequestAndCheck(ctx context.Context, t *testing.T, p *pool.Pool, cnrID cid.ID, url string) {
request, content, attributes := makePutRequest(t, url)
resp, err := http.DefaultClient.Do(request)
require.NoError(t, err)
checkPutResponse(ctx, t, p, cnrID, resp, content, attributes)
}
func makePutRequest(t *testing.T, url string) (*http.Request, string, map[string]string) {
content := "content of file"
keyAttr, valAttr := "User-Attribute", "user value"
attributes := map[string]string{
@@ -118,9 +177,10 @@ func makePutRequestAndCheck(ctx context.Context, t *testing.T, p *pool.Pool, cnr
request.Header.Set("Content-Type", w.FormDataContentType())
request.Header.Set("X-Attribute-"+keyAttr, valAttr)
resp, err := http.DefaultClient.Do(request)
require.NoError(t, err)
return request, content, attributes
}
func checkPutResponse(ctx context.Context, t *testing.T, p *pool.Pool, cnrID cid.ID, resp *http.Response, content string, attributes map[string]string) {
defer func() {
err := resp.Body.Close()
require.NoError(t, err)
@@ -204,7 +264,7 @@ func putWithDuplicateKeys(t *testing.T, CID cid.ID) {
require.Equal(t, http.StatusBadRequest, resp.StatusCode)
}
func simpleGet(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
func simpleGet(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
content := "content of file"
attributes := map[string]string{
"some-attr": "some-get-value",
@@ -251,7 +311,7 @@ func checkGetByAttrResponse(t *testing.T, resp *http.Response, content string, a
}
}
func getByAttr(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
func getByAttr(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
keyAttr, valAttr := "some-attr", "some-get-by-attr-value"
content := "content of file"
attributes := map[string]string{keyAttr: valAttr}
@@ -273,7 +333,7 @@ func getByAttr(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID
checkGetByAttrResponse(t, resp, content, expectedAttr)
}
func getZip(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, version string) {
func getZip(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
names := []string{"zipfolder/dir/name1.txt", "zipfolder/name2.txt"}
contents := []string{"content of file1", "content of file2"}
attributes1 := map[string]string{object.AttributeFilePath: names[0]}
@@ -338,13 +398,49 @@ func checkZip(t *testing.T, data []byte, length int64, names, contents []string)
}
}
func checkNamespaces(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID) {
content := "content of file"
attributes := map[string]string{
"some-attr": "some-get-value",
}
id := putObject(ctx, t, clientPool, ownerID, CID, content, attributes)
req, err := http.NewRequest(http.MethodGet, testHost+"/get/"+testContainerName+"/"+id.String(), nil)
require.NoError(t, err)
req.Header.Set(defaultNamespaceHeader, "")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
checkGetResponse(t, resp, content, attributes)
req, err = http.NewRequest(http.MethodGet, testHost+"/get/"+testContainerName+"/"+id.String(), nil)
require.NoError(t, err)
req.Header.Set(defaultNamespaceHeader, "root")
resp, err = http.DefaultClient.Do(req)
require.NoError(t, err)
checkGetResponse(t, resp, content, attributes)
req, err = http.NewRequest(http.MethodGet, testHost+"/get/"+testContainerName+"/"+id.String(), nil)
require.NoError(t, err)
req.Header.Set(defaultNamespaceHeader, "root2")
resp, err = http.DefaultClient.Do(req)
require.NoError(t, err)
require.Equal(t, http.StatusNotFound, resp.StatusCode)
}
func createDockerContainer(ctx context.Context, t *testing.T, image string) testcontainers.Container {
req := testcontainers.ContainerRequest{
Image: image,
WaitingFor: wait.NewLogStrategy("aio container started").WithStartupTimeout(30 * time.Second),
Name: "aio",
Hostname: "aio",
NetworkMode: "host",
Image: image,
WaitingFor: wait.NewLogStrategy("aio container started").WithStartupTimeout(2 * time.Minute),
Name: "aio",
Hostname: "aio",
HostConfigModifier: func(hc *docker.HostConfig) {
hc.NetworkMode = "host"
},
}
aioC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
ContainerRequest: req,
@@ -355,14 +451,14 @@ func createDockerContainer(ctx context.Context, t *testing.T, image string) test
return aioC
}
func getDefaultConfig() *viper.Viper {
func getDefaultConfig() *appCfg {
v := settings()
v.SetDefault(cfgPeers+".0.address", "localhost:8080")
v.SetDefault(cfgPeers+".0.weight", 1)
v.SetDefault(cfgPeers+".0.priority", 1)
v.config().SetDefault(cfgPeers+".0.address", "localhost:8080")
v.config().SetDefault(cfgPeers+".0.weight", 1)
v.config().SetDefault(cfgPeers+".0.priority", 1)
v.SetDefault(cfgRPCEndpoint, "http://localhost:30333")
v.SetDefault("server.0.address", testListenAddress)
v.config().SetDefault(cfgRPCEndpoint, "http://localhost:30333")
v.config().SetDefault("server.0.address", testListenAddress)
return v
}
@@ -381,7 +477,7 @@ func getPool(ctx context.Context, t *testing.T, key *keys.PrivateKey) *pool.Pool
return clientPool
}
func createContainer(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, version string) (cid.ID, error) {
func createContainer(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID) (cid.ID, error) {
var policy netmap.PlacementPolicy
err := policy.DecodeString("REP 1")
require.NoError(t, err)
@@ -420,7 +516,7 @@ func createContainer(ctx context.Context, t *testing.T, clientPool *pool.Pool, o
func putObject(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID user.ID, CID cid.ID, content string, attributes map[string]string) oid.ID {
obj := object.New()
obj.SetContainerID(CID)
obj.SetOwnerID(&ownerID)
obj.SetOwnerID(ownerID)
var attrs []object.Attribute
for key, val := range attributes {
@@ -438,5 +534,57 @@ func putObject(ctx context.Context, t *testing.T, clientPool *pool.Pool, ownerID
id, err := clientPool.PutObject(ctx, prm)
require.NoError(t, err)
return id
return id.ObjectID
}
func registerUser(t *testing.T, ctx context.Context, aioContainer testcontainers.Container, pathToWallet string) {
err := aioContainer.CopyFileToContainer(ctx, pathToWallet, "/usr/wallet.json", 644)
require.NoError(t, err)
_, _, err = aioContainer.Exec(ctx, []string{
"/usr/bin/frostfs-s3-authmate", "register-user",
"--wallet", "/usr/wallet.json",
"--rpc-endpoint", "http://localhost:30333",
"--contract-wallet", "/config/s3-gw-wallet.json"})
require.NoError(t, err)
}
func makeBearerTokens(t *testing.T, key *keys.PrivateKey, ownerID user.ID, version string) (jsonTokenBase64, binaryTokenBase64 string) {
tkn := new(bearer.Token)
tkn.ForUser(ownerID)
tkn.SetExp(10000)
if version == "1.2.7" {
tkn.SetEACLTable(*eacl.NewTable())
} else {
tkn.SetImpersonate(true)
}
err := tkn.Sign(key.PrivateKey)
require.NoError(t, err)
jsonToken, err := tkn.MarshalJSON()
require.NoError(t, err)
jsonTokenBase64 = base64.StdEncoding.EncodeToString(jsonToken)
binaryTokenBase64 = base64.StdEncoding.EncodeToString(tkn.Marshal())
require.NotEmpty(t, jsonTokenBase64)
require.NotEmpty(t, binaryTokenBase64)
return
}
func makeTempWallet(t *testing.T, key *keys.PrivateKey, path string) {
w, err := wallet.NewWallet(path)
require.NoError(t, err)
acc := wallet.NewAccountFromPrivateKey(key)
err = acc.Encrypt("", w.Scrypt)
require.NoError(t, err)
w.AddAccount(acc)
err = w.Save()
require.NoError(t, err)
}


@@ -8,10 +8,9 @@ import (
func main() {
globalContext, _ := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
v := settings()
logger, atomicLevel := newLogger(v)
cfg := settings()
application := newApp(globalContext, WithLogger(logger, atomicLevel), WithConfig(v))
application := newApp(globalContext, cfg)
go application.Serve()
application.Wait()
}


@@ -68,11 +68,13 @@ func newServer(ctx context.Context, serverInfo ServerInfo) (*server, error) {
if serverInfo.TLS.Enabled {
if err = tlsProvider.UpdateCert(serverInfo.TLS.CertFile, serverInfo.TLS.KeyFile); err != nil {
return nil, fmt.Errorf("failed to update cert: %w", err)
lnErr := ln.Close()
return nil, fmt.Errorf("failed to update cert (listener close: %v): %w", lnErr, err)
}
ln = tls.NewListener(ln, &tls.Config{
GetCertificate: tlsProvider.GetCertificate,
NextProtos: []string{"h2"}, // required to enable HTTP/2 requests in `http.Serve`
})
}

cmd/http-gw/server_test.go (new file, 119 lines)

@@ -0,0 +1,119 @@
package main
import (
"context"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"fmt"
"math/big"
"net"
"net/http"
"os"
"path"
"testing"
"time"
"github.com/stretchr/testify/require"
"golang.org/x/net/http2"
)
const (
expHeaderKey = "Foo"
expHeaderValue = "Bar"
)
func TestHTTP2TLS(t *testing.T) {
ctx := context.Background()
certPath, keyPath := prepareTestCerts(t)
srv := &http.Server{
Handler: http.HandlerFunc(testHandler),
}
tlsListener, err := newServer(ctx, ServerInfo{
Address: ":0",
TLS: ServerTLSInfo{
Enabled: true,
CertFile: certPath,
KeyFile: keyPath,
},
})
require.NoError(t, err)
port := tlsListener.Listener().Addr().(*net.TCPAddr).Port
addr := fmt.Sprintf("https://localhost:%d", port)
go func() {
_ = srv.Serve(tlsListener.Listener())
}()
// Server is running, now send HTTP/2 request
tlsClientConfig := &tls.Config{
InsecureSkipVerify: true,
}
cliHTTP1 := http.Client{Transport: &http.Transport{TLSClientConfig: tlsClientConfig}}
cliHTTP2 := http.Client{Transport: &http2.Transport{TLSClientConfig: tlsClientConfig}}
req, err := http.NewRequest("GET", addr, nil)
require.NoError(t, err)
req.Header[expHeaderKey] = []string{expHeaderValue}
resp, err := cliHTTP1.Do(req)
require.NoError(t, err)
require.Equal(t, http.StatusOK, resp.StatusCode)
resp, err = cliHTTP2.Do(req)
require.NoError(t, err)
require.Equal(t, http.StatusOK, resp.StatusCode)
}
func testHandler(resp http.ResponseWriter, req *http.Request) {
hdr, ok := req.Header[expHeaderKey]
if !ok || len(hdr) != 1 || hdr[0] != expHeaderValue {
resp.WriteHeader(http.StatusBadRequest)
} else {
resp.WriteHeader(http.StatusOK)
}
}
func prepareTestCerts(t *testing.T) (certPath, keyPath string) {
privateKey, err := rsa.GenerateKey(rand.Reader, 2048)
require.NoError(t, err)
template := x509.Certificate{
SerialNumber: big.NewInt(1),
Subject: pkix.Name{CommonName: "localhost"},
NotBefore: time.Now(),
NotAfter: time.Now().Add(time.Hour * 24 * 365),
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
BasicConstraintsValid: true,
}
derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &privateKey.PublicKey, privateKey)
require.NoError(t, err)
dir := t.TempDir()
certPath = path.Join(dir, "cert.pem")
keyPath = path.Join(dir, "key.pem")
certFile, err := os.Create(certPath)
require.NoError(t, err)
defer certFile.Close()
keyFile, err := os.Create(keyPath)
require.NoError(t, err)
defer keyFile.Close()
err = pem.Encode(certFile, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes})
require.NoError(t, err)
err = pem.Encode(keyFile, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(privateKey)})
require.NoError(t, err)
return certPath, keyPath
}

cmd/http-gw/settings.go (new file, 924 lines)

@@ -0,0 +1,924 @@
package main
import (
"context"
"encoding/hex"
"fmt"
"io"
"math"
"os"
"path"
"runtime"
"sort"
"strconv"
"strings"
"sync"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/cache"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
internalnet "git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/net"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/service/frostfs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
grpctracing "git.frostfs.info/TrueCloudLab/frostfs-observability/tracing/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
treepool "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool/tree"
"git.frostfs.info/TrueCloudLab/zapjournald"
"github.com/spf13/pflag"
"github.com/spf13/viper"
"github.com/ssgreg/journald"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
"google.golang.org/grpc"
)
const (
destinationStdout = "stdout"
destinationJournald = "journald"
)
const (
defaultRebalanceTimer = 60 * time.Second
defaultRequestTimeout = 15 * time.Second
defaultConnectTimeout = 10 * time.Second
defaultStreamTimeout = 10 * time.Second
defaultLoggerSamplerInterval = 1 * time.Second
defaultShutdownTimeout = 15 * time.Second
defaultPoolErrorThreshold uint32 = 100
defaultSoftMemoryLimit = math.MaxInt64
defaultBufferMaxSizeForPut = 1024 * 1024 // 1mb
defaultNamespaceHeader = "X-Frostfs-Namespace"
defaultReconnectInterval = time.Minute
defaultCORSMaxAge = 600 // seconds
defaultMultinetFallbackDelay = 300 * time.Millisecond
cfgServer = "server"
cfgTLSEnabled = "tls.enabled"
cfgTLSCertFile = "tls.cert_file"
cfgTLSKeyFile = "tls.key_file"
cfgReconnectInterval = "reconnect_interval"
cfgIndexPageEnabled = "index_page.enabled"
cfgIndexPageTemplatePath = "index_page.template_path"
cfgWorkerPoolSize = "worker_pool_size"
// Web.
cfgWebReadBufferSize = "web.read_buffer_size"
cfgWebWriteBufferSize = "web.write_buffer_size"
cfgWebReadTimeout = "web.read_timeout"
cfgWebWriteTimeout = "web.write_timeout"
cfgWebStreamRequestBody = "web.stream_request_body"
cfgWebMaxRequestBodySize = "web.max_request_body_size"
// Metrics / Profiler.
cfgPrometheusEnabled = "prometheus.enabled"
cfgPrometheusAddress = "prometheus.address"
cfgPprofEnabled = "pprof.enabled"
cfgPprofAddress = "pprof.address"
// Tracing ...
cfgTracingEnabled = "tracing.enabled"
cfgTracingExporter = "tracing.exporter"
cfgTracingEndpoint = "tracing.endpoint"
cfgTracingTrustedCa = "tracing.trusted_ca"
cfgTracingAttributes = "tracing.attributes"
// Pool config.
cfgConTimeout = "connect_timeout"
cfgStreamTimeout = "stream_timeout"
cfgReqTimeout = "request_timeout"
cfgRebalance = "rebalance_timer"
cfgPoolErrorThreshold = "pool_error_threshold"
// Logger.
cfgLoggerLevel = "logger.level"
cfgLoggerDestination = "logger.destination"
cfgLoggerSamplingEnabled = "logger.sampling.enabled"
cfgLoggerSamplingInitial = "logger.sampling.initial"
cfgLoggerSamplingThereafter = "logger.sampling.thereafter"
cfgLoggerSamplingInterval = "logger.sampling.interval"
// Wallet.
cfgWalletPassphrase = "wallet.passphrase"
cfgWalletPath = "wallet.path"
cfgWalletAddress = "wallet.address"
// Uploader Header.
cfgUploaderHeaderEnableDefaultTimestamp = "upload_header.use_default_timestamp"
// Peers.
cfgPeers = "peers"
// NeoGo.
cfgRPCEndpoint = "rpc_endpoint"
// Resolving.
cfgResolveOrder = "resolve_order"
// Zip compression.
//
// Deprecated: Use cfgArchiveCompression instead.
cfgZipCompression = "zip.compression"
// Archive compression.
cfgArchiveCompression = "archive.compression"
// Runtime.
cfgSoftMemoryLimit = "runtime.soft_memory_limit"
// Enabling client side object preparing for PUT operations.
cfgClientCut = "frostfs.client_cut"
// Sets max buffer size for read payload in put operations.
cfgBufferMaxSizeForPut = "frostfs.buffer_max_size_for_put"
// Configuration of parameters of requests to FrostFS.
// Sets the max number of attempts to make a successful tree request.
cfgTreePoolMaxAttempts = "frostfs.tree_pool_max_attempts"
// Caching.
cfgBucketsCacheLifetime = "cache.buckets.lifetime"
cfgBucketsCacheSize = "cache.buckets.size"
cfgNetmapCacheLifetime = "cache.netmap.lifetime"
// Bucket resolving options.
cfgResolveNamespaceHeader = "resolve_bucket.namespace_header"
cfgResolveDefaultNamespaces = "resolve_bucket.default_namespaces"
// CORS.
cfgCORSAllowOrigin = "cors.allow_origin"
cfgCORSAllowMethods = "cors.allow_methods"
cfgCORSAllowHeaders = "cors.allow_headers"
cfgCORSExposeHeaders = "cors.expose_headers"
cfgCORSAllowCredentials = "cors.allow_credentials"
cfgCORSMaxAge = "cors.max_age"
// Multinet.
cfgMultinetEnabled = "multinet.enabled"
cfgMultinetBalancer = "multinet.balancer"
cfgMultinetRestrict = "multinet.restrict"
cfgMultinetFallbackDelay = "multinet.fallback_delay"
cfgMultinetSubnets = "multinet.subnets"
// Feature.
cfgFeaturesEnableFilepathFallback = "features.enable_filepath_fallback"
cfgFeaturesTreePoolNetmapSupport = "features.tree_pool_netmap_support"
// Command line args.
cmdHelp = "help"
cmdVersion = "version"
cmdPprof = "pprof"
cmdMetrics = "metrics"
cmdWallet = "wallet"
cmdAddress = "address"
cmdConfig = "config"
cmdConfigDir = "config-dir"
cmdListenAddress = "listen_address"
)
var ignore = map[string]struct{}{
cfgPeers: {},
cmdHelp: {},
cmdVersion: {},
}
type Logger struct {
logger *zap.Logger
lvl zap.AtomicLevel
}
type appCfg struct {
flags *pflag.FlagSet
mu sync.RWMutex
settings *viper.Viper
}
func (a *appCfg) reload() error {
old := a.config()
v, err := newViper(a.flags)
if err != nil {
return err
}
if old.IsSet(cmdConfig) {
v.Set(cmdConfig, old.Get(cmdConfig))
}
if old.IsSet(cmdConfigDir) {
v.Set(cmdConfigDir, old.Get(cmdConfigDir))
}
if err = readInConfig(v); err != nil {
return err
}
a.setConfig(v)
return nil
}
func (a *appCfg) config() *viper.Viper {
a.mu.RLock()
defer a.mu.RUnlock()
return a.settings
}
func (a *appCfg) setConfig(v *viper.Viper) {
a.mu.Lock()
a.settings = v
a.mu.Unlock()
}
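
The swap-under-lock pattern above keeps SIGHUP reloads safe for concurrent readers: `reload` builds a complete new viper instance before `setConfig` publishes it, so `config()` never returns a half-initialized configuration. Below is a minimal sketch of wiring this to the signal, assuming the `appCfg` type above (the real wiring lives in cmd/http-gw/app.go, whose diff is suppressed); only "os/signal" and "syscall" are needed on top of this file's imports:

```go
// Sketch only: re-read configuration files when the process receives SIGHUP.
func watchSIGHUP(cfg *appCfg, log *zap.Logger) {
	sighup := make(chan os.Signal, 1)
	signal.Notify(sighup, syscall.SIGHUP)
	go func() {
		for range sighup {
			if err := cfg.reload(); err != nil {
				// On failure the previously published viper instance stays active.
				log.Warn("config reload failed", zap.Error(err))
				continue
			}
			log.Info("configuration reloaded")
		}
	}()
}
```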
func newViper(flags *pflag.FlagSet) (*viper.Viper, error) {
v := viper.New()
v.AutomaticEnv()
v.SetEnvPrefix(Prefix)
v.AllowEmptyEnv(true)
v.SetConfigType("yaml")
v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
if err := bindFlags(v, flags); err != nil {
return nil, err
}
setDefaults(v, flags)
if v.IsSet(cfgServer+".0."+cfgTLSKeyFile) && v.IsSet(cfgServer+".0."+cfgTLSCertFile) {
v.Set(cfgServer+".0."+cfgTLSEnabled, true)
}
return v, nil
}
func settings() *appCfg {
// flags setup:
flags := pflag.NewFlagSet("commandline", pflag.ExitOnError)
flags.SetOutput(os.Stdout)
flags.SortFlags = false
flags.Bool(cmdPprof, false, "enable pprof")
flags.Bool(cmdMetrics, false, "enable prometheus")
help := flags.BoolP(cmdHelp, "h", false, "show help")
version := flags.BoolP(cmdVersion, "v", false, "show version")
flags.StringP(cmdWallet, "w", "", `path to the wallet`)
flags.String(cmdAddress, "", `address of wallet account`)
flags.StringArray(cmdConfig, nil, "config paths")
flags.String(cmdConfigDir, "", "config dir path")
flags.Duration(cfgConTimeout, defaultConnectTimeout, "gRPC connect timeout")
flags.Duration(cfgStreamTimeout, defaultStreamTimeout, "gRPC individual message timeout")
flags.Duration(cfgReqTimeout, defaultRequestTimeout, "gRPC request timeout")
flags.Duration(cfgRebalance, defaultRebalanceTimer, "gRPC connection rebalance timer")
flags.String(cmdListenAddress, "0.0.0.0:8080", "addresses to listen")
flags.String(cfgTLSCertFile, "", "TLS certificate path")
flags.String(cfgTLSKeyFile, "", "TLS key path")
flags.StringArrayP(cfgPeers, "p", nil, "FrostFS nodes")
flags.StringSlice(cfgResolveOrder, []string{resolver.NNSResolver, resolver.DNSResolver}, "set container name resolve order")
if err := flags.Parse(os.Args); err != nil {
panic(err)
}
v, err := newViper(flags)
if err != nil {
panic(fmt.Errorf("bind flags: %w", err))
}
switch {
case help != nil && *help:
fmt.Printf("FrostFS HTTP Gateway %s\n", Version)
flags.PrintDefaults()
fmt.Println()
fmt.Println("Default environments:")
fmt.Println()
keys := v.AllKeys()
sort.Strings(keys)
for i := range keys {
if _, ok := ignore[keys[i]]; ok {
continue
}
defaultValue := v.GetString(keys[i])
if len(defaultValue) == 0 {
continue
}
k := strings.Replace(keys[i], ".", "_", -1)
fmt.Printf("%s_%s = %s\n", Prefix, strings.ToUpper(k), defaultValue)
}
fmt.Println()
fmt.Println("Peers preset:")
fmt.Println()
fmt.Printf("%s_%s_[N]_ADDRESS = string\n", Prefix, strings.ToUpper(cfgPeers))
fmt.Printf("%s_%s_[N]_WEIGHT = float\n", Prefix, strings.ToUpper(cfgPeers))
os.Exit(0)
case version != nil && *version:
fmt.Printf("FrostFS HTTP Gateway\nVersion: %s\nGoVersion: %s\n", Version, runtime.Version())
os.Exit(0)
}
if err := readInConfig(v); err != nil {
panic(err)
}
return &appCfg{
flags: flags,
settings: v,
}
}
func setDefaults(v *viper.Viper, flags *pflag.FlagSet) {
// set defaults:
// logger:
v.SetDefault(cfgLoggerLevel, "debug")
v.SetDefault(cfgLoggerDestination, "stdout")
v.SetDefault(cfgLoggerSamplingEnabled, false)
v.SetDefault(cfgLoggerSamplingThereafter, 100)
v.SetDefault(cfgLoggerSamplingInitial, 100)
v.SetDefault(cfgLoggerSamplingInterval, defaultLoggerSamplerInterval)
// pool:
v.SetDefault(cfgPoolErrorThreshold, defaultPoolErrorThreshold)
// frostfs:
v.SetDefault(cfgBufferMaxSizeForPut, defaultBufferMaxSizeForPut)
// web-server:
v.SetDefault(cfgWebReadBufferSize, 4096)
v.SetDefault(cfgWebWriteBufferSize, 4096)
v.SetDefault(cfgWebReadTimeout, time.Minute*10)
v.SetDefault(cfgWebWriteTimeout, time.Minute*5)
v.SetDefault(cfgWebStreamRequestBody, true)
v.SetDefault(cfgWebMaxRequestBodySize, fasthttp.DefaultMaxRequestBodySize)
v.SetDefault(cfgWorkerPoolSize, 1000)
// upload header
v.SetDefault(cfgUploaderHeaderEnableDefaultTimestamp, false)
// metrics
v.SetDefault(cfgPprofAddress, "localhost:8083")
v.SetDefault(cfgPrometheusAddress, "localhost:8084")
// resolve bucket
v.SetDefault(cfgResolveNamespaceHeader, defaultNamespaceHeader)
v.SetDefault(cfgResolveDefaultNamespaces, []string{"", "root"})
// multinet
v.SetDefault(cfgMultinetFallbackDelay, defaultMultinetFallbackDelay)
if resolveMethods, err := flags.GetStringSlice(cfgResolveOrder); err == nil {
v.SetDefault(cfgResolveOrder, resolveMethods)
}
if peers, err := flags.GetStringArray(cfgPeers); err == nil {
for i := range peers {
v.SetDefault(cfgPeers+"."+strconv.Itoa(i)+".address", peers[i])
v.SetDefault(cfgPeers+"."+strconv.Itoa(i)+".weight", 1)
v.SetDefault(cfgPeers+"."+strconv.Itoa(i)+".priority", 1)
}
}
}
func bindFlags(v *viper.Viper, flags *pflag.FlagSet) error {
// Binding flags
if err := v.BindPFlag(cfgPprofEnabled, flags.Lookup(cmdPprof)); err != nil {
return err
}
if err := v.BindPFlag(cfgPrometheusEnabled, flags.Lookup(cmdMetrics)); err != nil {
return err
}
if err := v.BindPFlag(cfgWalletPath, flags.Lookup(cmdWallet)); err != nil {
return err
}
if err := v.BindPFlag(cfgWalletAddress, flags.Lookup(cmdAddress)); err != nil {
return err
}
if err := v.BindPFlags(flags); err != nil {
return err
}
if err := v.BindPFlag(cfgServer+".0.address", flags.Lookup(cmdListenAddress)); err != nil {
return err
}
if err := v.BindPFlag(cfgServer+".0."+cfgTLSKeyFile, flags.Lookup(cfgTLSKeyFile)); err != nil {
return err
}
if err := v.BindPFlag(cfgServer+".0."+cfgTLSCertFile, flags.Lookup(cfgTLSCertFile)); err != nil {
return err
}
return nil
}
func readInConfig(v *viper.Viper) error {
if v.IsSet(cmdConfig) {
if err := readConfig(v); err != nil {
return err
}
}
if v.IsSet(cmdConfigDir) {
if err := readConfigDir(v); err != nil {
return err
}
}
return nil
}
func readConfigDir(v *viper.Viper) error {
cfgSubConfigDir := v.GetString(cmdConfigDir)
entries, err := os.ReadDir(cfgSubConfigDir)
if err != nil {
return err
}
for _, entry := range entries {
if entry.IsDir() {
continue
}
ext := path.Ext(entry.Name())
if ext != ".yaml" && ext != ".yml" {
continue
}
if err = mergeConfig(v, path.Join(cfgSubConfigDir, entry.Name())); err != nil {
return err
}
}
return nil
}
func readConfig(v *viper.Viper) error {
for _, fileName := range v.GetStringSlice(cmdConfig) {
if err := mergeConfig(v, fileName); err != nil {
return err
}
}
return nil
}
func mergeConfig(v *viper.Viper, fileName string) error {
cfgFile, err := os.Open(fileName)
if err != nil {
return err
}
defer func() {
if errClose := cfgFile.Close(); errClose != nil {
panic(errClose)
}
}()
return v.MergeConfig(cfgFile)
}
type LoggerAppSettings interface {
DroppedLogsInc()
}
func pickLogger(v *viper.Viper, settings LoggerAppSettings) *Logger {
lvl, err := getLogLevel(v)
if err != nil {
panic(err)
}
dest := v.GetString(cfgLoggerDestination)
switch dest {
case destinationStdout:
return newStdoutLogger(v, lvl, settings)
case destinationJournald:
return newJournaldLogger(v, lvl, settings)
default:
panic(fmt.Sprintf("wrong destination for logger: %s", dest))
}
}
// newStdoutLogger constructs a zap.Logger instance for the current application.
// Panics on failure.
//
// Logger is built from zap's production logging configuration with:
// - parameterized level (debug by default)
// - console encoding
// - ISO8601 time encoding
//
// Logger records a stack trace for all messages at or above fatal level.
//
// See also zapcore.Level, zap.NewProductionConfig, zap.AddStacktrace.
func newStdoutLogger(v *viper.Viper, lvl zapcore.Level, settings LoggerAppSettings) *Logger {
stdout := zapcore.AddSync(os.Stderr)
level := zap.NewAtomicLevelAt(lvl)
consoleOutCore := zapcore.NewCore(newLogEncoder(), stdout, level)
consoleOutCore = applyZapCoreMiddlewares(consoleOutCore, v, settings)
return &Logger{
logger: zap.New(consoleOutCore, zap.AddStacktrace(zap.NewAtomicLevelAt(zap.FatalLevel))),
lvl: level,
}
}
func newJournaldLogger(v *viper.Viper, lvl zapcore.Level, settings LoggerAppSettings) *Logger {
level := zap.NewAtomicLevelAt(lvl)
encoder := zapjournald.NewPartialEncoder(newLogEncoder(), zapjournald.SyslogFields)
core := zapjournald.NewCore(level, encoder, &journald.Journal{}, zapjournald.SyslogFields)
coreWithContext := core.With([]zapcore.Field{
zapjournald.SyslogFacility(zapjournald.LogDaemon),
zapjournald.SyslogIdentifier(),
zapjournald.SyslogPid(),
})
coreWithContext = applyZapCoreMiddlewares(coreWithContext, v, settings)
return &Logger{
logger: zap.New(coreWithContext, zap.AddStacktrace(zap.NewAtomicLevelAt(zap.FatalLevel))),
lvl: level,
}
}
func newLogEncoder() zapcore.Encoder {
c := zap.NewProductionEncoderConfig()
c.EncodeTime = zapcore.ISO8601TimeEncoder
return zapcore.NewConsoleEncoder(c)
}
func applyZapCoreMiddlewares(core zapcore.Core, v *viper.Viper, settings LoggerAppSettings) zapcore.Core {
if v.GetBool(cfgLoggerSamplingEnabled) {
core = zapcore.NewSamplerWithOptions(core,
v.GetDuration(cfgLoggerSamplingInterval),
v.GetInt(cfgLoggerSamplingInitial),
v.GetInt(cfgLoggerSamplingThereafter),
zapcore.SamplerHook(func(_ zapcore.Entry, dec zapcore.SamplingDecision) {
if dec&zapcore.LogDropped > 0 {
settings.DroppedLogsInc()
}
}))
}
return core
}
func getLogLevel(v *viper.Viper) (zapcore.Level, error) {
var lvl zapcore.Level
lvlStr := v.GetString(cfgLoggerLevel)
err := lvl.UnmarshalText([]byte(lvlStr))
if err != nil {
return lvl, fmt.Errorf("incorrect logger level configuration %s (%v), "+
"value should be one of %v", lvlStr, err, [...]zapcore.Level{
zapcore.DebugLevel,
zapcore.InfoLevel,
zapcore.WarnLevel,
zapcore.ErrorLevel,
zapcore.DPanicLevel,
zapcore.PanicLevel,
zapcore.FatalLevel,
})
}
return lvl, nil
}
func fetchReconnectInterval(cfg *viper.Viper) time.Duration {
reconnect := cfg.GetDuration(cfgReconnectInterval)
if reconnect <= 0 {
reconnect = defaultReconnectInterval
}
return reconnect
}
func fetchIndexPageTemplate(v *viper.Viper, l *zap.Logger) (string, bool) {
if !v.GetBool(cfgIndexPageEnabled) {
return "", false
}
reader, err := os.Open(v.GetString(cfgIndexPageTemplatePath))
if err != nil {
l.Warn(logs.FailedToReadIndexPageTemplate, zap.Error(err))
return "", true
}
tmpl, err := io.ReadAll(reader)
if err != nil {
l.Warn(logs.FailedToReadIndexPageTemplate, zap.Error(err))
return "", true
}
l.Info(logs.SetCustomIndexPageTemplate)
return string(tmpl), true
}
func fetchDefaultNamespaces(v *viper.Viper) []string {
namespaces := v.GetStringSlice(cfgResolveDefaultNamespaces)
for i := range namespaces { // trim quotes so namespaces can be set in an env variable as `HTTP_GW_RESOLVE_BUCKET_DEFAULT_NAMESPACES="" "root"`
namespaces[i] = strings.Trim(namespaces[i], "\"")
}
return namespaces
}
func fetchCORSMaxAge(v *viper.Viper) int {
maxAge := v.GetInt(cfgCORSMaxAge)
if maxAge <= 0 {
maxAge = defaultCORSMaxAge
}
return maxAge
}
func fetchServers(v *viper.Viper, log *zap.Logger) []ServerInfo {
var servers []ServerInfo
seen := make(map[string]struct{})
for i := 0; ; i++ {
key := cfgServer + "." + strconv.Itoa(i) + "."
var serverInfo ServerInfo
serverInfo.Address = v.GetString(key + "address")
serverInfo.TLS.Enabled = v.GetBool(key + cfgTLSEnabled)
serverInfo.TLS.KeyFile = v.GetString(key + cfgTLSKeyFile)
serverInfo.TLS.CertFile = v.GetString(key + cfgTLSCertFile)
if serverInfo.Address == "" {
break
}
if _, ok := seen[serverInfo.Address]; ok {
log.Warn(logs.WarnDuplicateAddress, zap.String("address", serverInfo.Address))
continue
}
seen[serverInfo.Address] = struct{}{}
servers = append(servers, serverInfo)
}
return servers
}
func (a *app) initPools(ctx context.Context) {
key, err := getFrostFSKey(a.config(), a.log)
if err != nil {
a.log.Fatal(logs.CouldNotLoadFrostFSPrivateKey, zap.Error(err))
}
var prm pool.InitParameters
var prmTree treepool.InitParameters
prm.SetKey(&key.PrivateKey)
prmTree.SetKey(key)
a.log.Info(logs.UsingCredentials, zap.String("FrostFS", hex.EncodeToString(key.PublicKey().Bytes())))
for _, peer := range fetchPeers(a.log, a.config()) {
prm.AddNode(peer)
prmTree.AddNode(peer)
}
connTimeout := a.config().GetDuration(cfgConTimeout)
if connTimeout <= 0 {
connTimeout = defaultConnectTimeout
}
prm.SetNodeDialTimeout(connTimeout)
prmTree.SetNodeDialTimeout(connTimeout)
streamTimeout := a.config().GetDuration(cfgStreamTimeout)
if streamTimeout <= 0 {
streamTimeout = defaultStreamTimeout
}
prm.SetNodeStreamTimeout(streamTimeout)
prmTree.SetNodeStreamTimeout(streamTimeout)
healthCheckTimeout := a.config().GetDuration(cfgReqTimeout)
if healthCheckTimeout <= 0 {
healthCheckTimeout = defaultRequestTimeout
}
prm.SetHealthcheckTimeout(healthCheckTimeout)
prmTree.SetHealthcheckTimeout(healthCheckTimeout)
rebalanceInterval := a.config().GetDuration(cfgRebalance)
if rebalanceInterval <= 0 {
rebalanceInterval = defaultRebalanceTimer
}
prm.SetClientRebalanceInterval(rebalanceInterval)
prmTree.SetClientRebalanceInterval(rebalanceInterval)
errorThreshold := a.config().GetUint32(cfgPoolErrorThreshold)
if errorThreshold <= 0 {
errorThreshold = defaultPoolErrorThreshold
}
prm.SetErrorThreshold(errorThreshold)
prm.SetLogger(a.log)
prmTree.SetLogger(a.log)
prmTree.SetMaxRequestAttempts(a.config().GetInt(cfgTreePoolMaxAttempts))
interceptors := []grpc.DialOption{
grpc.WithUnaryInterceptor(grpctracing.NewUnaryClientInteceptor()),
grpc.WithStreamInterceptor(grpctracing.NewStreamClientInterceptor()),
grpc.WithContextDialer(a.settings.dialerSource.GrpcContextDialer()),
}
prm.SetGRPCDialOptions(interceptors...)
prmTree.SetGRPCDialOptions(interceptors...)
p, err := pool.NewPool(prm)
if err != nil {
a.log.Fatal(logs.FailedToCreateConnectionPool, zap.Error(err))
}
if err = p.Dial(ctx); err != nil {
a.log.Fatal(logs.FailedToDialConnectionPool, zap.Error(err))
}
if a.config().GetBool(cfgFeaturesTreePoolNetmapSupport) {
prmTree.SetNetMapInfoSource(frostfs.NewSource(frostfs.NewFrostFS(p), cache.NewNetmapCache(getNetmapCacheOptions(a.config(), a.log)), a.bucketCache, a.log))
}
treePool, err := treepool.NewPool(prmTree)
if err != nil {
a.log.Fatal(logs.FailedToCreateTreePool, zap.Error(err))
}
if err = treePool.Dial(ctx); err != nil {
a.log.Fatal(logs.FailedToDialTreePool, zap.Error(err))
}
a.pool = p
a.treePool = treePool
a.key = key
}
func fetchPeers(l *zap.Logger, v *viper.Viper) []pool.NodeParam {
var nodes []pool.NodeParam
for i := 0; ; i++ {
key := cfgPeers + "." + strconv.Itoa(i) + "."
address := v.GetString(key + "address")
weight := v.GetFloat64(key + "weight")
priority := v.GetInt(key + "priority")
if address == "" {
break
}
if weight <= 0 { // unspecified or wrong
weight = 1
}
if priority <= 0 { // unspecified or wrong
priority = 1
}
nodes = append(nodes, pool.NewNodeParam(priority, address, weight))
l.Info(logs.AddedStoragePeer,
zap.Int("priority", priority),
zap.String("address", address),
zap.Float64("weight", weight))
}
return nodes
}
func fetchSoftMemoryLimit(cfg *viper.Viper) int64 {
softMemoryLimit := cfg.GetSizeInBytes(cfgSoftMemoryLimit)
if softMemoryLimit <= 0 {
softMemoryLimit = defaultSoftMemoryLimit
}
return int64(softMemoryLimit)
}
func getBucketCacheOptions(v *viper.Viper, l *zap.Logger) *cache.Config {
cacheCfg := cache.DefaultBucketConfig(l)
cacheCfg.Lifetime = fetchCacheLifetime(v, l, cfgBucketsCacheLifetime, cacheCfg.Lifetime)
cacheCfg.Size = fetchCacheSize(v, l, cfgBucketsCacheSize, cacheCfg.Size)
return cacheCfg
}
func getNetmapCacheOptions(v *viper.Viper, l *zap.Logger) *cache.NetmapCacheConfig {
cacheCfg := cache.DefaultNetmapConfig(l)
cacheCfg.Lifetime = fetchCacheLifetime(v, l, cfgNetmapCacheLifetime, cacheCfg.Lifetime)
return cacheCfg
}
func fetchCacheLifetime(v *viper.Viper, l *zap.Logger, cfgEntry string, defaultValue time.Duration) time.Duration {
if v.IsSet(cfgEntry) {
lifetime := v.GetDuration(cfgEntry)
if lifetime <= 0 {
l.Error(logs.InvalidLifetimeUsingDefaultValue,
zap.String("parameter", cfgEntry),
zap.Duration("value in config", lifetime),
zap.Duration("default", defaultValue))
} else {
return lifetime
}
}
return defaultValue
}
func fetchCacheSize(v *viper.Viper, l *zap.Logger, cfgEntry string, defaultValue int) int {
if v.IsSet(cfgEntry) {
size := v.GetInt(cfgEntry)
if size <= 0 {
l.Error(logs.InvalidCacheSizeUsingDefaultValue,
zap.String("parameter", cfgEntry),
zap.Int("value in config", size),
zap.Int("default", defaultValue))
} else {
return size
}
}
return defaultValue
}
func getDialerSource(logger *zap.Logger, cfg *viper.Viper) *internalnet.DialerSource {
source, err := internalnet.NewDialerSource(fetchMultinetConfig(cfg, logger))
if err != nil {
logger.Fatal(logs.FailedToLoadMultinetConfig, zap.Error(err))
}
return source
}
func fetchMultinetConfig(v *viper.Viper, l *zap.Logger) (cfg internalnet.Config) {
cfg.Enabled = v.GetBool(cfgMultinetEnabled)
cfg.Balancer = v.GetString(cfgMultinetBalancer)
cfg.Restrict = v.GetBool(cfgMultinetRestrict)
cfg.FallbackDelay = v.GetDuration(cfgMultinetFallbackDelay)
cfg.Subnets = make([]internalnet.Subnet, 0, 5)
cfg.EventHandler = internalnet.NewLogEventHandler(l)
for i := 0; ; i++ {
key := cfgMultinetSubnets + "." + strconv.Itoa(i) + "."
subnet := internalnet.Subnet{}
subnet.Prefix = v.GetString(key + "mask")
if subnet.Prefix == "" {
break
}
subnet.SourceIPs = v.GetStringSlice(key + "source_ips")
cfg.Subnets = append(cfg.Subnets, subnet)
}
return
}
func fetchTracingAttributes(v *viper.Viper) (map[string]string, error) {
attributes := make(map[string]string)
for i := 0; ; i++ {
key := cfgTracingAttributes + "." + strconv.Itoa(i) + "."
attrKey := v.GetString(key + "key")
attrValue := v.GetString(key + "value")
if attrKey == "" {
break
}
if _, ok := attributes[attrKey]; ok {
return nil, fmt.Errorf("tracing attribute key %s defined more than once", attrKey)
}
if attrValue == "" {
return nil, fmt.Errorf("empty tracing attribute value for key %s", attrKey)
}
attributes[attrKey] = attrValue
}
return attributes, nil
}
func fetchArchiveCompression(v *viper.Viper) bool {
if v.IsSet(cfgZipCompression) {
return v.GetBool(cfgZipCompression)
}
return v.GetBool(cfgArchiveCompression)
}


@@ -0,0 +1,60 @@
package main
import (
"os"
"testing"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
"github.com/stretchr/testify/require"
)
func TestConfigReload(t *testing.T) {
f, err := os.CreateTemp("", "conf")
require.NoError(t, err)
defer func() {
require.NoError(t, os.Remove(f.Name()))
}()
confData := `
pprof:
enabled: true
resolve_bucket:
default_namespaces: [""]
resolve_order:
- nns
`
_, err = f.WriteString(confData)
require.NoError(t, err)
require.NoError(t, f.Close())
cfg := settings()
require.NoError(t, cfg.flags.Parse([]string{"--config", f.Name(), "--connect_timeout", "15s"}))
require.NoError(t, cfg.reload())
require.True(t, cfg.config().GetBool(cfgPprofEnabled))
require.Equal(t, []string{""}, cfg.config().GetStringSlice(cfgResolveDefaultNamespaces))
require.Equal(t, []string{resolver.NNSResolver}, cfg.config().GetStringSlice(cfgResolveOrder))
require.Equal(t, 15*time.Second, cfg.config().GetDuration(cfgConTimeout))
require.NoError(t, os.Truncate(f.Name(), 0))
require.NoError(t, cfg.reload())
require.False(t, cfg.config().GetBool(cfgPprofEnabled))
require.Equal(t, []string{"", "root"}, cfg.config().GetStringSlice(cfgResolveDefaultNamespaces))
require.Equal(t, []string{resolver.NNSResolver, resolver.DNSResolver}, cfg.config().GetStringSlice(cfgResolveOrder))
require.Equal(t, 15*time.Second, cfg.config().GetDuration(cfgConTimeout))
}
func TestSetTLSEnabled(t *testing.T) {
cfg := settings()
require.NoError(t, cfg.flags.Parse([]string{"--" + cfgTLSCertFile, "tls.crt", "--" + cfgTLSKeyFile, "tls.key"}))
require.NoError(t, cfg.reload())
require.True(t, cfg.config().GetBool(cfgServer+".0."+cfgTLSEnabled))
}


@@ -14,8 +14,12 @@ HTTP_GW_PPROF_ADDRESS=localhost:8083
HTTP_GW_PROMETHEUS_ENABLED=true
HTTP_GW_PROMETHEUS_ADDRESS=localhost:8084
# Log level.
# Logger.
HTTP_GW_LOGGER_LEVEL=debug
HTTP_GW_LOGGER_SAMPLING_ENABLED=false
HTTP_GW_LOGGER_SAMPLING_INITIAL=100
HTTP_GW_LOGGER_SAMPLING_THEREAFTER=100
HTTP_GW_LOGGER_SAMPLING_INTERVAL=1s
HTTP_GW_SERVER_0_ADDRESS=0.0.0.0:443
HTTP_GW_SERVER_0_TLS_ENABLED=false
@@ -26,6 +30,9 @@ HTTP_GW_SERVER_1_TLS_ENABLED=true
HTTP_GW_SERVER_1_TLS_CERT_FILE=/path/to/tls/cert
HTTP_GW_SERVER_1_TLS_KEY_FILE=/path/to/tls/key
# How often to reconnect to the servers
HTTP_GW_RECONNECT_INTERVAL=1m
# Nodes configuration.
# This configuration makes the gateway use the first node (grpc://s01.frostfs.devenv:8080)
# while it's healthy. Otherwise, the gateway uses the second node (grpc://s01.frostfs.devenv:8080)
@@ -90,9 +97,76 @@ HTTP_GW_REBALANCE_TIMER=30s
# The number of errors on connection after which node is considered as unhealthy
HTTP_GW_POOL_ERROR_THRESHOLD=100
# Enable zip compression to download files by common prefix.
# Enable archive compression to download files by common prefix.
# DEPRECATED: Use HTTP_GW_ARCHIVE_COMPRESSION instead.
HTTP_GW_ZIP_COMPRESSION=false
# Enable archive compression to download files by common prefix.
HTTP_GW_ARCHIVE_COMPRESSION=false
HTTP_GW_TRACING_ENABLED=true
HTTP_GW_TRACING_ENDPOINT="localhost:4317"
HTTP_GW_TRACING_EXPORTER="otlp_grpc"
HTTP_GW_TRACING_TRUSTED_CA=""
HTTP_GW_TRACING_ATTRIBUTES_0_KEY=key0
HTTP_GW_TRACING_ATTRIBUTES_0_VALUE=value
HTTP_GW_TRACING_ATTRIBUTES_1_KEY=key1
HTTP_GW_TRACING_ATTRIBUTES_1_VALUE=value
HTTP_GW_RUNTIME_SOFT_MEMORY_LIMIT=1073741824
# Parameters of requests to FrostFS
# This flag enables client side object preparing.
HTTP_GW_FROSTFS_CLIENT_CUT=false
# Sets max buffer size for read payload in put operations.
HTTP_GW_FROSTFS_BUFFER_MAX_SIZE_FOR_PUT=1048576
# Caching
# Cache which contains mapping of bucket name to bucket info
HTTP_GW_CACHE_BUCKETS_LIFETIME=1m
HTTP_GW_CACHE_BUCKETS_SIZE=1000
# Cache which stores netmap
HTTP_GW_CACHE_NETMAP_LIFETIME=1m
# Header to determine zone to resolve bucket name
HTTP_GW_RESOLVE_BUCKET_NAMESPACE_HEADER=X-Frostfs-Namespace
# Namespaces that should be handled as default
HTTP_GW_RESOLVE_BUCKET_DEFAULT_NAMESPACES="" "root"
# Max number of attempts to make a successful tree request.
# The default value 0 means the number of attempts equals the number of nodes in the pool.
HTTP_GW_FROSTFS_TREE_POOL_MAX_ATTEMPTS=0
HTTP_GW_CORS_ALLOW_ORIGIN="*"
HTTP_GW_CORS_ALLOW_METHODS="GET" "POST"
HTTP_GW_CORS_ALLOW_HEADERS="*"
HTTP_GW_CORS_EXPOSE_HEADERS="*"
HTTP_GW_CORS_ALLOW_CREDENTIALS=false
HTTP_GW_CORS_MAX_AGE=600
# Multinet properties
# Enable multinet support
HTTP_GW_MULTINET_ENABLED=false
# Strategy to pick source IP address
HTTP_GW_MULTINET_BALANCER=roundrobin
# Restrict requests with unknown destination subnet
HTTP_GW_MULTINET_RESTRICT=false
# Delay before falling back from IPv6 to IPv4
HTTP_GW_MULTINET_FALLBACK_DELAY=300ms
# List of subnets and IP addresses to use as source for those subnets
HTTP_GW_MULTINET_SUBNETS_1_MASK=1.2.3.4/24
HTTP_GW_MULTINET_SUBNETS_1_SOURCE_IPS=1.2.3.4 1.2.3.5
# Number of workers in handler's worker pool
HTTP_GW_WORKER_POOL_SIZE=1000
# Index page
# Enable index page support
HTTP_GW_INDEX_PAGE_ENABLED=false
# Index page template path
HTTP_GW_INDEX_PAGE_TEMPLATE_PATH=internal/handler/templates/index.gotmpl
# Enable using the fallback path to search for an object by attribute
HTTP_GW_FEATURES_ENABLE_FILEPATH_FALLBACK=false
# Enable the new version of the tree pool, which uses the netmap to select nodes for tree service requests
HTTP_GW_FEATURES_TREE_POOL_NETMAP_SUPPORT=true


@@ -9,13 +9,26 @@ pprof:
prometheus:
enabled: false # Enable metrics.
address: localhost:8084
tracing:
enabled: true
exporter: "otlp_grpc"
endpoint: "localhost:4317"
trusted_ca: ""
attributes:
- key: key0
value: value
- key: key1
value: value
logger:
level: debug # Log level.
destination: stdout
sampling:
enabled: false
initial: 100
thereafter: 100
interval: 1s
server:
- address: 0.0.0.0:8080
@@ -54,6 +67,7 @@ peers:
priority: 2
weight: 9
reconnect_interval: 1m
web:
# Per-connection buffer size for requests' reading.
@@ -99,5 +113,77 @@ request_timeout: 5s # Timeout to check node health during rebalance.
rebalance_timer: 30s # Interval to check nodes health.
pool_error_threshold: 100 # The number of errors on connection after which node is considered as unhealthy.
# Number of workers in handler's worker pool
worker_pool_size: 1000
# Enables the index page for browsing the object list of a specified container and prefix
index_page:
enabled: false
template_path: internal/handler/templates/index.gotmpl
# Deprecated: Use archive.compression instead
zip:
compression: false # Enable zip compression to download files by common prefix.
# Enables zip compression to download files by common prefix.
compression: false
archive:
# Enables archive compression to download files by common prefix.
compression: false
runtime:
soft_memory_limit: 1gb
# Parameters of requests to FrostFS
frostfs:
# This flag enables client side object preparing.
client_cut: false
# Sets max buffer size for read payload in put operations.
buffer_max_size_for_put: 1048576
# Max number of attempts to make a successful tree request.
# The default value 0 means the number of attempts equals the number of nodes in the pool.
tree_pool_max_attempts: 0
# Caching
cache:
# Cache which contains mapping of bucket name to bucket info
buckets:
lifetime: 1m
size: 1000
# Cache which stores netmap
netmap:
lifetime: 1m
resolve_bucket:
namespace_header: X-Frostfs-Namespace
default_namespaces: [ "", "root" ]
cors:
allow_origin: ""
allow_methods: []
allow_headers: []
expose_headers: []
allow_credentials: false
max_age: 600
# Multinet properties
multinet:
# Enable multinet support
enabled: false
# Strategy to pick source IP address
balancer: roundrobin
# Restrict requests with unknown destination subnet
restrict: false
# Delay before falling back from IPv6 to IPv4
fallback_delay: 300ms
# List of subnets and IP addresses to use as source for those subnets
subnets:
- mask: 1.2.3.4/24
source_ips:
- 1.2.3.4
- 1.2.3.5
features:
# Enable using the fallback path to search for an object by attribute
enable_filepath_fallback: false
# Enable the new version of the tree pool, which uses the netmap to select nodes for tree service requests
tree_pool_netmap_support: true


@@ -21,7 +21,7 @@ set -e
case "$1" in
configure)
USERNAME=http
id -u frostfs-$USERNAME >/dev/null 2>&1 || useradd -s /usr/sbin/nologin -d /var/lib/frostfs/$USERNAME --system -M -U -c "FrostFS HTTP gateway" frostfs-$USERNAME
id -u frostfs-$USERNAME >/dev/null 2>&1 || useradd -s /usr/sbin/nologin -d /var/lib/frostfs/$USERNAME --system -m -U -c "FrostFS HTTP gateway" frostfs-$USERNAME
if ! dpkg-statoverride --list /etc/frostfs/$USERNAME >/dev/null; then
chown -f root:frostfs-$USERNAME /etc/frostfs/$USERNAME
chown -f root:frostfs-$USERNAME /etc/frostfs/$USERNAME/config.yaml || true


@@ -1,14 +1,14 @@
# HTTP Gateway Specification
| Route | Description |
|-------------------------------------------------|----------------------------------------------|
| `/upload/{cid}` | [Put object](#put-object) |
| `/get/{cid}/{oid}` | [Get object](#get-object) |
| `/get_by_attribute/{cid}/{attr_key}/{attr_val}` | [Search object](#search-object) |
| `/zip/{cid}/{prefix}` | [Download objects in archive](#download-zip) |
| Route | Description |
|-------------------------------------------------|--------------------------------------------------|
| `/upload/{cid}` | [Put object](#put-object) |
| `/get/{cid}/{oid}` | [Get object](#get-object) |
| `/get_by_attribute/{cid}/{attr_key}/{attr_val}` | [Search object](#search-object) |
| `/zip/{cid}/{prefix}`, `/tar/{cid}/{prefix}` | [Download objects in archive](#download-archive) |
**Note:** `cid` parameter can be base58 encoded container ID or container name
(the name must be registered in NNS, see appropriate section in [README](../README.md#nns)).
(the name must be registered in NNS, see appropriate section in [nns.md](./nns.md)).
Route parameters can be:
@@ -18,7 +18,7 @@ Route parameters can be:
### Bearer token
All routes can accept [bearer token](../README.md#authentication) from:
All routes can accept [bearer token](./authentication.md) from:
* `Authorization` header with `Bearer` type and base64-encoded token in
credentials field
@@ -56,12 +56,14 @@ Upload file as object with attributes to FrostFS.
###### Headers
| Header | Description |
|------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
| Common headers | See [bearer token](#bearer-token). |
| `X-Attribute-System-*` | Used to set system FrostFS object attributes <br/> (e.g. use "X-Attribute-System-Expiration-Epoch" to set `__SYSTEM__EXPIRATION_EPOCH` attribute). |
| `X-Attribute-*` | Used to set regular object attributes <br/> (e.g. use "X-Attribute-My-Tag" to set `My-Tag` attribute). |
| `Date` | This header is used to calculate the right `__SYSTEM__EXPIRATION` attribute for object. If the header is missing, the current server time is used. |
| Header | Description |
|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Common headers | See [bearer token](#bearer-token). |
| `X-Attribute-System-*` | Used to set system FrostFS object attributes <br/> (e.g. use "X-Attribute-System-Expiration-Epoch" to set `__SYSTEM__EXPIRATION_EPOCH` attribute). |
| `X-Attribute-*` | Used to set regular object attributes <br/> (e.g. use "X-Attribute-My-Tag" to set `My-Tag` attribute). |
| `X-Explode-Archive`    | If set, the gate tries to read files from the uploaded `tar` archive and creates an object for each file in it. The uploaded `tar` can be Gzip-compressed by setting the `Content-Encoding` header. Sets the `FilePath` attribute to the path relative to the archive root and `FileName` to the last path element of `FilePath`. |
| `Content-Encoding`     | If set to `gzip`, the gate handles the uploaded file as a Gzip-compressed `tar` file. |
| `Date` | This header is used to calculate the right `__SYSTEM__EXPIRATION` attribute for object. If the header is missing, the current server time is used. |
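
For illustration, here is a hedged sketch of an exploded-archive upload driven by the two headers above. The gateway address and container ID are placeholders, and sending the archive as the raw request body is an assumption of this sketch, not something the spec above states:

```go
package main

import (
	"archive/tar"
	"bytes"
	"compress/gzip"
	"fmt"
	"net/http"
)

func uploadExploded(gateway, cid string) error {
	// Build a small gzip-compressed tar archive in memory.
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	tw := tar.NewWriter(gz)
	payload := []byte("content of file")
	if err := tw.WriteHeader(&tar.Header{Name: "dir/name1.txt", Mode: 0o644, Size: int64(len(payload))}); err != nil {
		return err
	}
	if _, err := tw.Write(payload); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	if err := gz.Close(); err != nil {
		return err
	}

	req, err := http.NewRequest(http.MethodPost, gateway+"/upload/"+cid, &buf)
	if err != nil {
		return err
	}
	// Each archived file becomes a separate object with a FilePath attribute
	// relative to the archive root.
	req.Header.Set("X-Explode-Archive", "true")
	// Tell the gate the tar stream is gzip-compressed.
	req.Header.Set("Content-Encoding", "gzip")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	return nil
}
```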
There are some reserved headers type of `X-Attribute-FROSTFS-*` (headers are arranged in descending order of priority):
@@ -95,12 +97,12 @@ The `filename` field from the multipart form will be set as `FileName` attribute
## Get object
Route: `/get/{cid}/{oid}?[download=true]`
Route: `/get/{cid}/{oid}?[download=false]`
| Route parameter | Type | Description |
|-----------------|--------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `cid` | Single | Base58 encoded container ID or container name from NNS. |
| `oid` | Single | Base58 encoded object ID. |
| `cid` | Single | Base58 encoded `container ID` or `container name` from NNS or `bucket name`. |
| `oid` | Single | Base58 encoded `object ID`. It can also be an `S3 object name` if `cid` is specified as a bucket name. |
| `download` | Query | Set the `Content-Disposition` header to `attachment` in the response.<br/> This makes the browser download the object as a file instead of showing it on the page. |
### Methods
@@ -141,6 +143,13 @@ Get an object (payload and attributes) by an address.
| 400 | Some error occurred during object downloading. |
| 404 | Container or object not found. |
###### Body
Returns object data. If the request is performed from a browser, the raw data is either displayed
or downloaded as an attachment when the `download` query parameter is set to `true`.
If `index_page.enabled` is set to `true`, an HTML index page is returned when no object with the
specified S3-name is found.
#### HEAD
Get an object attributes by an address.
@@ -262,9 +271,9 @@ If more than one object is found, an arbitrary one will be used to get attribute
| 400 | Some error occurred during operation. |
| 404 | Container or object not found. |
## Download zip
## Download archive
Route: `/zip/{cid}/{prefix}`
Route: `/zip/{cid}/{prefix}`, `/tar/{cid}/{prefix}`
| Route parameter | Type | Description |
|-----------------|-----------|---------------------------------------------------------|
@@ -275,12 +284,13 @@ Route: `/zip/{cid}/{prefix}`
#### GET
Find objects by prefix for `FilePath` attributes. Return found objects in zip archive.
Find objects by prefix for `FilePath` attributes. Return found objects in zip or tar archive.
File names in the archive are set to the `FilePath` attribute of the objects.
File timestamps are set to the time when the download started.
You can download all files in a container that have the `FilePath` attribute via the `/zip/{cid}/` route.
You can download all files in a container that have the `FilePath` attribute via the `/zip/{cid}/` or
`/tar/{cid}/` route.
Archive can be compressed (see http-gw [configuration](gate-configuration.md#zip-section)).
Archive can be compressed (see http-gw [configuration](gate-configuration.md#archive-section)).
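
A short sketch of consuming the new `/tar` route, assuming `archive.compression` is disabled (a compressed stream would first need to pass through a gzip reader); the gateway address and container ID are placeholders:

```go
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"net/http"
)

func listTar(gateway, cid, prefix string) error {
	resp, err := http.Get(gateway + "/tar/" + cid + "/" + prefix)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	tr := tar.NewReader(resp.Body)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		// Entry names come from the objects' FilePath attributes.
		fmt.Println(hdr.Name, hdr.Size)
	}
}
```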
##### Request

docs/authentication.md (new file, 108 lines)

@@ -0,0 +1,108 @@
# Request authentication
HTTP Gateway does not authorize requests. The gateway converts an HTTP request into a
FrostFS request and signs it with its own private key.
You can always upload files to public containers (open for anyone to put
objects into), but for restricted containers you need to explicitly allow PUT
operations for a request signed with your HTTP Gateway keys.
If you don't want to manage the gateway's secret keys and adjust policies whenever the
gateway configuration changes (new gate, key rotation, etc.), or if you plan to use
public services, there is an option to let your application backend (or you) issue
Bearer Tokens and pass them from the client via the gate down to the FrostFS level
to grant access.
A FrostFS Bearer Token is essentially a policy signed by the container owner (refer to the
FrostFS documentation for more details). There are two ways to pass one to the gateway:
* "Authorization" header with "Bearer" type and base64-encoded token in
credentials field
* "Bearer" cookie with base64-encoded token contents
For example, suppose you have a mobile application frontend with a backend part storing
data in FrostFS. When a user signs in to the mobile app, the backend issues a FrostFS
Bearer token and provides it to the frontend. Then the mobile app may generate
some data and upload it via any available FrostFS HTTP Gateway by adding
the corresponding header to the upload request. Accessing policy-protected data
works the same way.
##### Example
To generate a bearer token, you need a wallet (which will be used to sign the token):
1. Suppose you have a container with a private policy for the wallet key:
```
$ frostfs-cli container create -r <endpoint> --wallet <wallet> --policy <policy> --basic-acl 0 --await
CID: 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z
$ frostfs-cli ape-manager add -r <endpoint> --wallet <wallet> \
--target-type container --target-name 9dfzyvq82JnFqp5svxcREf2iy6XNuifYcJPusEDnGK9Z \
--rule "allow Object.* RequestCondition:"\$Actor:publicKey"=03b09baabff3f6107c7e9acb8721a6fc5618d45b50247a314d82e548702cce8cd5 *" \
--chain-id <chainID>
```
2. Form a Bearer token (10000 is the expiration lifetime, in epochs) that impersonates the
HTTP Gateway request as a wallet-signed request, and save it to **bearer.json**:
```
{
"body": {
"allowImpersonate": true,
"lifetime": {
"exp": "10000",
"nbf": "0",
"iat": "0"
}
},
"signature": null
}
```
3. Sign it with the wallet:
```
$ frostfs-cli util sign bearer-token --from bearer.json --to signed.json -w <wallet>
```
4. Encode to base64 to use in header:
```
$ base64 -w 0 signed.json
# output: Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==
```
After that, the Bearer token can be used:
```
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' -H "Authorization: Bearer Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==" \
http://localhost:8082/upload/BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
# output:
# {
# "object_id": "DhfES9nVrFksxGDD2jQLunGADfrXExxNwqXbDafyBn9X",
# "container_id": "BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K"
# }
```
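
The same token can also be produced programmatically. The sketch below mirrors `makeBearerTokens` from the integration tests in this diff (impersonation variant), using the frostfs-sdk-go `bearer` package:

```go
package main

import (
	"encoding/base64"

	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
	"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
)

// makeToken builds, signs and base64-encodes an impersonation bearer token,
// usable in the "Authorization: Bearer ..." header or the "Bearer" cookie.
func makeToken(key *keys.PrivateKey, ownerID user.ID) (string, error) {
	var tkn bearer.Token
	tkn.ForUser(ownerID)     // the user allowed to use this token
	tkn.SetExp(10000)        // expiration epoch, as in step 2 above
	tkn.SetImpersonate(true) // same effect as "allowImpersonate": true
	if err := tkn.Sign(key.PrivateKey); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(tkn.Marshal()), nil
}
```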
##### Note: Bearer Token owner
You can specify the exact key that can use the Bearer Token (the gateway wallet address).
To do this, encode the wallet address in base64 format:
```
$ echo 'NhVtreTTCoqsMQV5Wp55fqnriiUCpEaKm3' | base58 --decode | base64
# output: NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg==
```
Then specify this value in the Bearer Token JSON:
```
{
"body": {
"ownerID": {
"value": "NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg=="
},
...
```
##### Note: Policy override
Instead of impersonation, you can define the set of policies that will be applied
to the request sender. This allows restricting access to specific operations and
specific objects without giving full impersonation control to the token user.


@@ -40,20 +40,26 @@ $ cat http.log
# Structure
| Section | Description |
|-----------------|-------------------------------------------------------|
| no section | [General parameters](#general-section) |
| `wallet` | [Wallet configuration](#wallet-section) |
| `peers` | [Nodes configuration](#peers-section) |
| `logger` | [Logger configuration](#logger-section) |
| `web` | [Web configuration](#web-section) |
| `server` | [Server configuration](#server-section) |
| `upload-header` | [Upload header configuration](#upload-header-section) |
| `zip` | [ZIP configuration](#zip-section) |
| `pprof` | [Pprof configuration](#pprof-section) |
| `prometheus` | [Prometheus configuration](#prometheus-section) |
| `tracing` | [Tracing configuration](#tracing-section) |
| Section | Description |
|------------------|----------------------------------------------------------------|
| no section | [General parameters](#general-section) |
| `wallet` | [Wallet configuration](#wallet-section) |
| `peers` | [Nodes configuration](#peers-section) |
| `logger` | [Logger configuration](#logger-section) |
| `web` | [Web configuration](#web-section) |
| `server` | [Server configuration](#server-section) |
| `upload-header` | [Upload header configuration](#upload-header-section) |
| `zip` | [ZIP configuration](#zip-section) |
| `pprof` | [Pprof configuration](#pprof-section) |
| `prometheus` | [Prometheus configuration](#prometheus-section) |
| `tracing` | [Tracing configuration](#tracing-section) |
| `runtime` | [Runtime configuration](#runtime-section) |
| `frostfs` | [Frostfs configuration](#frostfs-section) |
| `cache` | [Cache configuration](#cache-section) |
| `resolve_bucket` | [Bucket name resolving configuration](#resolve_bucket-section) |
| `index_page` | [Index page configuration](#index_page-section) |
| `multinet` | [Multinet configuration](#multinet-section) |
| `features` | [Features configuration](#features-section) |
# General section
@ -68,17 +74,22 @@ stream_timeout: 10s
request_timeout: 5s
rebalance_timer: 30s
pool_error_threshold: 100
reconnect_interval: 1m
worker_pool_size: 1000
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|------------------------|------------|---------------|----------------|------------------------------------------------------------------------------------|
| `rpc_endpoint` | `string` | yes | | The address of the RPC host to which the gateway connects to resolve bucket names. |
| `resolve_order` | `[]string` | yes | `[nns, dns]` | Order of bucket name resolvers to use. |
| `connect_timeout` | `duration` | | `10s` | Timeout to connect to a node. |
| `stream_timeout` | `duration` | | `10s` | Timeout for individual operations in streaming RPC. |
| `request_timeout` | `duration` | | `15s` | Timeout to check node health during rebalance. |
| `rebalance_timer` | `duration` | | `60s` | Interval to check node health. |
| `pool_error_threshold` | `uint32` | | `100` | The number of errors on connection after which node is considered as unhealthy. |
| Parameter | Type | SIGHUP reload | Default value | Description |
|------------------------|------------|---------------|---------------|------------------------------------------------------------------------------------|
| `rpc_endpoint` | `string` | yes | | The address of the RPC host to which the gateway connects to resolve bucket names. |
| `resolve_order` | `[]string` | yes | `[nns, dns]` | Order of bucket name resolvers to use. |
| `connect_timeout` | `duration` | | `10s` | Timeout to connect to a node. |
| `stream_timeout` | `duration` | | `10s` | Timeout for individual operations in streaming RPC. |
| `request_timeout` | `duration` | | `15s` | Timeout to check node health during rebalance. |
| `rebalance_timer` | `duration` | | `60s` | Interval to check node health. |
| `pool_error_threshold` | `uint32` | | `100` | The number of errors on connection after which node is considered as unhealthy. |
| `reconnect_interval` | `duration` | no | `1m` | Listeners reconnection interval. |
| `worker_pool_size` | `int` | no | `1000` | Maximum worker count in handler's worker pool. |
# `wallet` section
@ -157,12 +168,22 @@ server:
```yaml
logger:
level: debug
destination: stdout
sampling:
enabled: false
initial: 100
thereafter: 100
interval: 1s
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|-----------|----------|---------------|---------------|----------------------------------------------------------------------------------------------------|
| `level` | `string` | yes | `debug` | Logging level.<br/>Possible values: `debug`, `info`, `warn`, `error`, `dpanic`, `panic`, `fatal`. |
| Parameter | Type | SIGHUP reload | Default value | Description |
|-----------------------|------------|---------------|---------------|----------------------------------------------------------------------------------------------------|
| `level` | `string` | yes | `debug` | Logging level.<br/>Possible values: `debug`, `info`, `warn`, `error`, `dpanic`, `panic`, `fatal`. |
| `destination`         | `string`   | no            | `stdout`      | Destination for logger: `stdout` or `journald`.                                                      |
| `sampling.enabled`    | `bool`     | no            | `false`       | Sampling enabling flag.                                                                              |
| `sampling.initial`    | `int`      | no            | `100`         | Sampling count of first log entries.                                                                 |
| `sampling.thereafter` | `int`      | no            | `100`         | Sampling count of entries after an `interval`.                                                       |
| `sampling.interval`   | `duration` | no            | `1s`          | Sampling interval for emitting similar log entries.                                                  |
# `web` section
@ -197,9 +218,10 @@ upload_header:
|-------------------------|--------|---------------|---------------|-------------------------------------------------------------|
| `use_default_timestamp` | `bool` | yes | `false` | Create timestamp for object if it isn't provided by header. |
# `zip` section
> **_DEPRECATED:_** Use the `archive` section instead.
```yaml
zip:
compression: false
@ -209,6 +231,17 @@ zip:
|---------------|--------|---------------|---------------|--------------------------------------------------------------|
| `compression` | `bool` | yes           | `false`       | Enable ZIP compression when downloading files by a common prefix.  |
# `archive` section
```yaml
archive:
compression: false
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|---------------|--------|---------------|---------------|------------------------------------------------------------------|
| `compression` | `bool` | yes           | `false`       | Enable archive compression when downloading files by a common prefix. |
# `pprof` section
@ -249,10 +282,207 @@ tracing:
enabled: true
exporter: "otlp_grpc"
endpoint: "localhost:4317"
trusted_ca: "/etc/ssl/telemetry-trusted-ca.pem"
attributes:
- key: key0
value: value
- key: key1
value: value
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|------------|----------|---------------|------------------|---------------------------------------------------------------|
| `enabled` | `bool` | yes | `false` | Flag to enable the tracing. |
| `exporter` | `string` | yes | | Trace collector type (`stdout` or `otlp_grpc` are supported). |
| `endpoint` | `string` | yes | | Address of collector endpoint for OTLP exporters. |
| Parameter | Type | SIGHUP reload | Default value | Description |
| ------------ | -------------------------------------- | ------------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| `enabled` | `bool` | yes | `false` | Flag to enable the tracing. |
| `exporter` | `string` | yes | | Trace collector type (`stdout` or `otlp_grpc` are supported). |
| `endpoint` | `string` | yes | | Address of collector endpoint for OTLP exporters. |
| `trusted_ca` | `string` | yes | | Path to certificate of a certification authority in pem format, that issued the TLS certificate of the telemetry remote server. |
| `attributes` | [[]Attributes](#attributes-subsection) | yes | | An array of configurable attributes in key-value format. |
#### `attributes` subsection
```yaml
attributes:
- key: key0
value: value
- key: key1
value: value
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|-----------------------|----------|---------------|---------------|----------------------------------------------------------|
| `key` | `string` | yes | | Attribute key. |
| `value` | `string` | yes | | Attribute value. |
# `runtime` section
Contains runtime parameters.
```yaml
runtime:
soft_memory_limit: 1gb
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|---------------------|--------|---------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `soft_memory_limit` | `size` | yes | maxint64 | Soft memory limit for the runtime. Zero or no value stands for no limit. If `GOMEMLIMIT` environment variable is set, the value from the configuration file will be ignored. |
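For example, the limit can be overridden at startup through the environment (invocation shown for illustration; `GOMEMLIMIT` accepts a byte count with an optional `KiB`/`MiB`/`GiB` suffix):
```shell
$ GOMEMLIMIT=2GiB ./frostfs-http-gw
```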
# `frostfs` section
Contains parameters of requests to FrostFS.
```yaml
frostfs:
client_cut: false
buffer_max_size_for_put: 1048576 # 1mb
tree_pool_max_attempts: 0
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|---------------------------|----------|---------------|---------------|---------------------------------------------------------------------------------------------------------------------------|
| `client_cut` | `bool` | yes | `false` | This flag enables client side object preparing. |
| `buffer_max_size_for_put` | `uint64` | yes | `1048576` | Sets max buffer size for read payload in put operations. |
| `tree_pool_max_attempts`  | `uint32` | no            | `0`           | Sets the max number of attempts to make a successful tree request. A value of 0 means the number of attempts equals the number of nodes in the pool.  |
# `cache` section
```yaml
cache:
buckets:
lifetime: 1m
size: 1000
netmap:
lifetime: 1m
```
| Parameter | Type | Default value | Description |
|-----------|-----------------------------------|---------------------------------|---------------------------------------------------------------------------|
| `buckets` | [Cache config](#cache-subsection) | `lifetime: 60s`<br>`size: 1000` | Cache which contains mapping of bucket name to bucket info. |
| `netmap` | [Cache config](#cache-subsection) | `lifetime: 1m` | Cache which stores netmap. `netmap.size` isn't applicable for this cache. |
#### `cache` subsection
```yaml
lifetime: 1m
size: 1000
```
| Parameter | Type | Default value | Description |
|------------|------------|------------------|-------------------------------|
| `lifetime` | `duration` | depends on cache | Lifetime of entries in cache. |
| `size` | `int` | depends on cache | LRU cache size. |
# `resolve_bucket` section
Parameters for resolving bucket names to and from container IDs.
```yaml
resolve_bucket:
namespace_header: X-Frostfs-Namespace
default_namespaces: [ "", "root" ]
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|----------------------|------------|---------------|-----------------------|--------------------------------------------------------------------------------------------------------------------------|
| `namespace_header`   | `string`   | yes           | `X-Frostfs-Namespace` | Header used to determine the zone when resolving a bucket name.                                                            |
| `default_namespaces` | `[]string` | yes           | `["","root"]`         | Namespaces that should be handled as default.                                                                              |
# `index_page` section
Parameters for index HTML page output. Activated if a `GetObject` request returns `not found`. Two
index page modes are available:
* `s3` mode uses the tree service for listing objects,
* `native` sends requests to nodes via the native protocol.
If the request passes an S3 bucket name instead of a CID, `s3` mode is used; otherwise, `native`.
```yaml
index_page:
enabled: false
template_path: ""
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|-----------------|----------|---------------|---------------|---------------------------------------------------------------------------------|
| `enabled`       | `bool`   | yes           | `false`       | Flag to enable returning the index page if no object with the specified S3 name was found.  |
| `template_path` | `string` | yes           | `""`          | Path to a `.gotmpl` file with the HTML template for the index page.                         |
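A minimal sketch of such a template, assuming the data exposed by the gateway's browse handler (`Container`, `Prefix`, and `Objects` with `FileName`, `Size`, `GetURL` fields, plus the `formatSize` helper; see `internal/handler/browse.go` below):
```
<!DOCTYPE html>
<html>
<body>
<h1>Index of {{.Container}}/{{.Prefix}}</h1>
<table>
  {{range .Objects}}
  <tr>
    <td><a href="{{.GetURL}}">{{.FileName}}</a></td>
    <td>{{formatSize .Size}}</td>
  </tr>
  {{end}}
</table>
</body>
</html>
```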
# `cors` section
Parameters for CORS (used in OPTIONS requests and responses in all handlers).
If these values are not set, the headers will not be included in the response.
```yaml
cors:
allow_origin: "*"
allow_methods: ["GET", "HEAD"]
allow_headers: ["Authorization"]
expose_headers: ["*"]
allow_credentials: false
max_age: 600
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|---------------------|------------|---------------|---------------|--------------------------------------------------------|
| `allow_origin` | `string` | yes | | Values for `Access-Control-Allow-Origin` headers. |
| `allow_methods` | `[]string` | yes | | Values for `Access-Control-Allow-Methods` headers. |
| `allow_headers` | `[]string` | yes | | Values for `Access-Control-Allow-Headers` headers. |
| `expose_headers` | `[]string` | yes | | Values for `Access-Control-Expose-Headers` headers. |
| `allow_credentials` | `bool` | yes | `false` | Values for `Access-Control-Allow-Credentials` headers. |
| `max_age`           | `int`      | yes           | `600`         | Values for `Access-Control-Max-Age` headers.            |
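For example, a CORS preflight against a locally running gateway can be checked with curl (endpoint and IDs are placeholders):
```shell
$ curl -i -X OPTIONS \
    -H "Origin: http://example.com" \
    -H "Access-Control-Request-Method: GET" \
    http://localhost:8082/get/<cid>/<oid>
# the response should carry the configured Access-Control-* headers
```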
# `multinet` section
Configuration of multinet support.
```yaml
multinet:
enabled: false
balancer: roundrobin
restrict: false
fallback_delay: 300ms
subnets:
- mask: 1.2.3.4/24
source_ips:
- 1.2.3.4
- 1.2.3.5
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|------------------|--------------------------------|---------------|---------------|--------------------------------------------------------------------------------------------|
| `enabled`        | `bool`                         | yes           | `false`       | Enables the multinet setting to manage the source IP of outgoing requests.                        |
| `balancer`       | `string`                       | yes           | `""`          | Strategy to pick the source IP. By default the first address is picked. Supports the `roundrobin` setting. |
| `restrict`       | `bool`                         | yes           | `false`       | Restricts requests to undefined subnets.                                                          |
| `fallback_delay` | `duration`                     | yes           | `300ms`       | Delay before switching from the IPv6 stack to the IPv4 fallback.                                  |
| `subnets` | [[]Subnet](#subnet-subsection) | yes | | Set of subnets to apply multinet dial settings. |
#### `subnet` subsection
```yaml
- mask: 1.2.3.4/24
source_ips:
- 1.2.3.4
- 1.2.3.5
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|--------------|------------|---------------|---------------|----------------------------------------------------------------------|
| `mask` | `string` | yes | | Destination subnet. |
| `source_ips` | `[]string` | yes | | Array of source IP addresses to use when dialing destination subnet. |
# `features` section
Contains parameters for enabling features.
```yaml
features:
enable_filepath_fallback: true
tree_pool_netmap_support: true
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|-------------------------------------|--------|---------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `features.enable_filepath_fallback` | `bool` | yes           | `false`       | Enable using a fallback path to search for an object by attribute. If the value of the `FilePath` attribute in the request contains no `/` symbols or a single leading `/` symbol and the object was not found, an attempt is made to search for the object by the `FileName` attribute. |
| `features.tree_pool_netmap_support` | `bool` | no            | `false`       | Enable using the new version of the tree pool, which uses the netmap to select nodes, for requests to the tree service.                                                                                                                                                                  |
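For instance, with `enable_filepath_fallback` set, a request like the following (names are placeholders) first searches by `FilePath` and, if the object is not found, retries the search by `FileName`:
```shell
$ curl http://localhost:8082/get_by_attribute/<cid>/FilePath/object-name
```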

docs/nns.md Normal file

@ -0,0 +1,36 @@
# Nicename Resolving with NNS
Steps to start using name resolving:
1. Enable NNS resolving in the config (`rpc_endpoint` must be a valid Neo RPC node, see [configs](./config) for other examples):
```yaml
rpc_endpoint: http://morph-chain.frostfs.devenv:30333
resolve_order:
- nns
```
2. Make sure your container is registered in the NNS contract. If you use [frostfs-dev-env](https://git.frostfs.info/TrueCloudLab/frostfs-dev-env)
you can check whether your container (e.g. named `container-name`) is registered in NNS:
```shell
$ curl -s --data '{"id":1,"jsonrpc":"2.0","method":"getcontractstate","params":[1]}' \
http://morph-chain.frostfs.devenv:30333 | jq -r '.result.hash'
0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667
$ docker exec -it morph_chain neo-go \
contract testinvokefunction \
-r http://morph-chain.frostfs.devenv:30333 0x8e6c3cd4b976b28e84a3788f6ea9e2676c15d667 \
resolve string:container-name.container int:16 \
| jq -r '.stack[0].value | if type=="array" then .[0].value else . end' \
| base64 -d && echo
7f3vvkw4iTiS5ZZbu5BQXEmJtETWbi3uUjLNaSs29xrL
```
3. Use container name instead of its `$CID`. For example:
```shell
$ curl http://localhost:8082/get_by_attribute/container-name/FileName/object-name
```


@ -1,537 +0,0 @@
package downloader
import (
"archive/zip"
"bufio"
"bytes"
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strconv"
"strings"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/response"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tree"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"github.com/valyala/fasthttp"
"go.uber.org/atomic"
"go.uber.org/zap"
)
type request struct {
*fasthttp.RequestCtx
log *zap.Logger
}
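// isValidToken reports whether s is a valid HTTP header name (token) as
// defined by RFC 7230: printable ASCII without separator characters.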
func isValidToken(s string) bool {
for _, c := range s {
if c <= ' ' || c > 127 {
return false
}
if strings.ContainsRune("()<>@,;:\\\"/[]?={}", c) {
return false
}
}
return true
}
func isValidValue(s string) bool {
for _, c := range s {
// The HTTP specification technically allows more, but we don't want to escape things.
if c < ' ' || c > 127 || c == '"' {
return false
}
}
return true
}
type readCloser struct {
io.Reader
io.Closer
}
// readContentType initializes an io.Reader limited to maxSize and detects the Content-Type from it.
// Returns r's error directly. Also returns the processed data.
func readContentType(maxSize uint64, rInit func(uint64) (io.Reader, error)) (string, []byte, error) {
if maxSize > sizeToDetectType {
maxSize = sizeToDetectType
}
buf := make([]byte, maxSize) // maybe sync-pool the slice?
r, err := rInit(maxSize)
if err != nil {
return "", nil, err
}
n, err := r.Read(buf)
if err != nil && err != io.EOF {
return "", nil, err
}
buf = buf[:n]
return http.DetectContentType(buf), buf, err // to not lose io.EOF
}
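// receiveFile requests the object from the pool and streams its payload to the
// client, mapping object attributes to response headers (user attributes,
// Content-Type, Last-Modified, Content-Disposition).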
func receiveFile(ctx context.Context, req request, clnt *pool.Pool, objectAddress oid.Address) {
var (
err error
dis = "inline"
start = time.Now()
filename string
)
var prm pool.PrmObjectGet
prm.SetAddress(objectAddress)
if btoken := bearerToken(ctx); btoken != nil {
prm.UseBearer(*btoken)
}
rObj, err := clnt.GetObject(ctx, prm)
if err != nil {
req.handleFrostFSErr(err, start)
return
}
// we can't close reader in this function, so how to do it?
if req.Request.URI().QueryArgs().GetBool("download") {
dis = "attachment"
}
payloadSize := rObj.Header.PayloadSize()
req.Response.Header.Set(fasthttp.HeaderContentLength, strconv.FormatUint(payloadSize, 10))
var contentType string
for _, attr := range rObj.Header.Attributes() {
key := attr.Key()
val := attr.Value()
if !isValidToken(key) || !isValidValue(val) {
continue
}
key = utils.BackwardTransformIfSystem(key)
req.Response.Header.Set(utils.UserAttributeHeaderPrefix+key, val)
switch key {
case object.AttributeFileName:
filename = val
case object.AttributeTimestamp:
value, err := strconv.ParseInt(val, 10, 64)
if err != nil {
req.log.Info(logs.CouldntParseCreationDate,
zap.String("key", key),
zap.String("val", val),
zap.Error(err))
continue
}
req.Response.Header.Set(fasthttp.HeaderLastModified,
time.Unix(value, 0).UTC().Format(http.TimeFormat))
case object.AttributeContentType:
contentType = val
}
}
idsToResponse(&req.Response, &rObj.Header)
if len(contentType) == 0 {
// determine the Content-Type from the payload head
var payloadHead []byte
contentType, payloadHead, err = readContentType(payloadSize, func(uint64) (io.Reader, error) {
return rObj.Payload, nil
})
if err != nil && err != io.EOF {
req.log.Error(logs.CouldNotDetectContentTypeFromPayload, zap.Error(err))
response.Error(req.RequestCtx, "could not detect Content-Type from payload: "+err.Error(), fasthttp.StatusBadRequest)
return
}
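// urlencode escapes each element of a slash-separated path individually,
// preserving the slashes while percent-encoding special characters.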
// reset payload reader since a part of the data has been read
var headReader io.Reader = bytes.NewReader(payloadHead)
if err != io.EOF { // otherwise, we've already read full payload
headReader = io.MultiReader(headReader, rObj.Payload)
}
// note: we could do with io.Reader, but SetBodyStream below closes body stream
// if it implements io.Closer and that's useful for us.
rObj.Payload = readCloser{headReader, rObj.Payload}
}
req.SetContentType(contentType)
req.Response.Header.Set(fasthttp.HeaderContentDisposition, dis+"; filename="+path.Base(filename))
req.Response.SetBodyStream(rObj.Payload, int(payloadSize))
}
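// bearerToken returns the bearer token stored in the request context,
// or nil if none is present.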
func bearerToken(ctx context.Context) *bearer.Token {
if tkn, err := tokens.LoadBearerToken(ctx); err == nil {
return tkn
}
return nil
}
func (r *request) handleFrostFSErr(err error, start time.Time) {
logFields := []zap.Field{
zap.Stringer("elapsed", time.Since(start)),
zap.Error(err),
}
statusCode, msg, additionalFields := response.FormErrorResponse("could not receive object", err)
logFields = append(logFields, additionalFields...)
r.log.Error(logs.CouldNotReceiveObject, logFields...)
response.Error(r.RequestCtx, msg, statusCode)
}
// Downloader is a download request handler.
type Downloader struct {
log *zap.Logger
pool *pool.Pool
containerResolver *resolver.ContainerResolver
settings *Settings
tree *tree.Tree
}
// Settings stores reloading parameters, so it has to provide atomic getters and setters.
type Settings struct {
zipCompression atomic.Bool
}
func (s *Settings) ZipCompression() bool {
return s.zipCompression.Load()
}
func (s *Settings) SetZipCompression(val bool) {
s.zipCompression.Store(val)
}
// New creates an instance of Downloader using specified options.
func New(params *utils.AppParams, settings *Settings, tree *tree.Tree) *Downloader {
return &Downloader{
log: params.Logger,
pool: params.Pool,
settings: settings,
containerResolver: params.Resolver,
tree: tree,
}
}
func (d *Downloader) newRequest(ctx *fasthttp.RequestCtx, log *zap.Logger) *request {
return &request{
RequestCtx: ctx,
log: log,
}
}
// DownloadByAddressOrBucketName handles download requests using simple cid/oid or bucketname/key format.
func (d *Downloader) DownloadByAddressOrBucketName(c *fasthttp.RequestCtx) {
test, _ := c.UserValue("oid").(string)
var id oid.ID
err := id.DecodeString(test)
if err != nil {
d.byBucketname(c, receiveFile)
} else {
d.byAddress(c, receiveFile)
}
}
// byAddress is a wrapper for function (e.g. request.headObject, request.receiveFile) that
// prepares request and object address to it.
func (d *Downloader) byAddress(c *fasthttp.RequestCtx, f func(context.Context, request, *pool.Pool, oid.Address)) {
var (
idCnr, _ = c.UserValue("cid").(string)
idObj, _ = c.UserValue("oid").(string)
log = d.log.With(zap.String("cid", idCnr), zap.String("oid", idObj))
)
ctx := utils.GetContextFromRequest(c)
cnrID, err := utils.GetContainerID(ctx, idCnr, d.containerResolver)
if err != nil {
log.Error(logs.WrongContainerID, zap.Error(err))
response.Error(c, "wrong container id", fasthttp.StatusBadRequest)
return
}
objID := new(oid.ID)
if err = objID.DecodeString(idObj); err != nil {
log.Error(logs.WrongObjectID, zap.Error(err))
response.Error(c, "wrong object id", fasthttp.StatusBadRequest)
return
}
var addr oid.Address
addr.SetContainer(*cnrID)
addr.SetObject(*objID)
f(ctx, *d.newRequest(c, log), d.pool, addr)
}
// byBucketname is a wrapper for function (e.g. request.headObject, request.receiveFile) that
// prepares request and object address to it.
func (d *Downloader) byBucketname(req *fasthttp.RequestCtx, f func(context.Context, request, *pool.Pool, oid.Address)) {
var (
bucketname = req.UserValue("cid").(string)
key = req.UserValue("oid").(string)
log = d.log.With(zap.String("bucketname", bucketname), zap.String("key", key))
)
ctx := utils.GetContextFromRequest(req)
cnrID, err := utils.GetContainerID(ctx, bucketname, d.containerResolver)
if err != nil {
log.Error(logs.WrongContainerID, zap.Error(err))
response.Error(req, "wrong container id", fasthttp.StatusBadRequest)
return
}
foundOid, err := d.tree.GetLatestVersion(ctx, cnrID, key)
if err != nil {
log.Error(logs.ObjectWasntFound, zap.Error(err))
response.Error(req, "object wasn't found", fasthttp.StatusNotFound)
return
}
if foundOid.DeleteMarker {
log.Error(logs.ObjectWasDeleted)
response.Error(req, "object deleted", fasthttp.StatusNotFound)
return
}
var addr oid.Address
addr.SetContainer(*cnrID)
addr.SetObject(foundOid.OID)
f(ctx, *d.newRequest(req, log), d.pool, addr)
}
// DownloadByAttribute handles attribute-based download requests.
func (d *Downloader) DownloadByAttribute(c *fasthttp.RequestCtx) {
d.byAttribute(c, receiveFile)
}
// byAttribute is a wrapper similar to byAddress.
func (d *Downloader) byAttribute(c *fasthttp.RequestCtx, f func(context.Context, request, *pool.Pool, oid.Address)) {
var (
scid, _ = c.UserValue("cid").(string)
key, _ = url.QueryUnescape(c.UserValue("attr_key").(string))
val, _ = url.QueryUnescape(c.UserValue("attr_val").(string))
log = d.log.With(zap.String("cid", scid), zap.String("attr_key", key), zap.String("attr_val", val))
)
ctx := utils.GetContextFromRequest(c)
containerID, err := utils.GetContainerID(ctx, scid, d.containerResolver)
if err != nil {
log.Error(logs.WrongContainerID, zap.Error(err))
response.Error(c, "wrong container id", fasthttp.StatusBadRequest)
return
}
res, err := d.search(ctx, containerID, key, val, object.MatchStringEqual)
if err != nil {
log.Error(logs.CouldNotSearchForObjects, zap.Error(err))
response.Error(c, "could not search for objects: "+err.Error(), fasthttp.StatusBadRequest)
return
}
defer res.Close()
buf := make([]oid.ID, 1)
n, err := res.Read(buf)
if n == 0 {
if errors.Is(err, io.EOF) {
log.Error(logs.ObjectNotFound, zap.Error(err))
response.Error(c, "object not found", fasthttp.StatusNotFound)
return
}
log.Error(logs.ReadObjectListFailed, zap.Error(err))
response.Error(c, "read object list failed: "+err.Error(), fasthttp.StatusBadRequest)
return
}
var addrObj oid.Address
addrObj.SetContainer(*containerID)
addrObj.SetObject(buf[0])
f(ctx, *d.newRequest(c, log), d.pool, addrObj)
}
func (d *Downloader) search(ctx context.Context, cid *cid.ID, key, val string, op object.SearchMatchType) (pool.ResObjectSearch, error) {
filters := object.NewSearchFilters()
filters.AddRootFilter()
filters.AddFilter(key, val, op)
var prm pool.PrmObjectSearch
prm.SetContainerID(*cid)
prm.SetFilters(filters)
if btoken := bearerToken(ctx); btoken != nil {
prm.UseBearer(*btoken)
}
return d.pool.SearchObjects(ctx, prm)
}
func (d *Downloader) getContainer(ctx context.Context, cnrID cid.ID) (container.Container, error) {
var prm pool.PrmContainerGet
prm.SetContainerID(cnrID)
return d.pool.GetContainer(ctx, prm)
}
func (d *Downloader) addObjectToZip(zw *zip.Writer, obj *object.Object) (io.Writer, error) {
method := zip.Store
if d.settings.ZipCompression() {
method = zip.Deflate
}
filePath := getZipFilePath(obj)
if len(filePath) == 0 || filePath[len(filePath)-1] == '/' {
return nil, fmt.Errorf("invalid filepath '%s'", filePath)
}
return zw.CreateHeader(&zip.FileHeader{
Name: filePath,
Method: method,
Modified: time.Now(),
})
}
// DownloadZipped handles zip by prefix requests.
func (d *Downloader) DownloadZipped(c *fasthttp.RequestCtx) {
scid, _ := c.UserValue("cid").(string)
prefix, _ := url.QueryUnescape(c.UserValue("prefix").(string))
log := d.log.With(zap.String("cid", scid), zap.String("prefix", prefix))
ctx := utils.GetContextFromRequest(c)
containerID, err := utils.GetContainerID(ctx, scid, d.containerResolver)
if err != nil {
log.Error(logs.WrongContainerID, zap.Error(err))
response.Error(c, "wrong container id", fasthttp.StatusBadRequest)
return
}
// check if container exists here to be able to return 404 error,
// otherwise we get this error only in object iteration step
// and client get 200 OK.
if _, err = d.getContainer(ctx, *containerID); err != nil {
log.Error(logs.CouldNotCheckContainerExistence, zap.Error(err))
if client.IsErrContainerNotFound(err) {
response.Error(c, "Not Found", fasthttp.StatusNotFound)
return
}
response.Error(c, "could not check container existence: "+err.Error(), fasthttp.StatusBadRequest)
return
}
resSearch, err := d.search(ctx, containerID, object.AttributeFilePath, prefix, object.MatchCommonPrefix)
if err != nil {
log.Error(logs.CouldNotSearchForObjects, zap.Error(err))
response.Error(c, "could not search for objects: "+err.Error(), fasthttp.StatusBadRequest)
return
}
c.Response.Header.Set(fasthttp.HeaderContentType, "application/zip")
c.Response.Header.Set(fasthttp.HeaderContentDisposition, "attachment; filename=\"archive.zip\"")
c.Response.SetStatusCode(http.StatusOK)
c.SetBodyStreamWriter(func(w *bufio.Writer) {
defer resSearch.Close()
zipWriter := zip.NewWriter(w)
var bufZip []byte
var addr oid.Address
empty := true
called := false
btoken := bearerToken(ctx)
addr.SetContainer(*containerID)
errIter := resSearch.Iterate(func(id oid.ID) bool {
called = true
if empty {
bufZip = make([]byte, 3<<20) // the same as for upload
}
empty = false
addr.SetObject(id)
if err = d.zipObject(ctx, zipWriter, addr, btoken, bufZip); err != nil {
log.Error(logs.FailedToAddObjectToArchive, zap.String("oid", id.EncodeToString()), zap.Error(err))
}
return false
})
if errIter != nil {
log.Error(logs.IteratingOverSelectedObjectsFailed, zap.Error(errIter))
} else if !called {
log.Error(logs.ObjectsNotFound)
}
if err = zipWriter.Close(); err != nil {
log.Error(logs.CloseZipWriter, zap.Error(err))
}
})
}
func (d *Downloader) zipObject(ctx context.Context, zipWriter *zip.Writer, addr oid.Address, btoken *bearer.Token, bufZip []byte) error {
var prm pool.PrmObjectGet
prm.SetAddress(addr)
if btoken != nil {
prm.UseBearer(*btoken)
}
resGet, err := d.pool.GetObject(ctx, prm)
if err != nil {
return fmt.Errorf("get FrostFS object: %v", err)
}
objWriter, err := d.addObjectToZip(zipWriter, &resGet.Header)
if err != nil {
return fmt.Errorf("zip create header: %v", err)
}
if _, err = io.CopyBuffer(objWriter, resGet.Payload, bufZip); err != nil {
return fmt.Errorf("copy object payload to zip file: %v", err)
}
if err = resGet.Payload.Close(); err != nil {
return fmt.Errorf("object body close error: %w", err)
}
if err = zipWriter.Flush(); err != nil {
return fmt.Errorf("flush zip writer: %v", err)
}
return nil
}
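// getZipFilePath returns the object's FilePath attribute value,
// or an empty string if the attribute is absent.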
func getZipFilePath(obj *object.Object) string {
for _, attr := range obj.Attributes() {
if attr.Key() == object.AttributeFilePath {
return attr.Value()
}
}
return ""
}


@ -1,48 +0,0 @@
//go:build !integration
package downloader
import (
"io"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
func TestDetector(t *testing.T) {
txtContentType := "text/plain; charset=utf-8"
sb := strings.Builder{}
for i := 0; i < 10; i++ {
sb.WriteString("Some txt content. Content-Type must be detected properly by detector.")
}
for _, tc := range []struct {
Name string
ContentType string
Expected string
}{
{
Name: "less than 512b",
ContentType: txtContentType,
Expected: sb.String()[:256],
},
{
Name: "more than 512b",
ContentType: txtContentType,
Expected: sb.String(),
},
} {
t.Run(tc.Name, func(t *testing.T) {
contentType, data, err := readContentType(uint64(len(tc.Expected)),
func(sz uint64) (io.Reader, error) {
return strings.NewReader(tc.Expected), nil
},
)
require.NoError(t, err)
require.Equal(t, tc.ContentType, contentType)
require.True(t, strings.HasPrefix(tc.Expected, string(data)))
})
}
}

go.mod

@ -1,113 +1,138 @@
module git.frostfs.info/TrueCloudLab/frostfs-http-gw
go 1.20
go 1.22
require (
git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.15.1-0.20230802075510-964c3edb3f44
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230802103237-363f153eafa6
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20241112082307-f17779933e88
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20250130095343-593dd77d841a
git.frostfs.info/TrueCloudLab/multinet v0.0.0-20241015075604-6cb0d80e0972
git.frostfs.info/TrueCloudLab/zapjournald v0.0.0-20240124114243-cb2e66427d02
github.com/bluele/gcache v0.0.2
github.com/docker/docker v27.1.1+incompatible
github.com/docker/go-units v0.5.0
github.com/fasthttp/router v1.4.1
github.com/nspcc-dev/neo-go v0.101.2-0.20230601131642-a0117042e8fc
github.com/prometheus/client_golang v1.15.1
github.com/prometheus/client_model v0.3.0
github.com/nspcc-dev/neo-go v0.106.2
github.com/panjf2000/ants/v2 v2.5.0
github.com/prometheus/client_golang v1.19.0
github.com/prometheus/client_model v0.5.0
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.15.0
github.com/stretchr/testify v1.8.3
github.com/testcontainers/testcontainers-go v0.13.0
github.com/ssgreg/journald v1.0.0
github.com/stretchr/testify v1.9.0
github.com/testcontainers/testcontainers-go v0.35.0
github.com/trailofbits/go-fuzz-utils v0.0.0-20230413173806-58c38daa3cb4
github.com/valyala/fasthttp v1.34.0
go.opentelemetry.io/otel v1.16.0
go.opentelemetry.io/otel/trace v1.16.0
go.uber.org/atomic v1.10.0
go.uber.org/zap v1.24.0
google.golang.org/grpc v1.55.0
go.opentelemetry.io/otel v1.31.0
go.opentelemetry.io/otel/trace v1.31.0
go.uber.org/zap v1.27.0
golang.org/x/exp v0.0.0-20240506185415-9bf2ced13842
golang.org/x/net v0.30.0
golang.org/x/sys v0.28.0
google.golang.org/grpc v1.69.2
)
require (
git.frostfs.info/TrueCloudLab/frostfs-contract v0.0.0-20230307110621-19a8ef2d02fb // indirect
dario.cat/mergo v1.0.0 // indirect
git.frostfs.info/TrueCloudLab/frostfs-contract v0.19.3-0.20240621131249-49e5270f673e // indirect
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0 // indirect
git.frostfs.info/TrueCloudLab/hrw v1.2.1 // indirect
git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0 // indirect
git.frostfs.info/TrueCloudLab/tzhash v1.8.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/Microsoft/hcsshim v0.9.2 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/VictoriaMetrics/easyproto v0.1.4 // indirect
github.com/andybalholm/brotli v1.0.4 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/containerd/cgroups v1.0.3 // indirect
github.com/containerd/containerd v1.6.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/containerd/containerd v1.7.18 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/cpuguy83/dockercfg v0.3.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.2.0 // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/docker v20.10.14+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.15.2 // indirect
github.com/hashicorp/golang-lru v0.6.0 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/websocket v1.5.1 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/klauspost/compress v1.16.4 // indirect
github.com/ipfs/go-cid v0.0.7 // indirect
github.com/klauspost/compress v1.17.4 // indirect
github.com/klauspost/cpuid/v2 v2.2.6 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/moby/sys/mount v0.3.2 // indirect
github.com/moby/sys/mountinfo v0.6.1 // indirect
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/patternmatcher v0.6.0 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/user v0.1.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/nspcc-dev/go-ordered-json v0.0.0-20220111165707-25110be27d22 // indirect
github.com/nspcc-dev/neo-go/pkg/interop v0.0.0-20230615193820-9185820289ce // indirect
github.com/nspcc-dev/rfc6979 v0.2.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr v0.14.0 // indirect
github.com/multiformats/go-multibase v0.2.0 // indirect
github.com/multiformats/go-multihash v0.2.3 // indirect
github.com/multiformats/go-varint v0.0.7 // indirect
github.com/nspcc-dev/go-ordered-json v0.0.0-20240301084351-0246b013f8b2 // indirect
github.com/nspcc-dev/neo-go/pkg/interop v0.0.0-20240521091047-78685785716d // indirect
github.com/nspcc-dev/rfc6979 v0.2.1 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.2 // indirect
github.com/opencontainers/runc v1.1.1 // indirect
github.com/opencontainers/image-spec v1.1.0 // indirect
github.com/pelletier/go-toml/v2 v2.0.6 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/common v0.42.0 // indirect
github.com/prometheus/procfs v0.9.0 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/prometheus/common v0.48.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/savsgio/gotils v0.0.0-20210617111740-97865ed5a873 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/shirou/gopsutil/v3 v3.23.12 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/spf13/afero v1.9.3 // indirect
github.com/spf13/cast v1.5.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/subosito/gotenv v1.4.2 // indirect
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/twmb/murmur3 v1.1.8 // indirect
github.com/urfave/cli v1.22.5 // indirect
github.com/urfave/cli v1.22.12 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.16.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.16.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.16.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.16.0 // indirect
go.opentelemetry.io/otel/metric v1.16.0 // indirect
go.opentelemetry.io/otel/sdk v1.16.0 // indirect
go.opentelemetry.io/proto/otlp v0.19.0 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
go.etcd.io/bbolt v1.3.9 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.28.0 // indirect
go.opentelemetry.io/otel/metric v1.31.0 // indirect
go.opentelemetry.io/otel/sdk v1.31.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.9.0 // indirect
golang.org/x/exp v0.0.0-20230515195305-f3d0a9c9a5cc // indirect
golang.org/x/net v0.10.0 // indirect
golang.org/x/sync v0.2.0 // indirect
golang.org/x/sys v0.8.0 // indirect
golang.org/x/term v0.8.0 // indirect
golang.org/x/text v0.9.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.3.0 // indirect
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
google.golang.org/protobuf v1.30.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20241015192408-796eee8c2d53 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241015192408-796eee8c2d53 // indirect
google.golang.org/protobuf v1.36.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
lukechampine.com/blake3 v1.2.1 // indirect
)

go.sum

File diff suppressed because it is too large

internal/cache/buckets.go vendored Normal file

@ -0,0 +1,111 @@
package cache
import (
"fmt"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/bluele/gcache"
"go.uber.org/zap"
)
// BucketCache contains cache with objects and the lifetime of cache entries.
type BucketCache struct {
cache gcache.Cache
cidCache gcache.Cache
logger *zap.Logger
}
// Config stores expiration params for cache.
type Config struct {
Size int
Lifetime time.Duration
Logger *zap.Logger
}
const (
// DefaultBucketCacheSize is a default maximum number of entries in cache.
DefaultBucketCacheSize = 1e3
// DefaultBucketCacheLifetime is a default lifetime of entries in cache.
DefaultBucketCacheLifetime = time.Minute
)
// DefaultBucketConfig returns new default cache expiration values.
func DefaultBucketConfig(logger *zap.Logger) *Config {
return &Config{
Size: DefaultBucketCacheSize,
Lifetime: DefaultBucketCacheLifetime,
Logger: logger,
}
}
// NewBucketCache creates an object of BucketCache.
func NewBucketCache(config *Config, cidCache bool) *BucketCache {
cache := &BucketCache{
cache: gcache.New(config.Size).LRU().Expiration(config.Lifetime).Build(),
logger: config.Logger,
}
if cidCache {
cache.cidCache = gcache.New(config.Size).LRU().Expiration(config.Lifetime).Build()
}
return cache
}
// Get returns a cached object.
func (o *BucketCache) Get(ns, bktName string) *data.BucketInfo {
return o.get(formKey(ns, bktName))
}
func (o *BucketCache) GetByCID(cnrID cid.ID) *data.BucketInfo {
if o.cidCache == nil {
return nil
}
entry, err := o.cidCache.Get(cnrID)
if err != nil {
return nil
}
key, ok := entry.(string)
if !ok {
o.logger.Warn(logs.InvalidCacheEntryType, zap.String("actual", fmt.Sprintf("%T", entry)),
zap.String("expected", fmt.Sprintf("%T", key)))
return nil
}
return o.get(key)
}
func (o *BucketCache) get(key string) *data.BucketInfo {
entry, err := o.cache.Get(key)
if err != nil {
return nil
}
result, ok := entry.(*data.BucketInfo)
if !ok {
o.logger.Warn(logs.InvalidCacheEntryType, zap.String("actual", fmt.Sprintf("%T", entry)),
zap.String("expected", fmt.Sprintf("%T", result)))
return nil
}
return result
}
// Put puts an object to cache.
func (o *BucketCache) Put(bkt *data.BucketInfo) error {
if o.cidCache != nil {
if err := o.cidCache.Set(bkt.CID, formKey(bkt.Zone, bkt.Name)); err != nil {
return err
}
}
return o.cache.Set(formKey(bkt.Zone, bkt.Name), bkt)
}
func formKey(ns, name string) string {
return name + "." + ns
}

internal/cache/netmap.go vendored Normal file

@ -0,0 +1,65 @@
package cache
import (
"fmt"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"github.com/bluele/gcache"
"go.uber.org/zap"
)
type (
// NetmapCache provides cache for netmap.
NetmapCache struct {
cache gcache.Cache
logger *zap.Logger
}
// NetmapCacheConfig stores expiration params for cache.
NetmapCacheConfig struct {
Lifetime time.Duration
Logger *zap.Logger
}
)
const (
DefaultNetmapCacheLifetime = time.Minute
netmapCacheSize = 1
netmapKey = "netmap"
)
// DefaultNetmapConfig returns new default cache expiration values.
func DefaultNetmapConfig(logger *zap.Logger) *NetmapCacheConfig {
return &NetmapCacheConfig{
Lifetime: DefaultNetmapCacheLifetime,
Logger: logger,
}
}
// NewNetmapCache creates an object of NetmapCache.
func NewNetmapCache(config *NetmapCacheConfig) *NetmapCache {
gc := gcache.New(netmapCacheSize).LRU().Expiration(config.Lifetime).Build()
return &NetmapCache{cache: gc, logger: config.Logger}
}
func (c *NetmapCache) Get() *netmap.NetMap {
entry, err := c.cache.Get(netmapKey)
if err != nil {
return nil
}
result, ok := entry.(netmap.NetMap)
if !ok {
c.logger.Warn(logs.InvalidCacheEntryType, zap.String("actual", fmt.Sprintf("%T", entry)),
zap.String("expected", fmt.Sprintf("%T", result)))
return nil
}
return &result
}
func (c *NetmapCache) Put(nm netmap.NetMap) error {
return c.cache.Set(netmapKey, nm)
}

internal/data/info.go Normal file

@ -0,0 +1,14 @@
package data
import (
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
)
type BucketInfo struct {
Name string // container name from system attribute
Zone string // container zone from system attribute
CID cid.ID
HomomorphicHashDisabled bool
PlacementPolicy netmap.PlacementPolicy
}


@ -1,4 +1,4 @@
package api
package data
import (
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -7,11 +7,21 @@ import (
// NodeVersion represent node from tree service.
type NodeVersion struct {
BaseNodeVersion
DeleteMarker bool
}
// BaseNodeVersion is minimal node info from tree service.
// Basically used for "system" object.
type BaseNodeVersion struct {
OID oid.ID
ID uint64
OID oid.ID
IsDeleteMarker bool
}
type NodeInfo struct {
Meta []NodeMeta
}
type NodeMeta interface {
GetKey() string
GetValue() []byte
}


@ -1,115 +0,0 @@
package services
import (
"context"
"errors"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tree"
treepool "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool/tree"
grpcService "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool/tree/service"
)
type GetNodeByPathResponseInfoWrapper struct {
response *grpcService.GetNodeByPathResponse_Info
}
func (n GetNodeByPathResponseInfoWrapper) GetNodeID() uint64 {
return n.response.GetNodeId()
}
func (n GetNodeByPathResponseInfoWrapper) GetParentID() uint64 {
return n.response.GetParentId()
}
func (n GetNodeByPathResponseInfoWrapper) GetTimestamp() uint64 {
return n.response.GetTimestamp()
}
func (n GetNodeByPathResponseInfoWrapper) GetMeta() []tree.Meta {
res := make([]tree.Meta, len(n.response.Meta))
for i, value := range n.response.Meta {
res[i] = value
}
return res
}
type GetSubTreeResponseBodyWrapper struct {
response *grpcService.GetSubTreeResponse_Body
}
func (n GetSubTreeResponseBodyWrapper) GetNodeID() uint64 {
return n.response.GetNodeId()
}
func (n GetSubTreeResponseBodyWrapper) GetParentID() uint64 {
return n.response.GetParentId()
}
func (n GetSubTreeResponseBodyWrapper) GetTimestamp() uint64 {
return n.response.GetTimestamp()
}
func (n GetSubTreeResponseBodyWrapper) GetMeta() []tree.Meta {
res := make([]tree.Meta, len(n.response.Meta))
for i, value := range n.response.Meta {
res[i] = value
}
return res
}
type PoolWrapper struct {
p *treepool.Pool
}
func NewPoolWrapper(p *treepool.Pool) *PoolWrapper {
return &PoolWrapper{p: p}
}
func (w *PoolWrapper) GetNodes(ctx context.Context, prm *tree.GetNodesParams) ([]tree.NodeResponse, error) {
poolPrm := treepool.GetNodesParams{
CID: prm.CnrID,
TreeID: prm.TreeID,
Path: prm.Path,
Meta: prm.Meta,
PathAttribute: tree.FileNameKey,
LatestOnly: prm.LatestOnly,
AllAttrs: prm.AllAttrs,
BearerToken: getBearer(ctx),
}
nodes, err := w.p.GetNodes(ctx, poolPrm)
if err != nil {
return nil, handleError(err)
}
res := make([]tree.NodeResponse, len(nodes))
for i, info := range nodes {
res[i] = GetNodeByPathResponseInfoWrapper{info}
}
return res, nil
}
func getBearer(ctx context.Context) []byte {
token, err := tokens.LoadBearerToken(ctx)
if err != nil {
return nil
}
return token.Marshal()
}
func handleError(err error) error {
if err == nil {
return nil
}
if errors.Is(err, treepool.ErrNodeNotFound) {
return fmt.Errorf("%w: %s", tree.ErrNodeNotFound, err.Error())
}
if errors.Is(err, treepool.ErrNodeAccessDenied) {
return fmt.Errorf("%w: %s", tree.ErrNodeAccessDenied, err.Error())
}
return err
}

internal/handler/browse.go Normal file

@ -0,0 +1,382 @@
package handler
import (
"context"
"html/template"
"net/url"
"sort"
"strconv"
"strings"
"sync"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/docker/go-units"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
const (
dateFormat = "02-01-2006 15:04"
attrOID = "OID"
attrCreated = "Created"
attrFileName = "FileName"
attrFilePath = "FilePath"
attrSize = "Size"
attrDeleteMarker = "IsDeleteMarker"
)
type (
BrowsePageData struct {
HasErrors bool
Container string
Prefix string
Protocol string
Objects []ResponseObject
}
ResponseObject struct {
OID string
Created string
FileName string
FilePath string
Size string
IsDir bool
GetURL string
IsDeleteMarker bool
}
)
func newListObjectsResponseS3(attrs map[string]string) ResponseObject {
return ResponseObject{
Created: formatTimestamp(attrs[attrCreated]),
OID: attrs[attrOID],
FileName: attrs[attrFileName],
Size: attrs[attrSize],
IsDir: attrs[attrOID] == "",
IsDeleteMarker: attrs[attrDeleteMarker] == "true",
}
}
func newListObjectsResponseNative(attrs map[string]string) ResponseObject {
filename := lastPathElement(attrs[object.AttributeFilePath])
if filename == "" {
filename = attrs[attrFileName]
}
return ResponseObject{
OID: attrs[attrOID],
Created: formatTimestamp(attrs[object.AttributeTimestamp] + "000"),
FileName: filename,
FilePath: attrs[object.AttributeFilePath],
Size: attrs[attrSize],
IsDir: false,
}
}
func getNextDir(filepath, prefix string) string {
restPath := strings.Replace(filepath, prefix, "", 1)
index := strings.Index(restPath, "/")
if index == -1 {
return ""
}
return restPath[:index]
}
func lastPathElement(path string) string {
if path == "" {
return path
}
index := strings.LastIndex(path, "/")
if index == len(path)-1 {
index = strings.LastIndex(path[:index], "/")
}
return path[index+1:]
}
func parseTimestamp(tstamp string) (time.Time, error) {
millis, err := strconv.ParseInt(tstamp, 10, 64)
if err != nil {
return time.Time{}, err
}
return time.UnixMilli(millis), nil
}
func formatTimestamp(strdate string) string {
date, err := parseTimestamp(strdate)
if err != nil || date.IsZero() {
return ""
}
return date.Format(dateFormat)
}
func formatSize(strsize string) string {
size, err := strconv.ParseFloat(strsize, 64)
if err != nil {
return "0B"
}
return units.HumanSize(size)
}
func parentDir(prefix string) string {
index := strings.LastIndex(prefix, "/")
if index == -1 {
return prefix
}
return prefix[index:]
}
func trimPrefix(encPrefix string) string {
prefix, err := url.PathUnescape(encPrefix)
if err != nil {
return ""
}
slashIndex := strings.LastIndex(prefix, "/")
if slashIndex == -1 {
return ""
}
return prefix[:slashIndex]
}
func urlencode(path string) string {
var res strings.Builder
prefixParts := strings.Split(path, "/")
for _, prefixPart := range prefixParts {
prefixPart = "/" + url.PathEscape(prefixPart)
if prefixPart == "/." || prefixPart == "/.." {
prefixPart = url.PathEscape(prefixPart)
}
res.WriteString(prefixPart)
}
return res.String()
}
type GetObjectsResponse struct {
objects []ResponseObject
hasErrors bool
}
func (h *Handler) getDirObjectsS3(ctx context.Context, bucketInfo *data.BucketInfo, prefix string) (*GetObjectsResponse, error) {
nodes, _, err := h.tree.GetSubTreeByPrefix(ctx, bucketInfo, prefix, true)
if err != nil {
return nil, err
}
result := &GetObjectsResponse{
objects: make([]ResponseObject, 0, len(nodes)),
}
for _, node := range nodes {
meta := node.Meta
if meta == nil {
continue
}
var attrs = make(map[string]string, len(meta))
for _, m := range meta {
attrs[m.GetKey()] = string(m.GetValue())
}
obj := newListObjectsResponseS3(attrs)
if obj.IsDeleteMarker {
continue
}
obj.FilePath = prefix + obj.FileName
obj.GetURL = "/get/" + bucketInfo.Name + urlencode(obj.FilePath)
result.objects = append(result.objects, obj)
}
return result, nil
}
func (h *Handler) getDirObjectsNative(ctx context.Context, bucketInfo *data.BucketInfo, prefix string) (*GetObjectsResponse, error) {
var basePath string
if ind := strings.LastIndex(prefix, "/"); ind != -1 {
basePath = prefix[:ind+1]
}
filters := object.NewSearchFilters()
filters.AddRootFilter()
if prefix != "" {
filters.AddFilter(object.AttributeFilePath, prefix, object.MatchCommonPrefix)
}
prm := PrmObjectSearch{
PrmAuth: PrmAuth{
BearerToken: bearerToken(ctx),
},
Container: bucketInfo.CID,
Filters: filters,
}
objectIDs, err := h.frostfs.SearchObjects(ctx, prm)
if err != nil {
return nil, err
}
defer objectIDs.Close()
resp, err := h.headDirObjects(ctx, bucketInfo.CID, objectIDs, basePath)
if err != nil {
return nil, err
}
log := utils.GetReqLogOrDefault(ctx, h.log)
dirs := make(map[string]struct{})
result := &GetObjectsResponse{
objects: make([]ResponseObject, 0, 100),
}
for objExt := range resp {
if objExt.Error != nil {
log.Error(logs.FailedToHeadObject, zap.Error(objExt.Error))
result.hasErrors = true
continue
}
if objExt.Object.IsDir {
if _, ok := dirs[objExt.Object.FileName]; ok {
continue
}
objExt.Object.GetURL = "/get/" + bucketInfo.CID.EncodeToString() + urlencode(objExt.Object.FilePath)
dirs[objExt.Object.FileName] = struct{}{}
} else {
objExt.Object.GetURL = "/get/" + bucketInfo.CID.EncodeToString() + "/" + objExt.Object.OID
}
result.objects = append(result.objects, objExt.Object)
}
return result, nil
}
type ResponseObjectExtended struct {
Object ResponseObject
Error error
}
func (h *Handler) headDirObjects(ctx context.Context, cnrID cid.ID, objectIDs ResObjectSearch, basePath string) (<-chan ResponseObjectExtended, error) {
res := make(chan ResponseObjectExtended)
go func() {
defer close(res)
log := utils.GetReqLogOrDefault(ctx, h.log).With(
zap.String("cid", cnrID.EncodeToString()),
zap.String("path", basePath),
)
var wg sync.WaitGroup
err := objectIDs.Iterate(func(id oid.ID) bool {
wg.Add(1)
err := h.workerPool.Submit(func() {
defer wg.Done()
var obj ResponseObjectExtended
obj.Object, obj.Error = h.headDirObject(ctx, cnrID, id, basePath)
res <- obj
})
if err != nil {
wg.Done()
log.Warn(logs.FailedToSumbitTaskToPool, zap.Error(err))
}
select {
case <-ctx.Done():
return true
default:
return false
}
})
if err != nil {
log.Error(logs.FailedToIterateOverResponse, zap.Error(err))
}
wg.Wait()
}()
return res, nil
}
func (h *Handler) headDirObject(ctx context.Context, cnrID cid.ID, objID oid.ID, basePath string) (ResponseObject, error) {
addr := newAddress(cnrID, objID)
obj, err := h.frostfs.HeadObject(ctx, PrmObjectHead{
PrmAuth: PrmAuth{BearerToken: bearerToken(ctx)},
Address: addr,
})
if err != nil {
return ResponseObject{}, err
}
attrs := loadAttributes(obj.Attributes())
attrs[attrOID] = objID.EncodeToString()
if multipartSize, ok := attrs[attributeMultipartObjectSize]; ok {
attrs[attrSize] = multipartSize
} else {
attrs[attrSize] = strconv.FormatUint(obj.PayloadSize(), 10)
}
dirname := getNextDir(attrs[object.AttributeFilePath], basePath)
if dirname == "" {
return newListObjectsResponseNative(attrs), nil
}
return ResponseObject{
FileName: dirname,
FilePath: basePath + dirname,
IsDir: true,
}, nil
}
type browseParams struct {
bucketInfo *data.BucketInfo
prefix string
isNative bool
listObjects func(ctx context.Context, bucketName *data.BucketInfo, prefix string) (*GetObjectsResponse, error)
}
func (h *Handler) browseObjects(c *fasthttp.RequestCtx, p browseParams) {
const S3Protocol = "s3"
const FrostfsProtocol = "frostfs"
ctx := utils.GetContextFromRequest(c)
reqLog := utils.GetReqLogOrDefault(ctx, h.log)
log := reqLog.With(
zap.String("bucket", p.bucketInfo.Name),
zap.String("container", p.bucketInfo.CID.EncodeToString()),
zap.String("prefix", p.prefix),
)
resp, err := p.listObjects(ctx, p.bucketInfo, p.prefix)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
objects := resp.objects
sort.Slice(objects, func(i, j int) bool {
if objects[i].IsDir == objects[j].IsDir {
return objects[i].FileName < objects[j].FileName
}
return objects[i].IsDir
})
tmpl, err := template.New("index").Funcs(template.FuncMap{
"formatSize": formatSize,
"trimPrefix": trimPrefix,
"urlencode": urlencode,
"parentDir": parentDir,
}).Parse(h.config.IndexPageTemplate())
if err != nil {
logAndSendBucketError(c, log, err)
return
}
bucketName := p.bucketInfo.Name
protocol := S3Protocol
if p.isNative {
bucketName = p.bucketInfo.CID.EncodeToString()
protocol = FrostfsProtocol
}
if err = tmpl.Execute(c, &BrowsePageData{
Container: bucketName,
Prefix: p.prefix,
Objects: objects,
Protocol: protocol,
HasErrors: resp.hasErrors,
}); err != nil {
logAndSendBucketError(c, log, err)
return
}
}
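// Illustrative sketch (not the gateway's real template): a minimal index page
// compatible with the BrowsePageData fields and iteration used above. Whether
// the handler parses with html/template or text/template is an assumption;
// html/template is used here, and the HTML itself is hypothetical.
package main

import (
	"html/template"
	"os"
)

type row struct {
	FileName string
	IsDir    bool
}

func main() {
	const page = `<h1>{{.Container}}/{{.Prefix}}</h1>
<ul>{{range .Objects}}<li>{{.FileName}}{{if .IsDir}}/{{end}}</li>{{end}}</ul>`
	tmpl := template.Must(template.New("index").Parse(page))
	_ = tmpl.Execute(os.Stdout, struct {
		Container, Prefix string
		Objects           []row
	}{Container: "my-bucket", Prefix: "photos/", Objects: []row{{"cats", true}, {"a.png", false}}})
}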

View file

@ -0,0 +1,304 @@
package handler
import (
"archive/tar"
"archive/zip"
"bufio"
"compress/gzip"
"context"
"errors"
"fmt"
"io"
"net/url"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
// DownloadByAddressOrBucketName handles download requests using simple cid/oid or bucketname/key format.
func (h *Handler) DownloadByAddressOrBucketName(c *fasthttp.RequestCtx) {
cidParam := c.UserValue("cid").(string)
oidParam := c.UserValue("oid").(string)
downloadParam := c.QueryArgs().GetBool("download")
ctx := utils.GetContextFromRequest(c)
log := utils.GetReqLogOrDefault(ctx, h.log).With(
zap.String("cid", cidParam),
zap.String("oid", oidParam),
)
bktInfo, err := h.getBucketInfo(ctx, cidParam, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
checkS3Err := h.tree.CheckSettingsNodeExists(ctx, bktInfo)
if checkS3Err != nil && !errors.Is(checkS3Err, layer.ErrNodeNotFound) {
logAndSendBucketError(c, log, checkS3Err)
return
}
req := newRequest(c, log)
var objID oid.ID
if checkS3Err == nil && shouldDownload(oidParam, downloadParam) {
h.byS3Path(ctx, req, bktInfo.CID, oidParam, h.receiveFile)
} else if err = objID.DecodeString(oidParam); err == nil {
h.byNativeAddress(ctx, req, bktInfo.CID, objID, h.receiveFile)
} else {
h.browseIndex(c, checkS3Err != nil)
}
}
func shouldDownload(oidParam string, downloadParam bool) bool {
return !isDir(oidParam) || downloadParam
}
// DownloadByAttribute handles attribute-based download requests.
func (h *Handler) DownloadByAttribute(c *fasthttp.RequestCtx) {
h.byAttribute(c, h.receiveFile)
}
func (h *Handler) search(ctx context.Context, cnrID cid.ID, key, val string, op object.SearchMatchType) (ResObjectSearch, error) {
filters := object.NewSearchFilters()
filters.AddRootFilter()
filters.AddFilter(key, val, op)
prm := PrmObjectSearch{
PrmAuth: PrmAuth{
BearerToken: bearerToken(ctx),
},
Container: cnrID,
Filters: filters,
}
return h.frostfs.SearchObjects(ctx, prm)
}
// DownloadZip handles zip by prefix requests.
func (h *Handler) DownloadZip(c *fasthttp.RequestCtx) {
scid, _ := c.UserValue("cid").(string)
ctx := utils.GetContextFromRequest(c)
log := utils.GetReqLogOrDefault(ctx, h.log)
bktInfo, err := h.getBucketInfo(ctx, scid, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
resSearch, err := h.searchObjectsByPrefix(c, log, bktInfo.CID)
if err != nil {
return
}
c.Response.Header.Set(fasthttp.HeaderContentType, "application/zip")
c.Response.Header.Set(fasthttp.HeaderContentDisposition, "attachment; filename=\"archive.zip\"")
c.SetBodyStreamWriter(h.getZipResponseWriter(ctx, log, resSearch, bktInfo))
}
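// Standalone sketch of the streaming approach DownloadZip relies on:
// fasthttp's SetBodyStreamWriter produces the response body lazily through a
// writer callback instead of buffering the whole archive in memory. The
// server address and chunk contents are hypothetical.
package main

import (
	"bufio"
	"fmt"

	"github.com/valyala/fasthttp"
)

func main() {
	handler := func(c *fasthttp.RequestCtx) {
		c.Response.Header.Set(fasthttp.HeaderContentType, "text/plain")
		c.SetBodyStreamWriter(func(w *bufio.Writer) {
			for i := 0; i < 3; i++ {
				fmt.Fprintf(w, "chunk %d\n", i)
				w.Flush() // push each chunk to the client as it becomes ready
			}
		})
	}
	_ = fasthttp.ListenAndServe(":8080", handler)
}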
func (h *Handler) getZipResponseWriter(ctx context.Context, log *zap.Logger, resSearch ResObjectSearch, bktInfo *data.BucketInfo) func(w *bufio.Writer) {
return func(w *bufio.Writer) {
defer resSearch.Close()
buf := make([]byte, 3<<20)
zipWriter := zip.NewWriter(w)
var objectsWritten int
errIter := resSearch.Iterate(h.putObjectToArchive(ctx, log, bktInfo.CID, buf,
func(obj *object.Object) (io.Writer, error) {
objectsWritten++
return h.createZipFile(zipWriter, obj)
}),
)
if errIter != nil {
log.Error(logs.IteratingOverSelectedObjectsFailed, zap.Error(errIter))
return
} else if objectsWritten == 0 {
log.Warn(logs.ObjectsNotFound)
}
if err := zipWriter.Close(); err != nil {
log.Error(logs.CloseZipWriter, zap.Error(err))
}
}
}
func (h *Handler) createZipFile(zw *zip.Writer, obj *object.Object) (io.Writer, error) {
method := zip.Store
if h.config.ArchiveCompression() {
method = zip.Deflate
}
filePath := getFilePath(obj)
if len(filePath) == 0 || filePath[len(filePath)-1] == '/' {
return nil, fmt.Errorf("invalid filepath '%s'", filePath)
}
return zw.CreateHeader(&zip.FileHeader{
Name: filePath,
Method: method,
Modified: time.Now(),
})
}
// DownloadTar forms tar.gz from objects by prefix.
func (h *Handler) DownloadTar(c *fasthttp.RequestCtx) {
scid, _ := c.UserValue("cid").(string)
ctx := utils.GetContextFromRequest(c)
log := utils.GetReqLogOrDefault(ctx, h.log)
bktInfo, err := h.getBucketInfo(ctx, scid, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
resSearch, err := h.searchObjectsByPrefix(c, log, bktInfo.CID)
if err != nil {
return
}
c.Response.Header.Set(fasthttp.HeaderContentType, "application/gzip")
c.Response.Header.Set(fasthttp.HeaderContentDisposition, "attachment; filename=\"archive.tar.gz\"")
c.SetBodyStreamWriter(h.getTarResponseWriter(ctx, log, resSearch, bktInfo))
}
func (h *Handler) getTarResponseWriter(ctx context.Context, log *zap.Logger, resSearch ResObjectSearch, bktInfo *data.BucketInfo) func(w *bufio.Writer) {
return func(w *bufio.Writer) {
defer resSearch.Close()
compressionLevel := gzip.NoCompression
if h.config.ArchiveCompression() {
compressionLevel = gzip.DefaultCompression
}
// the error can be ignored: it is non-nil only if the compression level argument is invalid
gzipWriter, _ := gzip.NewWriterLevel(w, compressionLevel)
tarWriter := tar.NewWriter(gzipWriter)
defer func() {
if err := tarWriter.Close(); err != nil {
log.Error(logs.CloseTarWriter, zap.Error(err))
}
if err := gzipWriter.Close(); err != nil {
log.Error(logs.CloseGzipWriter, zap.Error(err))
}
}()
var objectsWritten int
buf := make([]byte, 3<<20) // the same as for upload
errIter := resSearch.Iterate(h.putObjectToArchive(ctx, log, bktInfo.CID, buf,
func(obj *object.Object) (io.Writer, error) {
objectsWritten++
return h.createTarFile(tarWriter, obj)
}),
)
if errIter != nil {
log.Error(logs.IteratingOverSelectedObjectsFailed, zap.Error(errIter))
} else if objectsWritten == 0 {
log.Warn(logs.ObjectsNotFound)
}
}
}
func (h *Handler) createTarFile(tw *tar.Writer, obj *object.Object) (io.Writer, error) {
filePath := getFilePath(obj)
if len(filePath) == 0 || filePath[len(filePath)-1] == '/' {
return nil, fmt.Errorf("invalid filepath '%s'", filePath)
}
return tw, tw.WriteHeader(&tar.Header{
Name: filePath,
Mode: 0655,
Size: int64(obj.PayloadSize()),
})
}
func (h *Handler) putObjectToArchive(ctx context.Context, log *zap.Logger, cnrID cid.ID, buf []byte, createArchiveHeader func(obj *object.Object) (io.Writer, error)) func(id oid.ID) bool {
return func(id oid.ID) bool {
log = log.With(zap.String("oid", id.EncodeToString()))
prm := PrmObjectGet{
PrmAuth: PrmAuth{
BearerToken: bearerToken(ctx),
},
Address: newAddress(cnrID, id),
}
resGet, err := h.frostfs.GetObject(ctx, prm)
if err != nil {
log.Error(logs.FailedToGetObject, zap.Error(err))
return false
}
fileWriter, err := createArchiveHeader(&resGet.Header)
if err != nil {
log.Error(logs.FailedToAddObjectToArchive, zap.Error(err))
return false
}
if err = writeToArchive(resGet, fileWriter, buf); err != nil {
log.Error(logs.FailedToAddObjectToArchive, zap.Error(err))
return false
}
return false // false means "continue" for ResObjectSearch.Iterate
}
}
func (h *Handler) searchObjectsByPrefix(c *fasthttp.RequestCtx, log *zap.Logger, cnrID cid.ID) (ResObjectSearch, error) {
scid, _ := c.UserValue("cid").(string)
prefix, _ := c.UserValue("prefix").(string)
ctx := utils.GetContextFromRequest(c)
prefix, err := url.QueryUnescape(prefix)
if err != nil {
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", scid), zap.String("prefix", prefix), zap.Error(err))
ResponseError(c, "could not unescape prefix: "+err.Error(), fasthttp.StatusBadRequest)
return nil, err
}
log = log.With(zap.String("cid", scid), zap.String("prefix", prefix))
resSearch, err := h.search(ctx, cnrID, object.AttributeFilePath, prefix, object.MatchCommonPrefix)
if err != nil {
log.Error(logs.CouldNotSearchForObjects, zap.Error(err))
ResponseError(c, "could not search for objects: "+err.Error(), fasthttp.StatusBadRequest)
return nil, err
}
return resSearch, nil
}
func writeToArchive(resGet *Object, objWriter io.Writer, buf []byte) error {
var err error
if _, err = io.CopyBuffer(objWriter, resGet.Payload, buf); err != nil {
return fmt.Errorf("copy object payload to zip file: %v", err)
}
if err = resGet.Payload.Close(); err != nil {
return fmt.Errorf("object body close error: %w", err)
}
return nil
}
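// Minimal, self-contained sketch of the tar.gz streaming pattern that
// getTarResponseWriter, createTarFile, and writeToArchive implement above,
// using an in-memory payload instead of a FrostFS object:
package main

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"os"
	"strings"
)

func main() {
	gz, _ := gzip.NewWriterLevel(os.Stdout, gzip.DefaultCompression)
	tw := tar.NewWriter(gz)

	payload := strings.NewReader("hello")
	// As in createTarFile: the header size must match the payload length.
	_ = tw.WriteHeader(&tar.Header{Name: "dir/file.txt", Mode: 0644, Size: int64(payload.Len())})
	_, _ = io.Copy(tw, payload)

	// Close order matters, as in getTarResponseWriter: tar first, then gzip.
	_ = tw.Close()
	_ = gz.Close()
}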
func getFilePath(obj *object.Object) string {
for _, attr := range obj.Attributes() {
if attr.Key() == object.AttributeFilePath {
return attr.Value()
}
}
return ""
}

View file

@ -1,4 +1,4 @@
package uploader
package handler
import (
"bytes"

View file

@ -1,6 +1,6 @@
//go:build !integration
package uploader
package handler
import (
"testing"

View file

@ -0,0 +1,275 @@
package handler
import (
"bytes"
"context"
"crypto/rand"
"crypto/sha256"
"fmt"
"io"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/checksum"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
)
type TestFrostFS struct {
objects map[string]*object.Object
containers map[string]*container.Container
accessList map[string]bool
key *keys.PrivateKey
}
func NewTestFrostFS(key *keys.PrivateKey) *TestFrostFS {
return &TestFrostFS{
objects: make(map[string]*object.Object),
containers: make(map[string]*container.Container),
accessList: make(map[string]bool),
key: key,
}
}
func (t *TestFrostFS) ContainerID(name string) (*cid.ID, error) {
for id, cnr := range t.containers {
if container.Name(*cnr) == name {
var cnrID cid.ID
return &cnrID, cnrID.DecodeString(id)
}
}
return nil, fmt.Errorf("not found")
}
func (t *TestFrostFS) SetContainer(cnrID cid.ID, cnr *container.Container) {
t.containers[cnrID.EncodeToString()] = cnr
}
// AllowUserOperation grants access to object operations.
// Empty userID and objID mean any user and any object, respectively.
func (t *TestFrostFS) AllowUserOperation(cnrID cid.ID, userID user.ID, op acl.Op, objID oid.ID) {
t.accessList[fmt.Sprintf("%s/%s/%s/%s", cnrID, userID, op, objID)] = true
}
func (t *TestFrostFS) Container(_ context.Context, prm PrmContainer) (*container.Container, error) {
for k, v := range t.containers {
if k == prm.ContainerID.EncodeToString() {
return v, nil
}
}
return nil, fmt.Errorf("container not found %s", prm.ContainerID)
}
func (t *TestFrostFS) requestOwner(btoken *bearer.Token) user.ID {
if btoken != nil {
return bearer.ResolveIssuer(*btoken)
}
var owner user.ID
user.IDFromKey(&owner, t.key.PrivateKey.PublicKey)
return owner
}
func (t *TestFrostFS) retrieveObject(addr oid.Address, btoken *bearer.Token) (*object.Object, error) {
sAddr := addr.EncodeToString()
if obj, ok := t.objects[sAddr]; ok {
owner := t.requestOwner(btoken)
if !t.isAllowed(addr.Container(), owner, acl.OpObjectGet, addr.Object()) {
return nil, ErrAccessDenied
}
return obj, nil
}
return nil, fmt.Errorf("%w: %s", &apistatus.ObjectNotFound{}, addr)
}
func (t *TestFrostFS) HeadObject(_ context.Context, prm PrmObjectHead) (*object.Object, error) {
return t.retrieveObject(prm.Address, prm.BearerToken)
}
func (t *TestFrostFS) GetObject(_ context.Context, prm PrmObjectGet) (*Object, error) {
obj, err := t.retrieveObject(prm.Address, prm.BearerToken)
if err != nil {
return nil, err
}
return &Object{
Header: *obj,
Payload: io.NopCloser(bytes.NewReader(obj.Payload())),
}, nil
}
func (t *TestFrostFS) RangeObject(_ context.Context, prm PrmObjectRange) (io.ReadCloser, error) {
obj, err := t.retrieveObject(prm.Address, prm.BearerToken)
if err != nil {
return nil, err
}
off := prm.PayloadRange[0]
payload := obj.Payload()[off : off+prm.PayloadRange[1]]
return io.NopCloser(bytes.NewReader(payload)), nil
}
func (t *TestFrostFS) CreateObject(_ context.Context, prm PrmObjectCreate) (oid.ID, error) {
b := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, b); err != nil {
return oid.ID{}, err
}
var id oid.ID
id.SetSHA256(sha256.Sum256(b))
prm.Object.SetID(id)
attrs := prm.Object.Attributes()
if prm.ClientCut {
a := object.NewAttribute()
a.SetKey("s3-client-cut")
a.SetValue("true")
attrs = append(attrs, *a)
}
prm.Object.SetAttributes(attrs...)
if prm.Payload != nil {
all, err := io.ReadAll(prm.Payload)
if err != nil {
return oid.ID{}, err
}
prm.Object.SetPayload(all)
prm.Object.SetPayloadSize(uint64(len(all)))
var hash checksum.Checksum
checksum.Calculate(&hash, checksum.SHA256, all)
prm.Object.SetPayloadChecksum(hash)
}
cnrID, _ := prm.Object.ContainerID()
objID, _ := prm.Object.ID()
owner := t.requestOwner(prm.BearerToken)
if !t.isAllowed(cnrID, owner, acl.OpObjectPut, objID) {
return oid.ID{}, ErrAccessDenied
}
addr := newAddress(cnrID, objID)
t.objects[addr.EncodeToString()] = prm.Object
return objID, nil
}
type resObjectSearchMock struct {
res []oid.ID
}
func (r *resObjectSearchMock) Read(buf []oid.ID) (int, error) {
for i := range buf {
if i > len(r.res)-1 {
return len(r.res), io.EOF
}
buf[i] = r.res[i]
}
r.res = r.res[len(buf):]
return len(buf), nil
}
func (r *resObjectSearchMock) Iterate(f func(oid.ID) bool) error {
for _, id := range r.res {
if f(id) {
return nil
}
}
return nil
}
func (r *resObjectSearchMock) Close() {}
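// Standalone sketch of the Read contract resObjectSearchMock implements:
// fill the buffer from the remaining items and report io.EOF once the stream
// cannot fill the whole buffer. Types are simplified (int instead of oid.ID).
package main

import (
	"fmt"
	"io"
)

type idStream struct{ res []int }

func (r *idStream) Read(buf []int) (int, error) {
	n := copy(buf, r.res)
	r.res = r.res[n:]
	if n < len(buf) {
		return n, io.EOF
	}
	return n, nil
}

func main() {
	s := &idStream{res: []int{1, 2, 3}}
	buf := make([]int, 2)
	for {
		n, err := s.Read(buf)
		fmt.Println(buf[:n]) // [1 2], then [3]
		if err == io.EOF {
			break
		}
	}
}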
func (t *TestFrostFS) SearchObjects(_ context.Context, prm PrmObjectSearch) (ResObjectSearch, error) {
if !t.isAllowed(prm.Container, t.requestOwner(prm.BearerToken), acl.OpObjectSearch, oid.ID{}) {
return nil, ErrAccessDenied
}
cidStr := prm.Container.EncodeToString()
var res []oid.ID
if len(prm.Filters) == 1 { // match root filter
for k, v := range t.objects {
if strings.Contains(k, cidStr) {
id, _ := v.ID()
res = append(res, id)
}
}
return &resObjectSearchMock{res: res}, nil
}
filter := prm.Filters[1]
if len(prm.Filters) != 2 ||
filter.Operation() != object.MatchCommonPrefix && filter.Operation() != object.MatchStringEqual {
return nil, fmt.Errorf("usupported filters")
}
for k, v := range t.objects {
if strings.Contains(k, cidStr) && isMatched(v.Attributes(), filter) {
id, _ := v.ID()
res = append(res, id)
}
}
return &resObjectSearchMock{res: res}, nil
}
func (t *TestFrostFS) InitMultiObjectReader(context.Context, PrmInitMultiObjectReader) (io.Reader, error) {
return nil, nil
}
func isMatched(attributes []object.Attribute, filter object.SearchFilter) bool {
for _, attr := range attributes {
if attr.Key() == filter.Header() {
switch filter.Operation() {
case object.MatchStringEqual:
return attr.Value() == filter.Value()
case object.MatchCommonPrefix:
return strings.HasPrefix(attr.Value(), filter.Value())
default:
return false
}
}
}
return false
}
func (t *TestFrostFS) GetEpochDurations(context.Context) (*utils.EpochDurations, error) {
return &utils.EpochDurations{
CurrentEpoch: 10,
MsPerBlock: 1000,
BlockPerEpoch: 100,
}, nil
}
func (t *TestFrostFS) isAllowed(cnrID cid.ID, userID user.ID, op acl.Op, objID oid.ID) bool {
keysToCheck := []string{
fmt.Sprintf("%s/%s/%s/%s", cnrID, userID, op, objID),
fmt.Sprintf("%s/%s/%s/%s", cnrID, userID, op, oid.ID{}),
fmt.Sprintf("%s/%s/%s/%s", cnrID, user.ID{}, op, objID),
fmt.Sprintf("%s/%s/%s/%s", cnrID, user.ID{}, op, oid.ID{}),
}
for _, key := range keysToCheck {
if t.accessList[key] {
return true
}
}
return false
}
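// Standalone sketch of the wildcard lookup idea behind isAllowed: zero-value
// user and object IDs encode "any", so four key variants are probed per
// check. Key strings below are hypothetical stand-ins for the real IDs.
package main

import "fmt"

func main() {
	// Grant: any user may GET any object in container "cnr1".
	access := map[string]bool{"cnr1/anyUser/GET/anyObj": true}
	probes := []string{
		"cnr1/alice/GET/obj7",
		"cnr1/alice/GET/anyObj",
		"cnr1/anyUser/GET/obj7",
		"cnr1/anyUser/GET/anyObj", // wildcard hit
	}
	for _, k := range probes {
		if access[k] {
			fmt.Println("allowed via", k)
		}
	}
}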

411
internal/handler/handler.go Normal file
View file

@ -0,0 +1,411 @@
package handler
import (
"context"
"errors"
"fmt"
"io"
"net/url"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/cache"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/panjf2000/ants/v2"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
type Config interface {
DefaultTimestamp() bool
ArchiveCompression() bool
ClientCut() bool
IndexPageEnabled() bool
IndexPageTemplate() string
BufferMaxSizeForPut() uint64
NamespaceHeader() string
EnableFilepathFallback() bool
}
// PrmContainer groups parameters of FrostFS.Container operation.
type PrmContainer struct {
// Container identifier.
ContainerID cid.ID
}
// PrmAuth groups authentication parameters for the FrostFS operation.
type PrmAuth struct {
// Bearer token to be used for the operation. Overlaps PrivateKey. Optional.
BearerToken *bearer.Token
}
// PrmObjectHead groups parameters of FrostFS.HeadObject operation.
type PrmObjectHead struct {
// Authentication parameters.
PrmAuth
// Address to read the object header from.
Address oid.Address
}
// PrmObjectGet groups parameters of FrostFS.GetObject operation.
type PrmObjectGet struct {
// Authentication parameters.
PrmAuth
// Address to read the object from.
Address oid.Address
}
// PrmObjectRange groups parameters of FrostFS.RangeObject operation.
type PrmObjectRange struct {
// Authentication parameters.
PrmAuth
// Address to read the object payload range from.
Address oid.Address
// Offset-length range of the object payload to be read.
PayloadRange [2]uint64
}
// Object represents FrostFS object.
type Object struct {
// Object header (doesn't contain payload).
Header object.Object
// Object payload part encapsulated in io.Reader primitive.
// Returns ErrAccessDenied on read access violation.
Payload io.ReadCloser
}
// PrmObjectCreate groups parameters of FrostFS.CreateObject operation.
type PrmObjectCreate struct {
// Authentication parameters.
PrmAuth
Object *object.Object
// Object payload encapsulated in io.Reader primitive.
Payload io.Reader
// Enables client side object preparing.
ClientCut bool
// Disables using Tillich-Zémor hash for payload.
WithoutHomomorphicHash bool
// Sets max buffer size to read payload.
BufferMaxSize uint64
}
// PrmObjectSearch groups parameters of FrostFS.SearchObjects operation.
type PrmObjectSearch struct {
// Authentication parameters.
PrmAuth
// Container to select the objects from.
Container cid.ID
Filters object.SearchFilters
}
type PrmInitMultiObjectReader struct {
// payload range
Off, Ln uint64
Addr oid.Address
Bearer *bearer.Token
}
type ResObjectSearch interface {
Read(buf []oid.ID) (int, error)
Iterate(f func(oid.ID) bool) error
Close()
}
var (
// ErrAccessDenied is returned from FrostFS in case of access violation.
ErrAccessDenied = errors.New("access denied")
// ErrGatewayTimeout is returned from FrostFS in case of timeout, deadline exceeded etc.
ErrGatewayTimeout = errors.New("gateway timeout")
// ErrQuotaLimitReached is returned from FrostFS in case of quota exceeded.
ErrQuotaLimitReached = errors.New("quota limit reached")
)
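// Hypothetical sketch (using this file's imports) of mapping these sentinel
// errors to HTTP statuses; the gateway's real mapping lives elsewhere, and
// the codes chosen here are assumptions, not the project's actual behavior.
func statusForSentinel(err error) int {
	switch {
	case errors.Is(err, ErrAccessDenied):
		return fasthttp.StatusForbidden
	case errors.Is(err, ErrQuotaLimitReached):
		return fasthttp.StatusConflict // assumption; the real code may differ
	case errors.Is(err, ErrGatewayTimeout):
		return fasthttp.StatusGatewayTimeout
	default:
		return fasthttp.StatusInternalServerError
	}
}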
// FrostFS represents virtual connection to FrostFS network.
type FrostFS interface {
Container(context.Context, PrmContainer) (*container.Container, error)
HeadObject(context.Context, PrmObjectHead) (*object.Object, error)
GetObject(context.Context, PrmObjectGet) (*Object, error)
RangeObject(context.Context, PrmObjectRange) (io.ReadCloser, error)
CreateObject(context.Context, PrmObjectCreate) (oid.ID, error)
SearchObjects(context.Context, PrmObjectSearch) (ResObjectSearch, error)
InitMultiObjectReader(ctx context.Context, p PrmInitMultiObjectReader) (io.Reader, error)
utils.EpochInfoFetcher
}
type ContainerResolver interface {
Resolve(ctx context.Context, name string) (*cid.ID, error)
}
type Handler struct {
log *zap.Logger
frostfs FrostFS
ownerID *user.ID
config Config
containerResolver ContainerResolver
tree layer.TreeService
cache *cache.BucketCache
workerPool *ants.Pool
}
type AppParams struct {
Logger *zap.Logger
FrostFS FrostFS
Owner *user.ID
Resolver ContainerResolver
Cache *cache.BucketCache
}
func New(params *AppParams, config Config, tree layer.TreeService, workerPool *ants.Pool) *Handler {
return &Handler{
log: params.Logger,
frostfs: params.FrostFS,
ownerID: params.Owner,
config: config,
containerResolver: params.Resolver,
tree: tree,
cache: params.Cache,
workerPool: workerPool,
}
}
// byNativeAddress is a wrapper for handler functions (e.g. h.headObject, h.receiveFile) that
// builds the object address and invokes the handler with it.
func (h *Handler) byNativeAddress(ctx context.Context, req request, cnrID cid.ID, objID oid.ID, handler func(context.Context, request, oid.Address)) {
addr := newAddress(cnrID, objID)
handler(ctx, req, addr)
}
// byS3Path is a wrapper for handler functions (e.g. h.headObject, h.receiveFile) that
// resolves the object address from an S3-like path <bucket name>/<object key>.
func (h *Handler) byS3Path(ctx context.Context, req request, cnrID cid.ID, path string, handler func(context.Context, request, oid.Address)) {
c, log := req.RequestCtx, req.log
foundOID, err := h.tree.GetLatestVersion(ctx, &cnrID, path)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
if foundOID.IsDeleteMarker {
log.Error(logs.ObjectWasDeleted)
ResponseError(c, "object deleted", fasthttp.StatusNotFound)
return
}
addr := newAddress(cnrID, foundOID.OID)
handler(ctx, newRequest(c, log), addr)
}
// byAttribute is a wrapper similar to byNativeAddress.
func (h *Handler) byAttribute(c *fasthttp.RequestCtx, handler func(context.Context, request, oid.Address)) {
cidParam, _ := c.UserValue("cid").(string)
key, _ := c.UserValue("attr_key").(string)
val, _ := c.UserValue("attr_val").(string)
ctx := utils.GetContextFromRequest(c)
log := utils.GetReqLogOrDefault(ctx, h.log)
key, err := url.QueryUnescape(key)
if err != nil {
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", cidParam), zap.String("attr_key", key), zap.Error(err))
ResponseError(c, "could not unescape attr_key: "+err.Error(), fasthttp.StatusBadRequest)
return
}
val, err = url.QueryUnescape(val)
if err != nil {
log.Error(logs.FailedToUnescapeQuery, zap.String("cid", cidParam), zap.String("attr_val", val), zap.Error(err))
ResponseError(c, "could not unescape attr_val: "+err.Error(), fasthttp.StatusBadRequest)
return
}
log = log.With(zap.String("cid", cidParam), zap.String("attr_key", key), zap.String("attr_val", val))
bktInfo, err := h.getBucketInfo(ctx, cidParam, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
objID, err := h.findObjectByAttribute(ctx, log, bktInfo.CID, key, val)
if err != nil {
if errors.Is(err, io.EOF) {
ResponseError(c, err.Error(), fasthttp.StatusNotFound)
return
}
ResponseError(c, err.Error(), fasthttp.StatusBadRequest)
return
}
var addr oid.Address
addr.SetContainer(bktInfo.CID)
addr.SetObject(objID)
handler(ctx, newRequest(c, log), addr)
}
func (h *Handler) findObjectByAttribute(ctx context.Context, log *zap.Logger, cnrID cid.ID, attrKey, attrVal string) (oid.ID, error) {
res, err := h.search(ctx, cnrID, attrKey, attrVal, object.MatchStringEqual)
if err != nil {
log.Error(logs.CouldNotSearchForObjects, zap.Error(err))
return oid.ID{}, fmt.Errorf("could not search for objects: %w", err)
}
defer res.Close()
buf := make([]oid.ID, 1)
n, err := res.Read(buf)
if n == 0 {
switch {
case errors.Is(err, io.EOF) && h.needSearchByFileName(attrKey, attrVal):
log.Debug(logs.ObjectNotFoundByFilePathTrySearchByFileName)
return h.findObjectByAttribute(ctx, log, cnrID, attrFileName, attrVal)
case errors.Is(err, io.EOF):
log.Error(logs.ObjectNotFound, zap.Error(err))
return oid.ID{}, fmt.Errorf("object not found: %w", err)
default:
log.Error(logs.ReadObjectListFailed, zap.Error(err))
return oid.ID{}, fmt.Errorf("read object list failed: %w", err)
}
}
return buf[0], nil
}
func (h *Handler) needSearchByFileName(key, val string) bool {
if key != attrFilePath || !h.config.EnableFilepathFallback() {
return false
}
return (strings.HasPrefix(val, "/") && strings.Count(val, "/") == 1) || !strings.Contains(val, "/")
}
// resolveContainer decodes the container ID; if it is not a valid container ID,
// it tries to resolve the name using the provided resolver.
func (h *Handler) resolveContainer(ctx context.Context, containerID string) (*cid.ID, error) {
cnrID := new(cid.ID)
err := cnrID.DecodeString(containerID)
if err != nil {
cnrID, err = h.containerResolver.Resolve(ctx, containerID)
if err != nil && strings.Contains(err.Error(), "not found") {
err = fmt.Errorf("%w: %s", new(apistatus.ContainerNotFound), err.Error())
}
}
return cnrID, err
}
func (h *Handler) getBucketInfo(ctx context.Context, containerName string, log *zap.Logger) (*data.BucketInfo, error) {
ns, err := middleware.GetNamespace(ctx)
if err != nil {
return nil, err
}
if bktInfo := h.cache.Get(ns, containerName); bktInfo != nil {
return bktInfo, nil
}
cnrID, err := h.resolveContainer(ctx, containerName)
if err != nil {
return nil, err
}
bktInfo, err := h.readContainer(ctx, *cnrID)
if err != nil {
return nil, err
}
if err = h.cache.Put(bktInfo); err != nil {
log.Warn(logs.CouldntPutBucketIntoCache,
zap.String("bucket name", bktInfo.Name),
zap.Stringer("bucket cid", bktInfo.CID),
zap.Error(err))
}
return bktInfo, nil
}
func (h *Handler) readContainer(ctx context.Context, cnrID cid.ID) (*data.BucketInfo, error) {
prm := PrmContainer{ContainerID: cnrID}
res, err := h.frostfs.Container(ctx, prm)
if err != nil {
return nil, fmt.Errorf("get frostfs container '%s': %w", cnrID.String(), err)
}
bktInfo := &data.BucketInfo{
CID: cnrID,
Name: cnrID.EncodeToString(),
}
if domain := container.ReadDomain(*res); domain.Name() != "" {
bktInfo.Name = domain.Name()
bktInfo.Zone = domain.Zone()
}
bktInfo.HomomorphicHashDisabled = container.IsHomomorphicHashingDisabled(*res)
bktInfo.PlacementPolicy = res.PlacementPolicy()
return bktInfo, err
}
func (h *Handler) browseIndex(c *fasthttp.RequestCtx, isNativeList bool) {
if !h.config.IndexPageEnabled() {
c.SetStatusCode(fasthttp.StatusNotFound)
return
}
cidURLParam := c.UserValue("cid").(string)
oidURLParam := c.UserValue("oid").(string)
ctx := utils.GetContextFromRequest(c)
reqLog := utils.GetReqLogOrDefault(ctx, h.log)
log := reqLog.With(zap.String("cid", cidURLParam), zap.String("oid", oidURLParam))
unescapedKey, err := url.QueryUnescape(oidURLParam)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
bktInfo, err := h.getBucketInfo(ctx, cidURLParam, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
listFunc := h.getDirObjectsS3
if isNativeList {
// the tree probe failed, fall back to native listing
listFunc = h.getDirObjectsNative
}
h.browseObjects(c, browseParams{
bucketInfo: bktInfo,
prefix: unescapedKey,
listObjects: listFunc,
isNative: isNativeList,
})
}

View file

@ -0,0 +1,580 @@
//go:build gofuzz
// +build gofuzz
package handler
import (
"bytes"
"context"
"encoding/json"
"errors"
"io"
"mime/multipart"
"net/http"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
go_fuzz_utils "github.com/trailofbits/go-fuzz-utils"
"github.com/valyala/fasthttp"
)
const (
fuzzSuccessExitCode = 0
fuzzFailExitCode = -1
)
func prepareStrings(tp *go_fuzz_utils.TypeProvider, count int) ([]string, error) {
array := make([]string, count)
var err error
for i := 0; i < count; i++ {
err = tp.Reset()
if err != nil {
return nil, err
}
array[i], err = tp.GetString()
if err != nil {
return nil, err
}
}
return array, nil
}
func prepareBools(tp *go_fuzz_utils.TypeProvider, count int) ([]bool, error) {
array := make([]bool, count)
var err error
for i := 0; i < count; i++ {
err = tp.Reset()
if err != nil {
return nil, err
}
array[i], err = tp.GetBool()
if err != nil {
return nil, err
}
}
return array, nil
}
func getRandomDeterministicPositiveIntInRange(tp *go_fuzz_utils.TypeProvider, max int) (int, error) {
count, err := tp.GetInt()
if err != nil {
return -1, err
}
count = count % max
if count < 0 {
count += max
}
return count, nil
}
func generateHeaders(tp *go_fuzz_utils.TypeProvider, r *fasthttp.Request, params []string) error {
count, err := tp.GetInt()
if err != nil {
return err
}
count = count % len(params)
if count < 0 {
count += len(params)
}
for i := 0; i < count; i++ {
position, err := tp.GetInt()
if err != nil {
return err
}
position = position % len(params)
if position < 0 {
position += len(params)
}
v, err := tp.GetString()
if err != nil {
return err
}
r.Header.Set(params[position], v)
}
return nil
}
func maybeFillRandom(tp *go_fuzz_utils.TypeProvider, initValue string) (string, error) {
rnd, err := tp.GetBool()
if err != nil {
return "", err
}
if rnd {
initValue, err = tp.GetString()
if err != nil {
return "", err
}
}
return initValue, nil
}
func upload(tp *go_fuzz_utils.TypeProvider) (context.Context, *handlerContext, cid.ID, *fasthttp.RequestCtx, string, string, string, error) {
hc, err := prepareHandlerContext()
if err != nil {
return nil, nil, cid.ID{}, nil, "", "", "", err
}
aclList := []acl.Basic{
acl.Private,
acl.PrivateExtended,
acl.PublicRO,
acl.PublicROExtended,
acl.PublicRW,
acl.PublicRWExtended,
acl.PublicAppend,
acl.PublicAppendExtended,
}
pos, err := getRandomDeterministicPositiveIntInRange(tp, len(aclList))
if err != nil {
return nil, nil, cid.ID{}, nil, "", "", "", err
}
acl := aclList[pos]
strings, err := prepareStrings(tp, 6)
if err != nil {
return nil, nil, cid.ID{}, nil, "", "", "", err
}
bktName := strings[0]
objFileName := strings[1]
valAttr := strings[2]
keyAttr := strings[3]
if len(bktName) == 0 {
return nil, nil, cid.ID{}, nil, "", "", "", errors.New("empty bucket name")
}
cnrID, cnr, err := hc.prepareContainer(bktName, acl)
if err != nil {
return nil, nil, cid.ID{}, nil, "", "", "", err
}
hc.frostfs.SetContainer(cnrID, cnr)
ctx := context.Background()
ctx = middleware.SetNamespace(ctx, "")
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
r.SetUserValue("cid", cnrID.EncodeToString())
attributes := map[string]string{
object.AttributeFileName: objFileName,
keyAttr: valAttr,
}
var buff bytes.Buffer
w := multipart.NewWriter(&buff)
fw, err := w.CreateFormFile("file", attributes[object.AttributeFileName])
if err != nil {
return nil, nil, cid.ID{}, nil, "", "", "", err
}
content, err := tp.GetBytes()
if err != nil {
return nil, nil, cid.ID{}, nil, "", "", "", err
}
if _, err = io.Copy(fw, bytes.NewReader(content)); err != nil {
return nil, nil, cid.ID{}, nil, "", "", "", err
}
if err = w.Close(); err != nil {
return nil, nil, cid.ID{}, nil, "", "", "", err
}
r.Request.SetBodyStream(&buff, buff.Len())
r.Request.Header.Set("Content-Type", w.FormDataContentType())
r.Request.Header.Set("X-Attribute-"+keyAttr, valAttr)
err = generateHeaders(tp, &r.Request, []string{"X-Attribute-", "X-Attribute-DupKey", "X-Attribute-MyAttribute", "X-Attribute-System-DupKey", "X-Attribute-System-Expiration-Epoch1", "X-Attribute-SYSTEM-Expiration-Epoch2", "X-Attribute-system-Expiration-Epoch3", "X-Attribute-User-Attribute", "X-Attribute-", "X-Attribute-FileName", "X-Attribute-FROSTFS", "X-Attribute-neofs", "X-Attribute-SYSTEM", "X-Attribute-System-Expiration-Duration", "X-Attribute-System-Expiration-Epoch", "X-Attribute-System-Expiration-RFC3339", "X-Attribute-System-Expiration-Timestamp", "X-Attribute-Timestamp", "X-Attribute-" + strings[4], "X-Attribute-System-" + strings[5]})
if err != nil {
return nil, nil, cid.ID{}, nil, "", "", "", err
}
hc.Handler().Upload(r)
if r.Response.StatusCode() != http.StatusOK {
return nil, nil, cid.ID{}, nil, "", "", "", errors.New("error on upload")
}
return ctx, hc, cnrID, r, objFileName, keyAttr, valAttr, nil
}
func InitFuzzUpload() {
}
func DoFuzzUpload(input []byte) int {
// FUZZER INIT
if len(input) < 100 {
return fuzzFailExitCode
}
tp, err := go_fuzz_utils.NewTypeProvider(input)
if err != nil {
return fuzzFailExitCode
}
_, _, _, _, _, _, _, err = upload(tp)
if err != nil {
return fuzzFailExitCode
}
return fuzzSuccessExitCode
}
func FuzzUpload(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
DoFuzzUpload(data)
})
}
func downloadOrHead(tp *go_fuzz_utils.TypeProvider, ctx context.Context, hc *handlerContext, cnrID cid.ID, resp *fasthttp.RequestCtx, filename string) (*fasthttp.RequestCtx, error) {
var putRes putResponse
defer func() {
if r := recover(); r != nil {
panic(resp)
}
}()
data := resp.Response.Body()
err := json.Unmarshal(data, &putRes)
if err != nil {
return nil, err
}
obj := hc.frostfs.objects[putRes.ContainerID+"/"+putRes.ObjectID]
attr := object.NewAttribute()
attr.SetKey(object.AttributeFilePath)
filename, err = maybeFillRandom(tp, filename)
if err != nil {
return nil, err
}
attr.SetValue(filename)
obj.SetAttributes(append(obj.Attributes(), *attr)...)
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
cid := cnrID.EncodeToString()
cid, err = maybeFillRandom(tp, cid)
if err != nil {
return nil, err
}
oid := putRes.ObjectID
oid, err = maybeFillRandom(tp, oid)
if err != nil {
return nil, err
}
r.SetUserValue("cid", cid)
r.SetUserValue("oid", oid)
rnd, err := tp.GetBool()
if err != nil {
return nil, err
}
if rnd {
r.SetUserValue("download", "true")
}
return r, nil
}
func InitFuzzGet() {
}
func DoFuzzGet(input []byte) int {
// FUZZER INIT
if len(input) < 100 {
return fuzzFailExitCode
}
tp, err := go_fuzz_utils.NewTypeProvider(input)
if err != nil {
return fuzzFailExitCode
}
ctx, hc, cnrID, resp, filename, _, _, err := upload(tp)
if err != nil {
return fuzzFailExitCode
}
r, err := downloadOrHead(tp, ctx, hc, cnrID, resp, filename)
if err != nil {
return fuzzFailExitCode
}
hc.Handler().DownloadByAddressOrBucketName(r)
return fuzzSuccessExitCode
}
func FuzzGet(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
DoFuzzGet(data)
})
}
func InitFuzzHead() {
}
func DoFuzzHead(input []byte) int {
// FUZZER INIT
if len(input) < 100 {
return fuzzFailExitCode
}
tp, err := go_fuzz_utils.NewTypeProvider(input)
if err != nil {
return fuzzFailExitCode
}
ctx, hc, cnrID, resp, filename, _, _, err := upload(tp)
if err != nil {
return fuzzFailExitCode
}
r, err := downloadOrHead(tp, ctx, hc, cnrID, resp, filename)
if err != nil {
return fuzzFailExitCode
}
hc.Handler().HeadByAddressOrBucketName(r)
return fuzzSuccessExitCode
}
func FuzzHead(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
DoFuzzHead(data)
})
}
func InitFuzzDownloadByAttribute() {
}
func DoFuzzDownloadByAttribute(input []byte) int {
// FUZZER INIT
if len(input) < 100 {
return fuzzFailExitCode
}
tp, err := go_fuzz_utils.NewTypeProvider(input)
if err != nil {
return fuzzFailExitCode
}
ctx, hc, cnrID, _, _, attrKey, attrVal, err := upload(tp)
if err != nil {
return fuzzFailExitCode
}
cid := cnrID.EncodeToString()
cid, err = maybeFillRandom(tp, cid)
if err != nil {
return fuzzFailExitCode
}
attrKey, err = maybeFillRandom(tp, attrKey)
if err != nil {
return fuzzFailExitCode
}
attrVal, err = maybeFillRandom(tp, attrVal)
if err != nil {
return fuzzFailExitCode
}
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
r.SetUserValue("cid", cid)
r.SetUserValue("attr_key", attrKey)
r.SetUserValue("attr_val", attrVal)
hc.Handler().DownloadByAttribute(r)
return fuzzSuccessExitCode
}
func FuzzDownloadByAttribute(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
DoFuzzDownloadByAttribute(data)
})
}
func InitFuzzHeadByAttribute() {
}
func DoFuzzHeadByAttribute(input []byte) int {
// FUZZER INIT
if len(input) < 100 {
return fuzzFailExitCode
}
tp, err := go_fuzz_utils.NewTypeProvider(input)
if err != nil {
return fuzzFailExitCode
}
ctx, hc, cnrID, _, _, attrKey, attrVal, err := upload(tp)
if err != nil {
return fuzzFailExitCode
}
cid := cnrID.EncodeToString()
cid, err = maybeFillRandom(tp, cid)
if err != nil {
return fuzzFailExitCode
}
attrKey, err = maybeFillRandom(tp, attrKey)
if err != nil {
return fuzzFailExitCode
}
attrVal, err = maybeFillRandom(tp, attrVal)
if err != nil {
return fuzzFailExitCode
}
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
r.SetUserValue("cid", cid)
r.SetUserValue("attr_key", attrKey)
r.SetUserValue("attr_val", attrVal)
hc.Handler().HeadByAttribute(r)
return fuzzSuccessExitCode
}
func FuzzHeadByAttribute(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
DoFuzzHeadByAttribute(data)
})
}
func InitFuzzDownloadZipped() {
}
func DoFuzzDownloadZipped(input []byte) int {
// FUZZER INIT
if len(input) < 100 {
return fuzzFailExitCode
}
tp, err := go_fuzz_utils.NewTypeProvider(input)
if err != nil {
return fuzzFailExitCode
}
ctx, hc, cnrID, _, _, _, _, err := upload(tp)
if err != nil {
return fuzzFailExitCode
}
cid := cnrID.EncodeToString()
cid, err = maybeFillRandom(tp, cid)
if err != nil {
return fuzzFailExitCode
}
prefix := ""
prefix, err = maybeFillRandom(tp, prefix)
if err != nil {
return fuzzFailExitCode
}
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
r.SetUserValue("cid", cid)
r.SetUserValue("prefix", prefix)
hc.Handler().DownloadZip(r)
return fuzzSuccessExitCode
}
func FuzzDownloadZipped(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
DoFuzzDownloadZipped(data)
})
}
func InitFuzzStoreBearerTokenAppCtx() {
}
func DoFuzzStoreBearerTokenAppCtx(input []byte) int {
// FUZZER INIT
if len(input) < 100 {
return fuzzFailExitCode
}
tp, err := go_fuzz_utils.NewTypeProvider(input)
if err != nil {
return fuzzFailExitCode
}
prefix := ""
prefix, err = maybeFillRandom(tp, prefix)
if err != nil {
return fuzzFailExitCode
}
ctx := context.Background()
ctx = middleware.SetNamespace(ctx, "")
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
strings, err := prepareStrings(tp, 3)
if err != nil {
return fuzzFailExitCode
}
rand, err := prepareBools(tp, 2)
if err != nil {
return fuzzFailExitCode
}
if rand[0] {
r.Request.Header.Set(fasthttp.HeaderAuthorization, "Bearer"+strings[0])
} else if rand[1] {
r.Request.Header.SetCookie(fasthttp.HeaderAuthorization, "Bearer"+strings[1])
} else {
r.Request.Header.Set(fasthttp.HeaderAuthorization, "Bearer"+strings[0])
r.Request.Header.SetCookie(fasthttp.HeaderAuthorization, "Bearer"+strings[1])
}
tokens.StoreBearerTokenAppCtx(ctx, r)
return fuzzSuccessExitCode
}
func FuzzStoreBearerTokenAppCtx(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
DoFuzzStoreBearerTokenAppCtx(data)
})
}

View file

@ -0,0 +1,496 @@
package handler
import (
"archive/zip"
"bytes"
"context"
"encoding/json"
"io"
"mime/multipart"
"net/http"
"testing"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/cache"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/panjf2000/ants/v2"
"github.com/stretchr/testify/require"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
type treeServiceMock struct {
system map[string]map[string]*data.BaseNodeVersion
}
func newTreeService() *treeServiceMock {
return &treeServiceMock{
system: make(map[string]map[string]*data.BaseNodeVersion),
}
}
func (t *treeServiceMock) CheckSettingsNodeExists(context.Context, *data.BucketInfo) error {
_, ok := t.system["bucket-settings"]
if !ok {
return layer.ErrNodeNotFound
}
return nil
}
func (t *treeServiceMock) GetSubTreeByPrefix(context.Context, *data.BucketInfo, string, bool) ([]data.NodeInfo, string, error) {
return nil, "", nil
}
func (t *treeServiceMock) GetLatestVersion(context.Context, *cid.ID, string) (*data.NodeVersion, error) {
return nil, nil
}
type configMock struct {
additionalSearch bool
}
func (c *configMock) DefaultTimestamp() bool {
return false
}
func (c *configMock) ArchiveCompression() bool {
return false
}
func (c *configMock) IndexPageEnabled() bool {
return false
}
func (c *configMock) IndexPageTemplate() string {
return ""
}
func (c *configMock) IndexPageNativeTemplate() string {
return ""
}
func (c *configMock) ClientCut() bool {
return false
}
func (c *configMock) BufferMaxSizeForPut() uint64 {
return 0
}
func (c *configMock) NamespaceHeader() string {
return ""
}
func (c *configMock) EnableFilepathFallback() bool {
return c.additionalSearch
}
type handlerContext struct {
key *keys.PrivateKey
owner user.ID
h *Handler
frostfs *TestFrostFS
tree *treeServiceMock
cfg *configMock
}
func (hc *handlerContext) Handler() *Handler {
return hc.h
}
func prepareHandlerContext() (*handlerContext, error) {
logger, err := zap.NewDevelopment()
if err != nil {
return nil, err
}
key, err := keys.NewPrivateKey()
if err != nil {
return nil, err
}
var owner user.ID
user.IDFromKey(&owner, key.PrivateKey.PublicKey)
testFrostFS := NewTestFrostFS(key)
testResolver := &resolver.Resolver{Name: "test_resolver"}
testResolver.SetResolveFunc(func(_ context.Context, name string) (*cid.ID, error) {
return testFrostFS.ContainerID(name)
})
params := &AppParams{
Logger: logger,
FrostFS: testFrostFS,
Owner: &owner,
Resolver: testResolver,
Cache: cache.NewBucketCache(&cache.Config{
Size: 1,
Lifetime: 1,
Logger: logger,
}, false),
}
treeMock := newTreeService()
cfgMock := &configMock{}
workerPool, err := ants.NewPool(1)
if err != nil {
return nil, err
}
handler := New(params, cfgMock, treeMock, workerPool)
return &handlerContext{
key: key,
owner: owner,
h: handler,
frostfs: testFrostFS,
tree: treeMock,
cfg: cfgMock,
}, nil
}
func (hc *handlerContext) prepareContainer(name string, basicACL acl.Basic) (cid.ID, *container.Container, error) {
var pp netmap.PlacementPolicy
err := pp.DecodeString("REP 1")
if err != nil {
return cid.ID{}, nil, err
}
var cnr container.Container
cnr.Init()
cnr.SetOwner(hc.owner)
cnr.SetPlacementPolicy(pp)
cnr.SetBasicACL(basicACL)
var domain container.Domain
domain.SetName(name)
container.WriteDomain(&cnr, domain)
container.SetName(&cnr, name)
container.SetCreationTime(&cnr, time.Now())
cnrID := cidtest.ID()
for op := acl.OpObjectGet; op < acl.OpObjectHash; op++ {
hc.frostfs.AllowUserOperation(cnrID, hc.owner, op, oid.ID{})
if basicACL.IsOpAllowed(op, acl.RoleOthers) {
hc.frostfs.AllowUserOperation(cnrID, user.ID{}, op, oid.ID{})
}
}
return cnrID, &cnr, nil
}
func TestBasic(t *testing.T) {
hc, err := prepareHandlerContext()
require.NoError(t, err)
bktName := "bucket"
cnrID, cnr, err := hc.prepareContainer(bktName, acl.PublicRWExtended)
require.NoError(t, err)
hc.frostfs.SetContainer(cnrID, cnr)
ctx := context.Background()
ctx = middleware.SetNamespace(ctx, "")
content := "hello"
r, err := prepareUploadRequest(ctx, cnrID.EncodeToString(), content)
require.NoError(t, err)
hc.Handler().Upload(r)
require.Equal(t, r.Response.StatusCode(), http.StatusOK)
var putRes putResponse
err = json.Unmarshal(r.Response.Body(), &putRes)
require.NoError(t, err)
obj := hc.frostfs.objects[putRes.ContainerID+"/"+putRes.ObjectID]
attr := prepareObjectAttributes(object.AttributeFilePath, objFileName)
obj.SetAttributes(append(obj.Attributes(), attr)...)
t.Run("get", func(t *testing.T) {
r = prepareGetRequest(ctx, cnrID.EncodeToString(), putRes.ObjectID)
hc.Handler().DownloadByAddressOrBucketName(r)
require.Equal(t, content, string(r.Response.Body()))
})
t.Run("head", func(t *testing.T) {
r = prepareGetRequest(ctx, cnrID.EncodeToString(), putRes.ObjectID)
hc.Handler().HeadByAddressOrBucketName(r)
require.Equal(t, putRes.ObjectID, string(r.Response.Header.Peek(hdrObjectID)))
require.Equal(t, putRes.ContainerID, string(r.Response.Header.Peek(hdrContainerID)))
})
t.Run("get by attribute", func(t *testing.T) {
r = prepareGetByAttributeRequest(ctx, bktName, keyAttr, valAttr)
hc.Handler().DownloadByAttribute(r)
require.Equal(t, content, string(r.Response.Body()))
})
t.Run("head by attribute", func(t *testing.T) {
r = prepareGetByAttributeRequest(ctx, bktName, keyAttr, valAttr)
hc.Handler().HeadByAttribute(r)
require.Equal(t, putRes.ObjectID, string(r.Response.Header.Peek(hdrObjectID)))
require.Equal(t, putRes.ContainerID, string(r.Response.Header.Peek(hdrContainerID)))
})
t.Run("zip", func(t *testing.T) {
r = prepareGetZipped(ctx, bktName, "")
hc.Handler().DownloadZip(r)
readerAt := bytes.NewReader(r.Response.Body())
zipReader, err := zip.NewReader(readerAt, int64(len(r.Response.Body())))
require.NoError(t, err)
require.Len(t, zipReader.File, 1)
require.Equal(t, objFileName, zipReader.File[0].Name)
f, err := zipReader.File[0].Open()
require.NoError(t, err)
defer func() {
inErr := f.Close()
require.NoError(t, inErr)
}()
data, err := io.ReadAll(f)
require.NoError(t, err)
require.Equal(t, content, string(data))
})
}
func TestFindObjectByAttribute(t *testing.T) {
hc, err := prepareHandlerContext()
require.NoError(t, err)
hc.cfg.additionalSearch = true
bktName := "bucket"
cnrID, cnr, err := hc.prepareContainer(bktName, acl.PublicRWExtended)
require.NoError(t, err)
hc.frostfs.SetContainer(cnrID, cnr)
ctx := context.Background()
ctx = middleware.SetNamespace(ctx, "")
content := "hello"
r, err := prepareUploadRequest(ctx, cnrID.EncodeToString(), content)
require.NoError(t, err)
hc.Handler().Upload(r)
require.Equal(t, r.Response.StatusCode(), http.StatusOK)
var putRes putResponse
err = json.Unmarshal(r.Response.Body(), &putRes)
require.NoError(t, err)
testAttrVal1 := "test-attr-val1"
testAttrVal2 := "test-attr-val2"
testAttrVal3 := "test-attr-val3"
for _, tc := range []struct {
name string
firstAttr object.Attribute
secondAttr object.Attribute
reqAttrKey string
reqAttrValue string
err string
additionalSearch bool
}{
{
name: "success search by FileName",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFileName,
reqAttrValue: testAttrVal2,
additionalSearch: false,
},
{
name: "failed search by FileName",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFileName,
reqAttrValue: testAttrVal3,
err: "not found",
additionalSearch: false,
},
{
name: "success search by FilePath (with additional search)",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFilePath,
reqAttrValue: testAttrVal2,
additionalSearch: true,
},
{
name: "failed by FilePath (with additional search)",
firstAttr: prepareObjectAttributes(attrFilePath, testAttrVal1),
secondAttr: prepareObjectAttributes(attrFileName, testAttrVal2),
reqAttrKey: attrFilePath,
reqAttrValue: testAttrVal3,
err: "not found",
additionalSearch: true,
},
} {
t.Run(tc.name, func(t *testing.T) {
obj := hc.frostfs.objects[putRes.ContainerID+"/"+putRes.ObjectID]
obj.SetAttributes(tc.firstAttr, tc.secondAttr)
hc.cfg.additionalSearch = tc.additionalSearch
objID, err := hc.Handler().findObjectByAttribute(ctx, hc.Handler().log, cnrID, tc.reqAttrKey, tc.reqAttrValue)
if tc.err != "" {
require.Error(t, err)
require.Contains(t, err.Error(), tc.err)
return
}
require.NoError(t, err)
require.Equal(t, putRes.ObjectID, objID.EncodeToString())
})
}
}
func TestNeedSearchByFileName(t *testing.T) {
hc, err := prepareHandlerContext()
require.NoError(t, err)
for _, tc := range []struct {
name string
attrKey string
attrVal string
additionalSearch bool
expected bool
}{
{
name: "need search - not contains slash",
attrKey: attrFilePath,
attrVal: "cat.png",
additionalSearch: true,
expected: true,
},
{
name: "need search - single lead slash",
attrKey: attrFilePath,
attrVal: "/cat.png",
additionalSearch: true,
expected: true,
},
{
name: "don't need search - single slash but not lead",
attrKey: attrFilePath,
attrVal: "cats/cat.png",
additionalSearch: true,
expected: false,
},
{
name: "don't need search - more one slash",
attrKey: attrFilePath,
attrVal: "/cats/cat.png",
additionalSearch: true,
expected: false,
},
{
name: "don't need search - incorrect attribute key",
attrKey: attrFileName,
attrVal: "cat.png",
additionalSearch: true,
expected: false,
},
{
name: "don't need search - additional search disabled",
attrKey: attrFilePath,
attrVal: "cat.png",
additionalSearch: false,
expected: false,
},
} {
t.Run(tc.name, func(t *testing.T) {
hc.cfg.additionalSearch = tc.additionalSearch
res := hc.h.needSearchByFileName(tc.attrKey, tc.attrVal)
require.Equal(t, tc.expected, res)
})
}
}
func prepareUploadRequest(ctx context.Context, bucket, content string) (*fasthttp.RequestCtx, error) {
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
r.SetUserValue("cid", bucket)
return r, fillMultipartBody(r, content)
}
func prepareGetRequest(ctx context.Context, bucket, objID string) *fasthttp.RequestCtx {
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
r.SetUserValue("cid", bucket)
r.SetUserValue("oid", objID)
return r
}
func prepareGetByAttributeRequest(ctx context.Context, bucket, attrKey, attrVal string) *fasthttp.RequestCtx {
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
r.SetUserValue("cid", bucket)
r.SetUserValue("attr_key", attrKey)
r.SetUserValue("attr_val", attrVal)
return r
}
func prepareGetZipped(ctx context.Context, bucket, prefix string) *fasthttp.RequestCtx {
r := new(fasthttp.RequestCtx)
utils.SetContextToRequest(ctx, r)
r.SetUserValue("cid", bucket)
r.SetUserValue("prefix", prefix)
return r
}
func prepareObjectAttributes(attrKey, attrValue string) object.Attribute {
attr := object.NewAttribute()
attr.SetKey(attrKey)
attr.SetValue(attrValue)
return *attr
}
const (
keyAttr = "User-Attribute"
valAttr = "user value"
objFileName = "newFile.txt"
)
func fillMultipartBody(r *fasthttp.RequestCtx, content string) error {
attributes := map[string]string{
object.AttributeFileName: objFileName,
keyAttr: valAttr,
}
var buff bytes.Buffer
w := multipart.NewWriter(&buff)
fw, err := w.CreateFormFile("file", attributes[object.AttributeFileName])
if err != nil {
return err
}
if _, err = io.Copy(fw, bytes.NewBufferString(content)); err != nil {
return err
}
if err = w.Close(); err != nil {
return err
}
r.Request.SetBodyStream(&buff, buff.Len())
r.Request.Header.Set("Content-Type", w.FormDataContentType())
r.Request.Header.Set("X-Attribute-"+keyAttr, valAttr)
return nil
}

View file

@ -1,17 +1,18 @@
package downloader
package handler
import (
"context"
"errors"
"io"
"net/http"
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
@ -25,25 +26,30 @@ const (
hdrContainerID = "X-Container-Id"
)
func headObject(ctx context.Context, req request, clnt *pool.Pool, objectAddress oid.Address) {
func (h *Handler) headObject(ctx context.Context, req request, objectAddress oid.Address) {
var start = time.Now()
btoken := bearerToken(ctx)
var prm pool.PrmObjectHead
prm.SetAddress(objectAddress)
if btoken != nil {
prm.UseBearer(*btoken)
prm := PrmObjectHead{
PrmAuth: PrmAuth{
BearerToken: btoken,
},
Address: objectAddress,
}
obj, err := clnt.HeadObject(ctx, prm)
obj, err := h.frostfs.HeadObject(ctx, prm)
if err != nil {
req.handleFrostFSErr(err, start)
return
}
req.Response.Header.Set(fasthttp.HeaderContentLength, strconv.FormatUint(obj.PayloadSize(), 10))
var contentType string
var (
contentType string
filename string
filepath string
)
for _, attr := range obj.Attributes() {
key := attr.Key()
val := attr.Value()
@ -67,26 +73,30 @@ func headObject(ctx context.Context, req request, clnt *pool.Pool, objectAddress
req.Response.Header.Set(fasthttp.HeaderLastModified, time.Unix(value, 0).UTC().Format(http.TimeFormat))
case object.AttributeContentType:
contentType = val
case object.AttributeFilePath:
filepath = val
case object.AttributeFileName:
filename = val
}
}
if filename == "" {
filename = filepath
}
idsToResponse(&req.Response, &obj)
idsToResponse(&req.Response, obj)
if len(contentType) == 0 {
contentType, _, err = readContentType(obj.PayloadSize(), func(sz uint64) (io.Reader, error) {
var prmRange pool.PrmObjectRange
prmRange.SetAddress(objectAddress)
prmRange.SetLength(sz)
if btoken != nil {
prmRange.UseBearer(*btoken)
prmRange := PrmObjectRange{
PrmAuth: PrmAuth{
BearerToken: btoken,
},
Address: objectAddress,
PayloadRange: [2]uint64{0, sz},
}
resObj, err := clnt.ObjectRange(ctx, prmRange)
if err != nil {
return nil, err
}
return &resObj, nil
})
return h.frostfs.RangeObject(ctx, prmRange)
}, filename)
if err != nil && err != io.EOF {
req.handleFrostFSErr(err, start)
return
@ -104,19 +114,41 @@ func idsToResponse(resp *fasthttp.Response, obj *object.Object) {
}
// HeadByAddressOrBucketName handles head requests using simple cid/oid or bucketname/key format.
func (d *Downloader) HeadByAddressOrBucketName(c *fasthttp.RequestCtx) {
test, _ := c.UserValue("oid").(string)
var id oid.ID
func (h *Handler) HeadByAddressOrBucketName(c *fasthttp.RequestCtx) {
cidParam, _ := c.UserValue("cid").(string)
oidParam, _ := c.UserValue("oid").(string)
err := id.DecodeString(test)
ctx := utils.GetContextFromRequest(c)
log := utils.GetReqLogOrDefault(ctx, h.log).With(
zap.String("cid", cidParam),
zap.String("oid", oidParam),
)
bktInfo, err := h.getBucketInfo(ctx, cidParam, log)
if err != nil {
d.byBucketname(c, headObject)
logAndSendBucketError(c, log, err)
return
}
checkS3Err := h.tree.CheckSettingsNodeExists(ctx, bktInfo)
if checkS3Err != nil && !errors.Is(checkS3Err, layer.ErrNodeNotFound) {
logAndSendBucketError(c, log, checkS3Err)
return
}
req := newRequest(c, log)
var objID oid.ID
if checkS3Err == nil {
h.byS3Path(ctx, req, bktInfo.CID, oidParam, h.headObject)
} else if err = objID.DecodeString(oidParam); err == nil {
h.byNativeAddress(ctx, req, bktInfo.CID, objID, h.headObject)
} else {
d.byAddress(c, headObject)
logAndSendBucketError(c, log, checkS3Err)
return
}
}
// HeadByAttribute handles attribute-based head requests.
func (d *Downloader) HeadByAttribute(c *fasthttp.RequestCtx) {
d.byAttribute(c, headObject)
func (h *Handler) HeadByAttribute(c *fasthttp.RequestCtx) {
h.byAttribute(c, h.headObject)
}


@@ -0,0 +1,26 @@
package middleware
import (
"context"
"fmt"
)
// keyWrapper is a wrapper type for context keys.
type keyWrapper string
const nsKey = keyWrapper("namespace")
// GetNamespace extracts the namespace from the context.
func GetNamespace(ctx context.Context) (string, error) {
ns, ok := ctx.Value(nsKey).(string)
if !ok {
return "", fmt.Errorf("couldn't get namespace from context")
}
return ns, nil
}
// SetNamespace sets namespace in the context.
func SetNamespace(ctx context.Context, ns string) context.Context {
return context.WithValue(ctx, nsKey, ns)
}
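
A minimal usage sketch of these helpers (hypothetical, and assuming a caller inside this module, since the package is internal). Using the unexported keyWrapper type keeps nsKey from colliding with context keys defined by other packages:

	ctx := middleware.SetNamespace(context.Background(), "tenant-1")

	ns, err := middleware.GetNamespace(ctx)
	if err != nil {
		// returned when no namespace was stored in the context
		return err
	}
	fmt.Println(ns) // prints: tenant-1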


@@ -1,13 +1,17 @@
package uploader
package handler
import (
"errors"
"io"
"strconv"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler/multipart"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/uploader/multipart"
"go.uber.org/zap"
)
const attributeMultipartObjectSize = "S3-Multipart-Object-Size"
// MultipartFile provides the standard ReadCloser interface and also allows one to
// get the file name; it's used for multipart uploads.
type MultipartFile interface {
@@ -38,10 +42,39 @@ func fetchMultipartFile(l *zap.Logger, r io.Reader, boundary string) (MultipartF
// ignore multipart/form-data values
if filename == "" {
l.Debug(logs.IgnorePartEmptyFilename, zap.String("form", name))
if err = part.Close(); err != nil {
l.Warn(logs.FailedToCloseReader, zap.Error(err))
}
continue
}
return part, nil
}
}
// getPayload returns the initial payload if the object is not multipart; otherwise it composes a new reader from the parts' data.
func (h *Handler) getPayload(p getMultiobjectBodyParams) (io.ReadCloser, uint64, error) {
cid, ok := p.obj.Header.ContainerID()
if !ok {
return nil, 0, errors.New("no container id set")
}
oid, ok := p.obj.Header.ID()
if !ok {
return nil, 0, errors.New("no object id set")
}
size, err := strconv.ParseUint(p.strSize, 10, 64)
if err != nil {
return nil, 0, err
}
ctx := p.req.RequestCtx
params := PrmInitMultiObjectReader{
Addr: newAddress(cid, oid),
Bearer: bearerToken(ctx),
}
payload, err := h.frostfs.InitMultiObjectReader(ctx, params)
if err != nil {
return nil, 0, err
}
return io.NopCloser(payload), size, nil
}


@@ -1,6 +1,6 @@
//go:build !integration
package uploader
package handler
import (
"crypto/rand"

internal/handler/reader.go Normal file

@@ -0,0 +1,198 @@
package handler
import (
"bytes"
"context"
"io"
"mime"
"net/http"
"path"
"strconv"
"strings"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
type readCloser struct {
io.Reader
io.Closer
}
// readContentType initializes an io.Reader limited to maxSize and detects the Content-Type from it.
// It returns the reader's error directly, along with the data that was processed.
func readContentType(maxSize uint64, rInit func(uint64) (io.Reader, error), filename string) (string, []byte, error) {
if maxSize > sizeToDetectType {
maxSize = sizeToDetectType
}
buf := make([]byte, maxSize) // maybe sync-pool the slice?
r, err := rInit(maxSize)
if err != nil {
return "", nil, err
}
n, err := r.Read(buf)
if err != nil && err != io.EOF {
return "", nil, err
}
buf = buf[:n]
contentType := http.DetectContentType(buf)
// Since the detector detects the "text/plain" content type for various types of text files,
// including CSS, JavaScript, and CSV files,
// we'll determine the final content type based on the file's extension.
if strings.HasPrefix(contentType, "text/plain") {
ext := path.Ext(filename)
// If the file doesn't have a file extension, we'll keep the content type as is.
if len(ext) > 0 {
contentType = mime.TypeByExtension(ext)
}
}
return contentType, buf, err // to not lose io.EOF
}
type getMultiobjectBodyParams struct {
obj *Object
req request
strSize string
}
func (h *Handler) receiveFile(ctx context.Context, req request, objAddress oid.Address) {
var (
shouldDownload = req.QueryArgs().GetBool("download")
start = time.Now()
filename string
filepath string
contentType string
)
prm := PrmObjectGet{
PrmAuth: PrmAuth{
BearerToken: bearerToken(ctx),
},
Address: objAddress,
}
rObj, err := h.frostfs.GetObject(ctx, prm)
if err != nil {
req.handleFrostFSErr(err, start)
return
}
// we can't close the reader in this function, so how should it be closed?
req.setIDs(rObj.Header)
payload := rObj.Payload
payloadSize := rObj.Header.PayloadSize()
for _, attr := range rObj.Header.Attributes() {
key := attr.Key()
val := attr.Value()
if !isValidToken(key) || !isValidValue(val) {
continue
}
key = utils.BackwardTransformIfSystem(key)
req.Response.Header.Set(utils.UserAttributeHeaderPrefix+key, val)
switch key {
case object.AttributeFileName:
filename = val
case object.AttributeTimestamp:
if err = req.setTimestamp(val); err != nil {
req.log.Error(logs.CouldntParseCreationDate,
zap.String("val", val),
zap.Error(err))
}
case object.AttributeContentType:
contentType = val
case object.AttributeFilePath:
filepath = val
case attributeMultipartObjectSize:
payload, payloadSize, err = h.getPayload(getMultiobjectBodyParams{
obj: rObj,
req: req,
strSize: val,
})
if err != nil {
req.handleFrostFSErr(err, start)
return
}
}
}
if filename == "" {
filename = filepath
}
req.setDisposition(shouldDownload, filename)
req.Response.Header.Set(fasthttp.HeaderContentLength, strconv.FormatUint(payloadSize, 10))
if len(contentType) == 0 {
// determine the Content-Type from the payload head
var payloadHead []byte
contentType, payloadHead, err = readContentType(payloadSize, func(uint64) (io.Reader, error) {
return payload, nil
}, filename)
if err != nil && err != io.EOF {
req.log.Error(logs.CouldNotDetectContentTypeFromPayload, zap.Error(err))
ResponseError(req.RequestCtx, "could not detect Content-Type from payload: "+err.Error(), fasthttp.StatusBadRequest)
return
}
// reset payload reader since a part of the data has been read
var headReader io.Reader = bytes.NewReader(payloadHead)
if err != io.EOF { // otherwise, we've already read full payload
headReader = io.MultiReader(headReader, payload)
}
// note: we could do with io.Reader, but SetBodyStream below closes body stream
// if it implements io.Closer and that's useful for us.
payload = readCloser{headReader, payload}
}
req.SetContentType(contentType)
req.Response.SetBodyStream(payload, int(payloadSize))
}
func (r *request) setIDs(obj object.Object) {
objID, _ := obj.ID()
cnrID, _ := obj.ContainerID()
r.Response.Header.Set(hdrObjectID, objID.String())
r.Response.Header.Set(hdrOwnerID, obj.OwnerID().String())
r.Response.Header.Set(hdrContainerID, cnrID.String())
}
func (r *request) setDisposition(shouldDownload bool, filename string) {
const (
inlineDisposition = "inline"
attachmentDisposition = "attachment"
)
dis := inlineDisposition
if shouldDownload {
dis = attachmentDisposition
}
r.Response.Header.Set(fasthttp.HeaderContentDisposition, dis+"; filename="+path.Base(filename))
}
func (r *request) setTimestamp(timestamp string) error {
value, err := strconv.ParseInt(timestamp, 10, 64)
if err != nil {
return err
}
r.Response.Header.Set(fasthttp.HeaderLastModified,
time.Unix(value, 0).UTC().Format(http.TimeFormat))
return nil
}
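
For illustration, a hedged sketch of the readContentType fallback above (the function is unexported, so this assumes a caller within package handler); the detector tests below exercise the same path:

	// http.DetectContentType reports "text/plain" for CSS, so the ".css"
	// extension refines the result via mime.TypeByExtension.
	ct, head, err := readContentType(1024, func(sz uint64) (io.Reader, error) {
		// sz has already been capped to the sniffing limit by readContentType
		return strings.NewReader("body { color: #fff }"), nil
	}, "style.css")
	// err is nil here (callers also tolerate io.EOF);
	// ct == "text/css; charset=utf-8"; head holds the bytes that were read,
	// so receiveFile can re-prepend them to the payload stream.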


@@ -0,0 +1,89 @@
//go:build !integration
package handler
import (
"io"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
const (
txtContentType = "text/plain; charset=utf-8"
cssContentType = "text/css; charset=utf-8"
htmlContentType = "text/html; charset=utf-8"
javascriptContentType = "text/javascript; charset=utf-8"
htmlBody = "<!DOCTYPE html><html ><head><meta charset=\"utf-8\"><title>Test Html</title>"
)
func TestDetector(t *testing.T) {
sb := strings.Builder{}
for i := 0; i < 10; i++ {
sb.WriteString("Some txt content. Content-Type must be detected properly by detector.")
}
for _, tc := range []struct {
Name string
ExpectedContentType string
Content string
FileName string
}{
{
Name: "less than 512b",
ExpectedContentType: txtContentType,
Content: sb.String()[:256],
FileName: "test.txt",
},
{
Name: "more than 512b",
ExpectedContentType: txtContentType,
Content: sb.String(),
FileName: "test.txt",
},
{
Name: "css content type",
ExpectedContentType: cssContentType,
Content: sb.String(),
FileName: "test.css",
},
{
Name: "javascript content type",
ExpectedContentType: javascriptContentType,
Content: sb.String(),
FileName: "test.js",
},
{
Name: "html content type by file content",
ExpectedContentType: htmlContentType,
Content: htmlBody,
FileName: "test.detect-by-content",
},
{
Name: "html content type by file extension",
ExpectedContentType: htmlContentType,
Content: sb.String(),
FileName: "test.html",
},
{
Name: "empty file extension",
ExpectedContentType: txtContentType,
Content: sb.String(),
FileName: "test",
},
} {
t.Run(tc.Name, func(t *testing.T) {
contentType, data, err := readContentType(uint64(len(tc.Content)),
func(uint64) (io.Reader, error) {
return strings.NewReader(tc.Content), nil
}, tc.FileName,
)
require.NoError(t, err)
require.Equal(t, tc.ExpectedContentType, contentType)
require.True(t, strings.HasPrefix(tc.Content, string(data)))
})
}
}

internal/handler/upload.go Normal file

@@ -0,0 +1,278 @@
package handler
import (
"archive/tar"
"bytes"
"compress/gzip"
"context"
"encoding/json"
"errors"
"io"
"net/http"
"path/filepath"
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
const (
jsonHeader = "application/json; charset=UTF-8"
drainBufSize = 4096
explodeArchiveHeader = "X-Explode-Archive"
)
type putResponse struct {
ObjectID string `json:"object_id"`
ContainerID string `json:"container_id"`
}
func newPutResponse(addr oid.Address) *putResponse {
return &putResponse{
ObjectID: addr.Object().EncodeToString(),
ContainerID: addr.Container().EncodeToString(),
}
}
func (pr *putResponse) encode(w io.Writer) error {
enc := json.NewEncoder(w)
enc.SetIndent("", "\t")
return enc.Encode(pr)
}
// Upload handles multipart upload request.
func (h *Handler) Upload(c *fasthttp.RequestCtx) {
var file MultipartFile
scid, _ := c.UserValue("cid").(string)
bodyStream := c.RequestBodyStream()
drainBuf := make([]byte, drainBufSize)
ctx := utils.GetContextFromRequest(c)
reqLog := utils.GetReqLogOrDefault(ctx, h.log)
log := reqLog.With(zap.String("cid", scid))
bktInfo, err := h.getBucketInfo(ctx, scid, log)
if err != nil {
logAndSendBucketError(c, log, err)
return
}
boundary := string(c.Request.Header.MultipartFormBoundary())
if file, err = fetchMultipartFile(log, bodyStream, boundary); err != nil {
log.Error(logs.CouldNotReceiveMultipartForm, zap.Error(err))
ResponseError(c, "could not receive multipart/form: "+err.Error(), fasthttp.StatusBadRequest)
return
}
filtered, err := filterHeaders(log, &c.Request.Header)
if err != nil {
log.Error(logs.FailedToFilterHeaders, zap.Error(err))
ResponseError(c, err.Error(), fasthttp.StatusBadRequest)
return
}
if c.Request.Header.Peek(explodeArchiveHeader) != nil {
h.explodeArchive(request{c, log}, bktInfo, file, filtered)
} else {
h.uploadSingleObject(request{c, log}, bktInfo, file, filtered)
}
// A multipart form can contain more than one part, which we ignore at the
// moment. Also, when dealing with chunked encoding the last zero-length
// chunk might be left unread (the multipart reader only cares about its
// boundary and doesn't look further) and would be (erroneously) interpreted
// as the start of the next pipelined header. Thus, we need to drain the
// body buffer.
for {
_, err = bodyStream.Read(drainBuf)
if err == io.EOF || errors.Is(err, io.ErrUnexpectedEOF) {
break
}
}
}
func (h *Handler) uploadSingleObject(req request, bkt *data.BucketInfo, file MultipartFile, filtered map[string]string) {
c, log := req.RequestCtx, req.log
setIfNotExist(filtered, object.AttributeFileName, file.FileName())
attributes, err := h.extractAttributes(c, log, filtered)
if err != nil {
log.Error(logs.FailedToGetAttributes, zap.Error(err))
ResponseError(c, "could not extract attributes: "+err.Error(), fasthttp.StatusBadRequest)
return
}
idObj, err := h.uploadObject(c, bkt, attributes, file)
if err != nil {
h.handlePutFrostFSErr(c, err, log)
return
}
log.Debug(logs.ObjectUploaded,
zap.String("oid", idObj.EncodeToString()),
zap.String("FileName", file.FileName()),
)
addr := newAddress(bkt.CID, idObj)
c.Response.Header.SetContentType(jsonHeader)
// Try to return the response; if something went wrong, report an error.
if err = newPutResponse(addr).encode(c); err != nil {
log.Error(logs.CouldNotEncodeResponse, zap.Error(err))
ResponseError(c, "could not encode response", fasthttp.StatusBadRequest)
return
}
}
func (h *Handler) uploadObject(c *fasthttp.RequestCtx, bkt *data.BucketInfo, attrs []object.Attribute, file io.Reader) (oid.ID, error) {
ctx := utils.GetContextFromRequest(c)
obj := object.New()
obj.SetContainerID(bkt.CID)
obj.SetOwnerID(*h.ownerID)
obj.SetAttributes(attrs...)
prm := PrmObjectCreate{
PrmAuth: PrmAuth{
BearerToken: h.fetchBearerToken(ctx),
},
Object: obj,
Payload: file,
ClientCut: h.config.ClientCut(),
WithoutHomomorphicHash: bkt.HomomorphicHashDisabled,
BufferMaxSize: h.config.BufferMaxSizeForPut(),
}
idObj, err := h.frostfs.CreateObject(ctx, prm)
if err != nil {
return oid.ID{}, err
}
return idObj, nil
}
func (h *Handler) extractAttributes(c *fasthttp.RequestCtx, log *zap.Logger, filtered map[string]string) ([]object.Attribute, error) {
now := time.Now()
if rawHeader := c.Request.Header.Peek(fasthttp.HeaderDate); rawHeader != nil {
if parsed, err := time.Parse(http.TimeFormat, string(rawHeader)); err != nil {
log.Warn(logs.CouldNotParseClientTime, zap.String("Date header", string(rawHeader)), zap.Error(err))
} else {
now = parsed
}
}
if err := utils.PrepareExpirationHeader(c, h.frostfs, filtered, now); err != nil {
log.Error(logs.CouldNotPrepareExpirationHeader, zap.Error(err))
return nil, err
}
attributes := make([]object.Attribute, 0, len(filtered))
// prepares attributes from filtered headers
for key, val := range filtered {
attribute := newAttribute(key, val)
attributes = append(attributes, attribute)
}
// set the Timestamp attribute if it wasn't set from a header and is enabled by settings
if _, ok := filtered[object.AttributeTimestamp]; !ok && h.config.DefaultTimestamp() {
timestamp := newAttribute(object.AttributeTimestamp, strconv.FormatInt(time.Now().Unix(), 10))
attributes = append(attributes, timestamp)
}
return attributes, nil
}
func newAttribute(key string, val string) object.Attribute {
attr := object.NewAttribute()
attr.SetKey(key)
attr.SetValue(val)
return *attr
}
// explodeArchive reads files from the archive and creates an object for each of them.
// It sets the FilePath attribute to the name from the tar.Header.
func (h *Handler) explodeArchive(req request, bkt *data.BucketInfo, file io.ReadCloser, filtered map[string]string) {
c, log := req.RequestCtx, req.log
// remove user attributes which vary for each file in archive
// to guarantee that they won't appear twice
delete(filtered, object.AttributeFileName)
delete(filtered, object.AttributeFilePath)
commonAttributes, err := h.extractAttributes(c, log, filtered)
if err != nil {
log.Error(logs.FailedToGetAttributes, zap.Error(err))
ResponseError(c, "could not extract attributes: "+err.Error(), fasthttp.StatusBadRequest)
return
}
attributes := commonAttributes
reader := file
if bytes.EqualFold(c.Request.Header.Peek(fasthttp.HeaderContentEncoding), []byte("gzip")) {
log.Debug(logs.GzipReaderSelected)
gzipReader, err := gzip.NewReader(file)
if err != nil {
log.Error(logs.FailedToCreateGzipReader, zap.Error(err))
ResponseError(c, "could read gzip file: "+err.Error(), fasthttp.StatusBadRequest)
return
}
defer func() {
if err := gzipReader.Close(); err != nil {
log.Warn(logs.FailedToCloseReader, zap.Error(err))
}
}()
reader = gzipReader
}
tarReader := tar.NewReader(reader)
for {
obj, err := tarReader.Next()
if errors.Is(err, io.EOF) {
break
} else if err != nil {
log.Error(logs.FailedToReadFileFromTar, zap.Error(err))
ResponseError(c, "could not get next entry: "+err.Error(), fasthttp.StatusBadRequest)
return
}
if isDir(obj.Name) {
continue
}
// set varying attributes
attributes = attributes[:len(commonAttributes)]
fileName := filepath.Base(obj.Name)
attributes = append(attributes, newAttribute(object.AttributeFilePath, obj.Name))
attributes = append(attributes, newAttribute(object.AttributeFileName, fileName))
idObj, err := h.uploadObject(c, bkt, attributes, tarReader)
if err != nil {
h.handlePutFrostFSErr(c, err, log)
return
}
log.Debug(logs.ObjectUploaded,
zap.String("oid", idObj.EncodeToString()),
zap.String("FileName", fileName),
)
}
}
func (h *Handler) handlePutFrostFSErr(r *fasthttp.RequestCtx, err error, log *zap.Logger) {
statusCode, msg, additionalFields := formErrorResponse("could not store file in frostfs", err)
logFields := append([]zap.Field{zap.Error(err)}, additionalFields...)
log.Error(logs.CouldNotStoreFileInFrostfs, logFields...)
ResponseError(r, msg, statusCode)
}
func (h *Handler) fetchBearerToken(ctx context.Context) *bearer.Token {
if tkn, err := tokens.LoadBearerToken(ctx); err == nil && tkn != nil {
return tkn
}
return nil
}
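
To see the exploding path end to end, a hedged client-side sketch (the gateway address and container ID are placeholders): it builds a one-file tar.gz, wraps it in multipart/form-data as the Upload handler expects, and sets the two headers that explodeArchive checks.

	var archive bytes.Buffer
	gz := gzip.NewWriter(&archive)
	tw := tar.NewWriter(gz)
	payload := []byte("hello")
	_ = tw.WriteHeader(&tar.Header{Name: "docs/readme.txt", Mode: 0o644, Size: int64(len(payload))})
	_, _ = tw.Write(payload)
	_ = tw.Close()
	_ = gz.Close()

	var body bytes.Buffer
	mw := multipart.NewWriter(&body)
	fw, _ := mw.CreateFormFile("file", "batch.tar.gz")
	_, _ = fw.Write(archive.Bytes())
	_ = mw.Close()

	req, _ := http.NewRequest(http.MethodPost, "http://gateway.example/upload/<cid>", &body)
	req.Header.Set("Content-Type", mw.FormDataContentType())
	req.Header.Set("X-Explode-Archive", "true") // any value enables exploding
	req.Header.Set("Content-Encoding", "gzip")  // the gate gunzips before reading the tar
	// each tar entry becomes its own object: FilePath="docs/readme.txt",
	// FileName="readme.txt"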

internal/handler/utils.go Normal file

@@ -0,0 +1,142 @@
package handler
import (
"context"
"errors"
"fmt"
"strings"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
sdkstatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
type request struct {
*fasthttp.RequestCtx
log *zap.Logger
}
func newRequest(ctx *fasthttp.RequestCtx, log *zap.Logger) request {
return request{
RequestCtx: ctx,
log: log,
}
}
func (r *request) handleFrostFSErr(err error, start time.Time) {
logFields := []zap.Field{
zap.Stringer("elapsed", time.Since(start)),
zap.Error(err),
}
statusCode, msg, additionalFields := formErrorResponse("could not receive object", err)
logFields = append(logFields, additionalFields...)
r.log.Error(logs.CouldNotReceiveObject, logFields...)
ResponseError(r.RequestCtx, msg, statusCode)
}
func bearerToken(ctx context.Context) *bearer.Token {
if tkn, err := tokens.LoadBearerToken(ctx); err == nil {
return tkn
}
return nil
}
func isDir(name string) bool {
return name == "" || strings.HasSuffix(name, "/")
}
func loadAttributes(attrs []object.Attribute) map[string]string {
result := make(map[string]string)
for _, attr := range attrs {
result[attr.Key()] = attr.Value()
}
return result
}
func isValidToken(s string) bool {
for _, c := range s {
if c <= ' ' || c > 127 {
return false
}
if strings.ContainsRune("()<>@,;:\\\"/[]?={}", c) {
return false
}
}
return true
}
func isValidValue(s string) bool {
for _, c := range s {
// Technically, the HTTP specification allows for more, but we don't want to escape things.
if c < ' ' || c > 127 || c == '"' {
return false
}
}
return true
}
func logAndSendBucketError(c *fasthttp.RequestCtx, log *zap.Logger, err error) {
log.Error(logs.CouldntGetBucket, zap.Error(err))
if client.IsErrContainerNotFound(err) {
ResponseError(c, "Not Found", fasthttp.StatusNotFound)
return
}
ResponseError(c, "could not get bucket: "+err.Error(), fasthttp.StatusBadRequest)
}
func newAddress(cnr cid.ID, obj oid.ID) oid.Address {
var addr oid.Address
addr.SetContainer(cnr)
addr.SetObject(obj)
return addr
}
// setIfNotExist sets the key-value pair in the map if the key is not present yet.
func setIfNotExist(m map[string]string, key, value string) {
if _, ok := m[key]; !ok {
m[key] = value
}
}
func ResponseError(r *fasthttp.RequestCtx, msg string, code int) {
r.Error(msg+"\n", code)
}
func formErrorResponse(message string, err error) (int, string, []zap.Field) {
var (
msg string
statusCode int
logFields []zap.Field
)
st := new(sdkstatus.ObjectAccessDenied)
switch {
case errors.As(err, &st):
statusCode = fasthttp.StatusForbidden
reason := st.Reason()
msg = fmt.Sprintf("%s: %v: %s", message, err, reason)
logFields = append(logFields, zap.String("error_detail", reason))
case errors.Is(err, ErrQuotaLimitReached):
statusCode = fasthttp.StatusConflict
msg = fmt.Sprintf("%s: %v", message, err)
case client.IsErrObjectNotFound(err) || client.IsErrContainerNotFound(err):
statusCode = fasthttp.StatusNotFound
msg = "Not Found"
default:
statusCode = fasthttp.StatusBadRequest
msg = fmt.Sprintf("%s: %v", message, err)
}
return statusCode, msg, logFields
}
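
A short sketch of the resulting mapping (within package handler); per the switch above, quota violations surface as 409 Conflict rather than the generic 403 used for other access-denied reasons:

	code, msg, _ := formErrorResponse("could not store file in frostfs", ErrQuotaLimitReached)
	// code == fasthttp.StatusConflict
	// msg  == "could not store file in frostfs: " + ErrQuotaLimitReached.Error()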


@@ -4,13 +4,15 @@ import (
"context"
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
)
// TreeService provides an interface for interacting with the tree service using S3 data models.
type TreeService interface {
GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*api.NodeVersion, error)
GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*data.NodeVersion, error)
GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo, prefix string, latestOnly bool) ([]data.NodeInfo, string, error)
CheckSettingsNodeExists(ctx context.Context, bktInfo *data.BucketInfo) error
}
var (


@@ -1,69 +1,98 @@
package logs
const (
CouldntParseCreationDate = "couldn't parse creation date" // Info in ../../downloader/*
CouldNotDetectContentTypeFromPayload = "could not detect Content-Type from payload" // Error in ../../downloader/download.go
CouldNotReceiveObject = "could not receive object" // Error in ../../downloader/download.go
WrongContainerID = "wrong container id" // Error in ../../downloader/download.go and uploader/upload.go
WrongObjectID = "wrong object id" // Error in ../../downloader/download.go
ObjectWasntFound = "object wasn't found" // Error in ../../downloader/download.go
ObjectWasDeleted = "object was deleted" // Error in ../../downloader/download.go
CouldNotSearchForObjects = "could not search for objects" // Error in ../../downloader/download.go
ObjectNotFound = "object not found" // Error in ../../downloader/download.go
ReadObjectListFailed = "read object list failed" // Error in ../../downloader/download.go
CouldNotCheckContainerExistence = "could not check container existence" // Error in ../../downloader/download.go
FailedToAddObjectToArchive = "failed to add object to archive" // Error in ../../downloader/download.go
IteratingOverSelectedObjectsFailed = "iterating over selected objects failed" // Error in ../../downloader/download.go
ObjectsNotFound = "objects not found" // Error in ../../downloader/download.go
CloseZipWriter = "close zip writer" // Error in ../../downloader/download.go
ServiceIsRunning = "service is running" // Info in ../../metrics/service.go
ServiceCouldntStartOnConfiguredPort = "service couldn't start on configured port" // Warn in ../../metrics/service.go
ServiceHasntStartedSinceItsDisabled = "service hasn't started since it's disabled" // Info in ../../metrics/service.go
ShuttingDownService = "shutting down service" // Info in ../../metrics/service.go
CantShutDownService = "can't shut down service" // Panic in ../../metrics/service.go
IgnorePartEmptyFormName = "ignore part, empty form name" // Debug in ../../uploader/upload.go
IgnorePartEmptyFilename = "ignore part, empty filename" // Debug in ../../uploader/upload.go
CloseTemporaryMultipartFormFile = "close temporary multipart/form file" // Debug in ../../uploader/upload.go
CouldNotReceiveMultipartForm = "could not receive multipart/form" // Error in ../../uploader/upload.go
CouldNotProcessHeaders = "could not process headers" // Error in ../../uploader/upload.go
CouldNotParseClientTime = "could not parse client time" // Warn in ../../uploader/upload.go
CouldNotPrepareExpirationHeader = "could not prepare expiration header" // Error in ../../uploader/upload.go
CouldNotEncodeResponse = "could not encode response" // Error in ../../uploader/upload.go
CouldNotStoreFileInFrostfs = "could not store file in frostfs" // Error in ../../uploader/upload.go
AddAttributeToResultObject = "add attribute to result object" // Debug in ../../uploader/filter.go
FailedToCreateResolver = "failed to create resolver" // Fatal in ../../app.go
ContainerResolverWillBeDisabledBecauseOfResolversResolverOrderIsEmpty = "container resolver will be disabled because of resolvers 'resolver_order' is empty" // Info in ../../app.go
MetricsAreDisabled = "metrics are disabled" // Warn in ../../app.go
NoWalletPathSpecifiedCreatingEphemeralKeyAutomaticallyForThisRun = "no wallet path specified, creating ephemeral key automatically for this run" // Info in ../../app.go
StartingApplication = "starting application" // Info in ../../app.go
StartingServer = "starting server" // Info in ../../app.go
ListenAndServe = "listen and serve" // Fatal in ../../app.go
ShuttingDownWebServer = "shutting down web server" // Info in ../../app.go
FailedToShutdownTracing = "failed to shutdown tracing" // Warn in ../../app.go
SIGHUPConfigReloadStarted = "SIGHUP config reload started" // Info in ../../app.go
FailedToReloadConfigBecauseItsMissed = "failed to reload config because it's missed" // Warn in ../../app.go
FailedToReloadConfig = "failed to reload config" // Warn in ../../app.go
LogLevelWontBeUpdated = "log level won't be updated" // Warn in ../../app.go
FailedToUpdateResolvers = "failed to update resolvers" // Warn in ../../app.go
FailedToReloadServerParameters = "failed to reload server parameters" // Warn in ../../app.go
SIGHUPConfigReloadCompleted = "SIGHUP config reload completed" // Info in ../../app.go
AddedPathUploadCid = "added path /upload/{cid}" // Info in ../../app.go
AddedPathGetCidOid = "added path /get/{cid}/{oid}" // Info in ../../app.go
AddedPathGetByAttributeCidAttrKeyAttrVal = "added path /get_by_attribute/{cid}/{attr_key}/{attr_val:*}" // Info in ../../app.go
AddedPathZipCidPrefix = "added path /zip/{cid}/{prefix}" // Info in ../../app.go
Request = "request" // Info in ../../app.go
CouldNotFetchAndStoreBearerToken = "could not fetch and store bearer token" // Error in ../../app.go
FailedToAddServer = "failed to add server" // Warn in ../../app.go
AddServer = "add server" // Info in ../../app.go
NoHealthyServers = "no healthy servers" // Fatal in ../../app.go
FailedToInitializeTracing = "failed to initialize tracing" // Warn in ../../app.go
TracingConfigUpdated = "tracing config updated" // Info in ../../app.go
ResolverNNSWontBeUsedSinceRPCEndpointIsntProvided = "resolver nns won't be used since rpc_endpoint isn't provided" // Warn in ../../app.go
CouldNotLoadFrostFSPrivateKey = "could not load FrostFS private key" // Fatal in ../../settings.go
UsingCredentials = "using credentials" // Info in ../../settings.go
FailedToCreateConnectionPool = "failed to create connection pool" // Fatal in ../../settings.go
FailedToDialConnectionPool = "failed to dial connection pool" // Fatal in ../../settings.go
FailedToCreateTreePool = "failed to create tree pool" // Fatal in ../../settings.go
FailedToDialTreePool = "failed to dial tree pool" // Fatal in ../../settings.go
AddedStoragePeer = "added storage peer" // Info in ../../settings.go
CouldntParseCreationDate = "couldn't parse creation date"
CouldNotDetectContentTypeFromPayload = "could not detect Content-Type from payload"
CouldNotReceiveObject = "could not receive object"
ObjectWasDeleted = "object was deleted"
CouldNotSearchForObjects = "could not search for objects"
ObjectNotFound = "object not found"
ReadObjectListFailed = "read object list failed"
FailedToAddObjectToArchive = "failed to add object to archive"
FailedToGetObject = "failed to get object"
IteratingOverSelectedObjectsFailed = "iterating over selected objects failed"
ObjectsNotFound = "objects not found"
CloseZipWriter = "close zip writer"
ServiceIsRunning = "service is running"
ServiceCouldntStartOnConfiguredPort = "service couldn't start on configured port"
ServiceHasntStartedSinceItsDisabled = "service hasn't started since it's disabled"
ShuttingDownService = "shutting down service"
CantShutDownService = "can't shut down service"
CantGracefullyShutDownService = "can't gracefully shut down service, force stop"
IgnorePartEmptyFormName = "ignore part, empty form name"
IgnorePartEmptyFilename = "ignore part, empty filename"
CouldNotReceiveMultipartForm = "could not receive multipart/form"
CouldNotParseClientTime = "could not parse client time"
CouldNotPrepareExpirationHeader = "could not prepare expiration header"
CouldNotEncodeResponse = "could not encode response"
CouldNotStoreFileInFrostfs = "could not store file in frostfs"
AddAttributeToResultObject = "add attribute to result object"
FailedToCreateResolver = "failed to create resolver"
FailedToCreateWorkerPool = "failed to create worker pool"
FailedToReadIndexPageTemplate = "failed to read index page template"
SetCustomIndexPageTemplate = "set custom index page template"
ContainerResolverWillBeDisabledBecauseOfResolversResolverOrderIsEmpty = "container resolver will be disabled because of resolvers 'resolver_order' is empty"
MetricsAreDisabled = "metrics are disabled"
NoWalletPathSpecifiedCreatingEphemeralKeyAutomaticallyForThisRun = "no wallet path specified, creating ephemeral key automatically for this run"
StartingApplication = "starting application"
StartingServer = "starting server"
ListenAndServe = "listen and serve"
ShuttingDownWebServer = "shutting down web server"
FailedToShutdownTracing = "failed to shutdown tracing"
SIGHUPConfigReloadStarted = "SIGHUP config reload started"
FailedToReloadConfigBecauseItsMissed = "failed to reload config because it's missed"
FailedToReloadConfig = "failed to reload config"
LogLevelWontBeUpdated = "log level won't be updated"
FailedToUpdateResolvers = "failed to update resolvers"
FailedToReloadServerParameters = "failed to reload server parameters"
SIGHUPConfigReloadCompleted = "SIGHUP config reload completed"
AddedPathUploadCid = "added path /upload/{cid}"
AddedPathGetCidOid = "added path /get/{cid}/{oid}"
AddedPathGetByAttributeCidAttrKeyAttrVal = "added path /get_by_attribute/{cid}/{attr_key}/{attr_val:*}"
AddedPathZipCidPrefix = "added path /zip/{cid}/{prefix}"
Request = "request"
CouldNotFetchAndStoreBearerToken = "could not fetch and store bearer token"
FailedToAddServer = "failed to add server"
AddServer = "add server"
NoHealthyServers = "no healthy servers"
FailedToInitializeTracing = "failed to initialize tracing"
TracingConfigUpdated = "tracing config updated"
ResolverNNSWontBeUsedSinceRPCEndpointIsntProvided = "resolver nns won't be used since rpc_endpoint isn't provided"
RuntimeSoftMemoryDefinedWithGOMEMLIMIT = "soft runtime memory defined with GOMEMLIMIT environment variable, config value skipped"
RuntimeSoftMemoryLimitUpdated = "soft runtime memory limit value updated"
CouldNotLoadFrostFSPrivateKey = "could not load FrostFS private key"
UsingCredentials = "using credentials"
FailedToCreateConnectionPool = "failed to create connection pool"
FailedToDialConnectionPool = "failed to dial connection pool"
FailedToCreateTreePool = "failed to create tree pool"
FailedToDialTreePool = "failed to dial tree pool"
AddedStoragePeer = "added storage peer"
CouldntGetBucket = "could not get bucket"
CouldntPutBucketIntoCache = "couldn't put bucket info into cache"
FailedToSumbitTaskToPool = "failed to submit task to pool"
FailedToHeadObject = "failed to head object"
FailedToIterateOverResponse = "failed to iterate over search response"
InvalidCacheEntryType = "invalid cache entry type"
InvalidLifetimeUsingDefaultValue = "invalid lifetime, using default value (in seconds)"
InvalidCacheSizeUsingDefaultValue = "invalid cache size, using default value"
FailedToUnescapeQuery = "failed to unescape query"
ServerReconnecting = "reconnecting server..."
ServerReconnectedSuccessfully = "server reconnected successfully"
ServerReconnectFailed = "failed to reconnect server"
WarnDuplicateAddress = "duplicate address"
MultinetDialSuccess = "multinet dial successful"
MultinetDialFail = "multinet dial failed"
FailedToLoadMultinetConfig = "failed to load multinet config"
MultinetConfigWontBeUpdated = "multinet config won't be updated"
ObjectNotFoundByFilePathTrySearchByFileName = "object not found by filePath attribute, try search by fileName"
CouldntCacheNetmap = "couldn't cache netmap"
FailedToFilterHeaders = "failed to filter headers"
FailedToReadFileFromTar = "failed to read file from tar"
FailedToGetAttributes = "failed to get attributes"
ObjectUploaded = "object uploaded"
CloseGzipWriter = "close gzip writer"
CloseTarWriter = "close tar writer"
FailedToCloseReader = "failed to close reader"
FailedToCreateGzipReader = "failed to create gzip reader"
GzipReaderSelected = "gzip reader selected"
)

internal/net/config.go Normal file

@@ -0,0 +1,68 @@
package net
import (
"errors"
"fmt"
"net/netip"
"slices"
"time"
"git.frostfs.info/TrueCloudLab/multinet"
)
var errEmptySourceIPList = errors.New("empty source IP list")
type Subnet struct {
Prefix string
SourceIPs []string
}
type Config struct {
Enabled bool
Subnets []Subnet
Balancer string
Restrict bool
FallbackDelay time.Duration
EventHandler multinet.EventHandler
}
func (c Config) toMultinetConfig() (multinet.Config, error) {
var subnets []multinet.Subnet
for _, s := range c.Subnets {
var ms multinet.Subnet
p, err := netip.ParsePrefix(s.Prefix)
if err != nil {
return multinet.Config{}, fmt.Errorf("parse IP prefix '%s': %w", s.Prefix, err)
}
ms.Prefix = p
for _, ip := range s.SourceIPs {
addr, err := netip.ParseAddr(ip)
if err != nil {
return multinet.Config{}, fmt.Errorf("parse IP address '%s': %w", ip, err)
}
ms.SourceIPs = append(ms.SourceIPs, addr)
}
if len(ms.SourceIPs) == 0 {
return multinet.Config{}, errEmptySourceIPList
}
subnets = append(subnets, ms)
}
return multinet.Config{
Subnets: subnets,
Balancer: multinet.BalancerType(c.Balancer),
Restrict: c.Restrict,
FallbackDelay: c.FallbackDelay,
Dialer: newDefaultDialer(),
EventHandler: c.EventHandler,
}, nil
}
func (c Config) equals(other Config) bool {
return c.Enabled == other.Enabled &&
slices.EqualFunc(c.Subnets, other.Subnets, func(lhs, rhs Subnet) bool {
return lhs.Prefix == rhs.Prefix && slices.Equal(lhs.SourceIPs, rhs.SourceIPs)
}) &&
c.Balancer == other.Balancer &&
c.Restrict == other.Restrict &&
c.FallbackDelay == other.FallbackDelay
}
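
A hedged configuration sketch (the balancer name and addresses are illustrative): toMultinetConfig parses the string prefixes and IPs into netip types and rejects any subnet whose source-IP list is empty.

	cfg := Config{ // within package net
		Enabled:       true,
		Balancer:      "roundrobin", // passed through as multinet.BalancerType
		Restrict:      false,
		FallbackDelay: 350 * time.Millisecond,
		Subnets: []Subnet{{
			Prefix:    "10.0.0.0/24",
			SourceIPs: []string{"10.0.0.1"}, // an empty list yields errEmptySourceIPList
		}},
	}
	mc, err := cfg.toMultinetConfig() // mc feeds multinet.NewDialer; err reports bad prefixes/IPs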


@@ -0,0 +1,54 @@
// NOTE: code is taken from https://github.com/grpc/grpc-go/blob/v1.68.x/internal/transport/http_util.go
/*
*
* Copyright 2014 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package net
import (
"net/url"
"strings"
)
// parseDialTarget returns the network and address to pass to dialer.
func parseDialTarget(target string) (string, string) {
net := "tcp"
m1 := strings.Index(target, ":")
m2 := strings.Index(target, ":/")
// handle unix:addr which will fail with url.Parse
if m1 >= 0 && m2 < 0 {
if n := target[0:m1]; n == "unix" {
return n, target[m1+1:]
}
}
if m2 >= 0 {
t, err := url.Parse(target)
if err != nil {
return net, target
}
scheme := t.Scheme
addr := t.Path
if scheme == "unix" {
if addr == "" {
addr = t.Host
}
return scheme, addr
}
}
return net, target
}
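
A behaviour sketch, traced by hand from the branches above:

	n, a := parseDialTarget("node.example:8082")        // "tcp",  "node.example:8082"
	n, a = parseDialTarget("unix:relative.sock")        // "unix", "relative.sock"  (plain "unix:addr", no "://")
	n, a = parseDialTarget("unix:///var/run/pool.sock") // "unix", "/var/run/pool.sock" (URL form, path wins)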

internal/net/dialer.go Normal file

@@ -0,0 +1,36 @@
package net
import (
"net"
"syscall"
"time"
"golang.org/x/sys/unix"
)
func newDefaultDialer() net.Dialer {
// From `grpc.WithContextDialer` comment:
//
// Note: All supported releases of Go (as of December 2023) override the OS
// defaults for TCP keepalive time and interval to 15s. To enable TCP keepalive
// with OS defaults for keepalive time and interval, use a net.Dialer that sets
// the KeepAlive field to a negative value, and sets the SO_KEEPALIVE socket
// option to true from the Control field. For a concrete example of how to do
// this, see internal.NetDialerWithTCPKeepalive().
//
// https://github.com/grpc/grpc-go/blob/830135e6c5a351abf75f0c9cfdf978e5df8daeba/dialoptions.go#L432
//
// From `internal.NetDialerWithTCPKeepalive` comment:
//
// TODO: Once https://github.com/golang/go/issues/62254 lands, and the
// appropriate Go version becomes less than our least supported Go version, we
// should look into using the new API to make things more straightforward.
return net.Dialer{
KeepAlive: time.Duration(-1),
Control: func(_, _ string, c syscall.RawConn) error {
return c.Control(func(fd uintptr) {
_ = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_KEEPALIVE, 1)
})
},
}
}


@@ -0,0 +1,69 @@
package net
import (
"context"
"net"
"sync"
"git.frostfs.info/TrueCloudLab/multinet"
)
type DialerSource struct {
guard sync.RWMutex
c Config
md multinet.Dialer
}
func NewDialerSource(c Config) (*DialerSource, error) {
result := &DialerSource{}
if err := result.build(c); err != nil {
return nil, err
}
return result, nil
}
func (s *DialerSource) build(c Config) error {
if c.Enabled {
mc, err := c.toMultinetConfig()
if err != nil {
return err
}
md, err := multinet.NewDialer(mc)
if err != nil {
return err
}
s.md = md
s.c = c
return nil
}
s.md = nil
s.c = c
return nil
}
// GrpcContextDialer returns a dialer function suitable for grpc.WithContextDialer.
// It returns nil if multinet is disabled.
func (s *DialerSource) GrpcContextDialer() func(context.Context, string) (net.Conn, error) {
s.guard.RLock()
defer s.guard.RUnlock()
if s.c.Enabled {
return func(ctx context.Context, address string) (net.Conn, error) {
network, address := parseDialTarget(address)
return s.md.DialContext(ctx, network, address)
}
}
return nil
}
func (s *DialerSource) Update(c Config) error {
s.guard.Lock()
defer s.guard.Unlock()
if s.c.equals(c) {
return nil
}
return s.build(c)
}
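
A hedged wiring sketch showing how the source would plug into a gRPC dial (grpc.WithContextDialer and the insecure credentials are from grpc-go; the surrounding setup is hypothetical):

	src, err := NewDialerSource(cfg) // cfg as in internal/net/config.go
	if err != nil {
		return err
	}
	opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
	if dial := src.GrpcContextDialer(); dial != nil { // nil means multinet is disabled
		opts = append(opts, grpc.WithContextDialer(dial))
	}
	conn, err := grpc.Dial("node.example:8082", opts...)
	if err != nil {
		return err
	}
	defer conn.Close()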


@@ -0,0 +1,28 @@
package net
import (
"net"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"go.uber.org/zap"
)
type LogEventHandler struct {
logger *zap.Logger
}
func (l LogEventHandler) DialPerformed(sourceIP net.Addr, _, address string, err error) {
sourceIPString := "undefined"
if sourceIP != nil {
sourceIPString = sourceIP.Network() + "://" + sourceIP.String()
}
if err == nil {
l.logger.Debug(logs.MultinetDialSuccess, zap.String("source", sourceIPString), zap.String("destination", address))
} else {
l.logger.Debug(logs.MultinetDialFail, zap.String("source", sourceIPString), zap.String("destination", address), zap.Error(err))
}
}
func NewLogEventHandler(logger *zap.Logger) LogEventHandler {
return LogEventHandler{logger: logger}
}


@@ -0,0 +1,259 @@
package frostfs
import (
"context"
"errors"
"fmt"
"io"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// FrostFS represents a virtual connection to the FrostFS network.
// It is used to provide an interface to dependent packages
// which work with FrostFS.
type FrostFS struct {
pool *pool.Pool
}
// NewFrostFS creates a new FrostFS using the provided pool.Pool.
func NewFrostFS(p *pool.Pool) *FrostFS {
return &FrostFS{
pool: p,
}
}
// Container implements frostfs.FrostFS interface method.
func (x *FrostFS) Container(ctx context.Context, containerPrm handler.PrmContainer) (*container.Container, error) {
prm := pool.PrmContainerGet{
ContainerID: containerPrm.ContainerID,
}
res, err := x.pool.GetContainer(ctx, prm)
if err != nil {
return nil, handleObjectError("read container via connection pool", err)
}
return &res, nil
}
// CreateObject implements frostfs.FrostFS interface method.
func (x *FrostFS) CreateObject(ctx context.Context, prm handler.PrmObjectCreate) (oid.ID, error) {
var prmPut pool.PrmObjectPut
prmPut.SetHeader(*prm.Object)
prmPut.SetPayload(prm.Payload)
prmPut.SetClientCut(prm.ClientCut)
prmPut.WithoutHomomorphicHash(prm.WithoutHomomorphicHash)
prmPut.SetBufferMaxSize(prm.BufferMaxSize)
if prm.BearerToken != nil {
prmPut.UseBearer(*prm.BearerToken)
}
idObj, err := x.pool.PutObject(ctx, prmPut)
if err != nil {
return oid.ID{}, handleObjectError("save object via connection pool", err)
}
return idObj.ObjectID, nil
}
// payloadReader wraps io.ReadCloser and transforms Read errors related to access violations
// into frostfs.ErrAccessDenied.
type payloadReader struct {
io.ReadCloser
}
func (x payloadReader) Read(p []byte) (int, error) {
n, err := x.ReadCloser.Read(p)
if err != nil && errors.Is(err, io.EOF) {
return n, err
}
return n, handleObjectError("read payload", err)
}
// HeadObject implements frostfs.FrostFS interface method.
func (x *FrostFS) HeadObject(ctx context.Context, prm handler.PrmObjectHead) (*object.Object, error) {
var prmHead pool.PrmObjectHead
prmHead.SetAddress(prm.Address)
if prm.BearerToken != nil {
prmHead.UseBearer(*prm.BearerToken)
}
res, err := x.pool.HeadObject(ctx, prmHead)
if err != nil {
return nil, handleObjectError("read object header via connection pool", err)
}
return &res, nil
}
// GetObject implements frostfs.FrostFS interface method.
func (x *FrostFS) GetObject(ctx context.Context, prm handler.PrmObjectGet) (*handler.Object, error) {
var prmGet pool.PrmObjectGet
prmGet.SetAddress(prm.Address)
if prm.BearerToken != nil {
prmGet.UseBearer(*prm.BearerToken)
}
res, err := x.pool.GetObject(ctx, prmGet)
if err != nil {
return nil, handleObjectError("init full object reading via connection pool", err)
}
return &handler.Object{
Header: res.Header,
Payload: res.Payload,
}, nil
}
// RangeObject implements frostfs.FrostFS interface method.
func (x *FrostFS) RangeObject(ctx context.Context, prm handler.PrmObjectRange) (io.ReadCloser, error) {
var prmRange pool.PrmObjectRange
prmRange.SetAddress(prm.Address)
prmRange.SetOffset(prm.PayloadRange[0])
prmRange.SetLength(prm.PayloadRange[1])
if prm.BearerToken != nil {
prmRange.UseBearer(*prm.BearerToken)
}
res, err := x.pool.ObjectRange(ctx, prmRange)
if err != nil {
return nil, handleObjectError("init payload range reading via connection pool", err)
}
return payloadReader{&res}, nil
}
// SearchObjects implements frostfs.FrostFS interface method.
func (x *FrostFS) SearchObjects(ctx context.Context, prm handler.PrmObjectSearch) (handler.ResObjectSearch, error) {
var prmSearch pool.PrmObjectSearch
prmSearch.SetContainerID(prm.Container)
prmSearch.SetFilters(prm.Filters)
if prm.BearerToken != nil {
prmSearch.UseBearer(*prm.BearerToken)
}
res, err := x.pool.SearchObjects(ctx, prmSearch)
if err != nil {
return nil, handleObjectError("init object search via connection pool", err)
}
return &res, nil
}
// GetEpochDurations implements frostfs.FrostFS interface method.
func (x *FrostFS) GetEpochDurations(ctx context.Context) (*utils.EpochDurations, error) {
networkInfo, err := x.pool.NetworkInfo(ctx)
if err != nil {
return nil, err
}
res := &utils.EpochDurations{
CurrentEpoch: networkInfo.CurrentEpoch(),
MsPerBlock: networkInfo.MsPerBlock(),
BlockPerEpoch: networkInfo.EpochDuration(),
}
if res.BlockPerEpoch == 0 {
return nil, fmt.Errorf("EpochDuration is empty")
}
return res, nil
}
func (x *FrostFS) NetmapSnapshot(ctx context.Context) (netmap.NetMap, error) {
netmapSnapshot, err := x.pool.NetMapSnapshot(ctx)
if err != nil {
return netmapSnapshot, handleObjectError("get netmap via connection pool", err)
}
return netmapSnapshot, nil
}
// ResolverFrostFS represents a virtual connection to the FrostFS network.
// It implements resolver.FrostFS.
type ResolverFrostFS struct {
pool *pool.Pool
}
// NewResolverFrostFS creates a new ResolverFrostFS using the provided pool.Pool.
func NewResolverFrostFS(p *pool.Pool) *ResolverFrostFS {
return &ResolverFrostFS{pool: p}
}
// SystemDNS implements resolver.FrostFS interface method.
func (x *ResolverFrostFS) SystemDNS(ctx context.Context) (string, error) {
networkInfo, err := x.pool.NetworkInfo(ctx)
if err != nil {
return "", handleObjectError("read network info via client", err)
}
domain := networkInfo.RawNetworkParameter("SystemDNS")
if domain == nil {
return "", errors.New("system DNS parameter not found or empty")
}
return string(domain), nil
}
func handleObjectError(msg string, err error) error {
if err == nil {
return nil
}
if reason, ok := IsErrObjectAccessDenied(err); ok {
if strings.Contains(reason, "limit reached") {
return fmt.Errorf("%s: %w: %s", msg, handler.ErrQuotaLimitReached, reason)
}
return fmt.Errorf("%s: %w: %s", msg, handler.ErrAccessDenied, reason)
}
if IsTimeoutError(err) {
return fmt.Errorf("%s: %w: %s", msg, handler.ErrGatewayTimeout, err.Error())
}
return fmt.Errorf("%s: %w", msg, err)
}
func UnwrapErr(err error) error {
unwrappedErr := errors.Unwrap(err)
for unwrappedErr != nil {
err = unwrappedErr
unwrappedErr = errors.Unwrap(err)
}
return err
}
func IsErrObjectAccessDenied(err error) (string, bool) {
err = UnwrapErr(err)
switch err := err.(type) {
default:
return "", false
case *apistatus.ObjectAccessDenied:
return err.Reason(), true
}
}
func IsTimeoutError(err error) bool {
if strings.Contains(err.Error(), "timeout") ||
errors.Is(err, context.DeadlineExceeded) {
return true
}
return status.Code(UnwrapErr(err)) == codes.DeadlineExceeded
}


@@ -0,0 +1,83 @@
package frostfs
import (
"context"
"errors"
"fmt"
"testing"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
"github.com/stretchr/testify/require"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
func TestHandleObjectError(t *testing.T) {
msg := "some msg"
t.Run("nil error", func(t *testing.T) {
err := handleObjectError(msg, nil)
require.Nil(t, err)
})
t.Run("simple access denied", func(t *testing.T) {
reason := "some reason"
inputErr := new(apistatus.ObjectAccessDenied)
inputErr.WriteReason(reason)
err := handleObjectError(msg, inputErr)
require.ErrorIs(t, err, handler.ErrAccessDenied)
require.Contains(t, err.Error(), reason)
require.Contains(t, err.Error(), msg)
})
t.Run("access denied - quota reached", func(t *testing.T) {
reason := "Quota limit reached"
inputErr := new(apistatus.ObjectAccessDenied)
inputErr.WriteReason(reason)
err := handleObjectError(msg, inputErr)
require.ErrorIs(t, err, handler.ErrQuotaLimitReached)
require.Contains(t, err.Error(), reason)
require.Contains(t, err.Error(), msg)
})
t.Run("simple timeout", func(t *testing.T) {
inputErr := errors.New("timeout")
err := handleObjectError(msg, inputErr)
require.ErrorIs(t, err, handler.ErrGatewayTimeout)
require.Contains(t, err.Error(), inputErr.Error())
require.Contains(t, err.Error(), msg)
})
t.Run("deadline exceeded", func(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
defer cancel()
<-ctx.Done()
err := handleObjectError(msg, ctx.Err())
require.ErrorIs(t, err, handler.ErrGatewayTimeout)
require.Contains(t, err.Error(), ctx.Err().Error())
require.Contains(t, err.Error(), msg)
})
t.Run("grpc deadline exceeded", func(t *testing.T) {
inputErr := fmt.Errorf("wrap grpc error: %w", status.Error(codes.DeadlineExceeded, "error"))
err := handleObjectError(msg, inputErr)
require.ErrorIs(t, err, handler.ErrGatewayTimeout)
require.Contains(t, err.Error(), inputErr.Error())
require.Contains(t, err.Error(), msg)
})
t.Run("unknown error", func(t *testing.T) {
inputErr := errors.New("unknown error")
err := handleObjectError(msg, inputErr)
require.ErrorIs(t, err, inputErr)
require.Contains(t, err.Error(), msg)
})
}


@@ -0,0 +1,241 @@
package frostfs
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
)
// PartInfo is upload information about a part.
type PartInfo struct {
Key string `json:"key"`
UploadID string `json:"uploadId"`
Number int `json:"number"`
OID oid.ID `json:"oid"`
Size uint64 `json:"size"`
ETag string `json:"etag"`
MD5 string `json:"md5"`
Created time.Time `json:"created"`
}
type GetFrostFSParams struct {
// payload range
Off, Ln uint64
Addr oid.Address
}
type PartObj struct {
OID oid.ID
Size uint64
}
type readerInitiator interface {
InitFrostFSObjectPayloadReader(ctx context.Context, p GetFrostFSParams) (io.ReadCloser, error)
}
// MultiObjectReader implements an io.Reader over the payloads of a list of objects stored in the FrostFS network.
type MultiObjectReader struct {
ctx context.Context
layer readerInitiator
startPartOffset uint64
endPartLength uint64
prm GetFrostFSParams
curIndex int
curReader io.ReadCloser
parts []PartObj
}
type MultiObjectReaderConfig struct {
Initiator readerInitiator
// the offset within the complete object and the total size to read
Off, Ln uint64
Addr oid.Address
Parts []PartObj
}
var (
errOffsetIsOutOfRange = errors.New("offset is out of payload range")
errLengthIsOutOfRange = errors.New("length is out of payload range")
errEmptyPartsList = errors.New("empty parts list")
errorZeroRangeLength = errors.New("zero range length")
)
func (x *FrostFS) InitMultiObjectReader(ctx context.Context, p handler.PrmInitMultiObjectReader) (io.Reader, error) {
combinedObj, err := x.GetObject(ctx, handler.PrmObjectGet{
PrmAuth: handler.PrmAuth{BearerToken: p.Bearer},
Address: p.Addr,
})
if err != nil {
return nil, fmt.Errorf("get combined object '%s': %w", p.Addr.Object().EncodeToString(), err)
}
var parts []*PartInfo
if err = json.NewDecoder(combinedObj.Payload).Decode(&parts); err != nil {
return nil, fmt.Errorf("unmarshal combined object parts: %w", err)
}
objParts := make([]PartObj, len(parts))
for i, part := range parts {
objParts[i] = PartObj{
OID: part.OID,
Size: part.Size,
}
}
return NewMultiObjectReader(ctx, MultiObjectReaderConfig{
Initiator: x,
Off: p.Off,
Ln: p.Ln,
Parts: objParts,
Addr: p.Addr,
})
}
func NewMultiObjectReader(ctx context.Context, cfg MultiObjectReaderConfig) (*MultiObjectReader, error) {
if len(cfg.Parts) == 0 {
return nil, errEmptyPartsList
}
r := &MultiObjectReader{
ctx: ctx,
layer: cfg.Initiator,
prm: GetFrostFSParams{
Addr: cfg.Addr,
},
parts: cfg.Parts,
}
if cfg.Off+cfg.Ln == 0 {
return r, nil
}
if cfg.Off > 0 && cfg.Ln == 0 {
return nil, errorZeroRangeLength
}
startPartIndex, startPartOffset := findStartPart(cfg)
if startPartIndex == -1 {
return nil, errOffsetIsOutOfRange
}
r.startPartOffset = startPartOffset
endPartIndex, endPartLength := findEndPart(cfg)
if endPartIndex == -1 {
return nil, errLengthIsOutOfRange
}
r.endPartLength = endPartLength
r.parts = cfg.Parts[startPartIndex : endPartIndex+1]
return r, nil
}
func findStartPart(cfg MultiObjectReaderConfig) (index int, offset uint64) {
position := cfg.Off
for i, part := range cfg.Parts {
// Strict inequality when searching for start position to avoid reading zero length part.
if position < part.Size {
return i, position
}
position -= part.Size
}
return -1, 0
}
func findEndPart(cfg MultiObjectReaderConfig) (index int, length uint64) {
position := cfg.Off + cfg.Ln
for i, part := range cfg.Parts {
// Non-strict inequality when searching for end position to avoid out of payload range error.
if position <= part.Size {
return i, position
}
position -= part.Size
}
return -1, 0
}
func (x *MultiObjectReader) Read(p []byte) (n int, err error) {
if x.curReader != nil {
n, err = x.curReader.Read(p)
if err != nil {
if closeErr := x.curReader.Close(); closeErr != nil {
return n, fmt.Errorf("%w (close err: %v)", err, closeErr)
}
}
if !errors.Is(err, io.EOF) {
return n, err
}
x.curIndex++
}
if x.curIndex == len(x.parts) {
return n, io.EOF
}
x.prm.Addr.SetObject(x.parts[x.curIndex].OID)
if x.curIndex == 0 {
x.prm.Off = x.startPartOffset
x.prm.Ln = x.parts[x.curIndex].Size - x.startPartOffset
}
if x.curIndex == len(x.parts)-1 {
x.prm.Ln = x.endPartLength - x.prm.Off
}
x.curReader, err = x.layer.InitFrostFSObjectPayloadReader(x.ctx, x.prm)
if err != nil {
return n, fmt.Errorf("init payload reader for the next part: %w", err)
}
x.prm.Off = 0
x.prm.Ln = 0
next, err := x.Read(p[n:])
return n + next, err
}
// InitFrostFSObjectPayloadReader initializes payload reader of the FrostFS object.
// Zero range corresponds to full payload (panics if only offset is set).
func (x *FrostFS) InitFrostFSObjectPayloadReader(ctx context.Context, p GetFrostFSParams) (io.ReadCloser, error) {
var prmAuth handler.PrmAuth
if p.Off+p.Ln != 0 {
prm := handler.PrmObjectRange{
PrmAuth: prmAuth,
PayloadRange: [2]uint64{p.Off, p.Ln},
Address: p.Addr,
}
return x.RangeObject(ctx, prm)
}
prm := handler.PrmObjectGet{
PrmAuth: prmAuth,
Address: p.Addr,
}
res, err := x.GetObject(ctx, prm)
if err != nil {
return nil, err
}
return res.Payload, nil
}
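
A worked example of the offset arithmetic above, assuming three parts of 12, 13 and 12 bytes and a request with Off=10, Ln=8 (the tests below cover more cases):

	// findStartPart: 10 < 12                  -> start part 0, startPartOffset = 10
	// findEndPart:   position = 10+8 = 18
	//                18 > 12, carry 18-12 = 6
	//                6 <= 13                  -> end part 1, endPartLength = 6
	//
	// Read therefore serves part0[10:12] followed by part1[0:6].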


@@ -0,0 +1,137 @@
package frostfs
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"testing"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test"
"github.com/stretchr/testify/require"
)
type readerInitiatorMock struct {
parts map[oid.ID][]byte
}
func (r *readerInitiatorMock) InitFrostFSObjectPayloadReader(_ context.Context, p GetFrostFSParams) (io.ReadCloser, error) {
partPayload, ok := r.parts[p.Addr.Object()]
if !ok {
return nil, errors.New("part not found")
}
if p.Off+p.Ln == 0 {
return io.NopCloser(bytes.NewReader(partPayload)), nil
}
if p.Off > uint64(len(partPayload)-1) {
return nil, fmt.Errorf("invalid offset: %d/%d", p.Off, len(partPayload))
}
if p.Off+p.Ln > uint64(len(partPayload)) {
return nil, fmt.Errorf("invalid range: %d-%d/%d", p.Off, p.Off+p.Ln, len(partPayload))
}
return io.NopCloser(bytes.NewReader(partPayload[p.Off : p.Off+p.Ln])), nil
}
func prepareDataReader() ([]byte, []PartObj, *readerInitiatorMock) {
mockInitReader := &readerInitiatorMock{
parts: map[oid.ID][]byte{
oidtest.ID(): []byte("first part 1"),
oidtest.ID(): []byte("second part 2"),
oidtest.ID(): []byte("third part 3"),
},
}
var fullPayload []byte
parts := make([]PartObj, 0, len(mockInitReader.parts))
for id, payload := range mockInitReader.parts {
parts = append(parts, PartObj{OID: id, Size: uint64(len(payload))})
fullPayload = append(fullPayload, payload...)
}
return fullPayload, parts, mockInitReader
}
func TestMultiReader(t *testing.T) {
ctx := context.Background()
fullPayload, parts, mockInitReader := prepareDataReader()
for _, tc := range []struct {
name string
off uint64
ln uint64
err error
}{
{
name: "simple read all",
},
{
name: "simple read with length",
ln: uint64(len(fullPayload)),
},
{
name: "middle of parts",
off: parts[0].Size + 2,
ln: 4,
},
{
name: "first and second",
off: parts[0].Size - 4,
ln: 8,
},
{
name: "first and third",
off: parts[0].Size - 4,
ln: parts[1].Size + 8,
},
{
name: "second part",
off: parts[0].Size,
ln: parts[1].Size,
},
{
name: "second and third",
off: parts[0].Size,
ln: parts[1].Size + parts[2].Size,
},
{
name: "offset out of range",
off: uint64(len(fullPayload) + 1),
ln: 1,
err: errOffsetIsOutOfRange,
},
{
name: "zero length",
off: parts[1].Size + 1,
ln: 0,
err: errorZeroRangeLength,
},
} {
t.Run(tc.name, func(t *testing.T) {
multiReader, err := NewMultiObjectReader(ctx, MultiObjectReaderConfig{
Initiator: mockInitReader,
Parts: parts,
Off: tc.off,
Ln: tc.ln,
})
require.ErrorIs(t, err, tc.err)
if tc.err == nil {
off := tc.off
ln := tc.ln
if off+ln == 0 {
ln = uint64(len(fullPayload))
}
data, err := io.ReadAll(multiReader)
require.NoError(t, err)
require.Equal(t, fullPayload[off:off+ln], data)
}
})
}
}


@@ -0,0 +1,69 @@
package frostfs
import (
"context"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/cache"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"go.uber.org/zap"
)
type Source struct {
frostFS *FrostFS
netmapCache *cache.NetmapCache
bucketCache *cache.BucketCache
log *zap.Logger
}
func NewSource(frostFS *FrostFS, netmapCache *cache.NetmapCache, bucketCache *cache.BucketCache, log *zap.Logger) *Source {
return &Source{
frostFS: frostFS,
netmapCache: netmapCache,
bucketCache: bucketCache,
log: log,
}
}
func (s *Source) NetMapSnapshot(ctx context.Context) (netmap.NetMap, error) {
cachedNetmap := s.netmapCache.Get()
if cachedNetmap != nil {
return *cachedNetmap, nil
}
netmapSnapshot, err := s.frostFS.NetmapSnapshot(ctx)
if err != nil {
return netmap.NetMap{}, fmt.Errorf("get netmap: %w", err)
}
if err = s.netmapCache.Put(netmapSnapshot); err != nil {
s.log.Warn(logs.CouldntCacheNetmap, zap.Error(err))
}
return netmapSnapshot, nil
}
func (s *Source) PlacementPolicy(ctx context.Context, cnrID cid.ID) (netmap.PlacementPolicy, error) {
info := s.bucketCache.GetByCID(cnrID)
if info != nil {
return info.PlacementPolicy, nil
}
prm := handler.PrmContainer{
ContainerID: cnrID,
}
res, err := s.frostFS.Container(ctx, prm)
if err != nil {
return netmap.PlacementPolicy{}, fmt.Errorf("get container: %w", err)
}
// We don't put the container back into the cache, to keep the cache
// coherent with requests made by users. FrostFS Source is used by the
// SDK tree pool and should not fill the cache with possibly irrelevant
// container values.
return res.PlacementPolicy(), nil
}


@ -0,0 +1,163 @@
package frostfs
import (
"context"
"errors"
"fmt"
"io"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tree"
apitree "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/tree"
treepool "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool/tree"
)
type GetNodeByPathResponseInfoWrapper struct {
response *apitree.GetNodeByPathResponseInfo
}
func (n GetNodeByPathResponseInfoWrapper) GetNodeID() []uint64 {
return []uint64{n.response.GetNodeID()}
}
func (n GetNodeByPathResponseInfoWrapper) GetParentID() []uint64 {
return []uint64{n.response.GetParentID()}
}
func (n GetNodeByPathResponseInfoWrapper) GetTimestamp() []uint64 {
return []uint64{n.response.GetTimestamp()}
}
func (n GetNodeByPathResponseInfoWrapper) GetMeta() []tree.Meta {
res := make([]tree.Meta, len(n.response.GetMeta()))
for i, value := range n.response.GetMeta() {
res[i] = value
}
return res
}
type PoolWrapper struct {
p *treepool.Pool
}
func NewPoolWrapper(p *treepool.Pool) *PoolWrapper {
return &PoolWrapper{p: p}
}
func (w *PoolWrapper) GetNodes(ctx context.Context, prm *tree.GetNodesParams) ([]tree.NodeResponse, error) {
poolPrm := treepool.GetNodesParams{
CID: prm.CnrID,
TreeID: prm.TreeID,
Path: prm.Path,
Meta: prm.Meta,
PathAttribute: tree.FileNameKey,
LatestOnly: prm.LatestOnly,
AllAttrs: prm.AllAttrs,
BearerToken: getBearer(ctx),
}
nodes, err := w.p.GetNodes(ctx, poolPrm)
if err != nil {
return nil, handleError(err)
}
res := make([]tree.NodeResponse, len(nodes))
for i, info := range nodes {
res[i] = GetNodeByPathResponseInfoWrapper{info}
}
return res, nil
}
func getBearer(ctx context.Context) []byte {
token, err := tokens.LoadBearerToken(ctx)
if err != nil {
return nil
}
return token.Marshal()
}
func handleError(err error) error {
if err == nil {
return nil
}
if errors.Is(err, treepool.ErrNodeNotFound) {
return fmt.Errorf("%w: %s", tree.ErrNodeNotFound, err.Error())
}
if errors.Is(err, treepool.ErrNodeAccessDenied) {
return fmt.Errorf("%w: %s", tree.ErrNodeAccessDenied, err.Error())
}
return err
}
func (w *PoolWrapper) GetSubTree(ctx context.Context, bktInfo *data.BucketInfo, treeID string, rootID []uint64, depth uint32, sort bool) ([]tree.NodeResponse, error) {
order := treepool.NoneOrder
if sort {
order = treepool.AscendingOrder
}
poolPrm := treepool.GetSubTreeParams{
CID: bktInfo.CID,
TreeID: treeID,
RootID: rootID,
Depth: depth,
BearerToken: getBearer(ctx),
Order: order,
}
if len(rootID) == 1 && rootID[0] == 0 {
// storage node interprets 'nil' value as []uint64{0}
// gate wants to send 'nil' value instead of []uint64{0}, because
// it provides compatibility with previous tree service api where
// single uint64(0) value is dropped from signature
poolPrm.RootID = nil
}
subTreeReader, err := w.p.GetSubTree(ctx, poolPrm)
if err != nil {
return nil, handleError(err)
}
var subtree []tree.NodeResponse
node, err := subTreeReader.Next()
for err == nil {
subtree = append(subtree, GetSubTreeResponseBodyWrapper{node})
node, err = subTreeReader.Next()
}
if err != io.EOF {
return nil, handleError(err)
}
return subtree, nil
}
type GetSubTreeResponseBodyWrapper struct {
response *apitree.GetSubTreeResponseBody
}
func (n GetSubTreeResponseBodyWrapper) GetNodeID() []uint64 {
return n.response.GetNodeID()
}
func (n GetSubTreeResponseBodyWrapper) GetParentID() []uint64 {
resp := n.response.GetParentID()
if resp == nil {
// storage sends nil that should be interpreted as []uint64{0}
// due to protobuf compatibility, see 'GetSubTree' function
return []uint64{0}
}
return resp
}
func (n GetSubTreeResponseBodyWrapper) GetTimestamp() []uint64 {
return n.response.GetTimestamp()
}
func (n GetSubTreeResponseBodyWrapper) GetMeta() []tree.Meta {
res := make([]tree.Meta, len(n.response.GetMeta()))
for i, value := range n.response.GetMeta() {
res[i] = value
}
return res
}


@ -0,0 +1,112 @@
{{$container := .Container}}
{{ $prefix := trimPrefix .Prefix }}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8"/>
<title>Index of {{.Protocol}}://{{$container}}/{{if $prefix}}{{$prefix}}/{{end}}</title>
<style>
.alert {
width: 80%;
box-sizing: border-box;
padding: 20px;
background-color: #f44336;
color: white;
margin-bottom: 15px;
}
table {
width: 80%;
border-collapse: collapse;
}
body {
background: #f2f2f2;
}
table, th, td {
border: 0 solid transparent;
}
th, td {
padding: 10px;
text-align: left;
}
th {
background-color: #c3bcbc;
}
h1 {
font-size: 1.5em;
}
tr:nth-child(even) {background-color: #ebe7e7;}
</style>
</head>
<body>
<h1>Index of {{.Protocol}}://{{$container}}/{{if $prefix}}{{$prefix}}/{{end}}</h1>
{{ if .HasErrors }}
<div class="alert">
Errors occurred while processing the request. Perhaps some objects are missing
</div>
{{ end }}
<table>
<thead>
<tr>
<th>Filename</th>
<th>OID</th>
<th>Size</th>
<th>Created</th>
<th>Download</th>
</tr>
</thead>
<tbody>
{{ $trimmedPrefix := trimPrefix $prefix }}
{{if $trimmedPrefix }}
<tr>
<td>
⮐<a href="/get/{{$container}}{{ urlencode $trimmedPrefix }}/">..</a>
</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
{{else}}
<tr>
<td>
⮐<a href="/get/{{$container}}/">..</a>
</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
{{end}}
{{range .Objects}}
<tr>
<td>
{{if .IsDir}}
🗀
<a href="{{.GetURL}}/">
{{.FileName}}/
</a>
{{else}}
🗎
<a href="{{ .GetURL }}">
{{.FileName}}
</a>
{{end}}
</td>
<td>{{.OID}}</td>
<td>{{if not .IsDir}}{{ formatSize .Size }}{{end}}</td>
<td>{{ .Created }}</td>
<td>
{{ if .OID }}
<a href="{{ .GetURL }}?download=true">
Link
</a>
{{ end }}
</td>
</tr>
{{end}}
</tbody>
</table>
</body>
</html>


@ -0,0 +1,6 @@
package templates
import _ "embed"
//go:embed index.gotmpl
var DefaultIndexTemplate string
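A hedged sketch of rendering this embedded template (assuming html/template, net/url, strings, and fmt); the three helper implementations are placeholders inferred from the template text, not the gateway's real functions:
tmpl := template.Must(template.New("index").Funcs(template.FuncMap{
    "trimPrefix": func(s string) string { return strings.Trim(s, "/") },   // placeholder
    "urlencode":  url.PathEscape,                                          // placeholder
    "formatSize": func(n uint64) string { return fmt.Sprintf("%d B", n) }, // placeholder
}).Parse(DefaultIndexTemplate))
err := tmpl.Execute(w, indexData) // indexData would carry Container, Prefix, Protocol, Objects, HasErrors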


@ -76,6 +76,15 @@ var appMetricsDesc = map[string]map[string]Description{
VariableLabels: []string{"endpoint"},
},
},
statisticSubsystem: {
droppedLogs: Description{
Type: dto.MetricType_COUNTER,
Namespace: namespace,
Subsystem: statisticSubsystem,
Name: droppedLogs,
Help: "Dropped logs (by sampling) count",
},
},
}
type Description struct {
@ -148,3 +157,12 @@ func mustNewGaugeVec(description Description) *prometheus.GaugeVec {
description.VariableLabels,
)
}
func mustNewCounter(description Description) prometheus.Counter {
if description.Type != dto.MetricType_COUNTER {
panic("invalid metric type")
}
return prometheus.NewCounter(
prometheus.CounterOpts(newOpts(description)),
)
}


@ -10,15 +10,17 @@ import (
)
const (
namespace = "frostfs_http_gw"
stateSubsystem = "state"
poolSubsystem = "pool"
serverSubsystem = "server"
namespace = "frostfs_http_gw"
stateSubsystem = "state"
poolSubsystem = "pool"
serverSubsystem = "server"
statisticSubsystem = "statistic"
)
const (
healthMetric = "health"
versionInfoMetric = "version_info"
droppedLogs = "dropped_logs"
)
const (
@ -30,21 +32,19 @@ const (
)
const (
methodGetBalance = "get_balance"
methodPutContainer = "put_container"
methodGetContainer = "get_container"
methodListContainer = "list_container"
methodDeleteContainer = "delete_container"
methodGetContainerEacl = "get_container_eacl"
methodSetContainerEacl = "set_container_eacl"
methodEndpointInfo = "endpoint_info"
methodNetworkInfo = "network_info"
methodPutObject = "put_object"
methodDeleteObject = "delete_object"
methodGetObject = "get_object"
methodHeadObject = "head_object"
methodRangeObject = "range_object"
methodCreateSession = "create_session"
methodGetBalance = "get_balance"
methodPutContainer = "put_container"
methodGetContainer = "get_container"
methodListContainer = "list_container"
methodDeleteContainer = "delete_container"
methodEndpointInfo = "endpoint_info"
methodNetworkInfo = "network_info"
methodPutObject = "put_object"
methodDeleteObject = "delete_object"
methodGetObject = "get_object"
methodHeadObject = "head_object"
methodRangeObject = "range_object"
methodCreateSession = "create_session"
)
// HealthStatus of the gate application.
@ -69,6 +69,7 @@ type GateMetrics struct {
stateMetrics
poolMetricsCollector
serverMetrics
statisticMetrics
}
type stateMetrics struct {
@ -76,6 +77,10 @@ type stateMetrics struct {
versionInfo *prometheus.GaugeVec
}
type statisticMetrics struct {
droppedLogs prometheus.Counter
}
type poolMetricsCollector struct {
scraper StatisticScraper
overallErrors prometheus.Gauge
@ -96,10 +101,14 @@ func NewGateMetrics(p StatisticScraper) *GateMetrics {
serverMetric := newServerMetrics()
serverMetric.register()
statsMetric := newStatisticMetrics()
statsMetric.register()
return &GateMetrics{
stateMetrics: *stateMetric,
poolMetricsCollector: *poolMetric,
serverMetrics: *serverMetric,
statisticMetrics: *statsMetric,
}
}
@ -107,6 +116,7 @@ func (g *GateMetrics) Unregister() {
g.stateMetrics.unregister()
prometheus.Unregister(&g.poolMetricsCollector)
g.serverMetrics.unregister()
g.statisticMetrics.unregister()
}
func newStateMetrics() *stateMetrics {
@ -116,6 +126,20 @@ func newStateMetrics() *stateMetrics {
}
}
func newStatisticMetrics() *statisticMetrics {
return &statisticMetrics{
droppedLogs: mustNewCounter(appMetricsDesc[statisticSubsystem][droppedLogs]),
}
}
func (s *statisticMetrics) register() {
prometheus.MustRegister(s.droppedLogs)
}
func (s *statisticMetrics) unregister() {
prometheus.Unregister(s.droppedLogs)
}
func (m stateMetrics) register() {
prometheus.MustRegister(m.healthCheck)
prometheus.MustRegister(m.versionInfo)
@ -134,6 +158,13 @@ func (m stateMetrics) SetVersion(ver string) {
m.versionInfo.WithLabelValues(ver).Set(1)
}
func (s *statisticMetrics) DroppedLogsInc() {
if s == nil {
return
}
s.droppedLogs.Inc()
}
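The counter is exposed through DroppedLogsInc, presumably fed from zap's sampler. A hedged sketch of such wiring — the tick/threshold values and the metrics variable are assumptions, only the zapcore sampler-hook API is standard:
core = zapcore.NewSamplerWithOptions(core, time.Second, 100, 100,
    zapcore.SamplerHook(func(_ zapcore.Entry, dec zapcore.SamplingDecision) {
        if dec&zapcore.LogDropped != 0 {
            metrics.DroppedLogsInc() // metrics is assumed to be a *GateMetrics
        }
    }))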
func newPoolMetricsCollector(p StatisticScraper) *poolMetricsCollector {
return &poolMetricsCollector{
scraper: p,
@ -191,8 +222,6 @@ func (m *poolMetricsCollector) updateRequestsDuration(node pool.NodeStatistic) {
m.requestDuration.WithLabelValues(node.Address(), methodGetContainer).Set(float64(node.AverageGetContainer().Milliseconds()))
m.requestDuration.WithLabelValues(node.Address(), methodListContainer).Set(float64(node.AverageListContainer().Milliseconds()))
m.requestDuration.WithLabelValues(node.Address(), methodDeleteContainer).Set(float64(node.AverageDeleteContainer().Milliseconds()))
m.requestDuration.WithLabelValues(node.Address(), methodGetContainerEacl).Set(float64(node.AverageGetContainerEACL().Milliseconds()))
m.requestDuration.WithLabelValues(node.Address(), methodSetContainerEacl).Set(float64(node.AverageSetContainerEACL().Milliseconds()))
m.requestDuration.WithLabelValues(node.Address(), methodEndpointInfo).Set(float64(node.AverageEndpointInfo().Milliseconds()))
m.requestDuration.WithLabelValues(node.Address(), methodNetworkInfo).Set(float64(node.AverageNetworkInfo().Milliseconds()))
m.requestDuration.WithLabelValues(node.Address(), methodPutObject).Set(float64(node.AveragePutObject().Milliseconds()))


@ -40,6 +40,9 @@ func (ms *Service) ShutDown(ctx context.Context) {
ms.log.Info(logs.ShuttingDownService, zap.String("endpoint", ms.Addr))
err := ms.Shutdown(ctx)
if err != nil {
ms.log.Panic(logs.CantShutDownService)
ms.log.Error(logs.CantGracefullyShutDownService, zap.Error(err))
if err = ms.Close(); err != nil {
ms.log.Panic(logs.CantShutDownService, zap.Error(err))
}
}
}


@ -1,35 +0,0 @@
package resolver
import (
"context"
"errors"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
)
// FrostFSResolver represents virtual connection to the FrostFS network.
// It implements resolver.FrostFS.
type FrostFSResolver struct {
pool *pool.Pool
}
// NewFrostFSResolver creates new FrostFSResolver using provided pool.Pool.
func NewFrostFSResolver(p *pool.Pool) *FrostFSResolver {
return &FrostFSResolver{pool: p}
}
// SystemDNS implements resolver.FrostFS interface method.
func (x *FrostFSResolver) SystemDNS(ctx context.Context) (string, error) {
networkInfo, err := x.pool.NetworkInfo(ctx)
if err != nil {
return "", fmt.Errorf("read network info via client: %w", err)
}
domain := networkInfo.RawNetworkParameter("SystemDNS")
if domain == nil {
return "", errors.New("system DNS parameter not found or empty")
}
return string(domain), nil
}


@ -6,6 +6,7 @@ import (
"fmt"
"sync"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/handler/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/ns"
@ -28,9 +29,14 @@ type FrostFS interface {
SystemDNS(context.Context) (string, error)
}
type Settings interface {
FormContainerZone(ns string) (zone string, isDefault bool)
}
type Config struct {
FrostFS FrostFS
RPCAddress string
Settings Settings
}
type ContainerResolver struct {
@ -135,29 +141,43 @@ func (r *ContainerResolver) equals(resolverNames []string) bool {
func newResolver(name string, cfg *Config) (*Resolver, error) {
switch name {
case DNSResolver:
return NewDNSResolver(cfg.FrostFS)
return NewDNSResolver(cfg.FrostFS, cfg.Settings)
case NNSResolver:
return NewNNSResolver(cfg.RPCAddress)
return NewNNSResolver(cfg.RPCAddress, cfg.Settings)
default:
return nil, fmt.Errorf("unknown resolver: %s", name)
}
}
func NewDNSResolver(frostFS FrostFS) (*Resolver, error) {
func NewDNSResolver(frostFS FrostFS, settings Settings) (*Resolver, error) {
if frostFS == nil {
return nil, fmt.Errorf("pool must not be nil for DNS resolver")
}
if settings == nil {
return nil, fmt.Errorf("resolver settings must not be nil for DNS resolver")
}
var dns ns.DNS
resolveFunc := func(ctx context.Context, name string) (*cid.ID, error) {
domain, err := frostFS.SystemDNS(ctx)
var err error
namespace, err := middleware.GetNamespace(ctx)
if err != nil {
return nil, fmt.Errorf("read system DNS parameter of the FrostFS: %w", err)
return nil, err
}
domain = name + "." + domain
zone, isDefault := settings.FormContainerZone(namespace)
if isDefault {
zone, err = frostFS.SystemDNS(ctx)
if err != nil {
return nil, fmt.Errorf("read system DNS parameter of the FrostFS: %w", err)
}
}
domain := name + "." + zone
cnrID, err := dns.ResolveContainerName(domain)
if err != nil {
return nil, fmt.Errorf("couldn't resolve container '%s' as '%s': %w", name, domain, err)
}
@ -170,17 +190,32 @@ func NewDNSResolver(frostFS FrostFS) (*Resolver, error) {
}, nil
}
func NewNNSResolver(rpcAddress string) (*Resolver, error) {
func NewNNSResolver(rpcAddress string, settings Settings) (*Resolver, error) {
if rpcAddress == "" {
return nil, fmt.Errorf("rpc address must not be empty for NNS resolver")
}
if settings == nil {
return nil, fmt.Errorf("resolver settings must not be nil for NNS resolver")
}
var nns ns.NNS
if err := nns.Dial(rpcAddress); err != nil {
return nil, fmt.Errorf("could not dial nns: %w", err)
}
resolveFunc := func(_ context.Context, name string) (*cid.ID, error) {
resolveFunc := func(ctx context.Context, name string) (*cid.ID, error) {
var d container.Domain
d.SetName(name)
namespace, err := middleware.GetNamespace(ctx)
if err != nil {
return nil, err
}
zone, _ := settings.FormContainerZone(namespace)
d.SetZone(zone)
cnrID, err := nns.ResolveContainerDomain(d)
if err != nil {
return nil, fmt.Errorf("couldn't resolve container '%s': %w", name, err)


@ -1,41 +0,0 @@
package response
import (
"errors"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
sdkstatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
)
func Error(r *fasthttp.RequestCtx, msg string, code int) {
r.Error(msg+"\n", code)
}
func FormErrorResponse(message string, err error) (int, string, []zap.Field) {
var (
msg string
statusCode int
logFields []zap.Field
)
st := new(sdkstatus.ObjectAccessDenied)
switch {
case errors.As(err, &st):
statusCode = fasthttp.StatusForbidden
reason := st.Reason()
msg = fmt.Sprintf("%s: %v: %s", message, err, reason)
logFields = append(logFields, zap.String("error_detail", reason))
case client.IsErrObjectNotFound(err) || client.IsErrContainerNotFound(err):
statusCode = fasthttp.StatusNotFound
msg = "Not Found"
default:
statusCode = fasthttp.StatusBadRequest
msg = fmt.Sprintf("%s: %v", message, err)
}
return statusCode, msg, logFields
}


@ -1,518 +0,0 @@
package main
import (
"context"
"encoding/hex"
"fmt"
"os"
"path"
"runtime"
"sort"
"strconv"
"strings"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
grpctracing "git.frostfs.info/TrueCloudLab/frostfs-observability/tracing/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
treepool "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool/tree"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/spf13/pflag"
"github.com/spf13/viper"
"github.com/valyala/fasthttp"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
"google.golang.org/grpc"
)
const (
defaultRebalanceTimer = 60 * time.Second
defaultRequestTimeout = 15 * time.Second
defaultConnectTimeout = 10 * time.Second
defaultStreamTimeout = 10 * time.Second
defaultShutdownTimeout = 15 * time.Second
defaultPoolErrorThreshold uint32 = 100
cfgServer = "server"
cfgTLSEnabled = "tls.enabled"
cfgTLSCertFile = "tls.cert_file"
cfgTLSKeyFile = "tls.key_file"
// Web.
cfgWebReadBufferSize = "web.read_buffer_size"
cfgWebWriteBufferSize = "web.write_buffer_size"
cfgWebReadTimeout = "web.read_timeout"
cfgWebWriteTimeout = "web.write_timeout"
cfgWebStreamRequestBody = "web.stream_request_body"
cfgWebMaxRequestBodySize = "web.max_request_body_size"
// Metrics / Profiler.
cfgPrometheusEnabled = "prometheus.enabled"
cfgPrometheusAddress = "prometheus.address"
cfgPprofEnabled = "pprof.enabled"
cfgPprofAddress = "pprof.address"
// Tracing ...
cfgTracingEnabled = "tracing.enabled"
cfgTracingExporter = "tracing.exporter"
cfgTracingEndpoint = "tracing.endpoint"
// Pool config.
cfgConTimeout = "connect_timeout"
cfgStreamTimeout = "stream_timeout"
cfgReqTimeout = "request_timeout"
cfgRebalance = "rebalance_timer"
cfgPoolErrorThreshold = "pool_error_threshold"
// Logger.
cfgLoggerLevel = "logger.level"
// Wallet.
cfgWalletPassphrase = "wallet.passphrase"
cfgWalletPath = "wallet.path"
cfgWalletAddress = "wallet.address"
// Uploader Header.
cfgUploaderHeaderEnableDefaultTimestamp = "upload_header.use_default_timestamp"
// Peers.
cfgPeers = "peers"
// NeoGo.
cfgRPCEndpoint = "rpc_endpoint"
// Resolving.
cfgResolveOrder = "resolve_order"
// Zip compression.
cfgZipCompression = "zip.compression"
// Command line args.
cmdHelp = "help"
cmdVersion = "version"
cmdPprof = "pprof"
cmdMetrics = "metrics"
cmdWallet = "wallet"
cmdAddress = "address"
cmdConfig = "config"
cmdConfigDir = "config-dir"
cmdListenAddress = "listen_address"
)
var ignore = map[string]struct{}{
cfgPeers: {},
cmdHelp: {},
cmdVersion: {},
}
func settings() *viper.Viper {
v := viper.New()
v.AutomaticEnv()
v.SetEnvPrefix(Prefix)
v.AllowEmptyEnv(true)
v.SetConfigType("yaml")
v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
// flags setup:
flags := pflag.NewFlagSet("commandline", pflag.ExitOnError)
flags.SetOutput(os.Stdout)
flags.SortFlags = false
flags.Bool(cmdPprof, false, "enable pprof")
flags.Bool(cmdMetrics, false, "enable prometheus")
help := flags.BoolP(cmdHelp, "h", false, "show help")
version := flags.BoolP(cmdVersion, "v", false, "show version")
flags.StringP(cmdWallet, "w", "", `path to the wallet`)
flags.String(cmdAddress, "", `address of wallet account`)
flags.StringArray(cmdConfig, nil, "config paths")
flags.String(cmdConfigDir, "", "config dir path")
flags.Duration(cfgConTimeout, defaultConnectTimeout, "gRPC connect timeout")
flags.Duration(cfgStreamTimeout, defaultStreamTimeout, "gRPC individual message timeout")
flags.Duration(cfgReqTimeout, defaultRequestTimeout, "gRPC request timeout")
flags.Duration(cfgRebalance, defaultRebalanceTimer, "gRPC connection rebalance timer")
flags.String(cmdListenAddress, "0.0.0.0:8080", "addresses to listen")
flags.String(cfgTLSCertFile, "", "TLS certificate path")
flags.String(cfgTLSKeyFile, "", "TLS key path")
peers := flags.StringArrayP(cfgPeers, "p", nil, "FrostFS nodes")
resolveMethods := flags.StringSlice(cfgResolveOrder, []string{resolver.NNSResolver, resolver.DNSResolver}, "set container name resolve order")
// set defaults:
// logger:
v.SetDefault(cfgLoggerLevel, "debug")
// pool:
v.SetDefault(cfgPoolErrorThreshold, defaultPoolErrorThreshold)
// web-server:
v.SetDefault(cfgWebReadBufferSize, 4096)
v.SetDefault(cfgWebWriteBufferSize, 4096)
v.SetDefault(cfgWebReadTimeout, time.Minute*10)
v.SetDefault(cfgWebWriteTimeout, time.Minute*5)
v.SetDefault(cfgWebStreamRequestBody, true)
v.SetDefault(cfgWebMaxRequestBodySize, fasthttp.DefaultMaxRequestBodySize)
// upload header
v.SetDefault(cfgUploaderHeaderEnableDefaultTimestamp, false)
// zip:
v.SetDefault(cfgZipCompression, false)
// metrics
v.SetDefault(cfgPprofAddress, "localhost:8083")
v.SetDefault(cfgPrometheusAddress, "localhost:8084")
// Binding flags
if err := v.BindPFlag(cfgPprofEnabled, flags.Lookup(cmdPprof)); err != nil {
panic(err)
}
if err := v.BindPFlag(cfgPrometheusEnabled, flags.Lookup(cmdMetrics)); err != nil {
panic(err)
}
if err := v.BindPFlag(cfgWalletPath, flags.Lookup(cmdWallet)); err != nil {
panic(err)
}
if err := v.BindPFlag(cfgWalletAddress, flags.Lookup(cmdAddress)); err != nil {
panic(err)
}
if err := v.BindPFlags(flags); err != nil {
panic(err)
}
if err := v.BindPFlag(cfgServer+".0.address", flags.Lookup(cmdListenAddress)); err != nil {
panic(err)
}
if err := v.BindPFlag(cfgServer+".0."+cfgTLSKeyFile, flags.Lookup(cfgTLSKeyFile)); err != nil {
panic(err)
}
if err := v.BindPFlag(cfgServer+".0."+cfgTLSCertFile, flags.Lookup(cfgTLSCertFile)); err != nil {
panic(err)
}
if err := flags.Parse(os.Args); err != nil {
panic(err)
}
if v.IsSet(cfgServer+".0."+cfgTLSKeyFile) && v.IsSet(cfgServer+".0."+cfgTLSCertFile) {
v.Set(cfgServer+".0."+cfgTLSEnabled, true)
}
if resolveMethods != nil {
v.SetDefault(cfgResolveOrder, *resolveMethods)
}
switch {
case help != nil && *help:
fmt.Printf("FrostFS HTTP Gateway %s\n", Version)
flags.PrintDefaults()
fmt.Println()
fmt.Println("Default environments:")
fmt.Println()
keys := v.AllKeys()
sort.Strings(keys)
for i := range keys {
if _, ok := ignore[keys[i]]; ok {
continue
}
defaultValue := v.GetString(keys[i])
if len(defaultValue) == 0 {
continue
}
k := strings.Replace(keys[i], ".", "_", -1)
fmt.Printf("%s_%s = %s\n", Prefix, strings.ToUpper(k), defaultValue)
}
fmt.Println()
fmt.Println("Peers preset:")
fmt.Println()
fmt.Printf("%s_%s_[N]_ADDRESS = string\n", Prefix, strings.ToUpper(cfgPeers))
fmt.Printf("%s_%s_[N]_WEIGHT = float\n", Prefix, strings.ToUpper(cfgPeers))
os.Exit(0)
case version != nil && *version:
fmt.Printf("FrostFS HTTP Gateway\nVersion: %s\nGoVersion: %s\n", Version, runtime.Version())
os.Exit(0)
}
if err := readInConfig(v); err != nil {
panic(err)
}
if peers != nil && len(*peers) > 0 {
for i := range *peers {
v.SetDefault(cfgPeers+"."+strconv.Itoa(i)+".address", (*peers)[i])
v.SetDefault(cfgPeers+"."+strconv.Itoa(i)+".weight", 1)
v.SetDefault(cfgPeers+"."+strconv.Itoa(i)+".priority", 1)
}
}
return v
}
func readInConfig(v *viper.Viper) error {
if v.IsSet(cmdConfig) {
if err := readConfig(v); err != nil {
return err
}
}
if v.IsSet(cmdConfigDir) {
if err := readConfigDir(v); err != nil {
return err
}
}
return nil
}
func readConfigDir(v *viper.Viper) error {
cfgSubConfigDir := v.GetString(cmdConfigDir)
entries, err := os.ReadDir(cfgSubConfigDir)
if err != nil {
return err
}
for _, entry := range entries {
if entry.IsDir() {
continue
}
ext := path.Ext(entry.Name())
if ext != ".yaml" && ext != ".yml" {
continue
}
if err = mergeConfig(v, path.Join(cfgSubConfigDir, entry.Name())); err != nil {
return err
}
}
return nil
}
func readConfig(v *viper.Viper) error {
for _, fileName := range v.GetStringSlice(cmdConfig) {
if err := mergeConfig(v, fileName); err != nil {
return err
}
}
return nil
}
func mergeConfig(v *viper.Viper, fileName string) error {
cfgFile, err := os.Open(fileName)
if err != nil {
return err
}
defer func() {
if errClose := cfgFile.Close(); errClose != nil {
panic(errClose)
}
}()
return v.MergeConfig(cfgFile)
}
// newLogger constructs a zap.Logger instance for current application.
// Panics on failure.
//
// Logger is built from zap's production logging configuration with:
// - parameterized level (debug by default)
// - console encoding
// - ISO8601 time encoding
//
// Logger records a stack trace for all messages at or above fatal level.
//
// See also zapcore.Level, zap.NewProductionConfig, zap.AddStacktrace.
func newLogger(v *viper.Viper) (*zap.Logger, zap.AtomicLevel) {
lvl, err := getLogLevel(v)
if err != nil {
panic(err)
}
c := zap.NewProductionConfig()
c.Level = zap.NewAtomicLevelAt(lvl)
c.Encoding = "console"
c.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
l, err := c.Build(
zap.AddStacktrace(zap.NewAtomicLevelAt(zap.FatalLevel)),
)
if err != nil {
panic(fmt.Sprintf("build zap logger instance: %v", err))
}
return l, c.Level
}
func getLogLevel(v *viper.Viper) (zapcore.Level, error) {
var lvl zapcore.Level
lvlStr := v.GetString(cfgLoggerLevel)
err := lvl.UnmarshalText([]byte(lvlStr))
if err != nil {
return lvl, fmt.Errorf("incorrect logger level configuration %s (%v), "+
"value should be one of %v", lvlStr, err, [...]zapcore.Level{
zapcore.DebugLevel,
zapcore.InfoLevel,
zapcore.WarnLevel,
zapcore.ErrorLevel,
zapcore.DPanicLevel,
zapcore.PanicLevel,
zapcore.FatalLevel,
})
}
return lvl, nil
}
func fetchServers(v *viper.Viper) []ServerInfo {
var servers []ServerInfo
for i := 0; ; i++ {
key := cfgServer + "." + strconv.Itoa(i) + "."
var serverInfo ServerInfo
serverInfo.Address = v.GetString(key + "address")
serverInfo.TLS.Enabled = v.GetBool(key + cfgTLSEnabled)
serverInfo.TLS.KeyFile = v.GetString(key + cfgTLSKeyFile)
serverInfo.TLS.CertFile = v.GetString(key + cfgTLSCertFile)
if serverInfo.Address == "" {
break
}
servers = append(servers, serverInfo)
}
return servers
}
func getPools(ctx context.Context, logger *zap.Logger, cfg *viper.Viper) (*pool.Pool, *treepool.Pool, *keys.PrivateKey) {
key, err := getFrostFSKey(cfg, logger)
if err != nil {
logger.Fatal(logs.CouldNotLoadFrostFSPrivateKey, zap.Error(err))
}
var prm pool.InitParameters
var prmTree treepool.InitParameters
prm.SetKey(&key.PrivateKey)
prmTree.SetKey(key)
logger.Info(logs.UsingCredentials, zap.String("FrostFS", hex.EncodeToString(key.PublicKey().Bytes())))
for _, peer := range fetchPeers(logger, cfg) {
prm.AddNode(peer)
prmTree.AddNode(peer)
}
connTimeout := cfg.GetDuration(cfgConTimeout)
if connTimeout <= 0 {
connTimeout = defaultConnectTimeout
}
prm.SetNodeDialTimeout(connTimeout)
prmTree.SetNodeDialTimeout(connTimeout)
streamTimeout := cfg.GetDuration(cfgStreamTimeout)
if streamTimeout <= 0 {
streamTimeout = defaultStreamTimeout
}
prm.SetNodeStreamTimeout(streamTimeout)
prmTree.SetNodeStreamTimeout(streamTimeout)
healthCheckTimeout := cfg.GetDuration(cfgReqTimeout)
if healthCheckTimeout <= 0 {
healthCheckTimeout = defaultRequestTimeout
}
prm.SetHealthcheckTimeout(healthCheckTimeout)
prmTree.SetHealthcheckTimeout(healthCheckTimeout)
rebalanceInterval := cfg.GetDuration(cfgRebalance)
if rebalanceInterval <= 0 {
rebalanceInterval = defaultRebalanceTimer
}
prm.SetClientRebalanceInterval(rebalanceInterval)
prmTree.SetClientRebalanceInterval(rebalanceInterval)
errorThreshold := cfg.GetUint32(cfgPoolErrorThreshold)
if errorThreshold <= 0 {
errorThreshold = defaultPoolErrorThreshold
}
prm.SetErrorThreshold(errorThreshold)
prm.SetLogger(logger)
prmTree.SetLogger(logger)
var apiGRPCDialOpts []grpc.DialOption
var treeGRPCDialOpts []grpc.DialOption
if cfg.GetBool(cfgTracingEnabled) {
interceptors := []grpc.DialOption{
grpc.WithUnaryInterceptor(grpctracing.NewUnaryClientInteceptor()),
grpc.WithStreamInterceptor(grpctracing.NewStreamClientInterceptor()),
}
treeGRPCDialOpts = append(treeGRPCDialOpts, interceptors...)
apiGRPCDialOpts = append(apiGRPCDialOpts, interceptors...)
}
prm.SetGRPCDialOptions(apiGRPCDialOpts...)
prmTree.SetGRPCDialOptions(treeGRPCDialOpts...)
p, err := pool.NewPool(prm)
if err != nil {
logger.Fatal(logs.FailedToCreateConnectionPool, zap.Error(err))
}
if err = p.Dial(ctx); err != nil {
logger.Fatal(logs.FailedToDialConnectionPool, zap.Error(err))
}
treePool, err := treepool.NewPool(prmTree)
if err != nil {
logger.Fatal(logs.FailedToCreateTreePool, zap.Error(err))
}
if err = treePool.Dial(ctx); err != nil {
logger.Fatal(logs.FailedToDialTreePool, zap.Error(err))
}
return p, treePool, key
}
func fetchPeers(l *zap.Logger, v *viper.Viper) []pool.NodeParam {
var nodes []pool.NodeParam
for i := 0; ; i++ {
key := cfgPeers + "." + strconv.Itoa(i) + "."
address := v.GetString(key + "address")
weight := v.GetFloat64(key + "weight")
priority := v.GetInt(key + "priority")
if address == "" {
break
}
if weight <= 0 { // unspecified or wrong
weight = 1
}
if priority <= 0 { // unspecified or wrong
priority = 1
}
nodes = append(nodes, pool.NewNodeParam(priority, address, weight))
l.Info(logs.AddedStoragePeer,
zap.Int("priority", priority),
zap.String("address", address),
zap.Float64("weight", weight))
}
return nodes
}


@ -52,8 +52,8 @@ func BearerTokenFromCookie(h *fasthttp.RequestHeader) []byte {
// StoreBearerTokenAppCtx extracts a bearer token from the header or cookie and stores
// it in the application context.
func StoreBearerTokenAppCtx(ctx context.Context, req *fasthttp.RequestCtx) (context.Context, error) {
tkn, err := fetchBearerToken(req)
func StoreBearerTokenAppCtx(ctx context.Context, c *fasthttp.RequestCtx) (context.Context, error) {
tkn, err := fetchBearerToken(c)
if err != nil {
return nil, err
}
@ -82,14 +82,22 @@ func fetchBearerToken(ctx *fasthttp.RequestCtx) (*bearer.Token, error) {
tkn = new(bearer.Token)
)
for _, parse := range []fromHandler{BearerTokenFromHeader, BearerTokenFromCookie} {
if buf = parse(&ctx.Request.Header); buf == nil {
buf = parse(&ctx.Request.Header)
if buf == nil {
continue
} else if data, err := base64.StdEncoding.DecodeString(string(buf)); err != nil {
}
data, err := base64.StdEncoding.DecodeString(string(buf))
if err != nil {
lastErr = fmt.Errorf("can't base64-decode bearer token: %w", err)
continue
} else if err = tkn.Unmarshal(data); err != nil {
lastErr = fmt.Errorf("can't unmarshal bearer token: %w", err)
continue
}
if err = tkn.Unmarshal(data); err != nil {
if err = tkn.UnmarshalJSON(data); err != nil {
lastErr = fmt.Errorf("can't unmarshal bearer token: %w", err)
continue
}
}
return tkn, nil
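With this fallback, both encodings of a bearer token round-trip, as the tests below exercise. For example:
binTok := base64.StdEncoding.EncodeToString(tkn.Marshal()) // binary encoding
jsonRaw, _ := tkn.MarshalJSON()
jsonTok := base64.StdEncoding.EncodeToString(jsonRaw) // JSON encoding
// both binTok and jsonTok are now accepted in the header or cookie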


@ -23,19 +23,29 @@ func makeTestCookie(value []byte) *fasthttp.RequestHeader {
func makeTestHeader(value []byte) *fasthttp.RequestHeader {
header := new(fasthttp.RequestHeader)
if value != nil {
header.Set(fasthttp.HeaderAuthorization, bearerTokenHdr+" "+string(value))
header.Set(fasthttp.HeaderAuthorization, string(value))
}
return header
}
func Test_fromCookie(t *testing.T) {
func makeBearer(value string) string {
return bearerTokenHdr + " " + value
}
func TestBearerTokenFromCookie(t *testing.T) {
cases := []struct {
name string
actual []byte
expect []byte
}{
{name: "empty"},
{name: "normal", actual: []byte("TOKEN"), expect: []byte("TOKEN")},
{
name: "empty",
},
{
name: "normal",
actual: []byte("TOKEN"),
expect: []byte("TOKEN"),
},
}
for _, tt := range cases {
@ -45,14 +55,31 @@ func Test_fromCookie(t *testing.T) {
}
}
func Test_fromHeader(t *testing.T) {
func TestBearerTokenFromHeader(t *testing.T) {
validToken := "token"
tokenWithoutPrefix := "invalid-token"
cases := []struct {
name string
actual []byte
expect []byte
}{
{name: "empty"},
{name: "normal", actual: []byte("TOKEN"), expect: []byte("TOKEN")},
{
name: "empty",
},
{
name: "token without the bearer prefix",
actual: []byte(tokenWithoutPrefix),
},
{
name: "token without payload",
actual: []byte(makeBearer("")),
},
{
name: "normal",
actual: []byte(makeBearer(validToken)),
expect: []byte(validToken),
},
}
for _, tt := range cases {
@ -62,7 +89,7 @@ func Test_fromHeader(t *testing.T) {
}
}
func Test_fetchBearerToken(t *testing.T) {
func TestFetchBearerToken(t *testing.T) {
key, err := keys.NewPrivateKey()
require.NoError(t, err)
var uid user.ID
@ -71,47 +98,109 @@ func Test_fetchBearerToken(t *testing.T) {
tkn := new(bearer.Token)
tkn.ForUser(uid)
t64 := base64.StdEncoding.EncodeToString(tkn.Marshal())
require.NotEmpty(t, t64)
jsonToken, err := tkn.MarshalJSON()
require.NoError(t, err)
jsonTokenBase64 := base64.StdEncoding.EncodeToString(jsonToken)
binaryTokenBase64 := base64.StdEncoding.EncodeToString(tkn.Marshal())
require.NotEmpty(t, jsonTokenBase64)
require.NotEmpty(t, binaryTokenBase64)
cases := []struct {
name string
name string
cookie string
header string
error string
nilCtx bool
expect *bearer.Token
}{
{name: "empty"},
{name: "bad base64 header", header: "WRONG BASE64", error: "can't base64-decode bearer token"},
{name: "bad base64 cookie", cookie: "WRONG BASE64", error: "can't base64-decode bearer token"},
{name: "header token unmarshal error", header: "dGVzdAo=", error: "can't unmarshal bearer token"},
{name: "cookie token unmarshal error", cookie: "dGVzdAo=", error: "can't unmarshal bearer token"},
{
name: "empty",
},
{
name: "nil context",
nilCtx: true,
},
{
name: "bad base64 header",
header: "WRONG BASE64",
error: "can't base64-decode bearer token",
},
{
name: "bad base64 cookie",
cookie: "WRONG BASE64",
error: "can't base64-decode bearer token",
},
{
name: "header token unmarshal error",
header: "dGVzdAo=",
error: "can't unmarshal bearer token",
},
{
name: "cookie token unmarshal error",
cookie: "dGVzdAo=",
error: "can't unmarshal bearer token",
},
{
name: "bad header and cookie",
header: "WRONG BASE64",
cookie: "dGVzdAo=",
error: "can't unmarshal bearer token",
},
{
name: "bad header, but good cookie",
name: "bad header, but good cookie with binary token",
header: "dGVzdAo=",
cookie: t64,
cookie: binaryTokenBase64,
expect: tkn,
},
{
name: "bad cookie, but good header with binary token",
header: binaryTokenBase64,
cookie: "dGVzdAo=",
expect: tkn,
},
{
name: "bad header, but good cookie with json token",
header: "dGVzdAo=",
cookie: jsonTokenBase64,
expect: tkn,
},
{
name: "bad cookie, but good header with json token",
header: jsonTokenBase64,
cookie: "dGVzdAo=",
expect: tkn,
},
{
name: "ok for header with binary token",
header: binaryTokenBase64,
expect: tkn,
},
{
name: "ok for cookie with binary token",
cookie: binaryTokenBase64,
expect: tkn,
},
{
name: "ok for header with json token",
header: jsonTokenBase64,
expect: tkn,
},
{
name: "ok for cookie with json token",
cookie: jsonTokenBase64,
expect: tkn,
},
{name: "ok for header", header: t64, expect: tkn},
{name: "ok for cookie", cookie: t64, expect: tkn},
}
for _, tt := range cases {
t.Run(tt.name, func(t *testing.T) {
ctx := makeTestRequest(tt.cookie, tt.header)
var ctx *fasthttp.RequestCtx
if !tt.nilCtx {
ctx = makeTestRequest(tt.cookie, tt.header)
}
actual, err := fetchBearerToken(ctx)
if tt.error == "" {
@ -139,7 +228,7 @@ func makeTestRequest(cookie, header string) *fasthttp.RequestCtx {
return ctx
}
func Test_checkAndPropagateBearerToken(t *testing.T) {
func TestCheckAndPropagateBearerToken(t *testing.T) {
key, err := keys.NewPrivateKey()
require.NoError(t, err)
var uid user.ID
@ -162,3 +251,85 @@ func Test_checkAndPropagateBearerToken(t *testing.T) {
require.NoError(t, err)
require.Equal(t, tkn, actual)
}
func TestLoadBearerToken(t *testing.T) {
ctx := context.Background()
token := new(bearer.Token)
cases := []struct {
name string
appCtx context.Context
error string
}{
{
name: "token is missing in the context",
appCtx: ctx,
error: "found empty bearer token",
},
{
name: "normal",
appCtx: context.WithValue(ctx, bearerTokenKey, token),
},
}
for _, tt := range cases {
t.Run(tt.name, func(t *testing.T) {
tkn, err := LoadBearerToken(tt.appCtx)
if tt.error == "" {
require.NoError(t, err)
require.Equal(t, token, tkn)
return
}
require.Contains(t, err.Error(), tt.error)
})
}
}
func TestStoreBearerTokenAppCtx(t *testing.T) {
key, err := keys.NewPrivateKey()
require.NoError(t, err)
var uid user.ID
user.IDFromKey(&uid, key.PrivateKey.PublicKey)
tkn := new(bearer.Token)
tkn.ForUser(uid)
t64 := base64.StdEncoding.EncodeToString(tkn.Marshal())
require.NotEmpty(t, t64)
cases := []struct {
name string
req *fasthttp.RequestCtx
error string
}{
{
name: "invalid token",
req: makeTestRequest("dGVzdAo=", ""),
error: "can't unmarshal bearer token",
},
{
name: "normal",
req: makeTestRequest(t64, ""),
},
}
for _, tt := range cases {
t.Run(tt.name, func(t *testing.T) {
ctx, err := StoreBearerTokenAppCtx(context.Background(), tt.req)
if tt.error == "" {
require.NoError(t, err)
actualToken, ok := ctx.Value(bearerTokenKey).(*bearer.Token)
require.True(t, ok)
require.Equal(t, tkn, actualToken)
return
}
require.Contains(t, err.Error(), tt.error)
})
}
}


@ -2,11 +2,12 @@ package tree
import (
"context"
"errors"
"fmt"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/data"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/layer"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
)
@ -20,6 +21,7 @@ type (
// Each method must return ErrNodeNotFound or ErrNodeAccessDenied if relevant.
ServiceClient interface {
GetNodes(ctx context.Context, p *GetNodesParams) ([]NodeResponse, error)
GetSubTree(ctx context.Context, bktInfo *data.BucketInfo, treeID string, rootID []uint64, depth uint32, sort bool) ([]NodeResponse, error)
}
treeNode struct {
@ -27,8 +29,14 @@ type (
Meta map[string]string
}
multiSystemNode struct {
// the first element is latest
nodes []*treeNode
}
GetNodesParams struct {
CnrID cid.ID
BktInfo *data.BucketInfo
TreeID string
Path []string
Meta []string
@ -46,17 +54,19 @@ var (
)
const (
FileNameKey = "FileName"
)
FileNameKey = "FileName"
settingsFileName = "bucket-settings"
const (
oidKV = "OID"
oidKV = "OID"
uploadIDKV = "UploadId"
sizeKV = "Size"
// keys for delete marker nodes.
isDeleteMarkerKV = "IsDeleteMarker"
// versionTree -- ID of a tree with object versions.
versionTree = "version"
systemTree = "system"
separator = "/"
)
@ -73,25 +83,28 @@ type Meta interface {
type NodeResponse interface {
GetMeta() []Meta
GetTimestamp() []uint64
GetNodeID() []uint64
GetParentID() []uint64
}
func newTreeNode(nodeInfo NodeResponse) (*treeNode, error) {
treeNode := &treeNode{
tNode := &treeNode{
Meta: make(map[string]string, len(nodeInfo.GetMeta())),
}
for _, kv := range nodeInfo.GetMeta() {
switch kv.GetKey() {
case oidKV:
if err := treeNode.ObjID.DecodeString(string(kv.GetValue())); err != nil {
if err := tNode.ObjID.DecodeString(string(kv.GetValue())); err != nil {
return nil, err
}
default:
treeNode.Meta[kv.GetKey()] = string(kv.GetValue())
tNode.Meta[kv.GetKey()] = string(kv.GetValue())
}
}
return treeNode, nil
return tNode, nil
}
func (n *treeNode) Get(key string) (string, bool) {
@ -104,30 +117,94 @@ func (n *treeNode) FileName() (string, bool) {
return value, ok
}
func newNodeVersion(node NodeResponse) (*api.NodeVersion, error) {
treeNode, err := newTreeNode(node)
func newNodeVersion(node NodeResponse) (*data.NodeVersion, error) {
tNode, err := newTreeNode(node)
if err != nil {
return nil, fmt.Errorf("invalid tree node: %w", err)
}
return newNodeVersionFromTreeNode(treeNode), nil
return newNodeVersionFromTreeNode(tNode), nil
}
func newNodeVersionFromTreeNode(treeNode *treeNode) *api.NodeVersion {
func newNodeVersionFromTreeNode(treeNode *treeNode) *data.NodeVersion {
_, isDeleteMarker := treeNode.Get(isDeleteMarkerKV)
version := &api.NodeVersion{
BaseNodeVersion: api.BaseNodeVersion{
OID: treeNode.ObjID,
version := &data.NodeVersion{
BaseNodeVersion: data.BaseNodeVersion{
OID: treeNode.ObjID,
IsDeleteMarker: isDeleteMarker,
},
DeleteMarker: isDeleteMarker,
}
return version
}
func (c *Tree) GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*api.NodeVersion, error) {
meta := []string{oidKV, isDeleteMarkerKV}
func newNodeInfo(node NodeResponse) data.NodeInfo {
nodeMeta := node.GetMeta()
nodeInfo := data.NodeInfo{
Meta: make([]data.NodeMeta, 0, len(nodeMeta)),
}
for _, meta := range nodeMeta {
nodeInfo.Meta = append(nodeInfo.Meta, meta)
}
return nodeInfo
}
func newMultiNode(nodes []NodeResponse) (*multiSystemNode, error) {
var (
err error
index int
maxTimestamp uint64
)
if len(nodes) == 0 {
return nil, errors.New("multi node must have at least one node")
}
treeNodes := make([]*treeNode, len(nodes))
for i, node := range nodes {
if treeNodes[i], err = newTreeNode(node); err != nil {
return nil, fmt.Errorf("parse system node response: %w", err)
}
if timestamp := getMaxTimestamp(node); timestamp > maxTimestamp {
index = i
maxTimestamp = timestamp
}
}
treeNodes[0], treeNodes[index] = treeNodes[index], treeNodes[0]
return &multiSystemNode{
nodes: treeNodes,
}, nil
}
func (m *multiSystemNode) Latest() *treeNode {
return m.nodes[0]
}
func (m *multiSystemNode) Old() []*treeNode {
return m.nodes[1:]
}
func (c *Tree) GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName string) (*data.NodeVersion, error) {
nodes, err := c.GetVersions(ctx, cnrID, objectName)
if err != nil {
return nil, err
}
latestNode, err := getLatestVersionNode(nodes)
if err != nil {
return nil, err
}
return newNodeVersion(latestNode)
}
func (c *Tree) GetVersions(ctx context.Context, cnrID *cid.ID, objectName string) ([]NodeResponse, error) {
meta := []string{oidKV, isDeleteMarkerKV, sizeKV}
path := pathFromName(objectName)
p := &GetNodesParams{
@ -135,22 +212,248 @@ func (c *Tree) GetLatestVersion(ctx context.Context, cnrID *cid.ID, objectName s
TreeID: versionTree,
Path: path,
Meta: meta,
LatestOnly: true,
LatestOnly: false,
AllAttrs: false,
}
return c.service.GetNodes(ctx, p)
}
func (c *Tree) CheckSettingsNodeExists(ctx context.Context, bktInfo *data.BucketInfo) error {
_, err := c.getSystemNode(ctx, bktInfo, settingsFileName)
if err != nil {
return err
}
return nil
}
func (c *Tree) getSystemNode(ctx context.Context, bktInfo *data.BucketInfo, name string) (*multiSystemNode, error) {
p := &GetNodesParams{
CnrID: bktInfo.CID,
BktInfo: bktInfo,
TreeID: systemTree,
Path: []string{name},
LatestOnly: false,
AllAttrs: true,
}
nodes, err := c.service.GetNodes(ctx, p)
if err != nil {
return nil, err
}
nodes = filterMultipartNodes(nodes)
if len(nodes) == 0 {
return nil, layer.ErrNodeNotFound
}
return newNodeVersion(nodes[0])
return newMultiNode(nodes)
}
func filterMultipartNodes(nodes []NodeResponse) []NodeResponse {
res := make([]NodeResponse, 0, len(nodes))
LOOP:
for _, node := range nodes {
for _, meta := range node.GetMeta() {
if meta.GetKey() == uploadIDKV {
continue LOOP
}
}
res = append(res, node)
}
return res
}
func getLatestVersionNode(nodes []NodeResponse) (NodeResponse, error) {
var (
maxCreationTime uint64
targetIndexNode = -1
)
for i, node := range nodes {
if !checkExistOID(node.GetMeta()) {
continue
}
if currentCreationTime := getMaxTimestamp(node); currentCreationTime > maxCreationTime {
targetIndexNode = i
maxCreationTime = currentCreationTime
}
}
if targetIndexNode == -1 {
return nil, layer.ErrNodeNotFound
}
return nodes[targetIndexNode], nil
}
func checkExistOID(meta []Meta) bool {
for _, kv := range meta {
if kv.GetKey() == "OID" {
return true
}
}
return false
}
// pathFromName splits name by '/'.
func pathFromName(objectName string) []string {
return strings.Split(objectName, separator)
}
func (c *Tree) GetSubTreeByPrefix(ctx context.Context, bktInfo *data.BucketInfo, prefix string, latestOnly bool) ([]data.NodeInfo, string, error) {
rootID, tailPrefix, err := c.determinePrefixNode(ctx, bktInfo, versionTree, prefix)
if err != nil {
return nil, "", err
}
subTree, err := c.service.GetSubTree(ctx, bktInfo, versionTree, rootID, 2, false)
if err != nil {
if errors.Is(err, ErrNodeNotFound) {
return nil, "", nil
}
return nil, "", err
}
nodesMap := make(map[string][]NodeResponse, len(subTree))
for _, node := range subTree {
if MultiID(rootID).Equal(node.GetNodeID()) {
continue
}
fileName := GetFilename(node)
if !strings.HasPrefix(fileName, tailPrefix) {
continue
}
nodes := nodesMap[fileName]
// If latestOnly is false, add all nodes.
// Otherwise add all intermediate nodes but only the latest leaf (object)
// node: the latest leaf seen so far is stored in nodes[0] and replaced
// when a newer one is found.
if len(nodes) == 0 {
nodes = []NodeResponse{node}
} else if !latestOnly || isIntermediate(node) {
nodes = append(nodes, node)
} else if isIntermediate(nodes[0]) {
nodes = append([]NodeResponse{node}, nodes...)
} else if getMaxTimestamp(node) > getMaxTimestamp(nodes[0]) {
nodes[0] = node
}
nodesMap[fileName] = nodes
}
result := make([]data.NodeInfo, 0, len(subTree))
for _, nodes := range nodesMap {
result = append(result, nodeResponseToNodeInfo(nodes)...)
}
return result, strings.TrimSuffix(prefix, tailPrefix), nil
}
func nodeResponseToNodeInfo(nodes []NodeResponse) []data.NodeInfo {
nodesInfo := make([]data.NodeInfo, 0, len(nodes))
for _, node := range nodes {
nodesInfo = append(nodesInfo, newNodeInfo(node))
}
return nodesInfo
}
func (c *Tree) determinePrefixNode(ctx context.Context, bktInfo *data.BucketInfo, treeID, prefix string) ([]uint64, string, error) {
rootID := []uint64{0}
path := strings.Split(prefix, separator)
tailPrefix := path[len(path)-1]
if len(path) > 1 {
var err error
rootID, err = c.getPrefixNodeID(ctx, bktInfo, treeID, path[:len(path)-1])
if err != nil {
return nil, "", err
}
}
return rootID, tailPrefix, nil
}
func (c *Tree) getPrefixNodeID(ctx context.Context, bktInfo *data.BucketInfo, treeID string, prefixPath []string) ([]uint64, error) {
p := &GetNodesParams{
CnrID: bktInfo.CID,
BktInfo: bktInfo,
TreeID: treeID,
Path: prefixPath,
LatestOnly: false,
AllAttrs: true,
}
nodes, err := c.service.GetNodes(ctx, p)
if err != nil {
return nil, err
}
var intermediateNodes []uint64
for _, node := range nodes {
if isIntermediate(node) {
intermediateNodes = append(intermediateNodes, node.GetNodeID()...)
}
}
if len(intermediateNodes) == 0 {
return nil, layer.ErrNodeNotFound
}
return intermediateNodes, nil
}
func GetFilename(node NodeResponse) string {
for _, kv := range node.GetMeta() {
if kv.GetKey() == FileNameKey {
return string(kv.GetValue())
}
}
return ""
}
func isIntermediate(node NodeResponse) bool {
if len(node.GetMeta()) != 1 {
return false
}
return node.GetMeta()[0].GetKey() == FileNameKey
}
func getMaxTimestamp(node NodeResponse) uint64 {
var maxTimestamp uint64
for _, timestamp := range node.GetTimestamp() {
if timestamp > maxTimestamp {
maxTimestamp = timestamp
}
}
return maxTimestamp
}
type MultiID []uint64
func (m MultiID) Equal(id MultiID) bool {
seen := make(map[uint64]struct{}, len(m))
for i := range m {
seen[m[i]] = struct{}{}
}
for i := range id {
if _, ok := seen[id[i]]; !ok {
return false
}
}
return true
}
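Note that Equal is set-like and one-directional: it only checks that every element of the argument occurs in the receiver, ignoring order and multiplicity. For instance:
MultiID{1, 2}.Equal(MultiID{2, 1}) // true: order is ignored
MultiID{1, 2}.Equal(MultiID{1})    // true: every element of the argument is present
MultiID{1}.Equal(MultiID{1, 2})    // false: 2 is missing from the receiver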

tree/tree_test.go (new file, +150 lines)

@ -0,0 +1,150 @@
package tree
import (
"testing"
"github.com/stretchr/testify/require"
)
type nodeMeta struct {
key string
value []byte
}
func (m nodeMeta) GetKey() string {
return m.key
}
func (m nodeMeta) GetValue() []byte {
return m.value
}
type nodeResponse struct {
meta []nodeMeta
timestamp []uint64
}
func (n nodeResponse) GetTimestamp() []uint64 {
return n.timestamp
}
func (n nodeResponse) GetMeta() []Meta {
res := make([]Meta, len(n.meta))
for i, value := range n.meta {
res[i] = value
}
return res
}
func (n nodeResponse) GetNodeID() []uint64 {
return nil
}
func (n nodeResponse) GetParentID() []uint64 {
return nil
}
func TestGetLatestNode(t *testing.T) {
for _, tc := range []struct {
name string
nodes []NodeResponse
expectedOID string
error bool
}{
{
name: "empty",
nodes: []NodeResponse{},
error: true,
},
{
name: "one node of the object version",
nodes: []NodeResponse{
nodeResponse{
timestamp: []uint64{1},
meta: []nodeMeta{
{
key: oidKV,
value: []byte("oid1"),
},
},
},
},
exceptedOID: "oid1",
},
{
name: "one node of the object version and one node of the secondary object",
nodes: []NodeResponse{
nodeResponse{
timestamp: []uint64{3},
meta: []nodeMeta{},
},
nodeResponse{
timestamp: []uint64{1},
meta: []nodeMeta{
{
key: oidKV,
value: []byte("oid1"),
},
},
},
},
exceptedOID: "oid1",
},
{
name: "all nodes represent a secondary object",
nodes: []NodeResponse{
nodeResponse{
timestamp: []uint64{3},
meta: []nodeMeta{},
},
nodeResponse{
timestamp: []uint64{5},
meta: []nodeMeta{},
},
},
error: true,
},
{
name: "several nodes of different types and with different timestamp",
nodes: []NodeResponse{
nodeResponse{
timestamp: []uint64{1},
meta: []nodeMeta{
{
key: oidKV,
value: []byte("oid1"),
},
},
},
nodeResponse{
timestamp: []uint64{3},
meta: []nodeMeta{},
},
nodeResponse{
timestamp: []uint64{4},
meta: []nodeMeta{
{
key: oidKV,
value: []byte("oid2"),
},
},
},
nodeResponse{
timestamp: []uint64{6},
meta: []nodeMeta{},
},
},
exceptedOID: "oid2",
},
} {
t.Run(tc.name, func(t *testing.T) {
actualNode, err := getLatestVersionNode(tc.nodes)
if tc.error {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tc.expectedOID, string(actualNode.GetMeta()[0].GetValue()))
})
}
}


@ -1,227 +0,0 @@
package uploader
import (
"context"
"encoding/json"
"io"
"net/http"
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/response"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/valyala/fasthttp"
"go.uber.org/atomic"
"go.uber.org/zap"
)
const (
jsonHeader = "application/json; charset=UTF-8"
drainBufSize = 4096
)
// Uploader is an upload request handler.
type Uploader struct {
log *zap.Logger
pool *pool.Pool
ownerID *user.ID
settings *Settings
containerResolver *resolver.ContainerResolver
}
// Settings stores reloading parameters, so it has to provide atomic getters and setters.
type Settings struct {
defaultTimestamp atomic.Bool
}
func (s *Settings) DefaultTimestamp() bool {
return s.defaultTimestamp.Load()
}
func (s *Settings) SetDefaultTimestamp(val bool) {
s.defaultTimestamp.Store(val)
}
// New creates a new Uploader using specified logger, connection pool and
// other options.
func New(params *utils.AppParams, settings *Settings) *Uploader {
return &Uploader{
log: params.Logger,
pool: params.Pool,
ownerID: params.Owner,
settings: settings,
containerResolver: params.Resolver,
}
}
// Upload handles multipart upload request.
func (u *Uploader) Upload(req *fasthttp.RequestCtx) {
var (
file MultipartFile
idObj oid.ID
addr oid.Address
scid, _ = req.UserValue("cid").(string)
log = u.log.With(zap.String("cid", scid))
bodyStream = req.RequestBodyStream()
drainBuf = make([]byte, drainBufSize)
)
ctx := utils.GetContextFromRequest(req)
idCnr, err := utils.GetContainerID(ctx, scid, u.containerResolver)
if err != nil {
log.Error(logs.WrongContainerID, zap.Error(err))
response.Error(req, "wrong container id", fasthttp.StatusBadRequest)
return
}
defer func() {
// Close the temporary reader, if any.
if file == nil {
return
}
err := file.Close()
log.Debug(
logs.CloseTemporaryMultipartFormFile,
zap.Stringer("address", addr),
zap.String("filename", file.FileName()),
zap.Error(err),
)
}()
boundary := string(req.Request.Header.MultipartFormBoundary())
if file, err = fetchMultipartFile(u.log, bodyStream, boundary); err != nil {
log.Error(logs.CouldNotReceiveMultipartForm, zap.Error(err))
response.Error(req, "could not receive multipart/form: "+err.Error(), fasthttp.StatusBadRequest)
return
}
filtered, err := filterHeaders(u.log, &req.Request.Header)
if err != nil {
log.Error(logs.CouldNotProcessHeaders, zap.Error(err))
response.Error(req, err.Error(), fasthttp.StatusBadRequest)
return
}
now := time.Now()
if rawHeader := req.Request.Header.Peek(fasthttp.HeaderDate); rawHeader != nil {
if parsed, err := time.Parse(http.TimeFormat, string(rawHeader)); err != nil {
log.Warn(logs.CouldNotParseClientTime, zap.String("Date header", string(rawHeader)), zap.Error(err))
} else {
now = parsed
}
}
if err = utils.PrepareExpirationHeader(req, u.pool, filtered, now); err != nil {
log.Error(logs.CouldNotPrepareExpirationHeader, zap.Error(err))
response.Error(req, "could not prepare expiration header: "+err.Error(), fasthttp.StatusBadRequest)
return
}
attributes := make([]object.Attribute, 0, len(filtered))
// prepares attributes from filtered headers
for key, val := range filtered {
attribute := object.NewAttribute()
attribute.SetKey(key)
attribute.SetValue(val)
attributes = append(attributes, *attribute)
}
// sets FileName attribute if it wasn't set from header
if _, ok := filtered[object.AttributeFileName]; !ok {
filename := object.NewAttribute()
filename.SetKey(object.AttributeFileName)
filename.SetValue(file.FileName())
attributes = append(attributes, *filename)
}
// sets Timestamp attribute if it wasn't set from header and enabled by settings
if _, ok := filtered[object.AttributeTimestamp]; !ok && u.settings.DefaultTimestamp() {
timestamp := object.NewAttribute()
timestamp.SetKey(object.AttributeTimestamp)
timestamp.SetValue(strconv.FormatInt(time.Now().Unix(), 10))
attributes = append(attributes, *timestamp)
}
obj := object.New()
obj.SetContainerID(*idCnr)
obj.SetOwnerID(u.ownerID)
obj.SetAttributes(attributes...)
var prm pool.PrmObjectPut
prm.SetHeader(*obj)
prm.SetPayload(file)
bt := u.fetchBearerToken(ctx)
if bt != nil {
prm.UseBearer(*bt)
}
if idObj, err = u.pool.PutObject(ctx, prm); err != nil {
u.handlePutFrostFSErr(req, err)
return
}
addr.SetObject(idObj)
addr.SetContainer(*idCnr)
// Try to return the response, otherwise, if something went wrong, throw an error.
if err = newPutResponse(addr).encode(req); err != nil {
log.Error(logs.CouldNotEncodeResponse, zap.Error(err))
response.Error(req, "could not encode response", fasthttp.StatusBadRequest)
return
}
// A multipart body can contain more than one part, which we ignore at
// the moment. Also, when dealing with chunked encoding the last
// zero-length chunk might be left unread (because the multipart reader
// only cares about its boundary and doesn't look further) and would be
// (erroneously) interpreted as the start of the next pipelined header.
// Thus we need to drain the body buffer.
for {
_, err = bodyStream.Read(drainBuf)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
}
}
// Report status code and content type.
req.Response.SetStatusCode(fasthttp.StatusOK)
req.Response.Header.SetContentType(jsonHeader)
}
func (u *Uploader) handlePutFrostFSErr(r *fasthttp.RequestCtx, err error) {
	statusCode, msg, additionalFields := response.FormErrorResponse("could not store file in frostfs", err)
	logFields := append([]zap.Field{zap.Error(err)}, additionalFields...)

	u.log.Error(logs.CouldNotStoreFileInFrostfs, logFields...)
	response.Error(r, msg, statusCode)
}

func (u *Uploader) fetchBearerToken(ctx context.Context) *bearer.Token {
	if tkn, err := tokens.LoadBearerToken(ctx); err == nil && tkn != nil {
		return tkn
	}
	return nil
}

type putResponse struct {
	ObjectID    string `json:"object_id"`
	ContainerID string `json:"container_id"`
}

func newPutResponse(addr oid.Address) *putResponse {
	return &putResponse{
		ObjectID:    addr.Object().EncodeToString(),
		ContainerID: addr.Container().EncodeToString(),
	}
}

func (pr *putResponse) encode(w io.Writer) error {
	enc := json.NewEncoder(w)
	enc.SetIndent("", "\t")
	return enc.Encode(pr)
}
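
For reference, encode produces a tab-indented JSON body in exactly this shape (the IDs here are placeholders, not real values):

{
	"object_id": "<object-id>",
	"container_id": "<container-id>"
}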


@ -11,10 +11,18 @@ import (
	"time"
	"unicode"
	"unicode/utf8"

	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
)

type EpochDurations struct {
	CurrentEpoch  uint64
	MsPerBlock    int64
	BlockPerEpoch uint64
}

type EpochInfoFetcher interface {
	GetEpochDurations(context.Context) (*EpochDurations, error)
}

const (
	UserAttributeHeaderPrefix = "X-Attribute-"
)
@ -151,7 +159,7 @@ func title(str string) string {
	return string(r0) + str[size:]
}

func PrepareExpirationHeader(ctx context.Context, p *pool.Pool, headers map[string]string, now time.Time) error {
func PrepareExpirationHeader(ctx context.Context, epochFetcher EpochInfoFetcher, headers map[string]string, now time.Time) error {
	formatsNum := 0
	index := -1
	for i, transformer := range transformers {
@ -165,7 +173,7 @@ func PrepareExpirationHeader(ctx context.Context, p *pool.Pool, headers map[stri
	case 0:
		return nil
	case 1:
		epochDuration, err := GetEpochDurations(ctx, p)
		epochDuration, err := epochFetcher.GetEpochDurations(ctx)
		if err != nil {
			return fmt.Errorf("couldn't get epoch durations from network info: %w", err)
		}
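
The switch from a concrete *pool.Pool to the EpochInfoFetcher interface makes the expiration logic testable without a live network. A minimal sketch of a test fake, assuming the package path git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils (the fake type, its numbers, and the test name are illustrative):

package utils_test

import (
	"context"
	"testing"
	"time"

	"git.frostfs.info/TrueCloudLab/frostfs-http-gw/utils"
)

type fakeEpochFetcher struct{}

func (fakeEpochFetcher) GetEpochDurations(_ context.Context) (*utils.EpochDurations, error) {
	return &utils.EpochDurations{
		CurrentEpoch:  100,
		MsPerBlock:    1000, // one block per second
		BlockPerEpoch: 240,  // an epoch then lasts about four minutes
	}, nil
}

func TestPrepareExpirationHeaderNoExpiration(t *testing.T) {
	// With no expiration attributes present the function is a no-op
	// (the formatsNum == 0 branch above) and must not touch the network.
	headers := map[string]string{}
	if err := utils.PrepareExpirationHeader(context.Background(), fakeEpochFetcher{}, headers, time.Now()); err != nil {
		t.Fatal(err)
	}
}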


@ -1,15 +0,0 @@
package utils

import (
	"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
	"go.uber.org/zap"
)

type AppParams struct {
	Logger   *zap.Logger
	Pool     *pool.Pool
	Owner    *user.ID
	Resolver *resolver.ContainerResolver
}


@ -30,12 +30,12 @@ func (c *httpCarrier) Set(key string, value string) {
func (c *httpCarrier) Keys() []string {
	dict := make(map[string]interface{})

	c.r.Request.Header.VisitAll(
		func(key, value []byte) {
		func(key, _ []byte) {
			dict[string(key)] = true
		},
	)
	c.r.Response.Header.VisitAll(
		func(key, value []byte) {
		func(key, _ []byte) {
			dict[string(key)] = true
		},
	)
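
The Get/Set/Keys shape of httpCarrier suggests it satisfies OpenTelemetry's propagation.TextMapCarrier. A sketch of how such a carrier is normally consumed; extractTraceContext is a hypothetical helper, not code from this change:

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

func extractTraceContext(ctx context.Context, carrier propagation.TextMapCarrier) context.Context {
	// The configured propagator (e.g. W3C trace context) reads its headers
	// through the carrier's Get/Keys methods.
	return otel.GetTextMapPropagator().Extract(ctx, carrier)
}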


@ -2,49 +2,11 @@ package utils

import (
	"context"
	"fmt"

	"git.frostfs.info/TrueCloudLab/frostfs-http-gw/resolver"
	cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
	"github.com/valyala/fasthttp"
	"go.uber.org/zap"
)

// GetContainerID decodes the container ID; if it is not a valid container ID,
// it tries to resolve the name using the provided resolver.
func GetContainerID(ctx context.Context, containerID string, resolver *resolver.ContainerResolver) (*cid.ID, error) {
	cnrID := new(cid.ID)
	err := cnrID.DecodeString(containerID)
	if err != nil {
		cnrID, err = resolver.Resolve(ctx, containerID)
	}
	return cnrID, err
}
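
An illustrative call site (the name "mycontainer" and the containerResolver variable are placeholders): a valid base58 container ID decodes directly, anything else falls through to name resolution:

	cnrID, err := utils.GetContainerID(ctx, "mycontainer", containerResolver)
	if err != nil {
		return fmt.Errorf("could not resolve container: %w", err)
	}
	_ = cnrID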
type EpochDurations struct {
	CurrentEpoch  uint64
	MsPerBlock    int64
	BlockPerEpoch uint64
}

func GetEpochDurations(ctx context.Context, p *pool.Pool) (*EpochDurations, error) {
	networkInfo, err := p.NetworkInfo(ctx)
	if err != nil {
		return nil, err
	}

	res := &EpochDurations{
		CurrentEpoch:  networkInfo.CurrentEpoch(),
		MsPerBlock:    networkInfo.MsPerBlock(),
		BlockPerEpoch: networkInfo.EpochDuration(),
	}

	if res.BlockPerEpoch == 0 {
		return nil, fmt.Errorf("EpochDuration is empty")
	}
	return res, nil
}
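
These fields carry enough information to convert wall-clock durations into epochs. As a hedged sketch (the helpers below are not part of the gateway, just the obvious arithmetic): with MsPerBlock = 1000 and BlockPerEpoch = 240, one epoch lasts 240 s, so a 10-minute TTL rounds up to 3 epochs.

func epochLength(d *EpochDurations) time.Duration {
	// Milliseconds per block times blocks per epoch gives the epoch length.
	return time.Duration(d.MsPerBlock) * time.Millisecond * time.Duration(d.BlockPerEpoch)
}

// durationToEpochs rounds up so an object lives at least as long as requested.
func durationToEpochs(d *EpochDurations, ttl time.Duration) uint64 {
	epoch := epochLength(d)
	return uint64((ttl + epoch - 1) / epoch)
}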
// SetContextToRequest adds new context to fasthttp request.
func SetContextToRequest(ctx context.Context, c *fasthttp.RequestCtx) {
	c.SetUserValue("context", ctx)
@ -54,3 +16,34 @@ func SetContextToRequest(ctx context.Context, c *fasthttp.RequestCtx) {
func GetContextFromRequest(c *fasthttp.RequestCtx) context.Context {
	return c.UserValue("context").(context.Context)
}
type ctxReqLoggerKeyType struct{}

// SetReqLog sets the child zap.Logger in the context.
func SetReqLog(ctx context.Context, log *zap.Logger) context.Context {
	if ctx == nil {
		return nil
	}
	return context.WithValue(ctx, ctxReqLoggerKeyType{}, log)
}

// GetReqLog returns the log if set.
// If zap.Logger isn't set, it returns nil.
func GetReqLog(ctx context.Context) *zap.Logger {
	if ctx == nil {
		return nil
	} else if r, ok := ctx.Value(ctxReqLoggerKeyType{}).(*zap.Logger); ok {
		return r
	}
	return nil
}

// GetReqLogOrDefault returns the log from the context, if it exists.
// If the log is missing from the context, the default logger is returned.
func GetReqLogOrDefault(ctx context.Context, defaultLog *zap.Logger) *zap.Logger {
	log := GetReqLog(ctx)
	if log == nil {
		log = defaultLog
	}
	return log
}
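
A minimal sketch of how these helpers compose in a handler, assuming code running in this package and a request context stored via SetContextToRequest; the handler function and the request_id field are illustrative, not part of the change:

func handler(c *fasthttp.RequestCtx, appLog *zap.Logger) {
	ctx := GetContextFromRequest(c)

	// Derive a per-request child logger and stash it back in the context.
	reqLog := appLog.With(zap.Uint64("request_id", c.ID()))
	ctx = SetReqLog(ctx, reqLog)
	SetContextToRequest(ctx, c)

	// Later, any code on this request path picks the logger up again,
	// falling back to the application logger if none was set.
	log := GetReqLogOrDefault(GetContextFromRequest(c), appLog)
	log.Info("handling request")
}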