Compare commits

...

80 commits

Author SHA1 Message Date
6bf6a3b1a3 [#362] Check user and groups during policy check
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-05-08 15:25:14 +03:00
2f108c9951 [#362] Expand control service
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-05-08 15:15:49 +03:00
c43ef040dc [#382] Fix request type determination
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-05-07 15:17:22 +03:00
2ab655b909 [#380] Add test for credentials versioning
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-05-03 07:24:13 +00:00
1c398551e5 [#380] creds: Increase test coverage
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-05-03 07:24:13 +00:00
db05021786 [#379] Add Iana CharsetReader for Oracle integration
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-04-25 17:44:38 +03:00
034396d554 [#377] Add check of Source IP
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-22 15:29:18 +03:00
3c436d8de9 [#365] Include iam user tags in query
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-04-22 10:47:43 +03:00
45f77de8c8 [#371] Add custom Source IP header configuration
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-22 07:42:45 +00:00
d903de2457 [#370] Fix fetching attributes from tree
Port TrueCloudLab/frostfs-s3-gw#374

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-19 17:33:55 +03:00
e22ff52165 [#367] Add check of AccessBox attributes
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-19 06:25:26 +00:00
5315f7b733 [#269] Create frostfsid wrapper with cache
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-18 09:32:30 +03:00
43a687b572 [#269] authmate: Update frostfsid using
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-17 12:11:23 +03:00
29a2dae40c [#269] Move frostfsid client to separate package
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-17 12:11:23 +03:00
fec3b3f31e [#269] Add frostfsid cache configuration
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-17 12:11:23 +03:00
7db89c840b [#368] Update vulnerable dependencies
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-17 11:29:09 +03:00
3ff027587c [#357] Add check of request and resource tags
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-17 07:06:58 +00:00
9f29fcbd52 [#353] docs: Add bucket policy docs
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-15 11:41:19 +03:00
8307c73fef [#364] Fix removing combined object
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-12 14:56:38 +03:00
d8889fca56 [#340] Fix encode object acl
When encoding an object's ACL we used a map, so traversing it
could produce a different sequence of permissions each time.
A list is therefore used instead of a map.

Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-04-11 09:28:30 +00:00
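The ordering problem described in this commit comes from Go's randomized map iteration. Below is a minimal, self-contained sketch (hypothetical names, not the gateway's actual ACL types) of the difference between encoding grants from a map and from a slice:

```go
package main

import "fmt"

// grant pairs a grantee with a permission; a slice preserves insertion order,
// so the encoded ACL comes out the same on every run.
type grant struct {
	grantee    string
	permission string
}

func main() {
	// Map iteration order is randomized by the Go runtime, so encoding an ACL
	// from this map can emit permissions in a different order each time.
	byGrantee := map[string]string{"alice": "READ", "bob": "WRITE", "carol": "FULL_CONTROL"}
	for grantee, perm := range byGrantee {
		fmt.Println(grantee, perm) // order may differ between runs
	}

	// A slice keeps the order in which grants were added, giving a stable encoding.
	grants := []grant{{"alice", "READ"}, {"bob", "WRITE"}, {"carol", "FULL_CONTROL"}}
	for _, g := range grants {
		fmt.Println(g.grantee, g.permission) // always the same order
	}
}
```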
61ff4702a2 [#360] Reuse single target during policy check
Policy engine library is able to manage multiple
targets and resolve different status results.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-10 17:56:47 +03:00
6da1acc554 [#360] Use 'c' prefix for bucket policies instead of 'n'
With 'c' prefix, acl chains become shorter, thus gateway
receives shorter results and avoids sessions to neo-go.

There is still issue with many IAM rules.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-10 17:56:47 +03:00
3ea3f971e1 [#359] Update APE to allow put tombstone on delete object
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-10 15:12:30 +03:00
cb83f7646f [#347] port: Explicitly specify sorting order of subtree for object listing
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-09 18:57:47 +03:00
9c012d0a66 [#355] Remove policies when delete bucket
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-09 15:49:46 +00:00
bda014b7b4 [#355] Update frostfs-contract to terminate session iterator
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-09 15:49:46 +00:00
37d05dcefd [#353] Add check of listing parameters and versionID
Add properties in policy check:
* s3:delimiter
* s3:prefix
* s3:max-keys
* s3:VersionId

Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-08 17:57:55 +03:00
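The condition keys listed in this commit mirror the AWS s3:delimiter, s3:prefix, s3:max-keys and s3:VersionId properties. A hedged sketch of how such query parameters could be collected into policy-check properties; the helper and map layout are illustrative only and not the gateway's actual code:

```go
package main

import (
	"fmt"
	"net/url"
)

// listingProperties is a hypothetical helper: it copies S3 listing query
// parameters into the property keys named in the commit above so a policy
// engine could evaluate conditions on them.
func listingProperties(query url.Values) map[string]string {
	props := make(map[string]string)
	mapping := map[string]string{
		"delimiter": "s3:delimiter",
		"prefix":    "s3:prefix",
		"max-keys":  "s3:max-keys",
		"versionId": "s3:VersionId",
	}
	for param, property := range mapping {
		if v := query.Get(param); v != "" {
			props[property] = v
		}
	}
	return props
}

func main() {
	q, _ := url.ParseQuery("prefix=photos/&max-keys=100&delimiter=/")
	fmt.Println(listingProperties(q))
}
```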
8407b3ea4c [#352] policy: Use iterators to list chains
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-04 12:51:12 +00:00
e537675223 [#341] Update CHANGELOG
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-03 12:04:48 +00:00
789464e134 [#341] Add "h2" as next proto to allow HTTP/2 requests in http.Serve
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-03 12:04:48 +00:00
a138f4954b [#341] Test HTTP/2 requests
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-03 12:04:48 +00:00
8669bf6b50 [#346] acl: Update APE and fix using
* Remove native policy when remove bucket policy
* Allow policies that contain only s3 compatible statements
(now deny rules cannot be converted to native rules)

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-02 12:43:04 +00:00
6b8095182e [#343] docs: Actualize s3 compatibility table
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-02 15:02:51 +03:00
348126b3b8 [#301] go.mod: Update sdk-go
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-28 09:13:27 +03:00
fbe7a784e8 [#301] Support GetBucketPolicyStatus
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-28 09:13:25 +03:00
bfcde09f07 [#291] server auto re-binding
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-03-27 14:28:50 +03:00
94bd1dfe28 [#334] Add auth doc
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-21 12:12:29 +03:00
80c7b73eb9 [#306] In APE buckets forbid canned acl except private
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-19 16:57:26 +03:00
62cc5a04a7 [#328] Log error on failed response writing
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-15 11:02:26 +03:00
6788306998 [#328] Log invalid tree service KVs
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-04 15:35:23 +03:00
4ee3648183 [#328] Log invalid lock enabled header
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-04 15:09:51 +03:00
ee48d1dc85 [#325] Log error on failed request id generation
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-04 09:49:41 +00:00
f958eef2b3 [#325] Use default empty data.LockInfo in get/head in case of error
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-04 09:49:41 +00:00
81b44ab3d3 [#325] Fix mutex usage in controller
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-04 09:49:41 +00:00
623001c403 [#325] Close listener on error
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-04 09:49:41 +00:00
70043c4800 [#324] Close nns resolver after use
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-03-04 09:06:26 +00:00
8050ca2d51 [#306] Use session token for container read operations
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-01 18:14:33 +03:00
c12e264697 [#306] Simplify cid resolver for metrics
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-01 17:46:16 +03:00
e9f38a49e4 [#306] Fix forming key for bucket cache
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-01 16:09:40 +03:00
fabb4134bc [#318] Use log msg from constants
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 17:30:28 +03:00
e1ee36b979 [#318] Fix tests
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 17:30:28 +03:00
937367caaf [#318] Fix panic on invalid multipart form
Previously, a simple 'curl -X POST http://localhost:8084/test' led to a panic because of wrong handler matching

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 17:30:28 +03:00
7b86bac6ee [#318] Log unmatched requests
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 17:30:28 +03:00
529ec7e0b9 [#318] Don't log empty bucket/name
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 17:30:28 +03:00
4741e74210 [#318] Log successfully authenticated accessKeyIDs
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 17:30:28 +03:00
f1470bab4a [#318] auth: Add context for logged errors
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 17:30:28 +03:00
6e5bcaef97 [#318] Log policy request checking
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 17:30:28 +03:00
1522db05c5 [#318] Log namespace for requests
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-29 17:30:28 +03:00
31da31862a [#300] Update error logging in DeleteMultipleObjects
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-02-29 14:24:32 +00:00
7de1ffdbe9 [#306] Fix billing tests
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 18:00:27 +03:00
3285a2e105 [#306] policy: Change default access strategy
Use access strategy based on bucket type and/or config flags.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:53:13 +03:00
1bfea006b0 [#306] Update APE
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
56b50f2075 [#306] Remove flag to disable policy contract
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
8f89f275bd [#306] Save bucket policy as native chain
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
ff15f9f28a [#306] Fix update settings for buckets without owner key in tree
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
c868af8a62 [#306] Add flag to enable old ACL bucket creation
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
bac1b3fb2d [#306] Use zero basic acl to mark APE containers
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
c452d58ce2 [#306] Reduce number of policy contract invocations
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
499a202d28 [#306] Update CHANGELOG.md
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
d9d12debc3 [#306] Add tests
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
3d0d2032c6 [#306] acl: Handle put/get acl for APE buckets
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
1f2cf0ed67 [#306] Use APE instead of eACL on bucket creation
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
37be8851b3 [#306] Simplify namespaces configuration
Resolve ns alias at the beginning of the request just once.
Keep in ns map only one default ns key.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
c4c199defe [#306] Use s3 as chain id prefix to be consistent
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-28 17:50:08 +03:00
2981a47e99 [#321] Use correct owner id in billing metrics
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-02-28 14:52:44 +03:00
391fc9cbe3 [#311] Change object owner for anonymous put
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-02-21 15:03:16 +00:00
4eb2c7fb7d [#290] Fix TestErrorTimeoutChecking test
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-02-20 11:39:49 +00:00
563c1d9bd7 [#308] Fix linter issues
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-02-16 18:25:06 +03:00
0f3b4ab0ed [#308] Update linter versions
Latest golangci-lint has newer x/tools version and
it is incompatible with internal linter.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-02-16 18:24:53 +03:00
bd8d2d00ba [#313] logger: Fix logging level changing for journald
Signed-off-by: Artem Tataurov <a.tataurov@yadro.com>
2024-02-16 17:44:16 +03:00
120 changed files with 6825 additions and 1716 deletions

View file

@@ -9,6 +9,10 @@ This document outlines major changes between releases.
 - Fix status code in GET/HEAD delete marker (#226)
 - Fix `NextVersionIDMarker` in `list-object-versions` (#248)
 - Fix possibility of panic during SIGHUP (#288)
+- Fix flaky `TestErrorTimeoutChecking` (`make test` sometimes failed) (#290)
+- Fix user owner ID in billing metrics (#321)
+- Fix HTTP/2 requests (#341)
+- Fix Decoder.CharsetReader is nil (#379)
 ### Added
 - Add new `frostfs.buffer_max_size_for_put` config param and sync TZ hash for PUT operations (#197)
@@ -24,13 +28,18 @@ This document outlines major changes between releases.
 - Support `policy` contract (#259)
 - Support `proxy` contract (#287)
 - Authmate: support custom attributes (#292)
+- Add new `reconnect_interval` config param (#291)
+- Support `GetBucketPolicyStatus` (#301)
+- Add FrostfsID cache (#269)
+- Add new `source_ip_header` config param (#371)
 ### Changed
 - Generalise config param `use_default_xmlns_for_complete_multipart` to `use_default_xmlns` so that use default xmlns for all requests (#221)
 - Set server IdleTimeout and ReadHeaderTimeout to `30s` and allow to configure them (#220)
 - Return `ETag` value in quotes (#219)
 - Use tombstone when delete multipart upload (#275)
-- Support new parameter `cache.accessbox.removing_check_interval` (#XX)
+- Support new parameter `cache.accessbox.removing_check_interval` (#305)
+- Use APE rules instead of eACL in container creation (#306)
 ### Removed
 - Drop sending whitespace characters during complete multipart upload and related config param `kludge.complete_multipart_keepalive` (#227)

View file

@@ -4,8 +4,8 @@
 REPO ?= $(shell go list -m)
 VERSION ?= $(shell git describe --tags --dirty --match "v*" --always --abbrev=8 2>/dev/null || cat VERSION 2>/dev/null || echo "develop")
 GO_VERSION ?= 1.20
-LINT_VERSION ?= 1.54.0
-TRUECLOUDLAB_LINT_VERSION ?= 0.0.2
+LINT_VERSION ?= 1.56.1
+TRUECLOUDLAB_LINT_VERSION ?= 0.0.5
 BINDIR = bin
 METRICS_DUMP_OUT ?= ./metrics-dump.json

View file

@@ -94,12 +94,12 @@ func New(creds tokens.Credentials, prefixes []string) *Center {
 func (c *Center) parseAuthHeader(header string) (*AuthHeader, error) {
 submatches := c.reg.GetSubmatches(header)
 if len(submatches) != authHeaderPartsNum {
-return nil, apiErrors.GetAPIError(apiErrors.ErrAuthorizationHeaderMalformed)
+return nil, fmt.Errorf("%w: %s", apiErrors.GetAPIError(apiErrors.ErrAuthorizationHeaderMalformed), header)
 }
 accessKey := strings.Split(submatches["access_key_id"], "0")
 if len(accessKey) != accessKeyPartsNum {
-return nil, apiErrors.GetAPIError(apiErrors.ErrInvalidAccessKeyID)
+return nil, fmt.Errorf("%w: %s", apiErrors.GetAPIError(apiErrors.ErrInvalidAccessKeyID), accessKey)
 }
 signedFields := strings.Split(submatches["signed_header_fields"], ";")
@@ -114,11 +114,12 @@ func (c *Center) parseAuthHeader(header string) (*AuthHeader, error) {
 }, nil
 }
-func (a *AuthHeader) getAddress() (oid.Address, error) {
+func getAddress(accessKeyID string) (oid.Address, error) {
 var addr oid.Address
-if err := addr.DecodeString(strings.ReplaceAll(a.AccessKeyID, "0", "/")); err != nil {
-return addr, apiErrors.GetAPIError(apiErrors.ErrInvalidAccessKeyID)
+if err := addr.DecodeString(strings.ReplaceAll(accessKeyID, "0", "/")); err != nil {
+return addr, fmt.Errorf("%w: %s", apiErrors.GetAPIError(apiErrors.ErrInvalidAccessKeyID), accessKeyID)
 }
 return addr, nil
 }
@@ -161,7 +162,7 @@ func (c *Center) Authenticate(r *http.Request) (*middleware.Box, error) {
 if strings.HasPrefix(r.Header.Get(ContentTypeHdr), "multipart/form-data") {
 return c.checkFormData(r)
 }
-return nil, middleware.ErrNoAuthorizationHeader
+return nil, fmt.Errorf("%w: %v", middleware.ErrNoAuthorizationHeader, authHeaderField)
 }
 authHdr, err = c.parseAuthHeader(authHeaderField[0])
 if err != nil {
@@ -176,18 +177,18 @@ func (c *Center) Authenticate(r *http.Request) (*middleware.Box, error) {
 return nil, fmt.Errorf("failed to parse x-amz-date header field: %w", err)
 }
-if err := c.checkAccessKeyID(authHdr.AccessKeyID); err != nil {
+if err = c.checkAccessKeyID(authHdr.AccessKeyID); err != nil {
 return nil, err
 }
-addr, err := authHdr.getAddress()
+addr, err := getAddress(authHdr.AccessKeyID)
 if err != nil {
 return nil, err
 }
-box, err := c.cli.GetBox(r.Context(), addr)
+box, attrs, err := c.cli.GetBox(r.Context(), addr)
 if err != nil {
-return nil, fmt.Errorf("get box: %w", err)
+return nil, fmt.Errorf("get box '%s': %w", addr, err)
 }
 if err = checkFormatHashContentSHA256(r.Header.Get(AmzContentSHA256)); err != nil {
@@ -206,6 +207,7 @@ func (c *Center) Authenticate(r *http.Request) (*middleware.Box, error) {
 Region: authHdr.Region,
 SignatureV4: authHdr.SignatureV4,
 },
+Attributes: attrs,
 }
 if needClientTime {
 result.ClientTime = signatureDateTime
@@ -218,10 +220,11 @@ func checkFormatHashContentSHA256(hash string) error {
 if !IsStandardContentSHA256(hash) {
 hashBinary, err := hex.DecodeString(hash)
 if err != nil {
-return apiErrors.GetAPIError(apiErrors.ErrContentSHA256Mismatch)
+return fmt.Errorf("%w: decode hash: %s: %s", apiErrors.GetAPIError(apiErrors.ErrContentSHA256Mismatch),
+hash, err.Error())
 }
 if len(hashBinary) != sha256.Size && len(hash) != 0 {
-return apiErrors.GetAPIError(apiErrors.ErrContentSHA256Mismatch)
+return fmt.Errorf("%w: invalid hash size %d", apiErrors.GetAPIError(apiErrors.ErrContentSHA256Mismatch), len(hashBinary))
 }
 }
@@ -239,12 +242,12 @@ func (c Center) checkAccessKeyID(accessKeyID string) error {
 }
 }
-return apiErrors.GetAPIError(apiErrors.ErrAccessDenied)
+return fmt.Errorf("%w: accesskeyID prefix isn't allowed", apiErrors.GetAPIError(apiErrors.ErrAccessDenied))
 }
 func (c *Center) checkFormData(r *http.Request) (*middleware.Box, error) {
 if err := r.ParseMultipartForm(maxFormSizeMemory); err != nil {
-return nil, apiErrors.GetAPIError(apiErrors.ErrInvalidArgument)
+return nil, fmt.Errorf("%w: parse multipart form with max size %d", apiErrors.GetAPIError(apiErrors.ErrInvalidArgument), maxFormSizeMemory)
 }
 if err := prepareForm(r.MultipartForm); err != nil {
@@ -253,12 +256,13 @@ func (c *Center) checkFormData(r *http.Request) (*middleware.Box, error) {
 policy := MultipartFormValue(r, "policy")
 if policy == "" {
-return nil, middleware.ErrNoAuthorizationHeader
+return nil, fmt.Errorf("%w: missing policy", middleware.ErrNoAuthorizationHeader)
 }
-submatches := c.postReg.GetSubmatches(MultipartFormValue(r, "x-amz-credential"))
+creds := MultipartFormValue(r, "x-amz-credential")
+submatches := c.postReg.GetSubmatches(creds)
 if len(submatches) != 4 {
-return nil, apiErrors.GetAPIError(apiErrors.ErrAuthorizationHeaderMalformed)
+return nil, fmt.Errorf("%w: %s", apiErrors.GetAPIError(apiErrors.ErrAuthorizationHeaderMalformed), creds)
 }
 signatureDateTime, err := time.Parse("20060102T150405Z", MultipartFormValue(r, "x-amz-date"))
@@ -266,25 +270,27 @@ func (c *Center) checkFormData(r *http.Request) (*middleware.Box, error) {
 return nil, fmt.Errorf("failed to parse x-amz-date field: %w", err)
 }
-var addr oid.Address
-if err = addr.DecodeString(strings.ReplaceAll(submatches["access_key_id"], "0", "/")); err != nil {
-return nil, apiErrors.GetAPIError(apiErrors.ErrInvalidAccessKeyID)
+addr, err := getAddress(submatches["access_key_id"])
+if err != nil {
+return nil, err
 }
-box, err := c.cli.GetBox(r.Context(), addr)
+box, attrs, err := c.cli.GetBox(r.Context(), addr)
 if err != nil {
-return nil, fmt.Errorf("get box: %w", err)
+return nil, fmt.Errorf("get box '%s': %w", addr, err)
 }
 secret := box.Gate.SecretKey
 service, region := submatches["service"], submatches["region"]
 signature := signStr(secret, service, region, signatureDateTime, policy)
-if signature != MultipartFormValue(r, "x-amz-signature") {
-return nil, apiErrors.GetAPIError(apiErrors.ErrSignatureDoesNotMatch)
+reqSignature := MultipartFormValue(r, "x-amz-signature")
+if signature != reqSignature {
+return nil, fmt.Errorf("%w: %s != %s", apiErrors.GetAPIError(apiErrors.ErrSignatureDoesNotMatch),
+reqSignature, signature)
 }
-return &middleware.Box{AccessBox: box}, nil
+return &middleware.Box{AccessBox: box, Attributes: attrs}, nil
 }
 func cloneRequest(r *http.Request, authHeader *AuthHeader) *http.Request {
@@ -317,10 +323,12 @@ func (c *Center) checkSign(authHeader *AuthHeader, box *accessbox.Box, request *
 if authHeader.IsPresigned {
 now := time.Now()
 if signatureDateTime.Add(authHeader.Expiration).Before(now) {
-return apiErrors.GetAPIError(apiErrors.ErrExpiredPresignRequest)
+return fmt.Errorf("%w: expired: now %s, signature %s", apiErrors.GetAPIError(apiErrors.ErrExpiredPresignRequest),
+now.Format(time.RFC3339), signatureDateTime.Format(time.RFC3339))
 }
 if now.Before(signatureDateTime) {
-return apiErrors.GetAPIError(apiErrors.ErrBadRequest)
+return fmt.Errorf("%w: signature time from the future: now %s, signature %s", apiErrors.GetAPIError(apiErrors.ErrBadRequest),
+now.Format(time.RFC3339), signatureDateTime.Format(time.RFC3339))
 }
 if _, err := signer.Presign(request, nil, authHeader.Service, authHeader.Region, authHeader.Expiration, signatureDateTime); err != nil {
 return fmt.Errorf("failed to pre-sign temporary HTTP request: %w", err)
@@ -334,7 +342,8 @@ func (c *Center) checkSign(authHeader *AuthHeader, box *accessbox.Box, request *
 }
 if authHeader.SignatureV4 != signature {
-return apiErrors.GetAPIError(apiErrors.ErrSignatureDoesNotMatch)
+return fmt.Errorf("%w: %s != %s: headers %v", apiErrors.GetAPIError(apiErrors.ErrSignatureDoesNotMatch),
+authHeader.SignatureV4, signature, authHeader.SignedFields)
 }
 return nil
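Most of the changes above replace bare API errors with fmt.Errorf("%w: ...") wrapping, and the tests below switch from require.Equal to require.ErrorIs accordingly. A minimal sketch (stand-in error value, not the gateway's apiErrors package) of why the wrapped error still maps to the right S3 error code:

```go
package main

import (
	"errors"
	"fmt"
)

// errAccessDenied stands in for apiErrors.GetAPIError(apiErrors.ErrAccessDenied).
var errAccessDenied = errors.New("AccessDenied")

// checkPrefix mirrors the wrapping style used in the diff: %w keeps the API
// error in the chain while the message carries extra context for logs.
func checkPrefix(accessKeyID string) error {
	return fmt.Errorf("%w: accesskeyID prefix isn't allowed: %s", errAccessDenied, accessKeyID)
}

func main() {
	err := checkPrefix("XYZ0example")
	fmt.Println(err)                             // detailed message for the log
	fmt.Println(errors.Is(err, errAccessDenied)) // true: the S3 error code can still be derived
}
```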

View file

@@ -45,7 +45,7 @@ func TestAuthHeaderParse(t *testing.T) {
 },
 } {
 authHeader, err := center.parseAuthHeader(tc.header)
-require.Equal(t, tc.err, err, tc.header)
+require.ErrorIs(t, err, tc.err, tc.header)
 require.Equal(t, tc.expected, authHeader, tc.header)
 }
 }
@@ -82,8 +82,8 @@ func TestAuthHeaderGetAddress(t *testing.T) {
 err: defaulErr,
 },
 } {
-_, err := tc.authHeader.getAddress()
-require.Equal(t, tc.err, err, tc.authHeader.AccessKeyID)
+_, err := getAddress(tc.authHeader.AccessKeyID)
+require.ErrorIs(t, err, tc.err, tc.authHeader.AccessKeyID)
 }
 }
@@ -141,7 +141,7 @@ func TestCheckFormatContentSHA256(t *testing.T) {
 } {
 t.Run(tc.name, func(t *testing.T) {
 err := checkFormatHashContentSHA256(tc.hash)
-require.Equal(t, tc.error, err)
+require.ErrorIs(t, err, tc.error)
 })
 }
 }

View file

@@ -10,6 +10,7 @@ import (
 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/tokens"
 apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
 cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
+"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
 oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
 "github.com/aws/aws-sdk-go/aws/credentials"
 "github.com/stretchr/testify/require"
@@ -31,13 +32,13 @@ func (m credentialsMock) addBox(addr oid.Address, box *accessbox.Box) {
 m.boxes[addr.String()] = box
 }
-func (m credentialsMock) GetBox(_ context.Context, addr oid.Address) (*accessbox.Box, error) {
+func (m credentialsMock) GetBox(_ context.Context, addr oid.Address) (*accessbox.Box, []object.Attribute, error) {
 box, ok := m.boxes[addr.String()]
 if !ok {
-return nil, &apistatus.ObjectNotFound{}
+return nil, nil, &apistatus.ObjectNotFound{}
 }
-return box, nil
+return box, nil, nil
 }
 func (m credentialsMock) Put(context.Context, cid.ID, tokens.CredentialsParam) (oid.Address, error) {

View file

@@ -6,6 +6,7 @@ import (
 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
+"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
 oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
 "github.com/bluele/gcache"
 "go.uber.org/zap"
@@ -27,6 +28,7 @@ type (
 AccessBoxCacheValue struct {
 Box *accessbox.Box
+Attributes []object.Attribute
 PutTime time.Time
 }
 )
@@ -72,9 +74,10 @@ func (o *AccessBoxCache) Get(address oid.Address) *AccessBoxCacheValue {
 }
 // Put stores an accessbox to cache.
-func (o *AccessBoxCache) Put(address oid.Address, box *accessbox.Box) error {
+func (o *AccessBoxCache) Put(address oid.Address, box *accessbox.Box, attrs []object.Attribute) error {
 val := &AccessBoxCacheValue{
 Box: box,
+Attributes: attrs,
 PutTime: time.Now(),
 }
 return o.cache.Set(address, val)

View file

@@ -65,6 +65,6 @@ func (o *BucketCache) Delete(bkt *data.BucketInfo) bool {
 return o.cache.Remove(formKey(bkt.Zone, bkt.Name))
 }
-func formKey(ns, name string) string {
-return name + "." + ns
+func formKey(zone, name string) string {
+return name + "." + zone
 }

View file

@@ -3,10 +3,14 @@ package cache
 import (
 "testing"
+"git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client"
 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
 cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
+"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
 oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test"
+"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
+"github.com/nspcc-dev/neo-go/pkg/util"
 "github.com/stretchr/testify/require"
 "go.uber.org/zap"
 "go.uber.org/zap/zaptest/observer"
@@ -18,11 +22,13 @@ func TestAccessBoxCacheType(t *testing.T) {
 addr := oidtest.Address()
 box := &accessbox.Box{}
+var attrs []object.Attribute
-err := cache.Put(addr, box)
+err := cache.Put(addr, box, attrs)
 require.NoError(t, err)
 val := cache.Get(addr)
 require.Equal(t, box, val.Box)
+require.Equal(t, attrs, val.Attributes)
 require.Equal(t, 0, observedLog.Len())
 err = cache.cache.Set(addr, "tmp")
@@ -194,6 +200,44 @@ func TestNotificationConfigurationCacheType(t *testing.T) {
 assertInvalidCacheEntry(t, cache.GetNotificationConfiguration(key), observedLog)
 }
func TestFrostFSIDSubjectCacheType(t *testing.T) {
logger, observedLog := getObservedLogger()
cache := NewFrostfsIDCache(DefaultFrostfsIDConfig(logger))
key, err := util.Uint160DecodeStringLE("4ea976429703418ef00fc4912a409b6a0b973034")
require.NoError(t, err)
value := &client.SubjectExtended{}
err = cache.PutSubject(key, value)
require.NoError(t, err)
val := cache.GetSubject(key)
require.Equal(t, value, val)
require.Equal(t, 0, observedLog.Len())
err = cache.cache.Set(key, "tmp")
require.NoError(t, err)
assertInvalidCacheEntry(t, cache.GetSubject(key), observedLog)
}
func TestFrostFSIDUserKeyCacheType(t *testing.T) {
logger, observedLog := getObservedLogger()
cache := NewFrostfsIDCache(DefaultFrostfsIDConfig(logger))
ns, name := "ns", "name"
value, err := keys.NewPrivateKey()
require.NoError(t, err)
err = cache.PutUserKey(ns, name, value.PublicKey())
require.NoError(t, err)
val := cache.GetUserKey(ns, name)
require.Equal(t, value.PublicKey(), val)
require.Equal(t, 0, observedLog.Len())
err = cache.cache.Set(ns+"/"+name, "tmp")
require.NoError(t, err)
assertInvalidCacheEntry(t, cache.GetUserKey(ns, name), observedLog)
}
 func assertInvalidCacheEntry(t *testing.T, val interface{}, observedLog *observer.ObservedLogs) {
 require.Nil(t, val)
 require.Equal(t, 1, observedLog.Len())

api/cache/frostfsid.go (new file, 77 lines)
View file

@@ -0,0 +1,77 @@
package cache
import (
"fmt"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"github.com/bluele/gcache"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/util"
"go.uber.org/zap"
)
// FrostfsIDCache provides lru cache for frostfsid contract.
type FrostfsIDCache struct {
cache gcache.Cache
logger *zap.Logger
}
const (
// DefaultFrostfsIDCacheSize is a default maximum number of entries in cache.
DefaultFrostfsIDCacheSize = 1e4
// DefaultFrostfsIDCacheLifetime is a default lifetime of entries in cache.
DefaultFrostfsIDCacheLifetime = time.Minute
)
// DefaultFrostfsIDConfig returns new default cache expiration values.
func DefaultFrostfsIDConfig(logger *zap.Logger) *Config {
return &Config{
Size: DefaultFrostfsIDCacheSize,
Lifetime: DefaultFrostfsIDCacheLifetime,
Logger: logger,
}
}
// NewFrostfsIDCache creates an object of FrostfsIDCache.
func NewFrostfsIDCache(config *Config) *FrostfsIDCache {
gc := gcache.New(config.Size).LRU().Expiration(config.Lifetime).Build()
return &FrostfsIDCache{cache: gc, logger: config.Logger}
}
// GetSubject returns a cached client.SubjectExtended. Returns nil if value is missing.
func (c *FrostfsIDCache) GetSubject(key util.Uint160) *client.SubjectExtended {
return get[client.SubjectExtended](c, key)
}
// PutSubject puts a client.SubjectExtended to cache.
func (c *FrostfsIDCache) PutSubject(key util.Uint160, subject *client.SubjectExtended) error {
return c.cache.Set(key, subject)
}
// GetUserKey returns a cached *keys.PublicKey. Returns nil if value is missing.
func (c *FrostfsIDCache) GetUserKey(ns, name string) *keys.PublicKey {
return get[keys.PublicKey](c, ns+"/"+name)
}
// PutUserKey puts a user public key to cache.
func (c *FrostfsIDCache) PutUserKey(ns, name string, userKey *keys.PublicKey) error {
return c.cache.Set(ns+"/"+name, userKey)
}
func get[T any](c *FrostfsIDCache, key any) *T {
entry, err := c.cache.Get(key)
if err != nil {
return nil
}
result, ok := entry.(*T)
if !ok {
c.logger.Warn(logs.InvalidCacheEntryType, zap.String("actual", fmt.Sprintf("%T", entry)),
zap.String("expected", fmt.Sprintf("%T", result)))
return nil
}
return result
}
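A short usage sketch for the cache introduced in this file (error handling elided; the import path is inferred from the repository layout, and zap.NewNop() plus an empty SubjectExtended are placeholders):

```go
package main

import (
	"fmt"

	"git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client"
	"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/cache"
	"github.com/nspcc-dev/neo-go/pkg/util"
	"go.uber.org/zap"
)

func main() {
	c := cache.NewFrostfsIDCache(cache.DefaultFrostfsIDConfig(zap.NewNop()))

	key, _ := util.Uint160DecodeStringLE("4ea976429703418ef00fc4912a409b6a0b973034")
	_ = c.PutSubject(key, &client.SubjectExtended{})

	// A hit returns the cached *client.SubjectExtended; a miss or an entry of
	// the wrong type returns nil, so callers fall back to the frostfsid contract.
	if subj := c.GetSubject(key); subj != nil {
		fmt.Println("cache hit")
	}
}
```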

View file

@@ -48,7 +48,7 @@ func (k *ListSessionKey) String() string {
 // NewListSessionCache is a constructor which creates an object of ListObjectsCache with the given lifetime of entries.
 func NewListSessionCache(config *Config) *ListSessionCache {
-gc := gcache.New(config.Size).LRU().Expiration(config.Lifetime).EvictedFunc(func(key interface{}, val interface{}) {
+gc := gcache.New(config.Size).LRU().Expiration(config.Lifetime).EvictedFunc(func(_ interface{}, val interface{}) {
 session, ok := val.(*data.ListSession)
 if !ok {
 config.Logger.Warn(logs.InvalidCacheEntryType, zap.String("actual", fmt.Sprintf("%T", val)),

View file

@@ -8,6 +8,7 @@ import (
 cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
 oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
+"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
 )
 const (
@@ -31,6 +32,7 @@ type (
 LocationConstraint string
 ObjectLockEnabled bool
 HomomorphicHashDisabled bool
+APEEnabled bool
 }
 // ObjectInfo holds S3 object data.
@@ -60,8 +62,10 @@ type (
 // BucketSettings stores settings such as versioning.
 BucketSettings struct {
-Versioning string `json:"versioning"`
-LockConfiguration *ObjectLockConfiguration `json:"lock_configuration"`
+Versioning string
+LockConfiguration *ObjectLockConfiguration
+CannedACL string
+OwnerKey *keys.PublicKey
 }
 // CORSConfiguration stores CORS configuration of a request.
@@ -79,6 +83,14 @@ type (
 ExposeHeaders []string `xml:"ExposeHeader" json:"ExposeHeaders"`
 MaxAgeSeconds int `xml:"MaxAgeSeconds,omitempty" json:"MaxAgeSeconds,omitempty"`
 }
+// ObjectVersion stores object version info.
+ObjectVersion struct {
+BktInfo *BucketInfo
+ObjectName string
+VersionID string
+NoErrorOnDeleteMarker bool
+}
 )
 // NotificationInfoFromObject creates new NotificationInfo from ObjectInfo.

api/data/tagging.go (new file, 30 lines)
View file

@@ -0,0 +1,30 @@
package data
import "encoding/xml"
// Tagging contains tag set.
type Tagging struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Tagging"`
TagSet []Tag `xml:"TagSet>Tag"`
}
// Tag is an AWS key-value tag.
type Tag struct {
Key string
Value string
}
type GetObjectTaggingParams struct {
ObjectVersion *ObjectVersion
// NodeVersion can be nil. If not nil we save one request to tree service.
NodeVersion *NodeVersion // optional
}
type PutObjectTaggingParams struct {
ObjectVersion *ObjectVersion
TagSet map[string]string
// NodeVersion can be nil. If not nil we save one request to tree service.
NodeVersion *NodeVersion // optional
}
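For reference, a small standalone sketch of what the Tagging type above marshals to with the standard encoding/xml package (types copied from this file, example values arbitrary):

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Tagging and Tag mirror the definitions added in api/data/tagging.go.
type Tagging struct {
	XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Tagging"`
	TagSet  []Tag    `xml:"TagSet>Tag"`
}

type Tag struct {
	Key   string
	Value string
}

func main() {
	t := Tagging{TagSet: []Tag{{Key: "project", Value: "frostfs"}}}
	out, _ := xml.MarshalIndent(t, "", "  ")
	fmt.Println(string(out))
	// <Tagging xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
	//   <TagSet>
	//     <Tag>
	//       <Key>project</Key>
	//       <Value>frostfs</Value>
	//     </Tag>
	//   </TagSet>
	// </Tagging>
}
```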

View file

@@ -26,6 +26,7 @@ type (
 const (
 _ ErrorCode = iota
 ErrAccessDenied
+ErrAccessControlListNotSupported
 ErrBadDigest
 ErrEntityTooSmall
 ErrEntityTooLarge
@@ -90,6 +91,7 @@
 ErrBucketNotEmpty
 ErrAllAccessDisabled
 ErrMalformedPolicy
+ErrMalformedPolicyNotPrincipal
 ErrMissingFields
 ErrMissingCredTag
 ErrCredMalformed
@@ -376,6 +378,12 @@ var errorCodes = errorCodeMap{
 Description: "Access Denied.",
 HTTPStatusCode: http.StatusForbidden,
 },
+ErrAccessControlListNotSupported: {
+ErrCode: ErrAccessControlListNotSupported,
+Code: "AccessControlListNotSupported",
+Description: "The bucket does not allow ACLs.",
+HTTPStatusCode: http.StatusBadRequest,
+},
 ErrBadDigest: {
 ErrCode: ErrBadDigest,
 Code: "BadDigest",
@@ -658,6 +666,12 @@ var errorCodes = errorCodeMap{
 Description: "Policy has invalid resource.",
 HTTPStatusCode: http.StatusBadRequest,
 },
+ErrMalformedPolicyNotPrincipal: {
+ErrCode: ErrMalformedPolicyNotPrincipal,
+Code: "MalformedPolicy",
+Description: "Allow with NotPrincipal is not allowed.",
+HTTPStatusCode: http.StatusBadRequest,
+},
 ErrMissingFields: {
 ErrCode: ErrMissingFields,
 Code: "MissingFields",

View file

@@ -20,6 +20,7 @@ import (
 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
+"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
@@ -27,7 +28,6 @@ import (
 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
 engineiam "git.frostfs.info/TrueCloudLab/policy-engine/iam"
 "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
-"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
 "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
 "go.uber.org/zap"
 )
@@ -257,6 +257,20 @@ func (h *handler) GetBucketACLHandler(w http.ResponseWriter, r *http.Request) {
 return
 }
settings, err := h.obj.GetBucketSettings(r.Context(), bktInfo)
if err != nil {
h.logAndSendError(w, "couldn't get bucket settings", reqInfo, err)
return
}
if bktInfo.APEEnabled || len(settings.CannedACL) != 0 {
if err = middleware.EncodeToResponse(w, h.encodeBucketCannedACL(ctx, bktInfo, settings)); err != nil {
h.logAndSendError(w, "something went wrong", reqInfo, err)
return
}
return
}
 bucketACL, err := h.obj.GetBucketACL(ctx, bktInfo)
 if err != nil {
 h.logAndSendError(w, "could not fetch bucket acl", reqInfo, err)
@@ -269,16 +283,75 @@ func (h *handler) GetBucketACLHandler(w http.ResponseWriter, r *http.Request) {
 }
 }
func (h *handler) encodeBucketCannedACL(ctx context.Context, bktInfo *data.BucketInfo, settings *data.BucketSettings) *AccessControlPolicy {
res := h.encodePrivateCannedACL(ctx, bktInfo, settings)
switch settings.CannedACL {
case basicACLPublic:
grantee := NewGrantee(acpGroup)
grantee.URI = allUsersGroup
res.AccessControlList = append(res.AccessControlList, &Grant{
Grantee: grantee,
Permission: aclWrite,
})
fallthrough
case basicACLReadOnly:
grantee := NewGrantee(acpGroup)
grantee.URI = allUsersGroup
res.AccessControlList = append(res.AccessControlList, &Grant{
Grantee: grantee,
Permission: aclRead,
})
}
return res
}
func (h *handler) encodePrivateCannedACL(ctx context.Context, bktInfo *data.BucketInfo, settings *data.BucketSettings) *AccessControlPolicy {
ownerDisplayName := bktInfo.Owner.EncodeToString()
ownerEncodedID := ownerDisplayName
if settings.OwnerKey == nil {
h.reqLogger(ctx).Warn(logs.BucketOwnerKeyIsMissing, zap.String("owner", bktInfo.Owner.String()))
} else {
ownerDisplayName = settings.OwnerKey.Address()
ownerEncodedID = hex.EncodeToString(settings.OwnerKey.Bytes())
}
res := &AccessControlPolicy{Owner: Owner{
ID: ownerEncodedID,
DisplayName: ownerDisplayName,
}}
granteeOwner := NewGrantee(acpCanonicalUser)
granteeOwner.ID = ownerEncodedID
granteeOwner.DisplayName = ownerDisplayName
res.AccessControlList = []*Grant{{
Grantee: granteeOwner,
Permission: aclFullControl,
}}
return res
}
 func (h *handler) bearerTokenIssuerKey(ctx context.Context) (*keys.PublicKey, error) {
 box, err := middleware.GetBoxData(ctx)
 if err != nil {
 return nil, err
 }
-var btoken v2acl.BearerToken
-box.Gate.BearerToken.WriteToV2(&btoken)
-key, err := keys.NewPublicKeyFromBytes(btoken.GetSignature().GetKey(), elliptic.P256())
+return getTokenIssuerKey(box)
+}
+func getTokenIssuerKey(box *accessbox.Box) (*keys.PublicKey, error) {
+if box.Gate.BearerToken == nil {
+return nil, stderrors.New("bearer token is missing")
+}
+key, err := keys.NewPublicKeyFromBytes(box.Gate.BearerToken.SigningKeyBytes(), elliptic.P256())
 if err != nil {
 return nil, fmt.Errorf("public key from bytes: %w", err)
 }
@@ -288,6 +361,24 @@ func (h *handler) bearerTokenIssuerKey(ctx context.Context) (*keys.PublicKey, er
 func (h *handler) PutBucketACLHandler(w http.ResponseWriter, r *http.Request) {
 reqInfo := middleware.GetReqInfo(r.Context())
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
h.logAndSendError(w, "could not get bucket info", reqInfo, err)
return
}
settings, err := h.obj.GetBucketSettings(r.Context(), bktInfo)
if err != nil {
h.logAndSendError(w, "couldn't get bucket settings", reqInfo, err)
return
}
if bktInfo.APEEnabled || len(settings.CannedACL) != 0 {
h.putBucketACLAPEHandler(w, r, reqInfo, bktInfo, settings)
return
}
 key, err := h.bearerTokenIssuerKey(r.Context())
 if err != nil {
 h.logAndSendError(w, "couldn't get bearer token issuer key", reqInfo, err)
@@ -308,7 +399,7 @@ func (h *handler) PutBucketACLHandler(w http.ResponseWriter, r *http.Request) {
 return
 }
 } else if err = h.cfg.NewXMLDecoder(r.Body).Decode(list); err != nil {
-h.logAndSendError(w, "could not parse bucket acl", reqInfo, errors.GetAPIError(errors.ErrMalformedXML))
+h.logAndSendError(w, "could not parse bucket acl", reqInfo, fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error()))
 return
 }
@@ -319,12 +410,6 @@ func (h *handler) PutBucketACLHandler(w http.ResponseWriter, r *http.Request) {
 return
 }
-bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
-if err != nil {
-h.logAndSendError(w, "could not get bucket info", reqInfo, err)
-return
-}
 if _, err = h.updateBucketACL(r, astBucket, bktInfo, token); err != nil {
 h.logAndSendError(w, "could not update bucket acl", reqInfo, err)
 return
@@ -332,6 +417,60 @@ func (h *handler) PutBucketACLHandler(w http.ResponseWriter, r *http.Request) {
 w.WriteHeader(http.StatusOK)
 }
func (h *handler) putBucketACLAPEHandler(w http.ResponseWriter, r *http.Request, reqInfo *middleware.ReqInfo, bktInfo *data.BucketInfo, settings *data.BucketSettings) {
ctx := r.Context()
defer func() {
if errBody := r.Body.Close(); errBody != nil {
h.reqLogger(r.Context()).Warn(logs.CouldNotCloseRequestBody, zap.Error(errBody))
}
}()
written, err := io.Copy(io.Discard, r.Body)
if err != nil {
h.logAndSendError(w, "couldn't read request body", reqInfo, err)
return
}
if written != 0 || len(r.Header.Get(api.AmzACL)) == 0 {
h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
return
}
cannedACL, err := parseCannedACL(r.Header)
if err != nil {
h.logAndSendError(w, "could not parse canned ACL", reqInfo, err)
return
}
key, err := h.bearerTokenIssuerKey(ctx)
if err != nil {
h.logAndSendError(w, "couldn't get bearer token issuer key", reqInfo, err)
return
}
chainRules := bucketCannedACLToAPERules(cannedACL, reqInfo, key, bktInfo.CID)
if err = h.ape.SaveACLChains(bktInfo.CID.EncodeToString(), chainRules); err != nil {
h.logAndSendError(w, "failed to add morph rule chains", reqInfo, err)
return
}
settings.CannedACL = cannedACL
sp := &layer.PutSettingsParams{
BktInfo: bktInfo,
Settings: settings,
}
if err = h.obj.PutBucketSettings(ctx, sp); err != nil {
h.logAndSendError(w, "couldn't save bucket settings", reqInfo, err,
zap.String("container_id", bktInfo.CID.EncodeToString()))
return
}
w.WriteHeader(http.StatusOK)
}
 func (h *handler) updateBucketACL(r *http.Request, astChild *ast, bktInfo *data.BucketInfo, sessionToken *session.Container) (bool, error) {
 bucketACL, err := h.obj.GetBucketACL(r.Context(), bktInfo)
 if err != nil {
@@ -380,6 +519,20 @@ func (h *handler) GetObjectACLHandler(w http.ResponseWriter, r *http.Request) {
 return
 }
settings, err := h.obj.GetBucketSettings(r.Context(), bktInfo)
if err != nil {
h.logAndSendError(w, "couldn't get bucket settings", reqInfo, err)
return
}
if bktInfo.APEEnabled || len(settings.CannedACL) != 0 {
if err = middleware.EncodeToResponse(w, h.encodePrivateCannedACL(ctx, bktInfo, settings)); err != nil {
h.logAndSendError(w, "something went wrong", reqInfo, err)
return
}
return
}
 bucketACL, err := h.obj.GetBucketACL(ctx, bktInfo)
 if err != nil {
 h.logAndSendError(w, "could not fetch bucket acl", reqInfo, err)
@@ -394,7 +547,7 @@ func (h *handler) GetObjectACLHandler(w http.ResponseWriter, r *http.Request) {
 objInfo, err := h.obj.GetObjectInfo(ctx, prm)
 if err != nil {
-h.logAndSendError(w, "could not object info", reqInfo, err)
+h.logAndSendError(w, "could not get object info", reqInfo, err)
 return
 }
@@ -406,6 +559,29 @@
 func (h *handler) PutObjectACLHandler(w http.ResponseWriter, r *http.Request) {
 ctx := r.Context()
 reqInfo := middleware.GetReqInfo(ctx)
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
h.logAndSendError(w, "could not get bucket info", reqInfo, err)
return
}
apeEnabled := bktInfo.APEEnabled
if !apeEnabled {
settings, err := h.obj.GetBucketSettings(r.Context(), bktInfo)
if err != nil {
h.logAndSendError(w, "couldn't get bucket settings", reqInfo, err)
return
}
apeEnabled = len(settings.CannedACL) != 0
}
if apeEnabled {
h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
return
}
 versionID := reqInfo.URL.Query().Get(api.QueryVersionID)
 key, err := h.bearerTokenIssuerKey(ctx)
 if err != nil {
@@ -419,12 +595,6 @@ func (h *handler) PutObjectACLHandler(w http.ResponseWriter, r *http.Request) {
 return
 }
-bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
-if err != nil {
-h.logAndSendError(w, "could not get bucket info", reqInfo, err)
-return
-}
 p := &layer.HeadObjectParams{
 BktInfo: bktInfo,
 Object: reqInfo.ObjectName,
@@ -445,7 +615,7 @@ func (h *handler) PutObjectACLHandler(w http.ResponseWriter, r *http.Request) {
 return
 }
 } else if err = h.cfg.NewXMLDecoder(r.Body).Decode(list); err != nil {
-h.logAndSendError(w, "could not parse bucket acl", reqInfo, errors.GetAPIError(errors.ErrMalformedXML))
+h.logAndSendError(w, "could not parse bucket acl", reqInfo, fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error()))
 return
 }
@@ -480,6 +650,48 @@ func (h *handler) PutObjectACLHandler(w http.ResponseWriter, r *http.Request) {
 w.WriteHeader(http.StatusOK)
 }
func (h *handler) GetBucketPolicyStatusHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context())
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
h.logAndSendError(w, "could not get bucket info", reqInfo, err)
return
}
jsonPolicy, err := h.ape.GetBucketPolicy(reqInfo.Namespace, bktInfo.CID)
if err != nil {
if strings.Contains(err.Error(), "not found") {
err = fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrNoSuchBucketPolicy), err.Error())
}
h.logAndSendError(w, "failed to get policy from storage", reqInfo, err)
return
}
var bktPolicy engineiam.Policy
if err = json.Unmarshal(jsonPolicy, &bktPolicy); err != nil {
h.logAndSendError(w, "could not parse bucket policy", reqInfo, err)
return
}
policyStatus := &PolicyStatus{
IsPublic: PolicyStatusIsPublicFalse,
}
for _, st := range bktPolicy.Statement {
// https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html#access-control-block-public-access-policy-status
if _, ok := st.Principal[engineiam.Wildcard]; ok {
policyStatus.IsPublic = PolicyStatusIsPublicTrue
break
}
}
if err = middleware.EncodeToResponse(w, policyStatus); err != nil {
h.logAndSendError(w, "encode and write response", reqInfo, err)
return
}
}
func (h *handler) GetBucketPolicyHandler(w http.ResponseWriter, r *http.Request) { func (h *handler) GetBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context()) reqInfo := middleware.GetReqInfo(r.Context())
@ -489,8 +701,7 @@ func (h *handler) GetBucketPolicyHandler(w http.ResponseWriter, r *http.Request)
return return
} }
resolvedNamespace := h.cfg.ResolveNamespaceAlias(reqInfo.Namespace) jsonPolicy, err := h.ape.GetBucketPolicy(reqInfo.Namespace, bktInfo.CID)
jsonPolicy, err := h.ape.GetPolicy(resolvedNamespace, bktInfo.CID)
if err != nil { if err != nil {
if strings.Contains(err.Error(), "not found") { if strings.Contains(err.Error(), "not found") {
err = fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrNoSuchBucketPolicy), err.Error()) err = fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrNoSuchBucketPolicy), err.Error())
@ -516,16 +727,8 @@ func (h *handler) DeleteBucketPolicyHandler(w http.ResponseWriter, r *http.Reque
return return
} }
resolvedNamespace := h.cfg.ResolveNamespaceAlias(reqInfo.Namespace) chainIDs := []chain.ID{getBucketChainID(chain.S3, bktInfo), getBucketChainID(chain.Ingress, bktInfo)}
if err = h.ape.DeleteBucketPolicy(reqInfo.Namespace, bktInfo.CID, chainIDs); err != nil {
target := engine.NamespaceTarget(resolvedNamespace)
chainID := getBucketChainID(bktInfo)
if err = h.ape.RemoveChain(target, chainID); err != nil {
h.logAndSendError(w, "failed to remove morph rule chain", reqInfo, err)
return
}
if err = h.ape.DeletePolicy(resolvedNamespace, bktInfo.CID); err != nil {
h.logAndSendError(w, "failed to delete policy from storage", reqInfo, err) h.logAndSendError(w, "failed to delete policy from storage", reqInfo, err)
return return
} }
@ -565,15 +768,18 @@ func (h *handler) PutBucketPolicyHandler(w http.ResponseWriter, r *http.Request)
return return
} }
s3Chain, err := engineiam.ConvertToS3Chain(bktPolicy, h.frostfsid) for _, stat := range bktPolicy.Statement {
if err != nil { if len(stat.NotResource) != 0 {
h.logAndSendError(w, "could not convert s3 policy to chain policy", reqInfo, err) h.logAndSendError(w, "policy resource mismatched bucket", reqInfo, errors.GetAPIError(errors.ErrMalformedPolicy))
return return
} }
s3Chain.ID = getBucketChainID(bktInfo)
for _, rule := range s3Chain.Rules { if len(stat.NotPrincipal) != 0 && stat.Effect == engineiam.AllowEffect {
for _, resource := range rule.Resources.Names { h.logAndSendError(w, "invalid NotPrincipal", reqInfo, errors.GetAPIError(errors.ErrMalformedPolicyNotPrincipal))
return
}
for _, resource := range stat.Resource {
if reqInfo.BucketName != strings.Split(strings.TrimPrefix(resource, arnAwsPrefix), "/")[0] { if reqInfo.BucketName != strings.Split(strings.TrimPrefix(resource, arnAwsPrefix), "/")[0] {
h.logAndSendError(w, "policy resource mismatched bucket", reqInfo, errors.GetAPIError(errors.ErrMalformedPolicy)) h.logAndSendError(w, "policy resource mismatched bucket", reqInfo, errors.GetAPIError(errors.ErrMalformedPolicy))
return return
@ -581,22 +787,58 @@ func (h *handler) PutBucketPolicyHandler(w http.ResponseWriter, r *http.Request)
} }
} }
resolvedNamespace := h.cfg.ResolveNamespaceAlias(reqInfo.Namespace) s3Chain, err := engineiam.ConvertToS3Chain(bktPolicy, h.frostfsid)
if err != nil {
target := engine.NamespaceTarget(resolvedNamespace) h.logAndSendError(w, "could not convert s3 policy to chain policy", reqInfo, err)
if err = h.ape.AddChain(target, s3Chain); err != nil {
h.logAndSendError(w, "failed to add morph rule chain", reqInfo, err)
return return
} }
s3Chain.ID = getBucketChainID(chain.S3, bktInfo)
if err = h.ape.PutPolicy(resolvedNamespace, bktInfo.CID, jsonPolicy); err != nil { nativeChain, err := engineiam.ConvertToNativeChain(bktPolicy, h.nativeResolver(reqInfo.Namespace, bktInfo))
h.logAndSendError(w, "failed to save policy to storage", reqInfo, err) if err == nil {
nativeChain.ID = getBucketChainID(chain.Ingress, bktInfo)
} else if !stderrors.Is(err, engineiam.ErrActionsNotApplicable) {
h.logAndSendError(w, "could not convert s3 policy to native chain policy", reqInfo, err)
return
} else {
h.reqLogger(r.Context()).Warn(logs.PolicyCouldntBeConvertedToNativeRules)
}
chainsToSave := []*chain.Chain{s3Chain}
if nativeChain != nil {
chainsToSave = append(chainsToSave, nativeChain)
}
if err = h.ape.PutBucketPolicy(reqInfo.Namespace, bktInfo.CID, jsonPolicy, chainsToSave); err != nil {
h.logAndSendError(w, "failed to update policy in contract", reqInfo, err)
return
}
}
func getBucketChainID(bktInfo *data.BucketInfo) chain.ID {
return chain.ID("bkt" + string(bktInfo.CID[:]))
type nativeResolver struct {
FrostFSID
namespace string
bktInfo *data.BucketInfo
}
func (n *nativeResolver) GetBucketInfo(bucket string) (*engineiam.BucketInfo, error) {
if n.bktInfo.Name != bucket {
return nil, fmt.Errorf("invalid bucket %s: %w", bucket, errors.GetAPIError(errors.ErrMalformedPolicy))
}
return &engineiam.BucketInfo{Namespace: n.namespace, Container: n.bktInfo.CID.EncodeToString()}, nil
}
func (h *handler) nativeResolver(ns string, bktInfo *data.BucketInfo) engineiam.NativeResolver {
return &nativeResolver{
FrostFSID: h.frostfsid,
namespace: ns,
bktInfo: bktInfo,
}
}
func getBucketChainID(prefix chain.Name, bktInfo *data.BucketInfo) chain.ID {
return chain.ID(string(prefix) + ":bkt" + string(bktInfo.CID[:]))
}
func parseACLHeaders(header http.Header, key *keys.PublicKey) (*AccessControlPolicy, error) {
@ -1388,6 +1630,26 @@ func isWriteOperation(op eacl.Operation) bool {
return op == eacl.OperationDelete || op == eacl.OperationPut
}
type access struct {
recipient string
operations []eacl.Operation
}
type accessList struct {
list []access
}
func (c *accessList) addAccess(recipient string, operation eacl.Operation) {
for i, v := range c.list {
if v.recipient == recipient {
c.list[i].operations = append(c.list[i].operations, operation)
return
}
}
c.list = append(c.list, access{recipient, []eacl.Operation{operation}})
}
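The helper above is what lets encodeObjectACL emit grants in a stable order: iterating a Go map yields keys in a randomized order, so the earlier map-based encoding could return the same object's grants in a different sequence on every request, while appending to a slice preserves first-seen order. A minimal standalone sketch of that difference (toy types only, not the gateway's own):

package main

import "fmt"

func main() {
	// Map iteration order is intentionally randomized by the Go runtime, so
	// grants built from this map may print in a different order on each run.
	byRecipient := map[string][]string{
		"owner":         {"FULL_CONTROL"},
		"AllUsersGroup": {"READ"},
	}
	for recipient := range byRecipient {
		fmt.Println("map order not guaranteed:", recipient)
	}

	// An append-only slice, like accessList above, keeps insertion order,
	// so repeated encodings of the same ACL come out in the same sequence.
	type access struct {
		recipient  string
		operations []string
	}
	ordered := []access{
		{recipient: "owner", operations: []string{"FULL_CONTROL"}},
		{recipient: "AllUsersGroup", operations: []string{"READ"}},
	}
	for _, a := range ordered {
		fmt.Println("slice order stable:", a.recipient)
	}
}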
func (h *handler) encodeObjectACL(ctx context.Context, bucketACL *layer.BucketACL, bucketName, objectVersion string) *AccessControlPolicy {
res := &AccessControlPolicy{
Owner: Owner{
@ -1396,7 +1658,7 @@ func (h *handler) encodeObjectACL(ctx context.Context, bucketACL *layer.BucketAC
},
}
m := make(map[string][]eacl.Operation)
m := &accessList{}
astList := tableToAst(bucketACL.EACL, bucketName)
@ -1411,22 +1673,20 @@ func (h *handler) encodeObjectACL(ctx context.Context, bucketACL *layer.BucketAC
}
if len(op.Users) == 0 {
list := append(m[allUsersGroup], op.Op)
m[allUsersGroup] = list
m.addAccess(allUsersGroup, op.Op)
} else {
for _, user := range op.Users {
list := append(m[user], op.Op)
m[user] = list
m.addAccess(user, op.Op)
}
}
}
}
for key, val := range m {
for _, val := range m.list {
permission := aclFullControl
read := true
for op := eacl.OperationGet; op <= eacl.OperationRangeHash; op++ {
if !contains(val, op) && !isWriteOperation(op) {
if !contains(val.operations, op) && !isWriteOperation(op) {
read = false
}
}
@ -1438,12 +1698,12 @@ func (h *handler) encodeObjectACL(ctx context.Context, bucketACL *layer.BucketAC
}
var grantee *Grantee
if key == allUsersGroup {
if val.recipient == allUsersGroup {
grantee = NewGrantee(acpGroup)
grantee.URI = allUsersGroup
} else {
grantee = NewGrantee(acpCanonicalUser)
grantee.ID = key
grantee.ID = val.recipient
}
grant := &Grant{

View file

@ -7,6 +7,7 @@ import (
"crypto/sha256" "crypto/sha256"
"encoding/hex" "encoding/hex"
"encoding/json" "encoding/json"
"encoding/xml"
"fmt" "fmt"
"io" "io"
"net/http" "net/http"
@ -16,6 +17,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
s3errors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors" s3errors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
@ -24,6 +26,7 @@ import (
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id" oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
engineiam "git.frostfs.info/TrueCloudLab/policy-engine/iam" engineiam "git.frostfs.info/TrueCloudLab/policy-engine/iam"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys" "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -1300,17 +1303,143 @@ func TestBucketAclToAst(t *testing.T) {
func TestPutBucketACL(t *testing.T) {
tc := prepareHandlerContext(t)
tc.config.aclEnabled = true
bktName := "bucket-for-acl"
box, _ := createAccessBox(t)
bktInfo := createBucket(t, tc, bktName, box)
info := createBucket(tc, bktName)
header := map[string]string{api.AmzACL: "public-read"}
putBucketACL(t, tc, bktName, box, header)
putBucketACL(tc, bktName, info.Box, header)
header = map[string]string{api.AmzACL: "private"}
putBucketACL(t, tc, bktName, box, header)
putBucketACL(tc, bktName, info.Box, header)
checkLastRecords(t, tc, bktInfo, eacl.ActionDeny)
checkLastRecords(t, tc, info.BktInfo, eacl.ActionDeny)
}
func TestPutBucketAPE(t *testing.T) {
hc := prepareHandlerContext(t)
bktName := "bucket-for-acl-ape"
info := createBucket(hc, bktName)
_, err := hc.tp.ContainerEACL(hc.Context(), layer.PrmContainerEACL{ContainerID: info.BktInfo.CID})
require.ErrorContains(t, err, "not found")
chains, err := hc.h.ape.(*apeMock).ListChains(engine.ContainerTarget(info.BktInfo.CID.EncodeToString()))
require.NoError(t, err)
require.Len(t, chains, 2)
}
func TestPutObjectACLErrorAPE(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "bucket-for-acl-ape", "object"
info := createBucket(hc, bktName)
putObjectWithHeadersAssertS3Error(hc, bktName, objName, map[string]string{api.AmzACL: basicACLPublic}, s3errors.ErrAccessControlListNotSupported)
putObjectWithHeaders(hc, bktName, objName, map[string]string{api.AmzACL: basicACLPrivate}) // only `private` canned acl is allowed, that is actually ignored
putObjectWithHeaders(hc, bktName, objName, nil)
aclBody := &AccessControlPolicy{}
putObjectACLAssertS3Error(hc, bktName, objName, info.Box, nil, aclBody, s3errors.ErrAccessControlListNotSupported)
aclRes := getObjectACL(hc, bktName, objName)
checkPrivateACL(t, aclRes, info.Key.PublicKey())
}
func TestCreateObjectACLErrorAPE(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName, objNameCopy := "bucket-for-acl-ape", "object", "copy"
createBucket(hc, bktName)
putObject(hc, bktName, objName)
copyObject(hc, bktName, objName, objNameCopy, CopyMeta{Headers: map[string]string{api.AmzACL: basicACLPublic}}, http.StatusBadRequest)
copyObject(hc, bktName, objName, objNameCopy, CopyMeta{Headers: map[string]string{api.AmzACL: basicACLPrivate}}, http.StatusOK)
createMultipartUploadAssertS3Error(hc, bktName, objName, map[string]string{api.AmzACL: basicACLPublic}, s3errors.ErrAccessControlListNotSupported)
createMultipartUpload(hc, bktName, objName, map[string]string{api.AmzACL: basicACLPrivate})
}
func TestPutObjectACLBackwardCompatibility(t *testing.T) {
hc := prepareHandlerContext(t)
hc.config.aclEnabled = true
bktName, objName := "bucket-for-acl-ape", "object"
info := createBucket(hc, bktName)
putObjectWithHeadersBase(hc, bktName, objName, map[string]string{api.AmzACL: basicACLPrivate}, info.Box, nil)
putObjectWithHeadersBase(hc, bktName, objName, map[string]string{api.AmzACL: basicACLPublic}, info.Box, nil)
aclRes := getObjectACL(hc, bktName, objName)
require.Len(t, aclRes.AccessControlList, 2)
require.Equal(t, hex.EncodeToString(info.Key.PublicKey().Bytes()), aclRes.AccessControlList[0].Grantee.ID)
require.Equal(t, aclFullControl, aclRes.AccessControlList[0].Permission)
require.Equal(t, allUsersGroup, aclRes.AccessControlList[1].Grantee.URI)
require.Equal(t, aclFullControl, aclRes.AccessControlList[1].Permission)
aclBody := &AccessControlPolicy{}
putObjectACLBase(hc, bktName, objName, info.Box, nil, aclBody)
}
func TestBucketACLAPE(t *testing.T) {
hc := prepareHandlerContext(t)
bktName := "bucket-for-acl-ape"
info := createBucket(hc, bktName)
aclBody := &AccessControlPolicy{}
putBucketACLAssertS3Error(hc, bktName, info.Box, nil, aclBody, s3errors.ErrAccessControlListNotSupported)
aclRes := getBucketACL(hc, bktName)
checkPrivateACL(t, aclRes, info.Key.PublicKey())
putBucketACL(hc, bktName, info.Box, map[string]string{api.AmzACL: basicACLPrivate})
aclRes = getBucketACL(hc, bktName)
checkPrivateACL(t, aclRes, info.Key.PublicKey())
putBucketACL(hc, bktName, info.Box, map[string]string{api.AmzACL: basicACLReadOnly})
aclRes = getBucketACL(hc, bktName)
checkPublicReadACL(t, aclRes, info.Key.PublicKey())
putBucketACL(hc, bktName, info.Box, map[string]string{api.AmzACL: basicACLPublic})
aclRes = getBucketACL(hc, bktName)
checkPublicReadWriteACL(t, aclRes, info.Key.PublicKey())
}
func checkPrivateACL(t *testing.T, aclRes *AccessControlPolicy, ownerKey *keys.PublicKey) {
checkACLOwner(t, aclRes, ownerKey, 1)
}
func checkPublicReadACL(t *testing.T, aclRes *AccessControlPolicy, ownerKey *keys.PublicKey) {
checkACLOwner(t, aclRes, ownerKey, 2)
require.Equal(t, allUsersGroup, aclRes.AccessControlList[1].Grantee.URI)
require.Equal(t, aclRead, aclRes.AccessControlList[1].Permission)
}
func checkPublicReadWriteACL(t *testing.T, aclRes *AccessControlPolicy, ownerKey *keys.PublicKey) {
checkACLOwner(t, aclRes, ownerKey, 3)
require.Equal(t, allUsersGroup, aclRes.AccessControlList[1].Grantee.URI)
require.Equal(t, aclWrite, aclRes.AccessControlList[1].Permission)
require.Equal(t, allUsersGroup, aclRes.AccessControlList[2].Grantee.URI)
require.Equal(t, aclRead, aclRes.AccessControlList[2].Permission)
}
func checkACLOwner(t *testing.T, aclRes *AccessControlPolicy, ownerKey *keys.PublicKey, ln int) {
ownerIDStr := hex.EncodeToString(ownerKey.Bytes())
ownerNameStr := ownerKey.Address()
require.Equal(t, ownerIDStr, aclRes.Owner.ID)
require.Equal(t, ownerNameStr, aclRes.Owner.DisplayName)
require.Len(t, aclRes.AccessControlList, ln)
require.Equal(t, ownerIDStr, aclRes.AccessControlList[0].Grantee.ID)
require.Equal(t, ownerNameStr, aclRes.AccessControlList[0].Grantee.DisplayName)
require.Equal(t, aclFullControl, aclRes.AccessControlList[0].Permission)
} }
func TestBucketPolicy(t *testing.T) {
@ -1322,6 +1451,7 @@ func TestBucketPolicy(t *testing.T) {
getBucketPolicy(hc, bktName, s3errors.ErrNoSuchBucketPolicy)
newPolicy := engineiam.Policy{
Version: "2012-10-17",
Statement: []engineiam.Statement{{
Principal: map[engineiam.PrincipalType][]string{engineiam.Wildcard: {}},
Effect: engineiam.DenyEffect,
@ -1339,6 +1469,71 @@ func TestBucketPolicy(t *testing.T) {
require.Equal(t, newPolicy, bktPolicy)
}
func TestBucketPolicyStatus(t *testing.T) {
hc := prepareHandlerContext(t)
bktName := "bucket-for-policy"
createTestBucket(hc, bktName)
getBucketPolicy(hc, bktName, s3errors.ErrNoSuchBucketPolicy)
newPolicy := engineiam.Policy{
Version: "2012-10-17",
Statement: []engineiam.Statement{{
NotPrincipal: engineiam.Principal{engineiam.Wildcard: {}},
Effect: engineiam.AllowEffect,
Action: engineiam.Action{"s3:PutObject"},
Resource: engineiam.Resource{arnAwsPrefix + bktName + "/*"},
}},
}
putBucketPolicy(hc, bktName, newPolicy, s3errors.ErrMalformedPolicyNotPrincipal)
newPolicy.Statement[0].NotPrincipal = nil
newPolicy.Statement[0].Principal = map[engineiam.PrincipalType][]string{engineiam.Wildcard: {}}
putBucketPolicy(hc, bktName, newPolicy)
bktPolicyStatus := getBucketPolicyStatus(hc, bktName)
require.True(t, PolicyStatusIsPublicTrue == bktPolicyStatus.IsPublic)
key, err := keys.NewPrivateKey()
require.NoError(t, err)
hc.Handler().frostfsid.(*frostfsidMock).data["devenv"] = key.PublicKey()
newPolicy.Statement[0].Principal = map[engineiam.PrincipalType][]string{engineiam.AWSPrincipalType: {"arn:aws:iam:::user/devenv"}}
putBucketPolicy(hc, bktName, newPolicy)
bktPolicyStatus = getBucketPolicyStatus(hc, bktName)
require.True(t, PolicyStatusIsPublicFalse == bktPolicyStatus.IsPublic)
}
func TestDeleteBucketWithPolicy(t *testing.T) {
hc := prepareHandlerContext(t)
bktName := "bucket-for-policy"
bi := createTestBucket(hc, bktName)
newPolicy := engineiam.Policy{
Version: "2012-10-17",
Statement: []engineiam.Statement{{
Principal: map[engineiam.PrincipalType][]string{engineiam.Wildcard: {}},
Effect: engineiam.AllowEffect,
Action: engineiam.Action{"s3:PutObject"},
Resource: engineiam.Resource{"arn:aws:s3:::bucket-for-policy/*"},
}},
}
putBucketPolicy(hc, bktName, newPolicy)
require.Len(t, hc.h.ape.(*apeMock).policyMap, 1)
require.Len(t, hc.h.ape.(*apeMock).chainMap[engine.ContainerTarget(bi.CID.EncodeToString())], 4)
deleteBucket(t, hc, bktName, http.StatusNoContent)
require.Empty(t, hc.h.ape.(*apeMock).policyMap)
chains, err := hc.h.ape.(*apeMock).ListChains(engine.ContainerTarget(bi.CID.EncodeToString()))
require.NoError(t, err)
require.Empty(t, chains)
}
func TestBucketPolicyUnmarshal(t *testing.T) {
for _, tc := range []struct {
name string
@ -1429,6 +1624,22 @@ func getBucketPolicy(hc *handlerContext, bktName string, errCode ...s3errors.Err
return policy
}
func getBucketPolicyStatus(hc *handlerContext, bktName string, errCode ...s3errors.ErrorCode) PolicyStatus {
w, r := prepareTestRequest(hc, bktName, "", nil)
hc.Handler().GetBucketPolicyStatusHandler(w, r)
var policyStatus PolicyStatus
if len(errCode) == 0 {
assertStatus(hc.t, w, http.StatusOK)
err := xml.NewDecoder(w.Result().Body).Decode(&policyStatus)
require.NoError(hc.t, err)
} else {
assertS3Error(hc.t, w, s3errors.GetAPIError(errCode[0]))
}
return policyStatus
}
func putBucketPolicy(hc *handlerContext, bktName string, bktPolicy engineiam.Policy, errCode ...s3errors.ErrorCode) {
body, err := json.Marshal(bktPolicy)
require.NoError(hc.t, err)
@ -1488,13 +1699,26 @@ func createAccessBox(t *testing.T) (*accessbox.Box, *keys.PrivateKey) {
return box, key
}
func createBucket(t *testing.T, hc *handlerContext, bktName string, box *accessbox.Box) *data.BucketInfo {
type createBucketInfo struct {
BktInfo *data.BucketInfo
Box *accessbox.Box
Key *keys.PrivateKey
}
func createBucket(hc *handlerContext, bktName string) *createBucketInfo {
box, key := createAccessBox(hc.t)
w := createBucketBase(hc, bktName, box)
assertStatus(t, w, http.StatusOK)
assertStatus(hc.t, w, http.StatusOK)
bktInfo, err := hc.Layer().GetBucketInfo(hc.Context(), bktName)
require.NoError(t, err)
require.NoError(hc.t, err)
return bktInfo
return &createBucketInfo{
BktInfo: bktInfo,
Box: box,
Key: key,
}
}
func createBucketAssertS3Error(hc *handlerContext, bktName string, box *accessbox.Box, code s3errors.ErrorCode) {
@ -1504,19 +1728,99 @@ func createBucketAssertS3Error(hc *handlerContext, bktName string, box *accessbo
func createBucketBase(hc *handlerContext, bktName string, box *accessbox.Box) *httptest.ResponseRecorder {
w, r := prepareTestRequest(hc, bktName, "", nil)
ctx := middleware.SetBoxData(r.Context(), box)
ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
hc.Handler().CreateBucketHandler(w, r)
return w
}
func putBucketACL(t *testing.T, tc *handlerContext, bktName string, box *accessbox.Box, header map[string]string) {
w, r := prepareTestRequest(tc, bktName, "", nil)
func putBucketACL(hc *handlerContext, bktName string, box *accessbox.Box, header map[string]string) {
w := putBucketACLBase(hc, bktName, box, header, nil)
assertStatus(hc.t, w, http.StatusOK)
}
func putBucketACLAssertS3Error(hc *handlerContext, bktName string, box *accessbox.Box, header map[string]string, body *AccessControlPolicy, code s3errors.ErrorCode) {
w := putBucketACLBase(hc, bktName, box, header, body)
assertS3Error(hc.t, w, s3errors.GetAPIError(code))
}
func putBucketACLBase(hc *handlerContext, bktName string, box *accessbox.Box, header map[string]string, body *AccessControlPolicy) *httptest.ResponseRecorder {
w, r := prepareTestRequest(hc, bktName, "", body)
for key, val := range header {
r.Header.Set(key, val)
}
ctx := middleware.SetBoxData(r.Context(), box)
ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
tc.Handler().PutBucketACLHandler(w, r)
hc.Handler().PutBucketACLHandler(w, r)
assertStatus(t, w, http.StatusOK)
return w
}
func getBucketACL(hc *handlerContext, bktName string) *AccessControlPolicy {
w := getBucketACLBase(hc, bktName)
assertStatus(hc.t, w, http.StatusOK)
res := &AccessControlPolicy{}
parseTestResponse(hc.t, w, res)
return res
}
func getBucketACLBase(hc *handlerContext, bktName string) *httptest.ResponseRecorder {
w, r := prepareTestRequest(hc, bktName, "", nil)
hc.Handler().GetBucketACLHandler(w, r)
return w
}
func putObjectACLAssertS3Error(hc *handlerContext, bktName, objName string, box *accessbox.Box, header map[string]string, body *AccessControlPolicy, code s3errors.ErrorCode) {
w := putObjectACLBase(hc, bktName, objName, box, header, body)
assertS3Error(hc.t, w, s3errors.GetAPIError(code))
}
func putObjectACLBase(hc *handlerContext, bktName, objName string, box *accessbox.Box, header map[string]string, body *AccessControlPolicy) *httptest.ResponseRecorder {
w, r := prepareTestRequest(hc, bktName, objName, body)
for key, val := range header {
r.Header.Set(key, val)
}
ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
hc.Handler().PutObjectACLHandler(w, r)
return w
}
func getObjectACL(hc *handlerContext, bktName, objName string) *AccessControlPolicy {
w := getObjectACLBase(hc, bktName, objName)
assertStatus(hc.t, w, http.StatusOK)
res := &AccessControlPolicy{}
parseTestResponse(hc.t, w, res)
return res
}
func getObjectACLBase(hc *handlerContext, bktName, objName string) *httptest.ResponseRecorder {
w, r := prepareTestRequest(hc, bktName, objName, nil)
hc.Handler().GetObjectACLHandler(w, r)
return w
}
func putObjectWithHeaders(hc *handlerContext, bktName, objName string, headers map[string]string) http.Header {
w := putObjectWithHeadersBase(hc, bktName, objName, headers, nil, nil)
assertStatus(hc.t, w, http.StatusOK)
return w.Header()
}
func putObjectWithHeadersAssertS3Error(hc *handlerContext, bktName, objName string, headers map[string]string, code s3errors.ErrorCode) {
w := putObjectWithHeadersBase(hc, bktName, objName, headers, nil, nil)
assertS3Error(hc.t, w, s3errors.GetAPIError(code))
}
func putObjectWithHeadersBase(hc *handlerContext, bktName, objName string, headers map[string]string, box *accessbox.Box, data []byte) *httptest.ResponseRecorder {
body := bytes.NewReader(data)
w, r := prepareTestPayloadRequest(hc, bktName, objName, body)
for k, v := range headers {
r.Header.Set(k, v)
}
ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
hc.Handler().PutObjectHandler(w, r)
return w
} }

View file

@ -15,7 +15,6 @@ import (
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id" cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain" "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
"go.uber.org/zap" "go.uber.org/zap"
) )
@ -47,35 +46,21 @@ type (
IsResolveListAllow() bool
BypassContentEncodingInChunks() bool
MD5Enabled() bool
ResolveNamespaceAlias(namespace string) string
ACLEnabled() bool
}
FrostFSID interface {
GetUserAddress(account, user string) (string, error)
GetUserKey(account, name string) (string, error)
}
// APE is Access Policy Engine that needs to save policy and acl info to different places.
APE interface {
MorphRuleChainStorage
PolicyStorage
PutBucketPolicy(ns string, cnrID cid.ID, policy []byte, chains []*chain.Chain) error
DeleteBucketPolicy(ns string, cnrID cid.ID, chainIDs []chain.ID) error
GetBucketPolicy(ns string, cnrID cid.ID) ([]byte, error)
SaveACLChains(cid string, chains []*chain.Chain) error
}
// MorphRuleChainStorage is a similar to engine.MorphRuleChainStorage
// but doesn't know anything about tx.
MorphRuleChainStorage interface {
AddChain(target engine.Target, c *chain.Chain) error
RemoveChain(target engine.Target, chainID chain.ID) error
ListChains(target engine.Target) ([]*chain.Chain, error)
}
// PolicyStorage is interface to save intact initial user provided policy.
PolicyStorage interface {
PutPolicy(namespace string, cnrID cid.ID, policy []byte) error
GetPolicy(namespace string, cnrID cid.ID) ([]byte, error)
DeletePolicy(namespace string, cnrID cid.ID) error
}
frostfsIDDisabled struct{}
)
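For reference, the consolidated APE interface above can be satisfied by a trivial stub. This is a hypothetical sketch (the apeStub name is invented; it assumes the cid and chain imports already used in this file), not the gateway's own test mock:

// apeStub is a hypothetical no-op implementation of the APE interface above;
// a starting point for tests that don't care about policy storage.
type apeStub struct{}

func (apeStub) PutBucketPolicy(ns string, cnrID cid.ID, policy []byte, chains []*chain.Chain) error {
	return nil
}

func (apeStub) DeleteBucketPolicy(ns string, cnrID cid.ID, chainIDs []chain.ID) error {
	return nil
}

func (apeStub) GetBucketPolicy(ns string, cnrID cid.ID) ([]byte, error) {
	return nil, nil
}

func (apeStub) SaveACLChains(cid string, chains []*chain.Chain) error {
	return nil
}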
var _ api.Handler = (*handler)(nil)
@ -89,10 +74,8 @@ func New(log *zap.Logger, obj layer.Client, notificator Notificator, cfg Config,
return nil, errors.New("empty logger") return nil, errors.New("empty logger")
case storage == nil: case storage == nil:
return nil, errors.New("empty policy storage") return nil, errors.New("empty policy storage")
} case ffsid == nil:
return nil, errors.New("empty frostfsid")
if ffsid == nil {
ffsid = frostfsIDDisabled{}
} }
if !cfg.NotificatorEnabled() {
@ -111,10 +94,6 @@ func New(log *zap.Logger, obj layer.Client, notificator Notificator, cfg Config,
}, nil
}
func (f frostfsIDDisabled) GetUserAddress(_, _ string) (string, error) {
return "", errors.New("frostfsid disabled")
}
// pickCopiesNumbers chooses the return values following this logic:
// 1) array of copies numbers sent in request's header has the highest priority.
// 2) array of copies numbers with corresponding location constraint provided in the config file.
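The priority described in this comment can be restated as a small standalone sketch. The helper and parameter names below are hypothetical, not the gateway's actual pickCopiesNumbers, and the final fallback to a configured default is an assumption based on the truncated comment:

// pickCopies is illustrative only: header values win, then the copies numbers
// configured for the matching location constraint, then an assumed default.
func pickCopies(fromHeader, fromLocationConstraint, defaultCopies []uint32) []uint32 {
	if len(fromHeader) != 0 {
		return fromHeader
	}
	if len(fromLocationConstraint) != 0 {
		return fromLocationConstraint
	}
	return defaultCopies
}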

View file

@ -51,7 +51,7 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
ctx = r.Context()
reqInfo = middleware.GetReqInfo(ctx)
containsACL = containsACLHeaders(r)
cannedACLStatus = aclHeadersStatus(r)
)
src := r.Header.Get(api.AmzCopySource)
@ -93,7 +93,14 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
if containsACL {
apeEnabled := dstBktInfo.APEEnabled || settings.CannedACL != ""
if apeEnabled && cannedACLStatus == aclStatusYes {
h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
return
}
needUpdateEACLTable := !(apeEnabled || cannedACLStatus == aclStatusNo)
if needUpdateEACLTable {
if sessionTokenEACL, err = getSessionTokenSetEACL(ctx); err != nil {
h.logAndSendError(w, "could not get eacl session token from a box", reqInfo, err)
return
@ -129,15 +136,15 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
}
var dstSize uint64
if srcSize, err := layer.GetObjectSize(srcObjInfo); err != nil {
srcSize, err := layer.GetObjectSize(srcObjInfo)
if err != nil {
h.logAndSendError(w, "failed to get source object size", reqInfo, err)
return
} else if srcSize > layer.UploadMaxSize { // https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html
h.logAndSendError(w, "too bid object to copy with single copy operation, use multipart upload copy instead", reqInfo, errors.GetAPIError(errors.ErrInvalidRequestLargeCopy))
return
} else {
dstSize = srcSize
}
dstSize = srcSize
args, err := parseCopyObjectArgs(r.Header)
if err != nil {
@ -161,8 +168,8 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
} else {
tagPrm := &layer.GetObjectTaggingParams{
tagPrm := &data.GetObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{
ObjectVersion: &data.ObjectVersion{
BktInfo: srcObjPrm.BktInfo,
ObjectName: srcObject,
VersionID: srcObjInfo.VersionID(),
@ -232,7 +239,7 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
if containsACL {
if needUpdateEACLTable {
newEaclTable, err := h.getNewEAclTable(r, dstBktInfo, dstObjInfo)
if err != nil {
h.logAndSendError(w, "could not get new eacl table", reqInfo, err)
@ -252,8 +259,8 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
}
if tagSet != nil {
tagPrm := &layer.PutObjectTaggingParams{
tagPrm := &data.PutObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{
ObjectVersion: &data.ObjectVersion{
BktInfo: dstBktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: dstObjInfo.VersionID(),

View file

@ -11,9 +11,11 @@ import (
"testing" "testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -22,6 +24,7 @@ type CopyMeta struct {
Tags map[string]string
MetadataDirective string
Metadata map[string]string
Headers map[string]string
}
func TestCopyWithTaggingDirective(t *testing.T) {
@ -279,28 +282,33 @@ func copyObject(hc *handlerContext, bktName, fromObject, toObject string, copyMe
}
r.Header.Set(api.AmzTagging, tagsQuery.Encode())
for key, val := range copyMeta.Headers {
r.Header.Set(key, val)
}
hc.Handler().CopyObjectHandler(w, r)
assertStatus(hc.t, w, statusCode)
}
func putObjectTagging(t *testing.T, tc *handlerContext, bktName, objName string, tags map[string]string) {
body := &Tagging{
body := &data.Tagging{
TagSet: make([]Tag, 0, len(tags)),
TagSet: make([]data.Tag, 0, len(tags)),
}
for key, val := range tags {
body.TagSet = append(body.TagSet, Tag{
body.TagSet = append(body.TagSet, data.Tag{
Key: key,
Value: val,
})
}
w, r := prepareTestRequest(tc, bktName, objName, body)
middleware.GetReqInfo(r.Context()).Tagging = body
tc.Handler().PutObjectTaggingHandler(w, r)
assertStatus(t, w, http.StatusOK)
}
func getObjectTagging(t *testing.T, tc *handlerContext, bktName, objName, version string) *Tagging {
func getObjectTagging(t *testing.T, tc *handlerContext, bktName, objName, version string) *data.Tagging {
query := make(url.Values)
query.Add(api.QueryVersionID, version)
@ -308,7 +316,7 @@ func getObjectTagging(t *testing.T, tc *handlerContext, bktName, objName, versio
tc.Handler().GetObjectTaggingHandler(w, r)
assertStatus(t, w, http.StatusOK)
tagging := &Tagging{}
tagging := &data.Tagging{}
err := xml.NewDecoder(w.Result().Body).Decode(tagging)
require.NoError(t, err)
return tagging

View file

@ -66,7 +66,10 @@ func (h *handler) PutBucketCorsHandler(w http.ResponseWriter, r *http.Request) {
return
}
middleware.WriteSuccessResponseHeadersOnly(w)
if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
}
func (h *handler) DeleteBucketCorsHandler(w http.ResponseWriter, r *http.Request) {
@ -200,7 +203,10 @@ func (h *handler) Preflight(w http.ResponseWriter, r *http.Request) {
if o != wildcard {
w.Header().Set(api.AccessControlAllowCredentials, "true")
}
middleware.WriteSuccessResponseHeadersOnly(w)
if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
return
}
}

View file

@ -23,14 +23,14 @@ func TestCORSOriginWildcard(t *testing.T) {
bktName := "bucket-for-cors" bktName := "bucket-for-cors"
box, _ := createAccessBox(t) box, _ := createAccessBox(t)
w, r := prepareTestRequest(hc, bktName, "", nil) w, r := prepareTestRequest(hc, bktName, "", nil)
ctx := middleware.SetBoxData(r.Context(), box) ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx) r = r.WithContext(ctx)
r.Header.Add(api.AmzACL, "public-read") r.Header.Add(api.AmzACL, "public-read")
hc.Handler().CreateBucketHandler(w, r) hc.Handler().CreateBucketHandler(w, r)
assertStatus(t, w, http.StatusOK) assertStatus(t, w, http.StatusOK)
w, r = prepareTestPayloadRequest(hc, bktName, "", strings.NewReader(body)) w, r = prepareTestPayloadRequest(hc, bktName, "", strings.NewReader(body))
ctx = middleware.SetBoxData(r.Context(), box) ctx = middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx) r = r.WithContext(ctx)
hc.Handler().PutBucketCorsHandler(w, r) hc.Handler().PutBucketCorsHandler(w, r)
assertStatus(t, w, http.StatusOK) assertStatus(t, w, http.StatusOK)

View file

@ -2,6 +2,7 @@ package handler
import (
"encoding/xml"
"fmt"
"net/http"
"strconv"
"strings"
@ -15,8 +16,8 @@ import (
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status" apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id" oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"go.uber.org/zap" "go.uber.org/zap"
"go.uber.org/zap/zapcore"
) )
// limitation of AWS https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
@ -179,7 +180,7 @@ func (h *handler) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Re
// Unmarshal list of keys to be deleted.
requested := &DeleteObjectsRequest{}
if err := h.cfg.NewXMLDecoder(r.Body).Decode(requested); err != nil {
h.logAndSendError(w, "couldn't decode body", reqInfo, errors.GetAPIError(errors.ErrMalformedXML))
h.logAndSendError(w, "couldn't decode body", reqInfo, fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error()))
return
}
@ -216,21 +217,14 @@ func (h *handler) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Re
return
}
marshaler := zapcore.ArrayMarshalerFunc(func(encoder zapcore.ArrayEncoder) error {
for _, obj := range toRemove {
encoder.AppendString(obj.String())
}
return nil
})
p := &layer.DeleteObjectParams{
BktInfo: bktInfo,
Objects: toRemove,
Settings: bktSettings,
IsMultiple: true,
}
deletedObjects := h.obj.DeleteObjects(ctx, p)
var errs []error
for _, obj := range deletedObjects {
if obj.Error != nil {
code := "BadRequest"
@ -243,7 +237,6 @@ func (h *handler) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Re
Key: obj.Name,
VersionID: obj.VersionID,
})
errs = append(errs, obj.Error)
} else if !requested.Quiet {
deletedObj := DeletedObject{
ObjectIdentifier: ObjectIdentifier{
@ -258,16 +251,9 @@ func (h *handler) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Re
response.DeletedObjects = append(response.DeletedObjects, deletedObj)
}
}
if len(errs) != 0 {
fields := []zap.Field{
zap.Array("objects", marshaler),
zap.Errors("errors", errs),
}
h.reqLogger(ctx).Error(logs.CouldntDeleteObjects, fields...)
}
if err = middleware.EncodeToResponse(w, response); err != nil {
h.logAndSendError(w, "could not write response", reqInfo, err, zap.Array("objects", marshaler))
h.logAndSendError(w, "could not write response", reqInfo, err)
return
}
}
@ -293,5 +279,17 @@ func (h *handler) DeleteBucketHandler(w http.ResponseWriter, r *http.Request) {
}); err != nil {
h.logAndSendError(w, "couldn't delete bucket", reqInfo, err)
}
chainIDs := []chain.ID{
getBucketChainID(chain.S3, bktInfo),
getBucketChainID(chain.Ingress, bktInfo),
getBucketCannedChainID(chain.S3, bktInfo.CID),
getBucketCannedChainID(chain.Ingress, bktInfo.CID),
}
if err = h.ape.DeleteBucketPolicy(reqInfo.Namespace, bktInfo.CID, chainIDs); err != nil {
h.logAndSendError(w, "failed to delete policy from storage", reqInfo, err)
return
}
w.WriteHeader(http.StatusNoContent)
}

View file

@ -168,7 +168,7 @@ func TestDeleteDeletedObject(t *testing.T) {
})
t.Run("versioned bucket not found obj", func(t *testing.T) {
bktName, objName := "bucket-versioned-for-removal", "object-to-delete"
bktName, objName := "bucket-versioned-for-removal-not-found", "object-to-delete"
_, objInfo := createVersionedBucketAndObject(t, tc, bktName, objName)
versionID, isDeleteMarker := deleteObject(t, tc, bktName, objName, objInfo.VersionID())

View file

@ -14,6 +14,7 @@ import (
"testing" "testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -234,24 +235,33 @@ func multipartUpload(hc *handlerContext, bktName, objName string, headers map[st
}
func createMultipartUploadEncrypted(hc *handlerContext, bktName, objName string, headers map[string]string) *InitiateMultipartUploadResponse {
return createMultipartUploadBase(hc, bktName, objName, true, headers)
return createMultipartUploadOkBase(hc, bktName, objName, true, headers)
}
func createMultipartUpload(hc *handlerContext, bktName, objName string, headers map[string]string) *InitiateMultipartUploadResponse {
return createMultipartUploadBase(hc, bktName, objName, false, headers)
return createMultipartUploadOkBase(hc, bktName, objName, false, headers)
}
func createMultipartUploadBase(hc *handlerContext, bktName, objName string, encrypted bool, headers map[string]string) *InitiateMultipartUploadResponse {
func createMultipartUploadOkBase(hc *handlerContext, bktName, objName string, encrypted bool, headers map[string]string) *InitiateMultipartUploadResponse {
w := createMultipartUploadBase(hc, bktName, objName, encrypted, headers)
multipartInitInfo := &InitiateMultipartUploadResponse{}
readResponse(hc.t, w, http.StatusOK, multipartInitInfo)
return multipartInitInfo
}
func createMultipartUploadAssertS3Error(hc *handlerContext, bktName, objName string, headers map[string]string, code errors.ErrorCode) {
w := createMultipartUploadBase(hc, bktName, objName, false, headers)
assertS3Error(hc.t, w, errors.GetAPIError(code))
}
func createMultipartUploadBase(hc *handlerContext, bktName, objName string, encrypted bool, headers map[string]string) *httptest.ResponseRecorder {
w, r := prepareTestRequest(hc, bktName, objName, nil)
if encrypted {
setEncryptHeaders(r)
}
setHeaders(r, headers)
hc.Handler().CreateMultipartUploadHandler(w, r)
multipartInitInfo := &InitiateMultipartUploadResponse{}
readResponse(hc.t, w, http.StatusOK, multipartInitInfo)
return multipartInitInfo
return w
}
func completeMultipartUpload(hc *handlerContext, bktName, objName, uploadID string, partsETags []string) {

View file

@ -184,7 +184,7 @@ func (h *handler) GetObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
t := &layer.ObjectVersion{
t := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: info.Name,
VersionID: info.VersionID(),

View file

@ -4,8 +4,10 @@ import (
"bytes" "bytes"
"context" "context"
"crypto/rand" "crypto/rand"
"encoding/hex"
"encoding/xml" "encoding/xml"
"errors" "errors"
"fmt"
"io" "io"
"net/http" "net/http"
"net/http/httptest" "net/http/httptest"
@ -21,7 +23,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/resolver" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/pkg/service/tree" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/pkg/service/tree"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id" cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
@ -71,6 +72,7 @@ type configMock struct {
defaultCopiesNumbers []uint32
bypassContentEncodingInChunks bool
md5Enabled bool
aclEnabled bool
}
func (c *configMock) DefaultPlacementPolicy(_ string) netmap.PlacementPolicy {
@ -122,6 +124,10 @@ func (c *configMock) MD5Enabled() bool {
return c.md5Enabled
}
func (c *configMock) ACLEnabled() bool {
return c.aclEnabled
}
func (c *configMock) ResolveNamespaceAlias(ns string) string {
return ns
}
@ -177,6 +183,7 @@ func prepareHandlerContextBase(t *testing.T, cacheCfg *layer.CachesConfig) *hand
obj: layer.NewLayer(l, tp, layerCfg),
cfg: cfg,
ape: newAPEMock(),
frostfsid: newFrostfsIDMock(),
}
return &handlerContext{
@ -185,7 +192,7 @@ func prepareHandlerContextBase(t *testing.T, cacheCfg *layer.CachesConfig) *hand
h: h,
tp: tp,
tree: treeMock,
context: middleware.SetBoxData(context.Background(), newTestAccessBox(t, key)),
context: middleware.SetBox(context.Background(), &middleware.Box{AccessBox: newTestAccessBox(t, key)}),
config: cfg,
layerFeatures: features,
@ -252,8 +259,40 @@ func (a *apeMock) PutPolicy(namespace string, cnrID cid.ID, policy []byte) error
return nil
}
func (a *apeMock) GetPolicy(namespace string, cnrID cid.ID) ([]byte, error) {
policy, ok := a.policyMap[namespace+cnrID.EncodeToString()]
func (a *apeMock) DeletePolicy(namespace string, cnrID cid.ID) error {
delete(a.policyMap, namespace+cnrID.EncodeToString())
return nil
}
func (a *apeMock) PutBucketPolicy(ns string, cnrID cid.ID, policy []byte, chain []*chain.Chain) error {
if err := a.PutPolicy(ns, cnrID, policy); err != nil {
return err
}
for i := range chain {
if err := a.AddChain(engine.ContainerTarget(cnrID.EncodeToString()), chain[i]); err != nil {
return err
}
}
return nil
}
func (a *apeMock) DeleteBucketPolicy(ns string, cnrID cid.ID, chainIDs []chain.ID) error {
if err := a.DeletePolicy(ns, cnrID); err != nil {
return err
}
for i := range chainIDs {
if err := a.RemoveChain(engine.ContainerTarget(cnrID.EncodeToString()), chainIDs[i]); err != nil {
return err
}
}
return nil
}
func (a *apeMock) GetBucketPolicy(ns string, cnrID cid.ID) ([]byte, error) {
policy, ok := a.policyMap[ns+cnrID.EncodeToString()]
if !ok {
return nil, errors.New("not found")
}
@ -261,22 +300,45 @@ func (a *apeMock) GetPolicy(namespace string, cnrID cid.ID) ([]byte, error) {
return policy, nil
}
func (a *apeMock) DeletePolicy(namespace string, cnrID cid.ID) error {
delete(a.policyMap, namespace+cnrID.EncodeToString())
func (a *apeMock) SaveACLChains(cid string, chains []*chain.Chain) error {
for i := range chains {
if err := a.AddChain(engine.ContainerTarget(cid), chains[i]); err != nil {
return err
}
}
return nil
}
func createTestBucket(hc *handlerContext, bktName string) *data.BucketInfo {
_, err := hc.MockedPool().CreateContainer(hc.Context(), layer.PrmContainerCreate{
Creator: hc.owner,
Name: bktName,
BasicACL: acl.PublicRWExtended,
})
require.NoError(hc.t, err)
bktInfo, err := hc.Layer().GetBucketInfo(hc.Context(), bktName)
require.NoError(hc.t, err)
return bktInfo
type frostfsidMock struct {
data map[string]*keys.PublicKey
}
func newFrostfsIDMock() *frostfsidMock {
return &frostfsidMock{data: map[string]*keys.PublicKey{}}
}
func (f *frostfsidMock) GetUserAddress(account, user string) (string, error) {
res, ok := f.data[account+user]
if !ok {
return "", fmt.Errorf("not found")
}
return res.Address(), nil
}
func (f *frostfsidMock) GetUserKey(account, user string) (string, error) {
res, ok := f.data[account+user]
if !ok {
return "", fmt.Errorf("not found")
}
return hex.EncodeToString(res.Bytes()), nil
}
func createTestBucket(hc *handlerContext, bktName string) *data.BucketInfo {
info := createBucket(hc, bktName)
return info.BktInfo
}
func createTestBucketWithLock(hc *handlerContext, bktName string, conf *data.ObjectLockConfiguration) *data.BucketInfo {
@ -297,11 +359,15 @@ func createTestBucketWithLock(hc *handlerContext, bktName string, conf *data.Obj
HomomorphicHashDisabled: res.HomomorphicHashDisabled,
}
key, err := keys.NewPrivateKey()
require.NoError(hc.t, err)
sp := &layer.PutSettingsParams{
BktInfo: bktInfo,
Settings: &data.BucketSettings{
Versioning: data.VersioningEnabled,
LockConfiguration: conf,
OwnerKey: key.PublicKey(),
},
}
@ -349,7 +415,7 @@ func prepareTestRequestWithQuery(hc *handlerContext, bktName, objName string, qu
r := httptest.NewRequest(http.MethodPut, defaultURL, bytes.NewReader(body))
r.URL.RawQuery = query.Encode()
reqInfo := middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: bktName, Object: objName})
reqInfo := middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: bktName, Object: objName}, "")
r = r.WithContext(middleware.SetReqInfo(hc.Context(), reqInfo))
return w, r
@ -359,7 +425,7 @@ func prepareTestPayloadRequest(hc *handlerContext, bktName, objName string, payl
w := httptest.NewRecorder()
r := httptest.NewRequest(http.MethodPut, defaultURL, payload)
reqInfo := middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: bktName, Object: objName})
reqInfo := middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: bktName, Object: objName}, "")
r = r.WithContext(middleware.SetReqInfo(hc.Context(), reqInfo))
return w, r

View file

@ -70,7 +70,7 @@ func (h *handler) HeadObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
t := &layer.ObjectVersion{
t := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: info.Name,
VersionID: info.VersionID(),
@ -140,10 +140,13 @@ func (h *handler) HeadBucketHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContainerZone, bktInfo.Zone)
}
middleware.WriteResponse(w, http.StatusOK, nil, middleware.MimeNone)
if err = middleware.WriteResponse(w, http.StatusOK, nil, middleware.MimeNone); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
}
func (h *handler) setLockingHeaders(bktInfo *data.BucketInfo, lockInfo *data.LockInfo, header http.Header) error {
func (h *handler) setLockingHeaders(bktInfo *data.BucketInfo, lockInfo data.LockInfo, header http.Header) error {
if !bktInfo.ObjectLockEnabled {
return nil
}

View file

@ -94,7 +94,7 @@ func TestInvalidAccessThroughCache(t *testing.T) {
headObject(t, hc, bktName, objName, nil, http.StatusOK)
w, r := prepareTestRequest(hc, bktName, objName, nil)
hc.Handler().HeadObjectHandler(w, r.WithContext(middleware.SetBoxData(r.Context(), newTestAccessBox(t, nil))))
hc.Handler().HeadObjectHandler(w, r.WithContext(middleware.SetBox(r.Context(), &middleware.Box{AccessBox: newTestAccessBox(t, nil)})))
assertStatus(t, w, http.StatusForbidden)
}

View file

@ -133,7 +133,7 @@ func (h *handler) PutObjectLegalHoldHandler(w http.ResponseWriter, r *http.Reque
}
p := &layer.PutLockInfoParams{
ObjVersion: &layer.ObjectVersion{
ObjVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@ -172,7 +172,7 @@ func (h *handler) GetObjectLegalHoldHandler(w http.ResponseWriter, r *http.Reque
return
}
p := &layer.ObjectVersion{
p := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@ -221,7 +221,7 @@ func (h *handler) PutObjectRetentionHandler(w http.ResponseWriter, r *http.Reque
}
p := &layer.PutLockInfoParams{
ObjVersion: &layer.ObjectVersion{
ObjVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@ -256,7 +256,7 @@ func (h *handler) GetObjectRetentionHandler(w http.ResponseWriter, r *http.Reque
return
}
p := &layer.ObjectVersion{
p := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),

View file

@ -315,7 +315,7 @@ func TestPutBucketLockConfigurationHandler(t *testing.T) {
w := httptest.NewRecorder()
r := httptest.NewRequest(http.MethodPut, defaultURL, bytes.NewReader(body))
r = r.WithContext(middleware.SetReqInfo(r.Context(), middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: tc.bucket})))
r = r.WithContext(middleware.SetReqInfo(r.Context(), middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: tc.bucket}, "")))
hc.Handler().PutBucketObjectLockConfigHandler(w, r)
@ -388,7 +388,7 @@ func TestGetBucketLockConfigurationHandler(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
w := httptest.NewRecorder()
r := httptest.NewRequest(http.MethodPut, defaultURL, bytes.NewReader(nil))
r = r.WithContext(middleware.SetReqInfo(r.Context(), middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: tc.bucket})))
r = r.WithContext(middleware.SetReqInfo(r.Context(), middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: tc.bucket}, "")))
hc.Handler().GetBucketObjectLockConfigHandler(w, r)

View file

@ -103,6 +103,9 @@ const (
func (h *handler) CreateMultipartUploadHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context())
uploadID := uuid.New()
cannedACLStatus := aclHeadersStatus(r)
additional := []zap.Field{zap.String("uploadID", uploadID.String())}
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
@ -110,8 +113,17 @@ func (h *handler) CreateMultipartUploadHandler(w http.ResponseWriter, r *http.Re
return
}
uploadID := uuid.New()
additional := []zap.Field{zap.String("uploadID", uploadID.String())}
settings, err := h.obj.GetBucketSettings(r.Context(), bktInfo)
if err != nil {
h.logAndSendError(w, "couldn't get bucket settings", reqInfo, err)
return
}
apeEnabled := bktInfo.APEEnabled || settings.CannedACL != ""
if apeEnabled && cannedACLStatus == aclStatusYes {
h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
return
}
p := &layer.CreateMultipartParams{
Info: &layer.UploadInfoParams{
@ -122,7 +134,8 @@ func (h *handler) CreateMultipartUploadHandler(w http.ResponseWriter, r *http.Re
Data: &layer.UploadData{},
}
if containsACLHeaders(r) {
needUpdateEACLTable := !(apeEnabled || cannedACLStatus == aclStatusNo)
if needUpdateEACLTable {
key, err := h.bearerTokenIssuerKey(r.Context())
if err != nil {
h.logAndSendError(w, "couldn't get gate key", reqInfo, err, additional...)
@ -266,7 +279,10 @@ func (h *handler) UploadPartHandler(w http.ResponseWriter, r *http.Request) {
}
w.Header().Set(api.ETag, data.Quote(hash))
middleware.WriteSuccessResponseHeadersOnly(w)
if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
}
func (h *handler) UploadPartCopy(w http.ResponseWriter, r *http.Request) {
@ -425,7 +441,7 @@ func (h *handler) CompleteMultipartUploadHandler(w http.ResponseWriter, r *http.
reqBody := new(CompleteMultipartUpload)
if err = h.cfg.NewXMLDecoder(r.Body).Decode(reqBody); err != nil {
h.logAndSendError(w, "could not read complete multipart upload xml", reqInfo,
errors.GetAPIError(errors.ErrMalformedXML), additional...)
fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error()), additional...)
return
}
if len(reqBody.Parts) == 0 {
@ -471,8 +487,8 @@ func (h *handler) completeMultipartUpload(r *http.Request, c *layer.CompleteMult
objInfo := extendedObjInfo.ObjectInfo objInfo := extendedObjInfo.ObjectInfo
if len(uploadData.TagSet) != 0 { if len(uploadData.TagSet) != 0 {
tagPrm := &layer.PutObjectTaggingParams{ tagPrm := &data.PutObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{ ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo, BktInfo: bktInfo,
ObjectName: objInfo.Name, ObjectName: objInfo.Name,
VersionID: objInfo.VersionID(), VersionID: objInfo.VersionID(),


@@ -38,6 +38,36 @@ func TestMultipartUploadInvalidPart(t *testing.T) {
assertS3Error(hc.t, w, s3Errors.GetAPIError(s3Errors.ErrEntityTooSmall))
}
+func TestDeleteMultipartAllParts(t *testing.T) {
+hc := prepareHandlerContext(t)
+partSize := layer.UploadMinSize
+objLen := 6 * partSize
+bktName, bktName2, objName := "bucket", "bucket2", "object"
+// unversioned bucket
+createTestBucket(hc, bktName)
+multipartUpload(hc, bktName, objName, nil, objLen, partSize)
+deleteObject(t, hc, bktName, objName, emptyVersion)
+require.Empty(t, hc.tp.Objects())
+// encrypted multipart
+multipartUploadEncrypted(hc, bktName, objName, nil, objLen, partSize)
+deleteObject(t, hc, bktName, objName, emptyVersion)
+require.Empty(t, hc.tp.Objects())
+// versions bucket
+createTestBucket(hc, bktName2)
+putBucketVersioning(t, hc, bktName2, true)
+multipartUpload(hc, bktName2, objName, nil, objLen, partSize)
+_, hdr := getObject(hc, bktName2, objName)
+versionID := hdr.Get("X-Amz-Version-Id")
+deleteObject(t, hc, bktName2, objName, emptyVersion)
+deleteObject(t, hc, bktName2, objName, versionID)
+require.Empty(t, hc.tp.Objects())
+}
func TestMultipartReUploadPart(t *testing.T) {
hc := prepareHandlerContext(t)
@@ -180,7 +210,7 @@ func TestMultipartUploadSize(t *testing.T) {
equalDataSlices(t, data[partSize:], part)
})
-t.Run("check correct size when part copy", func(t *testing.T) {
+t.Run("check correct size when part copy", func(_ *testing.T) {
objName2 := "obj2"
uploadInfo := createMultipartUpload(hc, bktName, objName2, headers)
sourceCopy := bktName + "/" + objName


@@ -4,8 +4,10 @@ import (
"bytes"
"crypto/md5"
"encoding/base64"
+"encoding/hex"
"encoding/json"
"encoding/xml"
+stderrors "errors"
"fmt"
"io"
"net"
@@ -24,8 +26,13 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
+cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
+"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
+"git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
+"git.frostfs.info/TrueCloudLab/policy-engine/schema/s3"
+"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"go.uber.org/zap"
)
@@ -179,12 +186,31 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
err error
newEaclTable *eacl.Table
sessionTokenEACL *session.Container
-containsACL = containsACLHeaders(r)
+cannedACLStatus = aclHeadersStatus(r)
ctx = r.Context()
reqInfo = middleware.GetReqInfo(ctx)
)
-if containsACL {
+bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
+if err != nil {
+h.logAndSendError(w, "could not get bucket objInfo", reqInfo, err)
+return
+}
+settings, err := h.obj.GetBucketSettings(ctx, bktInfo)
+if err != nil {
+h.logAndSendError(w, "could not get bucket settings", reqInfo, err)
+return
+}
+apeEnabled := bktInfo.APEEnabled || settings.CannedACL != ""
+if apeEnabled && cannedACLStatus == aclStatusYes {
+h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
+return
+}
+needUpdateEACLTable := !(apeEnabled || cannedACLStatus == aclStatusNo)
+if needUpdateEACLTable {
if sessionTokenEACL, err = getSessionTokenSetEACL(r.Context()); err != nil {
h.logAndSendError(w, "could not get eacl session token from a box", reqInfo, err)
return
@@ -197,12 +223,6 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
-bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
-if err != nil {
-h.logAndSendError(w, "could not get bucket objInfo", reqInfo, err)
-return
-}
metadata := parseMetadata(r)
if contentType := r.Header.Get(api.ContentType); len(contentType) > 0 {
metadata[api.ContentType] = contentType
@@ -254,12 +274,6 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
-settings, err := h.obj.GetBucketSettings(ctx, bktInfo)
-if err != nil {
-h.logAndSendError(w, "could not get bucket settings", reqInfo, err)
-return
-}
params.Lock, err = formObjectLock(ctx, bktInfo, settings.LockConfiguration, r.Header)
if err != nil {
h.logAndSendError(w, "could not form object lock", reqInfo, err)
@@ -285,7 +299,7 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
-if containsACL {
+if needUpdateEACLTable {
if newEaclTable, err = h.getNewEAclTable(r, bktInfo, objInfo); err != nil {
h.logAndSendError(w, "could not get new eacl table", reqInfo, err)
return
@@ -293,8 +307,8 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
}
if tagSet != nil {
-tagPrm := &layer.PutObjectTaggingParams{
-ObjectVersion: &layer.ObjectVersion{
+tagPrm := &data.PutObjectTaggingParams{
+ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: objInfo.Name,
VersionID: objInfo.VersionID(),
@@ -330,7 +344,10 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ETag, data.Quote(objInfo.ETag(h.cfg.MD5Enabled())))
-middleware.WriteSuccessResponseHeadersOnly(w)
+if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
+h.logAndSendError(w, "write response", reqInfo, err)
+return
+}
}
func (h *handler) getBodyReader(r *http.Request) (io.ReadCloser, error) {
@@ -455,7 +472,7 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
ctx = r.Context()
reqInfo = middleware.GetReqInfo(ctx)
metadata = make(map[string]string)
-containsACL = containsACLHeaders(r)
+cannedACLStatus = aclHeadersStatus(r)
)
policy, err := checkPostPolicy(r, reqInfo, metadata)
@@ -466,14 +483,39 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
if tagging := auth.MultipartFormValue(r, "tagging"); tagging != "" {
buffer := bytes.NewBufferString(tagging)
-tagSet, err = h.readTagSet(buffer)
+tags := new(data.Tagging)
+if err = h.cfg.NewXMLDecoder(buffer).Decode(tags); err != nil {
+h.logAndSendError(w, "could not decode tag set", reqInfo,
+fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error()))
+return
+}
+tagSet, err = h.readTagSet(tags)
if err != nil {
h.logAndSendError(w, "could not read tag set", reqInfo, err)
return
}
}
-if containsACL {
+bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
+if err != nil {
+h.logAndSendError(w, "could not get bucket objInfo", reqInfo, err)
+return
+}
+settings, err := h.obj.GetBucketSettings(ctx, bktInfo)
+if err != nil {
+h.logAndSendError(w, "could not get bucket settings", reqInfo, err)
+return
+}
+apeEnabled := bktInfo.APEEnabled || settings.CannedACL != ""
+if apeEnabled && cannedACLStatus == aclStatusYes {
+h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
+return
+}
+needUpdateEACLTable := !(apeEnabled || cannedACLStatus == aclStatusNo)
+if needUpdateEACLTable {
if sessionTokenEACL, err = getSessionTokenSetEACL(ctx); err != nil {
h.logAndSendError(w, "could not get eacl session token from a box", reqInfo, err)
return
@@ -500,12 +542,6 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
return
}
-bktInfo, err := h.obj.GetBucketInfo(ctx, reqInfo.BucketName)
-if err != nil {
-h.logAndSendError(w, "could not get bucket info", reqInfo, err)
-return
-}
params := &layer.PutObjectParams{
BktInfo: bktInfo,
Object: reqInfo.ObjectName,
@@ -544,8 +580,8 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
}
if tagSet != nil {
-tagPrm := &layer.PutObjectTaggingParams{
-ObjectVersion: &layer.ObjectVersion{
+tagPrm := &data.PutObjectTaggingParams{
+ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: objInfo.Name,
VersionID: objInfo.VersionID(),
@@ -572,9 +608,7 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
}
}
-if settings, err := h.obj.GetBucketSettings(ctx, bktInfo); err != nil {
-h.reqLogger(ctx).Warn(logs.CouldntGetBucketVersioning, zap.String("bucket name", reqInfo.BucketName), zap.Error(err))
-} else if settings.VersioningEnabled() {
+if settings.VersioningEnabled() {
w.Header().Set(api.AmzVersionID, objInfo.VersionID())
}
@@ -595,7 +629,11 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
ETag: data.Quote(objInfo.ETag(h.cfg.MD5Enabled())),
}
w.WriteHeader(status)
-if _, err = w.Write(middleware.EncodeResponse(resp)); err != nil {
+respData, err := middleware.EncodeResponse(resp)
+if err != nil {
+h.logAndSendError(w, "encode response", reqInfo, err)
+}
+if _, err = w.Write(respData); err != nil {
h.logAndSendError(w, "something went wrong", reqInfo, err)
}
return
@@ -622,11 +660,21 @@ func checkPostPolicy(r *http.Request, reqInfo *middleware.ReqInfo, metadata map[
policy.empty = false
}
+if r.MultipartForm == nil {
+return nil, stderrors.New("empty multipart form")
+}
for key, v := range r.MultipartForm.Value {
-value := v[0]
if key == "file" || key == "policy" || key == "x-amz-signature" || strings.HasPrefix(key, "x-ignore-") {
continue
}
+if len(v) != 1 {
+return nil, fmt.Errorf("empty multipart value for key '%s'", key)
+}
+value := v[0]
if err := policy.CheckField(key, value); err != nil {
return nil, fmt.Errorf("'%s' form field doesn't match the policy: %w", key, err)
}
@@ -656,9 +704,33 @@ func checkPostPolicy(r *http.Request, reqInfo *middleware.ReqInfo, metadata map[
return policy, nil
}
-func containsACLHeaders(r *http.Request) bool {
-return r.Header.Get(api.AmzACL) != "" || r.Header.Get(api.AmzGrantRead) != "" ||
-r.Header.Get(api.AmzGrantFullControl) != "" || r.Header.Get(api.AmzGrantWrite) != ""
+type aclStatus int
+const (
+// aclStatusNo means no acl headers at all.
+aclStatusNo aclStatus = iota
+// aclStatusYesAPECompatible means that only X-Amz-Acl present and equals to private.
+aclStatusYesAPECompatible
+// aclStatusYes means any other acl headers configuration.
+aclStatusYes
+)
+func aclHeadersStatus(r *http.Request) aclStatus {
+if r.Header.Get(api.AmzGrantRead) != "" ||
+r.Header.Get(api.AmzGrantFullControl) != "" ||
+r.Header.Get(api.AmzGrantWrite) != "" {
+return aclStatusYes
+}
+cannedACL := r.Header.Get(api.AmzACL)
+if cannedACL != "" {
+if cannedACL == basicACLPrivate {
+return aclStatusYesAPECompatible
+}
+return aclStatusYes
+}
+return aclStatusNo
}
func (h *handler) getNewEAclTable(r *http.Request, bktInfo *data.BucketInfo, objInfo *data.ObjectInfo) (*eacl.Table, error) {
@@ -723,7 +795,7 @@ func parseTaggingHeader(header http.Header) (map[string]string, error) {
}
tagSet = make(map[string]string, len(queries))
for k, v := range queries {
-tag := Tag{Key: k, Value: v[0]}
+tag := data.Tag{Key: k, Value: v[0]}
if err = checkTag(tag); err != nil {
return nil, err
}
@@ -744,22 +816,143 @@ func parseMetadata(r *http.Request) map[string]string {
return res
}
-func (h *handler) CreateBucketHandler(w http.ResponseWriter, r *http.Request) {
+func parseCannedACL(header http.Header) (string, error) {
-ctx := r.Context()
+acl := header.Get(api.AmzACL)
-reqInfo := middleware.GetReqInfo(ctx)
+if len(acl) == 0 {
-p := &layer.CreateBucketParams{
+return basicACLPrivate, nil
-Name: reqInfo.BucketName,
-Namespace: reqInfo.Namespace,
}
-if err := checkBucketName(reqInfo.BucketName); err != nil {
+if acl == basicACLPrivate || acl == basicACLPublic ||
-h.logAndSendError(w, "invalid bucket name", reqInfo, err)
+acl == cannedACLAuthRead || acl == basicACLReadOnly {
-return
+return acl, nil
}
+return "", fmt.Errorf("unknown acl: %s", acl)
+}
+func (h *handler) CreateBucketHandler(w http.ResponseWriter, r *http.Request) {
+if h.cfg.ACLEnabled() {
+h.createBucketHandlerACL(w, r)
return
}
-key, err := h.bearerTokenIssuerKey(ctx)
+h.createBucketHandlerPolicy(w, r)
+}
+func (h *handler) parseCommonCreateBucketParams(reqInfo *middleware.ReqInfo, boxData *accessbox.Box, r *http.Request) (*keys.PublicKey, *layer.CreateBucketParams, error) {
+p := &layer.CreateBucketParams{
+Name: reqInfo.BucketName,
+Namespace: reqInfo.Namespace,
+SessionContainerCreation: boxData.Gate.SessionTokenForPut(),
+}
+if p.SessionContainerCreation == nil {
+return nil, nil, fmt.Errorf("%w: couldn't find session token for put", errors.GetAPIError(errors.ErrAccessDenied))
+}
+if err := checkBucketName(reqInfo.BucketName); err != nil {
+return nil, nil, fmt.Errorf("invalid bucket name: %w", err)
+}
+key, err := getTokenIssuerKey(boxData)
if err != nil {
-h.logAndSendError(w, "couldn't get bearer token signature key", reqInfo, err)
+return nil, nil, fmt.Errorf("couldn't get bearer token signature key: %w", err)
+}
+createParams, err := h.parseLocationConstraint(r)
+if err != nil {
+return nil, nil, fmt.Errorf("could not parse location contraint: %w", err)
+}
+if err = h.setPlacementPolicy(p, reqInfo.Namespace, createParams.LocationConstraint, boxData.Policies); err != nil {
+return nil, nil, fmt.Errorf("couldn't set placement policy: %w", err)
+}
+p.ObjectLockEnabled = isLockEnabled(h.reqLogger(r.Context()), r.Header)
+return key, p, nil
+}
+func (h *handler) createBucketHandlerPolicy(w http.ResponseWriter, r *http.Request) {
+ctx := r.Context()
+reqInfo := middleware.GetReqInfo(ctx)
+boxData, err := middleware.GetBoxData(ctx)
+if err != nil {
+h.logAndSendError(w, "get access box from request", reqInfo, err)
+return
+}
+key, p, err := h.parseCommonCreateBucketParams(reqInfo, boxData, r)
+if err != nil {
+h.logAndSendError(w, "parse create bucket params", reqInfo, err)
+return
+}
+cannedACL, err := parseCannedACL(r.Header)
+if err != nil {
+h.logAndSendError(w, "could not parse canned ACL", reqInfo, err)
+return
+}
+p.APEEnabled = true
+bktInfo, err := h.obj.CreateBucket(ctx, p)
+if err != nil {
+h.logAndSendError(w, "could not create bucket", reqInfo, err)
+return
+}
+h.reqLogger(ctx).Info(logs.BucketIsCreated, zap.Stringer("container_id", bktInfo.CID))
+chains := bucketCannedACLToAPERules(cannedACL, reqInfo, key, bktInfo.CID)
+if err = h.ape.SaveACLChains(bktInfo.CID.EncodeToString(), chains); err != nil {
+h.logAndSendError(w, "failed to add morph rule chain", reqInfo, err)
+return
+}
+sp := &layer.PutSettingsParams{
+BktInfo: bktInfo,
+Settings: &data.BucketSettings{
+CannedACL: cannedACL,
+OwnerKey: key,
+Versioning: data.VersioningUnversioned,
+},
+}
+if p.ObjectLockEnabled {
+sp.Settings.Versioning = data.VersioningEnabled
+}
+if err = h.obj.PutBucketSettings(ctx, sp); err != nil {
+h.logAndSendError(w, "couldn't save bucket settings", reqInfo, err,
+zap.String("container_id", bktInfo.CID.EncodeToString()))
+return
+}
+if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
+h.logAndSendError(w, "write response", reqInfo, err)
+return
+}
+}
+func (h *handler) createBucketHandlerACL(w http.ResponseWriter, r *http.Request) {
+ctx := r.Context()
+reqInfo := middleware.GetReqInfo(ctx)
+boxData, err := middleware.GetBoxData(ctx)
+if err != nil {
+h.logAndSendError(w, "get access box from request", reqInfo, err)
+return
+}
+key, p, err := h.parseCommonCreateBucketParams(reqInfo, boxData, r)
+if err != nil {
+h.logAndSendError(w, "parse create bucket params", reqInfo, err)
+return
+}
+aclPrm := &layer.PutBucketACLParams{SessionToken: boxData.Gate.SessionTokenForSetEACL()}
+if aclPrm.SessionToken == nil {
+h.logAndSendError(w, "couldn't find session token for setEACL", reqInfo, errors.GetAPIError(errors.ErrAccessDenied))
return
}
@@ -770,67 +963,178 @@ func (h *handler) CreateBucketHandler(w http.ResponseWriter, r *http.Request) {
}
resInfo := &resourceInfo{Bucket: reqInfo.BucketName}
-p.EACL, err = bucketACLToTable(bktACL, resInfo)
+aclPrm.EACL, err = bucketACLToTable(bktACL, resInfo)
if err != nil {
h.logAndSendError(w, "could translate bucket acl to eacl", reqInfo, err)
return
}
-createParams, err := h.parseLocationConstraint(r)
-if err != nil {
-h.logAndSendError(w, "could not parse body", reqInfo, err)
-return
-}
-var policies []*accessbox.ContainerPolicy
-boxData, err := middleware.GetBoxData(ctx)
-if err == nil {
-policies = boxData.Policies
-p.SessionContainerCreation = boxData.Gate.SessionTokenForPut()
-p.SessionEACL = boxData.Gate.SessionTokenForSetEACL()
-}
-if p.SessionContainerCreation == nil {
-h.logAndSendError(w, "couldn't find session token for put", reqInfo, errors.GetAPIError(errors.ErrAccessDenied))
-return
-}
-if p.SessionEACL == nil {
-h.logAndSendError(w, "couldn't find session token for setEACL", reqInfo, errors.GetAPIError(errors.ErrAccessDenied))
-return
-}
-if err = h.setPolicy(p, reqInfo.Namespace, createParams.LocationConstraint, policies); err != nil {
-h.logAndSendError(w, "couldn't set placement policy", reqInfo, err)
-return
-}
-p.ObjectLockEnabled = isLockEnabled(r.Header)
bktInfo, err := h.obj.CreateBucket(ctx, p)
if err != nil {
h.logAndSendError(w, "could not create bucket", reqInfo, err)
return
}
h.reqLogger(ctx).Info(logs.BucketIsCreated, zap.Stringer("container_id", bktInfo.CID))
-if p.ObjectLockEnabled {
+aclPrm.BktInfo = bktInfo
+if err = h.obj.PutBucketACL(r.Context(), aclPrm); err != nil {
+h.logAndSendError(w, "could not put bucket e/ACL", reqInfo, err)
+return
+}
sp := &layer.PutSettingsParams{
BktInfo: bktInfo,
-Settings: &data.BucketSettings{Versioning: data.VersioningEnabled},
+Settings: &data.BucketSettings{
+OwnerKey: key,
+Versioning: data.VersioningUnversioned,
+},
}
+if p.ObjectLockEnabled {
+sp.Settings.Versioning = data.VersioningEnabled
+}
if err = h.obj.PutBucketSettings(ctx, sp); err != nil {
-h.logAndSendError(w, "couldn't enable bucket versioning", reqInfo, err,
+h.logAndSendError(w, "couldn't save bucket settings", reqInfo, err,
zap.String("container_id", bktInfo.CID.EncodeToString()))
return
}
+if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
+h.logAndSendError(w, "write response", reqInfo, err)
+return
+}
}
-middleware.WriteSuccessResponseHeadersOnly(w)
+const s3ActionPrefix = "s3:"
+var (
+// https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html
+writeACLBucketS3Actions = []string{
+s3ActionPrefix + middleware.PutObjectOperation,
+s3ActionPrefix + middleware.PostObjectOperation,
+s3ActionPrefix + middleware.CopyObjectOperation,
+s3ActionPrefix + middleware.UploadPartOperation,
+s3ActionPrefix + middleware.UploadPartCopyOperation,
+s3ActionPrefix + middleware.CreateMultipartUploadOperation,
+s3ActionPrefix + middleware.CompleteMultipartUploadOperation,
}
-func (h handler) setPolicy(prm *layer.CreateBucketParams, namespace, locationConstraint string, userPolicies []*accessbox.ContainerPolicy) error {
+readACLBucketS3Actions = []string{
+s3ActionPrefix + middleware.HeadBucketOperation,
+s3ActionPrefix + middleware.GetBucketLocationOperation,
+s3ActionPrefix + middleware.ListObjectsV1Operation,
+s3ActionPrefix + middleware.ListObjectsV2Operation,
+s3ActionPrefix + middleware.ListBucketObjectVersionsOperation,
+s3ActionPrefix + middleware.ListMultipartUploadsOperation,
+}
+writeACLBucketNativeActions = []string{
+native.MethodPutObject,
+}
+readACLBucketNativeActions = []string{
+native.MethodGetContainer,
+native.MethodGetObject,
+native.MethodHeadObject,
+native.MethodSearchObject,
+native.MethodRangeObject,
+native.MethodHashObject,
+}
+)
+func bucketCannedACLToAPERules(cannedACL string, reqInfo *middleware.ReqInfo, key *keys.PublicKey, cnrID cid.ID) []*chain.Chain {
+cnrIDStr := cnrID.EncodeToString()
+chains := []*chain.Chain{
+{
+ID: getBucketCannedChainID(chain.S3, cnrID),
+Rules: []chain.Rule{{
+Status: chain.Allow,
+Actions: chain.Actions{Names: []string{"s3:*"}},
+Resources: chain.Resources{Names: []string{
+fmt.Sprintf(s3.ResourceFormatS3Bucket, reqInfo.BucketName),
+fmt.Sprintf(s3.ResourceFormatS3BucketObjects, reqInfo.BucketName),
+}},
+Condition: []chain.Condition{{
+Op: chain.CondStringEquals,
+Object: chain.ObjectRequest,
+Key: s3.PropertyKeyOwner,
+Value: key.Address(),
+}},
+}}},
+{
+ID: getBucketCannedChainID(chain.Ingress, cnrID),
+Rules: []chain.Rule{{
+Status: chain.Allow,
+Actions: chain.Actions{Names: []string{"*"}},
+Resources: chain.Resources{Names: []string{
+fmt.Sprintf(native.ResourceFormatNamespaceContainer, reqInfo.Namespace, cnrIDStr),
+fmt.Sprintf(native.ResourceFormatNamespaceContainerObjects, reqInfo.Namespace, cnrIDStr),
+}},
+Condition: []chain.Condition{{
+Op: chain.CondStringEquals,
+Object: chain.ObjectRequest,
+Key: native.PropertyKeyActorPublicKey,
+Value: hex.EncodeToString(key.Bytes()),
+}},
+}},
+},
+}
+switch cannedACL {
+case basicACLPrivate:
+case cannedACLAuthRead:
+fallthrough
+case basicACLReadOnly:
+chains[0].Rules = append(chains[0].Rules, chain.Rule{
+Status: chain.Allow,
+Actions: chain.Actions{Names: readACLBucketS3Actions},
+Resources: chain.Resources{Names: []string{
+fmt.Sprintf(s3.ResourceFormatS3Bucket, reqInfo.BucketName),
+fmt.Sprintf(s3.ResourceFormatS3BucketObjects, reqInfo.BucketName),
+}},
+})
+chains[1].Rules = append(chains[1].Rules, chain.Rule{
+Status: chain.Allow,
+Actions: chain.Actions{Names: readACLBucketNativeActions},
+Resources: chain.Resources{Names: []string{
+fmt.Sprintf(native.ResourceFormatNamespaceContainer, reqInfo.Namespace, cnrIDStr),
+fmt.Sprintf(native.ResourceFormatNamespaceContainerObjects, reqInfo.Namespace, cnrIDStr),
+}},
+})
+case basicACLPublic:
+chains[0].Rules = append(chains[0].Rules, chain.Rule{
+Status: chain.Allow,
+Actions: chain.Actions{Names: append(readACLBucketS3Actions, writeACLBucketS3Actions...)},
+Resources: chain.Resources{Names: []string{
+fmt.Sprintf(s3.ResourceFormatS3Bucket, reqInfo.BucketName),
+fmt.Sprintf(s3.ResourceFormatS3BucketObjects, reqInfo.BucketName),
+}},
+})
+chains[1].Rules = append(chains[1].Rules, chain.Rule{
+Status: chain.Allow,
+Actions: chain.Actions{Names: append(readACLBucketNativeActions, writeACLBucketNativeActions...)},
+Resources: chain.Resources{Names: []string{
+fmt.Sprintf(native.ResourceFormatNamespaceContainer, reqInfo.Namespace, cnrIDStr),
+fmt.Sprintf(native.ResourceFormatNamespaceContainerObjects, reqInfo.Namespace, cnrIDStr),
+}},
+})
+default:
+panic("unknown canned acl") // this should never happen
+}
+return chains
+}
+func getBucketCannedChainID(prefix chain.Name, cnrID cid.ID) chain.ID {
+return chain.ID(string(prefix) + ":bktCanned" + string(cnrID[:]))
+}
+func (h handler) setPlacementPolicy(prm *layer.CreateBucketParams, namespace, locationConstraint string, userPolicies []*accessbox.ContainerPolicy) error {
prm.Policy = h.cfg.DefaultPlacementPolicy(namespace)
prm.LocationConstraint = locationConstraint
@@ -853,9 +1157,17 @@ func (h handler) setPolicy(prm *layer.CreateBucketParams, namespace, locationCon
return errors.GetAPIError(errors.ErrInvalidLocationConstraint)
}
-func isLockEnabled(header http.Header) bool {
+func isLockEnabled(log *zap.Logger, header http.Header) bool {
lockEnabledStr := header.Get(api.AmzBucketObjectLockEnabled)
-lockEnabled, _ := strconv.ParseBool(lockEnabledStr)
+if len(lockEnabledStr) == 0 {
+return false
+}
+lockEnabled, err := strconv.ParseBool(lockEnabledStr)
+if err != nil {
+log.Warn(logs.InvalidBucketObjectLockEnabledHeader, zap.String("header", lockEnabledStr), zap.Error(err))
+}
return lockEnabled
}
@@ -900,7 +1212,7 @@ func (h *handler) parseLocationConstraint(r *http.Request) (*createBucketParams,
params := new(createBucketParams)
if err := h.cfg.NewXMLDecoder(r.Body).Decode(params); err != nil {
-return nil, errors.GetAPIError(errors.ErrMalformedXML)
+return nil, fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error())
}
return params, nil
}
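Note on the ACL gating above (a minimal sketch, not part of the change itself): aclHeadersStatus classifies a request as aclStatusNo (no ACL headers), aclStatusYesAPECompatible (only X-Amz-Acl: private) or aclStatusYes (any other ACL headers), and for buckets where APE is in effect (bktInfo.APEEnabled or a stored CannedACL) an aclStatusYes request is rejected with ErrAccessControlListNotSupported; otherwise needUpdateEACLTable decides whether the legacy eACL path still runs. The standalone Go sketch below mirrors only that boolean decision; decide and its plain-bool inputs are hypothetical stand-ins for the handler's real types.

// Sketch only: the gating decision from PutObjectHandler/PostObject, reduced to plain inputs.
package main

import "fmt"

type aclStatus int

const (
    aclStatusNo aclStatus = iota
    aclStatusYesAPECompatible
    aclStatusYes
)

// decide reports whether the request must be rejected and whether the legacy
// eACL table still needs to be updated, following the checks added above.
func decide(apeEnabled bool, status aclStatus) (reject, needUpdateEACL bool) {
    if apeEnabled && status == aclStatusYes {
        return true, false // handler answers ErrAccessControlListNotSupported
    }
    return false, !(apeEnabled || status == aclStatusNo)
}

func main() {
    fmt.Println(decide(true, aclStatusYesAPECompatible)) // false false: APE bucket tolerates X-Amz-Acl: private
    fmt.Println(decide(false, aclStatusYes))             // false true: legacy bucket still updates the eACL
}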


@@ -351,18 +351,20 @@ func getChunkedRequest(ctx context.Context, t *testing.T, bktName, objName strin
req.Body = io.NopCloser(reqBody)
w := httptest.NewRecorder()
-reqInfo := middleware.NewReqInfo(w, req, middleware.ObjectRequest{Bucket: bktName, Object: objName})
+reqInfo := middleware.NewReqInfo(w, req, middleware.ObjectRequest{Bucket: bktName, Object: objName}, "")
req = req.WithContext(middleware.SetReqInfo(ctx, reqInfo))
-req = req.WithContext(middleware.SetClientTime(req.Context(), signTime))
-req = req.WithContext(middleware.SetAuthHeaders(req.Context(), &middleware.AuthHeader{
+req = req.WithContext(middleware.SetBox(req.Context(), &middleware.Box{
+ClientTime: signTime,
+AuthHeaders: &middleware.AuthHeader{
AccessKeyID: AWSAccessKeyID,
SignatureV4: "4f232c4386841ef735655705268965c44a0e4690baa4adea153f7db9fa80a0a9",
Region: "us-east-1",
-}))
+},
-req = req.WithContext(middleware.SetBoxData(req.Context(), &accessbox.Box{
+AccessBox: &accessbox.Box{
Gate: &accessbox.GateData{
SecretKey: AWSSecretAccessKey,
},
+},
}))
return w, req, chunk
@@ -372,14 +374,28 @@ func TestCreateBucket(t *testing.T) {
hc := prepareHandlerContext(t)
bktName := "bkt-name"
-box, _ := createAccessBox(t)
-createBucket(t, hc, bktName, box)
-createBucketAssertS3Error(hc, bktName, box, s3errors.ErrBucketAlreadyOwnedByYou)
+info := createBucket(hc, bktName)
+createBucketAssertS3Error(hc, bktName, info.Box, s3errors.ErrBucketAlreadyOwnedByYou)
box2, _ := createAccessBox(t)
createBucketAssertS3Error(hc, bktName, box2, s3errors.ErrBucketAlreadyExists)
}
+func TestCreateOldBucketPutVersioning(t *testing.T) {
+hc := prepareHandlerContext(t)
+hc.config.aclEnabled = true
+bktName := "bkt-name"
+info := createBucket(hc, bktName)
+settings, err := hc.tree.GetSettingsNode(hc.Context(), info.BktInfo)
+require.NoError(t, err)
+settings.OwnerKey = nil
+err = hc.tree.PutSettingsNode(hc.Context(), info.BktInfo, settings)
+require.NoError(t, err)
+putBucketVersioning(t, hc, bktName, true)
+}
func TestCreateNamespacedBucket(t *testing.T) {
hc := prepareHandlerContext(t)
bktName := "bkt-name"
@@ -387,7 +403,7 @@ func TestCreateNamespacedBucket(t *testing.T) {
box, _ := createAccessBox(t)
w, r := prepareTestRequest(hc, bktName, "", nil)
-ctx := middleware.SetBoxData(r.Context(), box)
+ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
reqInfo := middleware.GetReqInfo(ctx)
reqInfo.Namespace = namespace
r = r.WithContext(middleware.SetReqInfo(ctx, reqInfo))


@@ -55,6 +55,19 @@ type Bucket struct {
CreationDate string // time string of format "2006-01-02T15:04:05.000Z"
}
+// PolicyStatus contains status of bucket policy.
+type PolicyStatus struct {
+XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ PolicyStatus" json:"-"`
+IsPublic PolicyStatusIsPublic `xml:"IsPublic"`
+}
+type PolicyStatusIsPublic string
+const (
+PolicyStatusIsPublicFalse = "FALSE"
+PolicyStatusIsPublicTrue = "TRUE"
+)
// AccessControlPolicy contains ACL.
type AccessControlPolicy struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ AccessControlPolicy" json:"-"`
@@ -172,12 +185,6 @@ type VersioningConfiguration struct {
MfaDelete string `xml:"MfaDelete,omitempty"`
}
-// Tagging contains tag set.
-type Tagging struct {
-XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Tagging"`
-TagSet []Tag `xml:"TagSet>Tag"`
-}
// PostResponse contains result of posting object.
type PostResponse struct {
Bucket string `xml:"Bucket"`
@@ -185,12 +192,6 @@ type PostResponse struct {
ETag string `xml:"Etag"`
}
-// Tag is an AWS key-value tag.
-type Tag struct {
-Key string
-Value string
-}
// MarshalXML -- StringMap marshals into XML.
func (s StringMap) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
tokens := []xml.Token{start}


@@ -1,7 +1,6 @@
package handler
import (
-"io"
"net/http"
"sort"
"strings"
@@ -10,7 +9,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
-"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"go.uber.org/zap"
@@ -28,7 +26,7 @@ func (h *handler) PutObjectTaggingHandler(w http.ResponseWriter, r *http.Request
ctx := r.Context()
reqInfo := middleware.GetReqInfo(ctx)
-tagSet, err := h.readTagSet(r.Body)
+tagSet, err := h.readTagSet(reqInfo.Tagging)
if err != nil {
h.logAndSendError(w, "could not read tag set", reqInfo, err)
return
@@ -40,8 +38,8 @@ func (h *handler) PutObjectTaggingHandler(w http.ResponseWriter, r *http.Request
return
}
-tagPrm := &layer.PutObjectTaggingParams{
-ObjectVersion: &layer.ObjectVersion{
+tagPrm := &data.PutObjectTaggingParams{
+ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@@ -87,8 +85,8 @@ func (h *handler) GetObjectTaggingHandler(w http.ResponseWriter, r *http.Request
return
}
-tagPrm := &layer.GetObjectTaggingParams{
-ObjectVersion: &layer.ObjectVersion{
+tagPrm := &data.GetObjectTaggingParams{
+ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@@ -119,7 +117,7 @@ func (h *handler) DeleteObjectTaggingHandler(w http.ResponseWriter, r *http.Requ
return
}
-p := &layer.ObjectVersion{
+p := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@@ -152,7 +150,7 @@ func (h *handler) DeleteObjectTaggingHandler(w http.ResponseWriter, r *http.Requ
func (h *handler) PutBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context())
-tagSet, err := h.readTagSet(r.Body)
+tagSet, err := h.readTagSet(reqInfo.Tagging)
if err != nil {
h.logAndSendError(w, "could not read tag set", reqInfo, err)
return
@@ -207,12 +205,7 @@ func (h *handler) DeleteBucketTaggingHandler(w http.ResponseWriter, r *http.Requ
w.WriteHeader(http.StatusNoContent)
}
-func (h *handler) readTagSet(reader io.Reader) (map[string]string, error) {
-tagging := new(Tagging)
-if err := h.cfg.NewXMLDecoder(reader).Decode(tagging); err != nil {
-return nil, errors.GetAPIError(errors.ErrMalformedXML)
-}
+func (h *handler) readTagSet(tagging *data.Tagging) (map[string]string, error) {
if err := checkTagSet(tagging.TagSet); err != nil {
return nil, err
}
@@ -228,10 +221,10 @@ func (h *handler) readTagSet(reader io.Reader) (map[string]string, error) {
return tagSet, nil
}
-func encodeTagging(tagSet map[string]string) *Tagging {
-tagging := &Tagging{}
+func encodeTagging(tagSet map[string]string) *data.Tagging {
+tagging := &data.Tagging{}
for k, v := range tagSet {
-tagging.TagSet = append(tagging.TagSet, Tag{Key: k, Value: v})
+tagging.TagSet = append(tagging.TagSet, data.Tag{Key: k, Value: v})
}
sort.Slice(tagging.TagSet, func(i, j int) bool {
return tagging.TagSet[i].Key < tagging.TagSet[j].Key
@@ -240,7 +233,7 @@ func encodeTagging(tagSet map[string]string) *Tagging {
return tagging
}
-func checkTagSet(tagSet []Tag) error {
+func checkTagSet(tagSet []data.Tag) error {
if len(tagSet) > maxTags {
return errors.GetAPIError(errors.ErrInvalidTagsSizeExceed)
}
@@ -254,7 +247,7 @@ func checkTagSet(tagSet []Tag) error {
return nil
}
-func checkTag(tag Tag) error {
+func checkTag(tag data.Tag) error {
if len(tag.Key) < 1 || len(tag.Key) > keyTagMaxLength {
return errors.GetAPIError(errors.ErrInvalidTagKey)
}
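Note on the tagging refactor above (sketch only): readTagSet no longer decodes XML from the request body; it receives a *data.Tagging that has already been parsed upstream (reqInfo.Tagging) and only validates and flattens it. The sketch below approximates that validation with stand-in types and limits; Tag, Tagging, maxTags and toTagMap are illustrative names, not the gateway's API.

// Sketch only: approximate validation performed on the pre-decoded tag set.
package main

import (
    "errors"
    "fmt"
)

type Tag struct{ Key, Value string }

type Tagging struct{ TagSet []Tag }

const maxTags = 10 // illustrative limit only

// toTagMap flattens the tag set into a map, rejecting oversized sets and
// duplicate keys, similar in spirit to readTagSet/checkTagSet above.
func toTagMap(t *Tagging) (map[string]string, error) {
    if len(t.TagSet) > maxTags {
        return nil, errors.New("tag set size exceeded")
    }
    tagSet := make(map[string]string, len(t.TagSet))
    for _, tag := range t.TagSet {
        if _, ok := tagSet[tag.Key]; ok {
            return nil, fmt.Errorf("tag key %q is not unique", tag.Key)
        }
        tagSet[tag.Key] = tag.Value
    }
    return tagSet, nil
}

func main() {
    m, err := toTagMap(&Tagging{TagSet: []Tag{{Key: "key-1", Value: "val-1"}, {Key: "key-1", Value: "val-2"}}})
    fmt.Println(m, err) // duplicate key is rejected, mirroring ErrInvalidTagKeyUniqueness
}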


@@ -5,7 +5,9 @@ import (
"strings"
"testing"
+"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
+"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"github.com/stretchr/testify/require"
)
@@ -20,23 +22,23 @@ func TestTagsValidity(t *testing.T) {
}
for _, tc := range []struct {
-tag Tag
+tag data.Tag
valid bool
}{
-{tag: Tag{}, valid: false},
-{tag: Tag{Key: "", Value: "1"}, valid: false},
-{tag: Tag{Key: "aws:key", Value: "val"}, valid: false},
-{tag: Tag{Key: "key~", Value: "val"}, valid: false},
-{tag: Tag{Key: "key\\", Value: "val"}, valid: false},
-{tag: Tag{Key: "key?", Value: "val"}, valid: false},
-{tag: Tag{Key: sbKey.String() + "b", Value: "val"}, valid: false},
-{tag: Tag{Key: "key", Value: sbValue.String() + "b"}, valid: false},
-{tag: Tag{Key: sbKey.String(), Value: "val"}, valid: true},
-{tag: Tag{Key: "key", Value: sbValue.String()}, valid: true},
-{tag: Tag{Key: "k e y", Value: "v a l"}, valid: true},
-{tag: Tag{Key: "12345", Value: "1234"}, valid: true},
-{tag: Tag{Key: allowedTagChars, Value: allowedTagChars}, valid: true},
+{tag: data.Tag{}, valid: false},
+{tag: data.Tag{Key: "", Value: "1"}, valid: false},
+{tag: data.Tag{Key: "aws:key", Value: "val"}, valid: false},
+{tag: data.Tag{Key: "key~", Value: "val"}, valid: false},
+{tag: data.Tag{Key: "key\\", Value: "val"}, valid: false},
+{tag: data.Tag{Key: "key?", Value: "val"}, valid: false},
+{tag: data.Tag{Key: sbKey.String() + "b", Value: "val"}, valid: false},
+{tag: data.Tag{Key: "key", Value: sbValue.String() + "b"}, valid: false},
+{tag: data.Tag{Key: sbKey.String(), Value: "val"}, valid: true},
+{tag: data.Tag{Key: "key", Value: sbValue.String()}, valid: true},
+{tag: data.Tag{Key: "k e y", Value: "v a l"}, valid: true},
+{tag: data.Tag{Key: "12345", Value: "1234"}, valid: true},
+{tag: data.Tag{Key: allowedTagChars, Value: allowedTagChars}, valid: true},
} {
err := checkTag(tc.tag)
if tc.valid {
@@ -55,13 +57,13 @@ func TestPutObjectTaggingCheckUniqueness(t *testing.T) {
for _, tc := range []struct {
name string
-body *Tagging
+body *data.Tagging
error bool
}{
{
name: "Two tags with unique keys",
-body: &Tagging{
-TagSet: []Tag{
+body: &data.Tagging{
+TagSet: []data.Tag{
{
Key: "key-1",
Value: "val-1",
@@ -76,8 +78,8 @@ func TestPutObjectTaggingCheckUniqueness(t *testing.T) {
},
{
name: "Two tags with the same keys",
-body: &Tagging{
-TagSet: []Tag{
+body: &data.Tagging{
+TagSet: []data.Tag{
{
Key: "key-1",
Value: "val-1",
@@ -93,6 +95,7 @@ func TestPutObjectTaggingCheckUniqueness(t *testing.T) {
} {
t.Run(tc.name, func(t *testing.T) {
w, r := prepareTestRequest(hc, bktName, objName, tc.body)
+middleware.GetReqInfo(r.Context()).Tagging = tc.body
hc.Handler().PutObjectTaggingHandler(w, r)
if tc.error {
assertS3Error(t, w, apiErrors.GetAPIError(apiErrors.ErrInvalidTagKeyUniqueness))


@@ -15,6 +15,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
frosterrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
+cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"go.opentelemetry.io/otel/trace"
"go.uber.org/zap"
@@ -30,9 +31,12 @@ func (h *handler) reqLogger(ctx context.Context) *zap.Logger {
func (h *handler) logAndSendError(w http.ResponseWriter, logText string, reqInfo *middleware.ReqInfo, err error, additional ...zap.Field) {
err = handleDeleteMarker(w, err)
-code := middleware.WriteErrorResponse(w, reqInfo, transformToS3Error(err))
+if code, wrErr := middleware.WriteErrorResponse(w, reqInfo, transformToS3Error(err)); wrErr != nil {
+additional = append(additional, zap.NamedError("write_response_error", wrErr))
+} else {
+additional = append(additional, zap.Int("status", code))
+}
fields := []zap.Field{
-zap.Int("status", code),
zap.String("request_id", reqInfo.RequestID),
zap.String("method", reqInfo.API),
zap.String("bucket", reqInfo.BucketName),
@@ -78,6 +82,10 @@ func (h *handler) ResolveBucket(ctx context.Context, bucket string) (*data.Bucke
return h.obj.GetBucketInfo(ctx, bucket)
}
+func (h *handler) ResolveCID(ctx context.Context, bucket string) (cid.ID, error) {
+return h.obj.ResolveCID(ctx, bucket)
+}
func (h *handler) getBucketAndCheckOwner(r *http.Request, bucket string, header ...string) (*data.BucketInfo, error) {
bktInfo, err := h.obj.GetBucketInfo(r.Context(), bucket)
if err != nil {


@@ -60,8 +60,8 @@ func NewCache(cfg *CachesConfig) *Cache {
}
}
-func (c *Cache) GetBucket(ns, name string) *data.BucketInfo {
-return c.bucketCache.Get(ns, name)
+func (c *Cache) GetBucket(zone, name string) *data.BucketInfo {
+return c.bucketCache.Get(zone, name)
}
func (c *Cache) PutBucket(bktInfo *data.BucketInfo) {


@@ -9,7 +9,7 @@ import (
s3errors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
)
-func (n *layer) GetObjectTaggingAndLock(ctx context.Context, objVersion *ObjectVersion, nodeVersion *data.NodeVersion) (map[string]string, *data.LockInfo, error) {
+func (n *layer) GetObjectTaggingAndLock(ctx context.Context, objVersion *data.ObjectVersion, nodeVersion *data.NodeVersion) (map[string]string, data.LockInfo, error) {
var err error
owner := n.BearerOwner(ctx)
@@ -17,26 +17,26 @@ func (n *layer) GetObjectTaggingAndLock(ctx context.Context, objVersion *ObjectV
lockInfo := n.cache.GetLockInfo(owner, lockObjectKey(objVersion))
if tags != nil && lockInfo != nil {
-return tags, lockInfo, nil
+return tags, *lockInfo, nil
}
if nodeVersion == nil {
nodeVersion, err = n.getNodeVersion(ctx, objVersion)
if err != nil {
-return nil, nil, err
+return nil, data.LockInfo{}, err
}
}
tags, lockInfo, err = n.treeService.GetObjectTaggingAndLock(ctx, objVersion.BktInfo, nodeVersion)
if err != nil {
if errors.Is(err, ErrNodeNotFound) {
-return nil, nil, fmt.Errorf("%w: %s", s3errors.GetAPIError(s3errors.ErrNoSuchKey), err.Error())
+return nil, data.LockInfo{}, fmt.Errorf("%w: %s", s3errors.GetAPIError(s3errors.ErrNoSuchKey), err.Error())
}
-return nil, nil, err
+return nil, data.LockInfo{}, err
}
n.cache.PutTagging(owner, objectTaggingCacheKey(objVersion), tags)
n.cache.PutLockInfo(owner, lockObjectKey(objVersion), lockInfo)
-return tags, lockInfo, nil
+return tags, *lockInfo, nil
}


@@ -12,6 +12,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
+"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
@@ -31,20 +32,21 @@ const (
AttributeLockEnabled = "LockEnabled"
)
-func (n *layer) containerInfo(ctx context.Context, idCnr cid.ID) (*data.BucketInfo, error) {
+func (n *layer) containerInfo(ctx context.Context, prm PrmContainer) (*data.BucketInfo, error) {
var (
err error
res *container.Container
-log = n.reqLogger(ctx).With(zap.Stringer("cid", idCnr))
+log = n.reqLogger(ctx).With(zap.Stringer("cid", prm.ContainerID))
info = &data.BucketInfo{
-CID: idCnr,
-Name: idCnr.EncodeToString(),
+CID: prm.ContainerID,
+Name: prm.ContainerID.EncodeToString(),
}
reqInfo = middleware.GetReqInfo(ctx)
)
-res, err = n.frostFS.Container(ctx, idCnr)
+res, err = n.frostFS.Container(ctx, prm)
if err != nil {
if client.IsErrContainerNotFound(err) {
return nil, fmt.Errorf("%w: %s", s3errors.GetAPIError(s3errors.ErrNoSuchBucket), err.Error())
@@ -62,6 +64,7 @@ func (n *layer) containerInfo(ctx context.Context, idCnr cid.ID) (*data.BucketIn
info.Created = container.CreatedAt(cnr)
info.LocationConstraint = cnr.Attribute(attributeLocationConstraint)
info.HomomorphicHashDisabled = container.IsHomomorphicHashingDisabled(cnr)
+info.APEEnabled = cnr.BasicACL().Bits() == 0
attrLockEnabled := cnr.Attribute(AttributeLockEnabled)
if len(attrLockEnabled) > 0 {
@@ -76,7 +79,7 @@ func (n *layer) containerInfo(ctx context.Context, idCnr cid.ID) (*data.BucketIn
zone, _ := n.features.FormContainerZone(reqInfo.Namespace)
if zone != info.Zone {
-return nil, fmt.Errorf("ns '%s' and zone '%s' are mismatched for container '%s'", zone, info.Zone, idCnr)
+return nil, fmt.Errorf("ns '%s' and zone '%s' are mismatched for container '%s'", zone, info.Zone, prm.ContainerID)
}
n.cache.PutBucket(info)
@@ -85,7 +88,14 @@ func (n *layer) containerInfo(ctx context.Context, idCnr cid.ID) (*data.BucketIn
}
func (n *layer) containerList(ctx context.Context) ([]*data.BucketInfo, error) {
-res, err := n.frostFS.UserContainers(ctx, n.BearerOwner(ctx))
+stoken := n.SessionTokenForRead(ctx)
+prm := PrmUserContainers{
+UserID: n.BearerOwner(ctx),
+SessionToken: stoken,
+}
+res, err := n.frostFS.UserContainers(ctx, prm)
if err != nil {
n.reqLogger(ctx).Error(logs.CouldNotListUserContainers, zap.Error(err))
return nil, err
@@ -93,7 +103,11 @@ func (n *layer) containerList(ctx context.Context) ([]*data.BucketInfo, error) {
list := make([]*data.BucketInfo, 0, len(res))
for i := range res {
-info, err := n.containerInfo(ctx, res[i])
+getPrm := PrmContainer{
+ContainerID: res[i],
+SessionToken: stoken,
+}
+info, err := n.containerInfo(ctx, getPrm)
if err != nil {
n.reqLogger(ctx).Error(logs.CouldNotFetchContainerInfo, zap.Error(err))
continue
@@ -119,13 +133,12 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
Created: TimeNow(ctx),
LocationConstraint: p.LocationConstraint,
ObjectLockEnabled: p.ObjectLockEnabled,
+APEEnabled: p.APEEnabled,
}
-var attributes [][2]string
-attributes = append(attributes, [2]string{
-attributeLocationConstraint, p.LocationConstraint,
-})
+attributes := [][2]string{
+{attributeLocationConstraint, p.LocationConstraint},
+}
if p.ObjectLockEnabled {
attributes = append(attributes, [2]string{
@@ -133,6 +146,11 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
})
}
+basicACL := acl.PublicRWExtended
+if p.APEEnabled {
+basicACL = 0
+}
res, err := n.frostFS.CreateContainer(ctx, PrmContainerCreate{
Creator: bktInfo.Owner,
Policy: p.Policy,
@@ -141,6 +159,7 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
SessionToken: p.SessionContainerCreation,
CreationTime: bktInfo.Created,
AdditionalAttributes: attributes,
+BasicACL: basicACL,
})
if err != nil {
return nil, fmt.Errorf("create container: %w", err)
@@ -149,10 +168,6 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
bktInfo.CID = res.ContainerID
bktInfo.HomomorphicHashDisabled = res.HomomorphicHashDisabled
-if err = n.setContainerEACLTable(ctx, bktInfo.CID, p.EACL, p.SessionEACL); err != nil {
-return nil, fmt.Errorf("set container eacl: %w", err)
-}
n.cache.PutBucket(bktInfo)
return bktInfo, nil
@@ -164,6 +179,10 @@ func (n *layer) setContainerEACLTable(ctx context.Context, idCnr cid.ID, table *
return n.frostFS.SetContainerEACL(ctx, *table, sessionToken)
}
-func (n *layer) GetContainerEACL(ctx context.Context, idCnr cid.ID) (*eacl.Table, error) {
-return n.frostFS.ContainerEACL(ctx, idCnr)
+func (n *layer) GetContainerEACL(ctx context.Context, cnrID cid.ID) (*eacl.Table, error) {
+prm := PrmContainerEACL{
+ContainerID: cnrID,
+SessionToken: n.SessionTokenForRead(ctx),
+}
+return n.frostFS.ContainerEACL(ctx, prm)
}
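Note on the basic-ACL handling above (sketch only): createContainer now sets the container's basic ACL to zero when p.APEEnabled is true and to acl.PublicRWExtended otherwise, and containerInfo later derives APEEnabled from cnr.BasicACL().Bits() == 0. The small runnable sketch below illustrates that round trip; publicRWExtended is a stand-in constant, not the SDK value.

// Sketch only: the APE / basic-ACL round trip.
package main

import "fmt"

const publicRWExtended uint32 = 0x0FFFCFFF // illustrative bits only, not acl.PublicRWExtended

func basicACLFor(apeEnabled bool) uint32 {
    if apeEnabled {
        return 0 // createContainer passes a zero basic ACL for APE buckets
    }
    return publicRWExtended
}

func main() {
    for _, ape := range []bool{true, false} {
        bits := basicACLFor(ape)
        fmt.Printf("APEEnabled=%v -> basic ACL=%#x -> detected APE=%v\n", ape, bits, bits == 0)
    }
}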


@ -46,6 +46,33 @@ type PrmContainerCreate struct {
AdditionalAttributes [][2]string AdditionalAttributes [][2]string
} }
// PrmContainer groups parameters of FrostFS.Container operation.
type PrmContainer struct {
// Container identifier.
ContainerID cid.ID
// Token of the container's creation session. Nil means session absence.
SessionToken *session.Container
}
// PrmUserContainers groups parameters of FrostFS.UserContainers operation.
type PrmUserContainers struct {
// User identifier.
UserID user.ID
// Token of the container's creation session. Nil means session absence.
SessionToken *session.Container
}
// PrmContainerEACL groups parameters of FrostFS.ContainerEACL operation.
type PrmContainerEACL struct {
// Container identifier.
ContainerID cid.ID
// Token of the container's creation session. Nil means session absence.
SessionToken *session.Container
}
// ContainerCreateResult is a result parameter of FrostFS.CreateContainer operation. // ContainerCreateResult is a result parameter of FrostFS.CreateContainer operation.
type ContainerCreateResult struct { type ContainerCreateResult struct {
ContainerID cid.ID ContainerID cid.ID
@ -173,8 +200,6 @@ type FrostFS interface {
// It sets 'Timestamp' attribute to the current time. // It sets 'Timestamp' attribute to the current time.
// It returns the ID of the saved container. // It returns the ID of the saved container.
// //
// Created container is public with enabled ACL extension.
//
// It returns exactly one non-zero value. It returns any error encountered which // It returns exactly one non-zero value. It returns any error encountered which
// prevented the container from being created. // prevented the container from being created.
CreateContainer(context.Context, PrmContainerCreate) (*ContainerCreateResult, error) CreateContainer(context.Context, PrmContainerCreate) (*ContainerCreateResult, error)
@@ -183,13 +208,13 @@ type FrostFS interface {
    //
    // It returns exactly one non-nil value. It returns any error encountered which
    // prevented the container from being read.
-   Container(context.Context, cid.ID) (*container.Container, error)
+   Container(context.Context, PrmContainer) (*container.Container, error)

    // UserContainers reads a list of the containers owned by the specified user.
    //
    // It returns exactly one non-nil value. It returns any error encountered which
    // prevented the containers from being listed.
-   UserContainers(context.Context, user.ID) ([]cid.ID, error)
+   UserContainers(context.Context, PrmUserContainers) ([]cid.ID, error)

    // SetContainerEACL saves the eACL table of the container in FrostFS. The
    // extended ACL is modified within session if session token is not nil.
@@ -201,7 +226,7 @@ type FrostFS interface {
    //
    // It returns exactly one non-nil value. It returns any error encountered which
    // prevented the eACL from being read.
-   ContainerEACL(context.Context, cid.ID) (*eacl.Table, error)
+   ContainerEACL(context.Context, PrmContainerEACL) (*eacl.Table, error)

    // DeleteContainer marks the container to be removed from FrostFS by ID.
    // Request is sent within session if the session token is specified.

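To make the new call shape concrete, here is a minimal hypothetical caller sketch; describeUserContainers and its arguments are illustrative, only the FrostFS interface and the Prm* structs come from this diff:

func describeUserContainers(ctx context.Context, frostFS FrostFS, ownerID user.ID, cnrID cid.ID) error {
    // A zero-value SessionToken (nil) means the request is sent outside any session.
    cnr, err := frostFS.Container(ctx, PrmContainer{ContainerID: cnrID})
    if err != nil {
        return fmt.Errorf("read container: %w", err)
    }
    _ = cnr // inspect attributes, placement policy, etc.

    ids, err := frostFS.UserContainers(ctx, PrmUserContainers{UserID: ownerID})
    if err != nil {
        return fmt.Errorf("list containers: %w", err)
    }
    _ = ids
    return nil
}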

@@ -8,8 +8,10 @@ import (
    "errors"
    "fmt"
    "io"
+   "strings"
    "time"

+   "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/acl"
    v2container "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container"
    objectv2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
@@ -136,6 +138,10 @@ func (t *TestFrostFS) ContainerID(name string) (cid.ID, error) {
    return cid.ID{}, fmt.Errorf("not found")
}

+func (t *TestFrostFS) SetContainer(cnrID cid.ID, cnr *container.Container) {
+   t.containers[cnrID.EncodeToString()] = cnr
+}
+
func (t *TestFrostFS) CreateContainer(_ context.Context, prm PrmContainerCreate) (*ContainerCreateResult, error) {
    var cnr container.Container
    cnr.Init()
@@ -180,17 +186,17 @@ func (t *TestFrostFS) DeleteContainer(_ context.Context, cnrID cid.ID, _ *sessio
    return nil
}

-func (t *TestFrostFS) Container(_ context.Context, id cid.ID) (*container.Container, error) {
+func (t *TestFrostFS) Container(_ context.Context, prm PrmContainer) (*container.Container, error) {
    for k, v := range t.containers {
-       if k == id.EncodeToString() {
+       if k == prm.ContainerID.EncodeToString() {
            return v, nil
        }
    }

-   return nil, fmt.Errorf("container not found %s", id)
+   return nil, fmt.Errorf("container not found %s", prm.ContainerID)
}

-func (t *TestFrostFS) UserContainers(_ context.Context, _ user.ID) ([]cid.ID, error) {
+func (t *TestFrostFS) UserContainers(context.Context, PrmUserContainers) ([]cid.ID, error) {
    var res []cid.ID
    for k := range t.containers {
        var idCnr cid.ID
@@ -216,7 +222,7 @@ func (t *TestFrostFS) ReadObject(ctx context.Context, prm PrmObjectRead) (*Objec
    if obj, ok := t.objects[sAddr]; ok {
        owner := getBearerOwner(ctx)
-       if !t.checkAccess(prm.Container, owner, eacl.OperationGet) {
+       if !t.checkAccess(prm.Container, owner, eacl.OperationGet, obj) {
            return nil, ErrAccessDenied
        }
@@ -280,7 +286,7 @@ func (t *TestFrostFS) CreateObject(_ context.Context, prm PrmObjectCreate) (oid.
    obj.SetPayloadSize(prm.PayloadSize)
    obj.SetAttributes(attrs...)
    obj.SetCreationEpoch(t.currentEpoch)
-   obj.SetOwnerID(&owner)
+   obj.SetOwnerID(owner)
    t.currentEpoch++

    if len(prm.Locks) > 0 {
@@ -318,9 +324,9 @@ func (t *TestFrostFS) DeleteObject(ctx context.Context, prm PrmObjectDelete) err
        return err
    }

-   if _, ok := t.objects[addr.EncodeToString()]; ok {
+   if obj, ok := t.objects[addr.EncodeToString()]; ok {
        owner := getBearerOwner(ctx)
-       if !t.checkAccess(prm.Container, owner, eacl.OperationDelete) {
+       if !t.checkAccess(prm.Container, owner, eacl.OperationDelete, obj) {
            return ErrAccessDenied
        }
@@ -363,8 +369,8 @@ func (t *TestFrostFS) SetContainerEACL(_ context.Context, table eacl.Table, _ *s
    return nil
}

-func (t *TestFrostFS) ContainerEACL(_ context.Context, cnrID cid.ID) (*eacl.Table, error) {
-   table, ok := t.eaclTables[cnrID.EncodeToString()]
+func (t *TestFrostFS) ContainerEACL(_ context.Context, prm PrmContainerEACL) (*eacl.Table, error) {
+   table, ok := t.eaclTables[prm.ContainerID.EncodeToString()]
    if !ok {
        return nil, errors.New("not found")
    }
@@ -372,7 +378,44 @@ func (t *TestFrostFS) ContainerEACL(_ context.Context, cnrID cid.ID) (*eacl.Tabl
    return table, nil
}

-func (t *TestFrostFS) checkAccess(cnrID cid.ID, owner user.ID, op eacl.Operation) bool {
+func (t *TestFrostFS) SearchObjects(_ context.Context, prm PrmObjectSearch) ([]oid.ID, error) {
+   filters := object.NewSearchFilters()
+   filters.AddRootFilter()
+
+   if prm.ExactAttribute[0] != "" {
+       filters.AddFilter(prm.ExactAttribute[0], prm.ExactAttribute[1], object.MatchStringEqual)
+   }
+
+   cidStr := prm.Container.EncodeToString()
+
+   var res []oid.ID
+
+   if len(filters) == 1 {
+       for k, v := range t.objects {
+           if strings.Contains(k, cidStr) {
+               id, _ := v.ID()
+               res = append(res, id)
+           }
+       }
+       return res, nil
+   }
+
+   filter := filters[1]
+   if len(filters) != 2 || filter.Operation() != object.MatchStringEqual {
+       return nil, fmt.Errorf("usupported filters")
+   }
+
+   for k, v := range t.objects {
+       if strings.Contains(k, cidStr) && isMatched(v.Attributes(), filter) {
+           id, _ := v.ID()
+           res = append(res, id)
+       }
+   }
+
+   return res, nil
+}
+
+func (t *TestFrostFS) checkAccess(cnrID cid.ID, owner user.ID, op eacl.Operation, obj *object.Object) bool {
    cnr, ok := t.containers[cnrID.EncodeToString()]
    if !ok {
        return false
@@ -388,19 +431,48 @@ func (t *TestFrostFS) checkAccess(cnrID cid.ID, owner user.ID, op eacl.Operation
    }

    for _, rec := range table.Records() {
-       if rec.Operation() == op && len(rec.Filters()) == 0 {
+       if rec.Operation() != op {
+           continue
+       }
+
+       if !matchTarget(rec, owner) {
+           continue
+       }
+
+       if matchFilter(rec.Filters(), obj) {
+           return rec.Action() == eacl.ActionAllow
+       }
+   }
+
+   return true
+}
+
+func matchTarget(rec eacl.Record, owner user.ID) bool {
    for _, trgt := range rec.Targets() {
        if trgt.Role() == eacl.RoleOthers {
-           return rec.Action() == eacl.ActionAllow
+           return true
        }
        var targetOwner user.ID
        for _, pk := range eacl.TargetECDSAKeys(&trgt) {
            user.IDFromKey(&targetOwner, *pk)
            if targetOwner.Equals(owner) {
-               return rec.Action() == eacl.ActionAllow
+               return true
            }
        }
    }
-   }
-   }
-
-   return true
-}
+
+   return false
+}
+
+func matchFilter(filters []eacl.Filter, obj *object.Object) bool {
+   objID, _ := obj.ID()
+   for _, f := range filters {
+       fv2 := f.ToV2()
+       if fv2.GetMatchType() != acl.MatchTypeStringEqual ||
+           fv2.GetHeaderType() != acl.HeaderTypeObject ||
+           fv2.GetKey() != acl.FilterObjectID ||
+           fv2.GetValue() != objID.EncodeToString() {
+           return false
+       }
    }
}
@@ -414,3 +486,12 @@ func getBearerOwner(ctx context.Context) user.ID {
    return user.ID{}
}

+func isMatched(attributes []object.Attribute, filter object.SearchFilter) bool {
+   for _, attr := range attributes {
+       if attr.Key() == filter.Header() && attr.Value() == filter.Value() {
+           return true
+       }
+   }
+
+   return false
+}

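To see how the mock compares a search filter against object attributes, here is a minimal hypothetical snippet (assuming the frostfs-sdk-go object.Attribute setters; the function name exampleIsMatched is illustrative):

func exampleIsMatched() bool {
    var attr object.Attribute
    attr.SetKey("FileName")
    attr.SetValue("cat.jpg")

    filters := object.NewSearchFilters()
    filters.AddFilter("FileName", "cat.jpg", object.MatchStringEqual)

    // isMatched returns true because the filter header/value pair
    // equals one of the object's attributes.
    return isMatched([]object.Attribute{attr}, filters[0])
}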

@@ -4,6 +4,7 @@ import (
    "context"
    "crypto/ecdsa"
    "crypto/rand"
+   "encoding/json"
    "encoding/xml"
    "fmt"
    "io"
@@ -97,14 +98,6 @@
        VersionID string
    }

-   // ObjectVersion stores object version info.
-   ObjectVersion struct {
-       BktInfo               *data.BucketInfo
-       ObjectName            string
-       VersionID             string
-       NoErrorOnDeleteMarker bool
-   }
-
    // RangeParams stores range header request parameters.
    RangeParams struct {
        Start uint64
@@ -139,6 +132,7 @@
        BktInfo  *data.BucketInfo
        Objects  []*VersionedObject
        Settings *data.BucketSettings
+       IsMultiple bool
    }

    // PutSettingsParams stores object copy request parameters.
@@ -175,11 +169,10 @@
        Name                     string
        Namespace                string
        Policy                   netmap.PlacementPolicy
-       EACL                     *eacl.Table
        SessionContainerCreation *session.Container
-       SessionEACL              *session.Container
        LocationConstraint       string
        ObjectLockEnabled        bool
+       APEEnabled               bool
    }

    // PutBucketACLParams stores put bucket acl request parameters.
    PutBucketACLParams struct {
@@ -234,6 +227,7 @@
        ListBuckets(ctx context.Context) ([]*data.BucketInfo, error)
        GetBucketInfo(ctx context.Context, name string) (*data.BucketInfo, error)
+       ResolveCID(ctx context.Context, name string) (cid.ID, error)
        GetBucketACL(ctx context.Context, bktInfo *data.BucketInfo) (*BucketACL, error)
        PutBucketACL(ctx context.Context, p *PutBucketACLParams) error
        CreateBucket(ctx context.Context, p *CreateBucketParams) (*data.BucketInfo, error)
@@ -243,16 +237,16 @@
        GetObjectInfo(ctx context.Context, p *HeadObjectParams) (*data.ObjectInfo, error)
        GetExtendedObjectInfo(ctx context.Context, p *HeadObjectParams) (*data.ExtendedObjectInfo, error)

-       GetLockInfo(ctx context.Context, obj *ObjectVersion) (*data.LockInfo, error)
+       GetLockInfo(ctx context.Context, obj *data.ObjectVersion) (*data.LockInfo, error)
        PutLockInfo(ctx context.Context, p *PutLockInfoParams) error

        GetBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) (map[string]string, error)
        PutBucketTagging(ctx context.Context, bktInfo *data.BucketInfo, tagSet map[string]string) error
        DeleteBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) error

-       GetObjectTagging(ctx context.Context, p *GetObjectTaggingParams) (string, map[string]string, error)
-       PutObjectTagging(ctx context.Context, p *PutObjectTaggingParams) (*data.NodeVersion, error)
-       DeleteObjectTagging(ctx context.Context, p *ObjectVersion) (*data.NodeVersion, error)
+       GetObjectTagging(ctx context.Context, p *data.GetObjectTaggingParams) (string, map[string]string, error)
+       PutObjectTagging(ctx context.Context, p *data.PutObjectTaggingParams) (*data.NodeVersion, error)
+       DeleteObjectTagging(ctx context.Context, p *data.ObjectVersion) (*data.NodeVersion, error)

        PutObject(ctx context.Context, p *PutObjectParams) (*data.ExtendedObjectInfo, error)
@@ -278,7 +272,7 @@
        // Compound methods for optimizations

        // GetObjectTaggingAndLock unifies GetObjectTagging and GetLock methods in single tree service invocation.
-       GetObjectTaggingAndLock(ctx context.Context, p *ObjectVersion, nodeVersion *data.NodeVersion) (map[string]string, *data.LockInfo, error)
+       GetObjectTaggingAndLock(ctx context.Context, p *data.ObjectVersion, nodeVersion *data.NodeVersion) (map[string]string, data.LockInfo, error)
    }
)
@@ -377,6 +371,15 @@ func (n *layer) BearerOwner(ctx context.Context) user.ID {
    return ownerID
}

+// SessionTokenForRead returns session container token.
+func (n *layer) SessionTokenForRead(ctx context.Context) *session.Container {
+   if bd, err := middleware.GetBoxData(ctx); err == nil && bd.Gate != nil {
+       return bd.Gate.SessionToken()
+   }
+
+   return nil
+}
+
func (n *layer) reqLogger(ctx context.Context) *zap.Logger {
    reqLogger := middleware.GetReqLog(ctx)
    if reqLogger != nil {
@@ -404,8 +407,9 @@ func (n *layer) GetBucketInfo(ctx context.Context, name string) (*data.BucketInf
    }

    reqInfo := middleware.GetReqInfo(ctx)
+   zone, _ := n.features.FormContainerZone(reqInfo.Namespace)

-   if bktInfo := n.cache.GetBucket(reqInfo.Namespace, name); bktInfo != nil {
+   if bktInfo := n.cache.GetBucket(zone, name); bktInfo != nil {
        return bktInfo, nil
    }
@@ -417,7 +421,29 @@ func (n *layer) GetBucketInfo(ctx context.Context, name string) (*data.BucketInf
        return nil, err
    }

-   return n.containerInfo(ctx, containerID)
+   prm := PrmContainer{
+       ContainerID:  containerID,
+       SessionToken: n.SessionTokenForRead(ctx),
+   }
+
+   return n.containerInfo(ctx, prm)
+}
+
+// ResolveCID returns container id by name.
+func (n *layer) ResolveCID(ctx context.Context, name string) (cid.ID, error) {
+   name, err := url.QueryUnescape(name)
+   if err != nil {
+       return cid.ID{}, fmt.Errorf("unescape bucket name: %w", err)
+   }
+
+   reqInfo := middleware.GetReqInfo(ctx)
+   zone, _ := n.features.FormContainerZone(reqInfo.Namespace)
+
+   if bktInfo := n.cache.GetBucket(zone, name); bktInfo != nil {
+       return bktInfo.CID, nil
+   }
+
+   return n.ResolveBucket(ctx, name)
}

// GetBucketACL returns bucket acl info by name.
@@ -732,7 +758,7 @@ func isNotFoundError(err error) bool {
}

func (n *layer) getNodeVersionToDelete(ctx context.Context, bkt *data.BucketInfo, obj *VersionedObject) (*data.NodeVersion, error) {
-   objVersion := &ObjectVersion{
+   objVersion := &data.ObjectVersion{
        BktInfo:    bkt,
        ObjectName: obj.Name,
        VersionID:  obj.VersionID,
@@ -743,7 +769,7 @@ func (n *layer) getNodeVersionToDelete(ctx context.Context, bkt *data.BucketInfo
}

func (n *layer) getLastNodeVersion(ctx context.Context, bkt *data.BucketInfo, obj *VersionedObject) (*data.NodeVersion, error) {
-   objVersion := &ObjectVersion{
+   objVersion := &data.ObjectVersion{
        BktInfo:    bkt,
        ObjectName: obj.Name,
        VersionID:  "",
@@ -758,13 +784,47 @@ func (n *layer) removeOldVersion(ctx context.Context, bkt *data.BucketInfo, node
        return obj.VersionID, nil
    }

+   if nodeVersion.IsCombined {
+       return "", n.removeCombinedObject(ctx, bkt, nodeVersion)
+   }
+
    return "", n.objectDelete(ctx, bkt, nodeVersion.OID)
}

+func (n *layer) removeCombinedObject(ctx context.Context, bkt *data.BucketInfo, nodeVersion *data.NodeVersion) error {
+   combinedObj, err := n.objectGet(ctx, bkt, nodeVersion.OID)
+   if err != nil {
+       return fmt.Errorf("get combined object '%s': %w", nodeVersion.OID.EncodeToString(), err)
+   }
+
+   var parts []*data.PartInfo
+   if err = json.Unmarshal(combinedObj.Payload(), &parts); err != nil {
+       return fmt.Errorf("unmarshal combined object parts: %w", err)
+   }
+
+   for _, part := range parts {
+       if err = n.objectDelete(ctx, bkt, part.OID); err == nil {
+           continue
+       }
+
+       if !client.IsErrObjectAlreadyRemoved(err) && !client.IsErrObjectNotFound(err) {
+           return fmt.Errorf("couldn't delete part '%s': %w", part.OID.EncodeToString(), err)
+       }
+
+       n.reqLogger(ctx).Warn(logs.CouldntDeletePart, zap.String("cid", bkt.CID.EncodeToString()),
+           zap.String("oid", part.OID.EncodeToString()), zap.Int("part number", part.Number), zap.Error(err))
+   }
+
+   return n.objectDelete(ctx, bkt, nodeVersion.OID)
+}
+
// DeleteObjects from the storage.
func (n *layer) DeleteObjects(ctx context.Context, p *DeleteObjectParams) []*VersionedObject {
    for i, obj := range p.Objects {
        p.Objects[i] = n.deleteObject(ctx, p.BktInfo, p.Settings, obj)
+       if p.IsMultiple && p.Objects[i].Error != nil {
+           n.reqLogger(ctx).Error(logs.CouldntDeleteObject, zap.String("object", obj.String()), zap.Error(p.Objects[i].Error))
+       }
    }

    return p.Objects

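A hedged sketch of how a handler could drive the new multiple-delete logging path; deleteMany and its arguments are illustrative, only DeleteObjectParams, VersionedObject and DeleteObjects come from this code:

func deleteMany(ctx context.Context, n *layer, bkt *data.BucketInfo, settings *data.BucketSettings, keys []string) []*VersionedObject {
    toDelete := make([]*VersionedObject, len(keys))
    for i, k := range keys {
        toDelete[i] = &VersionedObject{Name: k}
    }

    // IsMultiple makes the layer log each per-object failure in addition to
    // returning it in the corresponding VersionedObject.Error field.
    return n.DeleteObjects(ctx, &DeleteObjectParams{
        BktInfo:    bkt,
        Objects:    toDelete,
        Settings:   settings,
        IsMultiple: true,
    })
}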

@@ -334,7 +334,7 @@ func (n *layer) initNewVersionsByPrefixSession(ctx context.Context, p commonVers
    session.Context, session.Cancel = context.WithCancel(context.Background())

    if bd, err := middleware.GetBoxData(ctx); err == nil {
-       session.Context = middleware.SetBoxData(session.Context, bd)
+       session.Context = middleware.SetBox(session.Context, &middleware.Box{AccessBox: bd})
    }

    session.Stream, err = n.treeService.InitVersionsByPrefixStream(session.Context, p.BktInfo, p.Prefix, latestOnly)


@@ -20,7 +20,7 @@ func TestObjectLockAttributes(t *testing.T) {
    obj := tc.putObject([]byte("content obj1 v1"))

    p := &PutLockInfoParams{
-       ObjVersion: &ObjectVersion{
+       ObjVersion: &data.ObjectVersion{
            BktInfo:    tc.bktInfo,
            ObjectName: obj.Name,
            VersionID:  obj.VersionID(),


@@ -321,7 +321,7 @@ func (n *layer) PutObject(ctx context.Context, p *PutObjectParams) (*data.Extend
    if p.Lock != nil && (p.Lock.Retention != nil || p.Lock.LegalHold != nil) {
        putLockInfoPrms := &PutLockInfoParams{
-           ObjVersion: &ObjectVersion{
+           ObjVersion: &data.ObjectVersion{
                BktInfo:    p.BktInfo,
                ObjectName: p.Object,
                VersionID:  id.EncodeToString(),
@@ -384,7 +384,7 @@ func (n *layer) headLastVersionIfNotDeleted(ctx context.Context, bkt *data.Bucke
    meta, err := n.objectHead(ctx, bkt, node.OID)
    if err != nil {
        if client.IsErrObjectNotFound(err) {
-           return nil, fmt.Errorf("%w: %s", apiErrors.GetAPIError(apiErrors.ErrNoSuchKey), err.Error())
+           return nil, fmt.Errorf("%w: %s; %s", apiErrors.GetAPIError(apiErrors.ErrNoSuchKey), err.Error(), node.OID.EncodeToString())
        }
        return nil, err
    }


@@ -20,7 +20,7 @@ const (
)

type PutLockInfoParams struct {
-   ObjVersion    *ObjectVersion
+   ObjVersion    *data.ObjectVersion
    NewLock       *data.ObjectLock
    CopiesNumbers []uint32
    NodeVersion   *data.NodeVersion // optional
@@ -100,7 +100,7 @@ func (n *layer) PutLockInfo(ctx context.Context, p *PutLockInfoParams) (err erro
    return nil
}

-func (n *layer) getNodeVersionFromCacheOrFrostfs(ctx context.Context, objVersion *ObjectVersion) (nodeVersion *data.NodeVersion, err error) {
+func (n *layer) getNodeVersionFromCacheOrFrostfs(ctx context.Context, objVersion *data.ObjectVersion) (nodeVersion *data.NodeVersion, err error) {
    // check cache if node version is stored inside extendedObjectVersion
    nodeVersion = n.getNodeVersionFromCache(n.BearerOwner(ctx), objVersion)
    if nodeVersion == nil {
@@ -129,7 +129,7 @@ func (n *layer) putLockObject(ctx context.Context, bktInfo *data.BucketInfo, obj
    return id, err
}

-func (n *layer) GetLockInfo(ctx context.Context, objVersion *ObjectVersion) (*data.LockInfo, error) {
+func (n *layer) GetLockInfo(ctx context.Context, objVersion *data.ObjectVersion) (*data.LockInfo, error) {
    owner := n.BearerOwner(ctx)
    if lockInfo := n.cache.GetLockInfo(owner, lockObjectKey(objVersion)); lockInfo != nil {
        return lockInfo, nil
@@ -185,7 +185,7 @@ func (n *layer) getCORS(ctx context.Context, bkt *data.BucketInfo) (*data.CORSCo
    return cors, nil
}

-func lockObjectKey(objVersion *ObjectVersion) string {
+func lockObjectKey(objVersion *data.ObjectVersion) string {
    // todo reconsider forming name since versionID can be "null" or ""
    return ".lock." + objVersion.BktInfo.CID.EncodeToString() + "." + objVersion.ObjectName + "." + objVersion.VersionID
}


@@ -14,22 +14,7 @@ import (
    "go.uber.org/zap"
)

-type GetObjectTaggingParams struct {
-   ObjectVersion *ObjectVersion
-
-   // NodeVersion can be nil. If not nil we save one request to tree service.
-   NodeVersion *data.NodeVersion // optional
-}
-
-type PutObjectTaggingParams struct {
-   ObjectVersion *ObjectVersion
-   TagSet        map[string]string
-
-   // NodeVersion can be nil. If not nil we save one request to tree service.
-   NodeVersion *data.NodeVersion // optional
-}
-
-func (n *layer) GetObjectTagging(ctx context.Context, p *GetObjectTaggingParams) (string, map[string]string, error) {
+func (n *layer) GetObjectTagging(ctx context.Context, p *data.GetObjectTaggingParams) (string, map[string]string, error) {
    var err error
    owner := n.BearerOwner(ctx)
@@ -65,7 +50,7 @@ func (n *layer) GetObjectTagging(ctx context.Context, p *GetObjectTaggingParams)
    return p.ObjectVersion.VersionID, tags, nil
}

-func (n *layer) PutObjectTagging(ctx context.Context, p *PutObjectTaggingParams) (nodeVersion *data.NodeVersion, err error) {
+func (n *layer) PutObjectTagging(ctx context.Context, p *data.PutObjectTaggingParams) (nodeVersion *data.NodeVersion, err error) {
    nodeVersion = p.NodeVersion
    if nodeVersion == nil {
        nodeVersion, err = n.getNodeVersionFromCacheOrFrostfs(ctx, p.ObjectVersion)
@@ -88,7 +73,7 @@ func (n *layer) PutObjectTagging(ctx context.Context, p *PutObjectTaggingParams)
    return nodeVersion, nil
}

-func (n *layer) DeleteObjectTagging(ctx context.Context, p *ObjectVersion) (*data.NodeVersion, error) {
+func (n *layer) DeleteObjectTagging(ctx context.Context, p *data.ObjectVersion) (*data.NodeVersion, error) {
    version, err := n.getNodeVersion(ctx, p)
    if err != nil {
        return nil, err
@@ -142,7 +127,7 @@ func (n *layer) DeleteBucketTagging(ctx context.Context, bktInfo *data.BucketInf
    return n.treeService.DeleteBucketTagging(ctx, bktInfo)
}

-func objectTaggingCacheKey(p *ObjectVersion) string {
+func objectTaggingCacheKey(p *data.ObjectVersion) string {
    return ".tagset." + p.BktInfo.CID.EncodeToString() + "." + p.ObjectName + "." + p.VersionID
}
@@ -150,7 +135,7 @@ func bucketTaggingCacheKey(cnrID cid.ID) string {
    return ".tagset." + cnrID.EncodeToString()
}

-func (n *layer) getNodeVersion(ctx context.Context, objVersion *ObjectVersion) (*data.NodeVersion, error) {
+func (n *layer) getNodeVersion(ctx context.Context, objVersion *data.ObjectVersion) (*data.NodeVersion, error) {
    var err error
    var version *data.NodeVersion
@@ -188,7 +173,7 @@ func (n *layer) getNodeVersion(ctx context.Context, objVersion *ObjectVersion) (
    return version, err
}

-func (n *layer) getNodeVersionFromCache(owner user.ID, o *ObjectVersion) *data.NodeVersion {
+func (n *layer) getNodeVersionFromCache(owner user.ID, o *data.ObjectVersion) *data.NodeVersion {
    if len(o.VersionID) == 0 || o.VersionID == data.UnversionedObjectVersionID {
        return nil
    }


@@ -56,7 +56,7 @@ func objectInfoFromMeta(bkt *data.BucketInfo, meta *object.Object) *data.ObjectI
        Created:       creation,
        ContentType:   mimeType,
        Headers:       headers,
-       Owner:         *meta.OwnerID(),
+       Owner:         meta.OwnerID(),
        Size:          meta.PayloadSize(),
        CreationEpoch: meta.CreationEpoch(),
        HashSum:       hex.EncodeToString(payloadChecksum.Value()),


@@ -145,12 +145,12 @@ func prepareContext(t *testing.T, cachesConfig ...*CachesConfig) *testContext {
    bearerToken := bearertest.Token()
    require.NoError(t, bearerToken.Sign(key.PrivateKey))

-   ctx := middleware.SetBoxData(context.Background(), &accessbox.Box{
+   ctx := middleware.SetBox(context.Background(), &middleware.Box{AccessBox: &accessbox.Box{
        Gate: &accessbox.GateData{
            BearerToken: &bearerToken,
            GateKey:     key.PublicKey(),
        },
-   })
+   }})

    tp := NewTestFrostFS(key)
    bktName := "testbucket1"


@@ -2,16 +2,18 @@

import (
    "crypto/elliptic"
-   stderrors "errors"
+   "errors"
    "fmt"
    "net/http"
    "time"

    "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/acl"
-   "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
+   apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
+   frostfsErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/errors"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
+   "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
    "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
    "go.uber.org/zap"
)
@@ -22,6 +24,7 @@ type (
        AccessBox   *accessbox.Box
        ClientTime  time.Time
        AuthHeaders *AuthHeader
+       Attributes  []object.Attribute
    }

    // Center is a user authentication interface.
@@ -40,30 +43,36 @@ type (
)

// ErrNoAuthorizationHeader is returned for unauthenticated requests.
-var ErrNoAuthorizationHeader = stderrors.New("no authorization header")
+var ErrNoAuthorizationHeader = errors.New("no authorization header")

func Auth(center Center, log *zap.Logger) Func {
    return func(h http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ctx := r.Context()
+           reqInfo := GetReqInfo(ctx)
+           reqInfo.User = "anon"
            box, err := center.Authenticate(r)
            if err != nil {
-               if err == ErrNoAuthorizationHeader {
-                   reqLogOrDefault(ctx, log).Debug(logs.CouldntReceiveAccessBoxForGateKeyRandomKeyWillBeUsed)
+               if errors.Is(err, ErrNoAuthorizationHeader) {
+                   reqLogOrDefault(ctx, log).Debug(logs.CouldntReceiveAccessBoxForGateKeyRandomKeyWillBeUsed, zap.Error(err))
                } else {
                    reqLogOrDefault(ctx, log).Error(logs.FailedToPassAuthentication, zap.Error(err))
-                   if _, ok := err.(errors.Error); !ok {
-                       err = errors.GetAPIError(errors.ErrAccessDenied)
+                   err = frostfsErrors.UnwrapErr(err)
+                   if _, ok := err.(apiErrors.Error); !ok {
+                       err = apiErrors.GetAPIError(apiErrors.ErrAccessDenied)
+                   }
+                   if _, wrErr := WriteErrorResponse(w, GetReqInfo(r.Context()), err); wrErr != nil {
+                       reqLogOrDefault(ctx, log).Error(logs.FailedToWriteResponse, zap.Error(wrErr))
                    }
-                   WriteErrorResponse(w, GetReqInfo(r.Context()), err)
                    return
                }
            } else {
-               ctx = SetBoxData(ctx, box.AccessBox)
-               if !box.ClientTime.IsZero() {
-                   ctx = SetClientTime(ctx, box.ClientTime)
+               ctx = SetBox(ctx, box)
+
+               if box.AccessBox.Gate.BearerToken != nil {
+                   reqInfo.User = bearer.ResolveIssuer(*box.AccessBox.Gate.BearerToken).String()
                }
-               ctx = SetAuthHeaders(ctx, box.AuthHeaders)
+               reqLogOrDefault(ctx, log).Debug(logs.SuccessfulAuth, zap.String("accessKeyID", box.AuthHeaders.AccessKeyID))
            }

            h.ServeHTTP(w, r.WithContext(ctx))
@@ -88,7 +97,9 @@ func FrostfsIDValidation(frostfsID FrostFSIDValidator, log *zap.Logger) Func {
            if err = validateBearerToken(frostfsID, bd.Gate.BearerToken); err != nil {
                reqLogOrDefault(ctx, log).Error(logs.FrostfsIDValidationFailed, zap.Error(err))
-               WriteErrorResponse(w, GetReqInfo(r.Context()), err)
+               if _, wrErr := WriteErrorResponse(w, GetReqInfo(r.Context()), err); wrErr != nil {
+                   reqLogOrDefault(ctx, log).Error(logs.FailedToWriteResponse, zap.Error(wrErr))
+               }
                return
            }

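For orientation, a hedged sketch of how code downstream of Auth reads what the middleware stored; exampleHandler is illustrative, only SetBox, GetBoxData, GetReqInfo and ReqInfo.User come from this change set:

func exampleHandler(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()

    // Box was stored by Auth via SetBox; GetBoxData unwraps its AccessBox part.
    if bd, err := GetBoxData(ctx); err == nil && bd.Gate.BearerToken != nil {
        _ = bd.Gate.BearerToken // bearer token issued to the gate
    }

    // ReqInfo.User is "anon" or the bearer-token issuer resolved during Auth.
    reqInfo := GetReqInfo(ctx)
    _ = reqInfo.User
}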

@@ -9,6 +9,7 @@ const (
    HeadBucketOperation           = "HeadBucket"
    ListMultipartUploadsOperation = "ListMultipartUploads"
    GetBucketLocationOperation    = "GetBucketLocation"
+   GetBucketPolicyStatusOperation = "GetBucketPolicyStatus"
    GetBucketPolicyOperation      = "GetBucketPolicy"
    GetBucketLifecycleOperation   = "GetBucketLifecycle"
    GetBucketEncryptionOperation  = "GetBucketEncryption"
@@ -77,6 +78,7 @@ const (
const (
    UploadsQuery    = "uploads"
    LocationQuery   = "location"
+   PolicyStatusQuery = "policyStatus"
    PolicyQuery     = "policy"
    LifecycleQuery  = "lifecycle"
    EncryptionQuery = "encryption"


@@ -9,10 +9,9 @@ import (
    "sync/atomic"
    "time"

-   "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/metrics"
-   "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
+   cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
    "go.uber.org/zap"
)
@@ -39,8 +38,8 @@ type (
        ResolveNamespaceAlias(namespace string) string
    }

-   // BucketResolveFunc is a func to resolve bucket info by name.
-   BucketResolveFunc func(ctx context.Context, bucket string) (*data.BucketInfo, error)
+   // ContainerIDResolveFunc is a func to resolve container id by name.
+   ContainerIDResolveFunc func(ctx context.Context, bucket string) (cid.ID, error)

    // cidResolveFunc is a func to resolve CID in Stats handler.
    cidResolveFunc func(ctx context.Context, reqInfo *ReqInfo) (cnrID string)
@@ -49,7 +48,7 @@ type (
const systemPath = "/system"

// Metrics wraps http handler for api with basic statistics collection.
-func Metrics(log *zap.Logger, resolveBucket BucketResolveFunc, appMetrics *metrics.AppMetrics, settings MetricsSettings) Func {
+func Metrics(log *zap.Logger, resolveBucket ContainerIDResolveFunc, appMetrics *metrics.AppMetrics, settings MetricsSettings) Func {
    return func(h http.Handler) http.Handler {
        return stats(h.ServeHTTP, resolveCID(log, resolveBucket), appMetrics, settings)
    }
@@ -80,9 +79,8 @@ func stats(f http.HandlerFunc, resolveCID cidResolveFunc, appMetrics *metrics.Ap
    // simply for the fact that it is not human-readable.
    durationSecs := time.Since(statsWriter.startTime).Seconds()

-   user := resolveUser(r.Context())
    cnrID := resolveCID(r.Context(), reqInfo)
-   appMetrics.Update(user, reqInfo.BucketName, cnrID, settings.ResolveNamespaceAlias(reqInfo.Namespace),
+   appMetrics.UsersAPIStats().Update(reqInfo.User, reqInfo.BucketName, cnrID, settings.ResolveNamespaceAlias(reqInfo.Namespace),
        requestTypeFromAPI(reqInfo.API), in.countBytes, out.countBytes)

    code := statsWriter.statusCode
@@ -133,30 +131,22 @@ func requestTypeFromAPI(api string) metrics.RequestType {
}

// resolveCID forms CIDResolveFunc using BucketResolveFunc.
-func resolveCID(log *zap.Logger, resolveBucket BucketResolveFunc) cidResolveFunc {
+func resolveCID(log *zap.Logger, resolveContainerID ContainerIDResolveFunc) cidResolveFunc {
    return func(ctx context.Context, reqInfo *ReqInfo) (cnrID string) {
        if reqInfo.BucketName == "" || reqInfo.API == CreateBucketOperation || reqInfo.API == "" {
            return ""
        }

-       bktInfo, err := resolveBucket(ctx, reqInfo.BucketName)
+       containerID, err := resolveContainerID(ctx, reqInfo.BucketName)
        if err != nil {
            reqLogOrDefault(ctx, log).Debug(logs.FailedToResolveCID, zap.Error(err))
            return ""
        }

-       return bktInfo.CID.EncodeToString()
+       return containerID.EncodeToString()
    }
}

-func resolveUser(ctx context.Context) string {
-   user := "anon"
-   if bd, err := GetBoxData(ctx); err == nil && bd.Gate.BearerToken != nil {
-       user = bearer.ResolveIssuer(*bd.Gate.BearerToken).String()
-   }
-
-   return user
-}
-
// WriteHeader -- writes http status code.
func (w *responseWrapper) WriteHeader(code int) {
    w.Do(func() {

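A wiring sketch under assumed names (router, logger, storageLayer, appMetrics and metricsSettings are not part of this diff) showing that the layer's new ResolveCID method satisfies ContainerIDResolveFunc directly:

// storageLayer.ResolveCID has the signature
// func(ctx context.Context, name string) (cid.ID, error),
// which is exactly what Metrics now expects instead of a full bucket resolver.
router.Use(Metrics(logger, storageLayer.ResolveCID, appMetrics, metricsSettings))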

@@ -1,12 +1,18 @@
package middleware

import (
+   "context"
    "crypto/elliptic"
+   "encoding/xml"
    "fmt"
+   "io"
    "net/http"
+   "net/url"
    "strings"

+   "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
    apiErr "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
+   frostfsErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/errors"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
    "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
    "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
@@ -18,29 +24,69 @@ import (
    "go.uber.org/zap"
)
const (
QueryVersionID = "versionId"
QueryPrefix = "prefix"
QueryDelimiter = "delimiter"
QueryMaxKeys = "max-keys"
amzTagging = "x-amz-tagging"
)
// At the beginning of these operations resources haven't yet been created.
var withoutResourceOps = []string{
CreateBucketOperation,
CreateMultipartUploadOperation,
AbortMultipartUploadOperation,
CompleteMultipartUploadOperation,
UploadPartOperation,
UploadPartCopyOperation,
ListPartsOperation,
PutObjectOperation,
CopyObjectOperation,
}
type PolicySettings interface {
-   ResolveNamespaceAlias(ns string) string
    PolicyDenyByDefault() bool
+   ACLEnabled() bool
}

type FrostFSIDInformer interface {
-   GetUserGroupIDs(userHash util.Uint160) ([]string, error)
+   GetUserGroupIDsAndClaims(userHash util.Uint160) ([]string, map[string]string, error)
}

-func PolicyCheck(storage engine.ChainRouter, frostfsid FrostFSIDInformer, settings PolicySettings, domains []string, log *zap.Logger) Func {
+type XMLDecoder interface {
+   NewXMLDecoder(io.Reader) *xml.Decoder
+}
+
+type ResourceTagging interface {
+   GetBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) (map[string]string, error)
+   GetObjectTagging(ctx context.Context, p *data.GetObjectTaggingParams) (string, map[string]string, error)
+}
+
+// BucketResolveFunc is a func to resolve bucket info by name.
+type BucketResolveFunc func(ctx context.Context, bucket string) (*data.BucketInfo, error)
+
+type PolicyConfig struct {
+   Storage        engine.ChainRouter
+   FrostfsID      FrostFSIDInformer
+   Settings       PolicySettings
+   Domains        []string
+   Log            *zap.Logger
+   BucketResolver BucketResolveFunc
+   Decoder        XMLDecoder
+   Tagging        ResourceTagging
+}
+
+func PolicyCheck(cfg PolicyConfig) Func {
    return func(h http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ctx := r.Context()
-
-           st, err := policyCheck(storage, frostfsid, settings, domains, r)
-           if err == nil {
-               if st != chain.Allow && (st != chain.NoRuleFound || settings.PolicyDenyByDefault()) {
-                   err = apiErr.GetAPIErrorWithError(apiErr.ErrAccessDenied, fmt.Errorf("policy check: %s", st.String()))
-               }
-           }
-
-           if err != nil {
-               reqLogOrDefault(ctx, log).Error(logs.PolicyValidationFailed, zap.Error(err))
-               WriteErrorResponse(w, GetReqInfo(ctx), err)
+           if err := policyCheck(r, cfg); err != nil {
+               reqLogOrDefault(ctx, cfg.Log).Error(logs.PolicyValidationFailed, zap.Error(err))
+
+               err = frostfsErrors.UnwrapErr(err)
+               if _, wrErr := WriteErrorResponse(w, GetReqInfo(ctx), err); wrErr != nil {
+                   reqLogOrDefault(ctx, cfg.Log).Error(logs.FailedToWriteResponse, zap.Error(wrErr))
+               }
                return
            }
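For orientation, a hedged assembly sketch; every variable on the right-hand side is an assumption, only PolicyConfig, PolicyCheck and the interfaces above come from this diff:

policyCfg := PolicyConfig{
    Storage:        chainStorage,    // engine.ChainRouter holding the APE rule chains
    FrostfsID:      frostfsIDClient, // resolves user groups and claims
    Settings:       appSettings,     // provides PolicyDenyByDefault / ACLEnabled
    Domains:        listenDomains,   // for virtual-hosted-style bucket names
    Log:            logger,
    BucketResolver: storageLayer.GetBucketInfo, // matches BucketResolveFunc
    Decoder:        xmlDecoder,                 // anything implementing NewXMLDecoder
    Tagging:        storageLayer,               // implements GetBucketTagging / GetObjectTagging
}
router.Use(PolicyCheck(policyCfg))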
@@ -49,55 +95,110 @@
    }
}

-func policyCheck(storage engine.ChainRouter, frostfsid FrostFSIDInformer, settings PolicySettings, domains []string, r *http.Request) (chain.Status, error) {
-   req, err := getPolicyRequest(r, frostfsid, domains)
+func policyCheck(r *http.Request, cfg PolicyConfig) error {
+   reqType, bktName, objName := getBucketObject(r, cfg.Domains)
+   req, userKey, userGroups, err := getPolicyRequest(r, cfg, reqType, bktName, objName)
    if err != nil {
-       return 0, err
+       return err
+   }
+
+   var bktInfo *data.BucketInfo
+   if reqType != noneType && !strings.HasSuffix(req.Operation(), CreateBucketOperation) {
+       bktInfo, err = cfg.BucketResolver(r.Context(), bktName)
+       if err != nil {
+           return err
+       }
    }

    reqInfo := GetReqInfo(r.Context())
-   target := engine.NewRequestTargetWithNamespace(settings.ResolveNamespaceAlias(reqInfo.Namespace))
-   st, found, err := storage.IsAllowed(chain.S3, target, req)
+   target := engine.NewRequestTargetWithNamespace(reqInfo.Namespace)
+   if bktInfo != nil {
+       cnrTarget := engine.ContainerTarget(bktInfo.CID.EncodeToString())
+       target.Container = &cnrTarget
+   }
+
+   if userKey != nil {
+       entityName := fmt.Sprintf("%s:%s", reqInfo.Namespace, userKey.Address())
+       uTarget := engine.UserTarget(entityName)
+       target.User = &uTarget
+   }
+
+   gts := make([]engine.Target, len(userGroups))
+   for i, group := range userGroups {
+       entityName := fmt.Sprintf("%s:%s", reqInfo.Namespace, group)
+       gts[i] = engine.GroupTarget(entityName)
+   }
+   target.Groups = gts
+
+   st, found, err := cfg.Storage.IsAllowed(chain.S3, target, req)
    if err != nil {
-       return 0, err
+       return err
    }

    if !found {
        st = chain.NoRuleFound
    }

-   return st, nil
+   switch {
+   case st == chain.Allow:
+       return nil
+   case st != chain.NoRuleFound:
+       return apiErr.GetAPIErrorWithError(apiErr.ErrAccessDenied, fmt.Errorf("policy check: %s", st.String()))
+   }
+
+   isAPE := !cfg.Settings.ACLEnabled()
+   if bktInfo != nil {
+       isAPE = bktInfo.APEEnabled
+   }
+
+   if isAPE && cfg.Settings.PolicyDenyByDefault() {
+       return apiErr.GetAPIErrorWithError(apiErr.ErrAccessDenied, fmt.Errorf("policy check: %s", st.String()))
+   }
+
+   return nil
}

-func getPolicyRequest(r *http.Request, frostfsid FrostFSIDInformer, domains []string) (*testutil.Request, error) {
+func getPolicyRequest(r *http.Request, cfg PolicyConfig, reqType ReqType, bktName string, objName string) (*testutil.Request, *keys.PublicKey, []string, error) {
    var (
        owner  string
        groups []string
+       tags   map[string]string
+       pk     *keys.PublicKey
    )

    ctx := r.Context()
    bd, err := GetBoxData(ctx)
    if err == nil && bd.Gate.BearerToken != nil {
-       pk, err := keys.NewPublicKeyFromBytes(bd.Gate.BearerToken.SigningKeyBytes(), elliptic.P256())
+       pk, err = keys.NewPublicKeyFromBytes(bd.Gate.BearerToken.SigningKeyBytes(), elliptic.P256())
        if err != nil {
-           return nil, fmt.Errorf("parse pubclic key from btoken: %w", err)
+           return nil, nil, nil, fmt.Errorf("parse pubclic key from btoken: %w", err)
        }
        owner = pk.Address()

-       groups, err = frostfsid.GetUserGroupIDs(pk.GetScriptHash())
+       groups, tags, err = cfg.FrostfsID.GetUserGroupIDsAndClaims(pk.GetScriptHash())
        if err != nil {
-           return nil, fmt.Errorf("get group ids: %w", err)
+           return nil, nil, nil, fmt.Errorf("get group ids: %w", err)
        }
    }

-   op, res := determineOperationAndResource(r, domains)
+   op := determineOperation(r, reqType)
+
+   var res string
+   switch reqType {
+   case objectType:
+       res = fmt.Sprintf(s3.ResourceFormatS3BucketObject, bktName, objName)
+   default:
+       res = fmt.Sprintf(s3.ResourceFormatS3Bucket, bktName)
+   }

-   return testutil.NewRequest(op, testutil.NewResource(res, nil),
-       map[string]string{
-           s3.PropertyKeyOwner:                owner,
-           common.PropertyKeyFrostFSIDGroupID: chain.FormCondSliceContainsValue(groups),
-       },
-   ), nil
+   properties, err := determineProperties(r, cfg.Decoder, cfg.BucketResolver, cfg.Tagging, reqType, op, bktName, objName, owner, groups, tags)
+   if err != nil {
+       return nil, nil, nil, fmt.Errorf("determine properties: %w", err)
+   }
+
+   reqLogOrDefault(r.Context(), cfg.Log).Debug(logs.PolicyRequest, zap.String("action", op),
+       zap.String("resource", res), zap.Any("properties", properties))
+
+   return testutil.NewRequest(op, testutil.NewResource(res, nil), properties), pk, groups, nil
}

type ReqType int
@@ -108,45 +209,34 @@
    objectType
)

-func determineOperationAndResource(r *http.Request, domains []string) (operation string, resource string) {
-   var (
-       reqType     ReqType
-       matchDomain bool
-   )
-
-   for _, domain := range domains {
-       ind := strings.Index(r.Host, "."+domain)
-       if ind == -1 {
-           continue
-       }
-       matchDomain = true
-       reqType = bucketType
-       bkt := r.Host[:ind]
-       if obj := strings.TrimPrefix(r.URL.Path, "/"); obj != "" {
-           reqType = objectType
-           resource = fmt.Sprintf(s3.ResourceFormatS3BucketObject, bkt, obj)
-       } else {
-           resource = fmt.Sprintf(s3.ResourceFormatS3Bucket, bkt)
-       }
-       break
-   }
-   if !matchDomain {
-       bktObj := strings.TrimPrefix(r.URL.Path, "/")
-       if ind := strings.IndexByte(bktObj, '/'); ind == -1 {
-           reqType = bucketType
-           resource = fmt.Sprintf(s3.ResourceFormatS3Bucket, bktObj)
-           if bktObj == "" {
-               reqType = noneType
-           }
-       } else {
-           reqType = objectType
-           resource = fmt.Sprintf(s3.ResourceFormatS3BucketObject, bktObj[:ind], bktObj[ind+1:])
-       }
-   }
-
+func getBucketObject(r *http.Request, domains []string) (reqType ReqType, bktName string, objName string) {
+   for _, domain := range domains {
+       ind := strings.Index(r.Host, "."+domain)
+       if ind == -1 {
+           continue
+       }
+
+       bkt := r.Host[:ind]
+       if obj := strings.TrimPrefix(r.URL.Path, "/"); obj != "" {
+           return objectType, bkt, obj
+       }
+
+       return bucketType, bkt, ""
+   }
+
+   bktObj := strings.TrimPrefix(r.URL.Path, "/")
+   if bktObj == "" {
+       return noneType, "", ""
+   }
+
+   if ind := strings.IndexByte(bktObj, '/'); ind != -1 && bktObj[ind+1:] != "" {
+       return objectType, bktObj[:ind], bktObj[ind+1:]
+   }
+
+   return bucketType, strings.TrimSuffix(bktObj, "/"), ""
+}
+
+func determineOperation(r *http.Request, reqType ReqType) (operation string) {
    switch reqType {
    case objectType:
        operation = determineObjectOperation(r)
@@ -156,7 +246,7 @@ func determineOperationAndResource(r *http.Request, domains []string) (operation
        operation = determineGeneralOperation(r)
    }

-   return "s3:" + operation, resource
+   return "s3:" + operation
}

func determineBucketOperation(r *http.Request) string {
@@ -260,7 +350,7 @@ func determineBucketOperation(r *http.Request) string {
        }
    }

-   return ""
+   return "UnmatchedBucketOperation"
}

func determineObjectOperation(r *http.Request) string {
@@ -324,12 +414,156 @@ func determineObjectOperation(r *http.Request) string {
        }
    }

-   return ""
+   return "UnmatchedObjectOperation"
}

func determineGeneralOperation(r *http.Request) string {
    if r.Method == http.MethodGet {
        return ListBucketsOperation
    }

-   return ""
+   return "UnmatchedOperation"
}
func determineProperties(r *http.Request, decoder XMLDecoder, resolver BucketResolveFunc, tagging ResourceTagging, reqType ReqType,
op, bktName, objName, owner string, groups []string, tags map[string]string) (map[string]string, error) {
res := map[string]string{
s3.PropertyKeyOwner: owner,
common.PropertyKeyFrostFSIDGroupID: chain.FormCondSliceContainsValue(groups),
common.PropertyKeyFrostFSSourceIP: GetReqInfo(r.Context()).RemoteHost,
}
queries := GetReqInfo(r.Context()).URL.Query()
for k, v := range tags {
res[fmt.Sprintf(common.PropertyKeyFormatFrostFSIDUserClaim, k)] = v
}
if reqType == objectType {
if versionID := queries.Get(QueryVersionID); len(versionID) > 0 {
res[s3.PropertyKeyVersionID] = versionID
}
}
if reqType == bucketType && (strings.HasSuffix(op, ListObjectsV1Operation) || strings.HasSuffix(op, ListObjectsV2Operation) ||
strings.HasSuffix(op, ListBucketObjectVersionsOperation) || strings.HasSuffix(op, ListMultipartUploadsOperation)) {
if prefix := queries.Get(QueryPrefix); len(prefix) > 0 {
res[s3.PropertyKeyPrefix] = prefix
}
if delimiter := queries.Get(QueryDelimiter); len(delimiter) > 0 {
res[s3.PropertyKeyDelimiter] = delimiter
}
if maxKeys := queries.Get(QueryMaxKeys); len(maxKeys) > 0 {
res[s3.PropertyKeyMaxKeys] = maxKeys
}
}
tags, err := determineTags(r, decoder, resolver, tagging, reqType, op, bktName, objName, queries.Get(QueryVersionID))
if err != nil {
return nil, fmt.Errorf("determine tags: %w", err)
}
for k, v := range tags {
res[k] = v
}
attrs, err := GetAccessBoxAttrs(r.Context())
if err == nil {
for _, attr := range attrs {
res[fmt.Sprintf(s3.PropertyKeyFormatAccessBoxAttr, attr.Key())] = attr.Value()
}
}
return res, nil
}
func determineTags(r *http.Request, decoder XMLDecoder, resolver BucketResolveFunc, tagging ResourceTagging, reqType ReqType,
op, bktName, objName, versionID string) (map[string]string, error) {
res, err := determineRequestTags(r, decoder, op)
if err != nil {
return nil, fmt.Errorf("determine request tags: %w", err)
}
tags, err := determineResourceTags(r.Context(), reqType, op, bktName, objName, versionID, resolver, tagging)
if err != nil {
return nil, fmt.Errorf("determine resource tags: %w", err)
}
for k, v := range tags {
res[k] = v
}
return res, nil
}
func determineRequestTags(r *http.Request, decoder XMLDecoder, op string) (map[string]string, error) {
tags := make(map[string]string)
if strings.HasSuffix(op, PutObjectTaggingOperation) || strings.HasSuffix(op, PutBucketTaggingOperation) {
tagging := new(data.Tagging)
if err := decoder.NewXMLDecoder(r.Body).Decode(tagging); err != nil {
return nil, fmt.Errorf("%w: %s", apiErr.GetAPIError(apiErr.ErrMalformedXML), err.Error())
}
GetReqInfo(r.Context()).Tagging = tagging
for _, tag := range tagging.TagSet {
tags[fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tag.Key)] = tag.Value
}
}
if tagging := r.Header.Get(amzTagging); len(tagging) > 0 {
queries, err := url.ParseQuery(tagging)
if err != nil {
return nil, apiErr.GetAPIError(apiErr.ErrInvalidArgument)
}
for key := range queries {
tags[fmt.Sprintf(s3.PropertyKeyFormatRequestTag, key)] = queries.Get(key)
}
}
return tags, nil
}
func determineResourceTags(ctx context.Context, reqType ReqType, op, bktName, objName, versionID string, resolver BucketResolveFunc,
tagging ResourceTagging) (map[string]string, error) {
tags := make(map[string]string)
if reqType != bucketType && reqType != objectType {
return tags, nil
}
for _, withoutResOp := range withoutResourceOps {
if strings.HasSuffix(op, withoutResOp) {
return tags, nil
}
}
bktInfo, err := resolver(ctx, bktName)
if err != nil {
return nil, fmt.Errorf("get bucket info: %w", err)
}
if reqType == bucketType {
tags, err = tagging.GetBucketTagging(ctx, bktInfo)
if err != nil {
return nil, fmt.Errorf("get bucket tagging: %w", err)
}
}
if reqType == objectType {
tagPrm := &data.GetObjectTaggingParams{
ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: objName,
VersionID: versionID,
},
}
_, tags, err = tagging.GetObjectTagging(ctx, tagPrm)
if err != nil {
return nil, fmt.Errorf("get object tagging: %w", err)
}
}
res := make(map[string]string, len(tags))
for k, v := range tags {
res[fmt.Sprintf(s3.PropertyKeyFormatResourceTag, k)] = v
}
return res, nil
}

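A small standard-library illustration of the request-tag extraction performed above; exampleRequestTags and the sample header value are hypothetical, and the real middleware additionally wraps each key with s3.PropertyKeyFormatRequestTag before it reaches the policy engine:

func exampleRequestTags() (map[string]string, error) {
    // Header value as sent by clients: x-amz-tagging: env=prod&team=storage
    queries, err := url.ParseQuery("env=prod&team=storage")
    if err != nil {
        return nil, err // determineRequestTags maps this to apiErr.ErrInvalidArgument
    }

    tags := make(map[string]string)
    for key := range queries {
        tags[key] = queries.Get(key) // env=prod, team=storage
    }
    return tags, nil
}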

@@ -0,0 +1,82 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/require"
)
func TestReqTypeDetermination(t *testing.T) {
bkt, obj, domain := "test-bucket", "test-object", "domain"
for _, tc := range []struct {
name string
target string
host string
domains []string
expectedType ReqType
expectedBktName string
expectedObjName string
}{
{
name: "bucket request, path-style",
target: "/" + bkt,
expectedType: bucketType,
expectedBktName: bkt,
},
{
name: "bucket request with slash, path-style",
target: "/" + bkt + "/",
expectedType: bucketType,
expectedBktName: bkt,
},
{
name: "object request, path-style",
target: "/" + bkt + "/" + obj,
expectedType: objectType,
expectedBktName: bkt,
expectedObjName: obj,
},
{
name: "object request with slash, path-style",
target: "/" + bkt + "/" + obj + "/",
expectedType: objectType,
expectedBktName: bkt,
expectedObjName: obj + "/",
},
{
name: "none type request",
target: "/",
expectedType: noneType,
},
{
name: "bucket request, virtual-hosted style",
target: "/",
host: bkt + "." + domain,
domains: []string{"some-domain", domain},
expectedType: bucketType,
expectedBktName: bkt,
},
{
name: "object request, virtual-hosted style",
target: "/" + obj,
host: bkt + "." + domain,
domains: []string{"some-domain", domain},
expectedType: objectType,
expectedBktName: bkt,
expectedObjName: obj,
},
} {
t.Run(tc.name, func(t *testing.T) {
r := httptest.NewRequest(http.MethodPut, tc.target, nil)
r.Host = tc.host
reqType, bktName, objName := getBucketObject(r, tc.domains)
require.Equal(t, tc.expectedType, reqType)
require.Equal(t, tc.expectedBktName, bktName)
require.Equal(t, tc.expectedObjName, objName)
})
}
}


@@ -39,6 +39,8 @@ type (
        TraceID      string   // Trace ID
        URL          *url.URL // Request url
        Namespace    string
+       User         string // User owner id
+       Tagging      *data.Tagging
        tags         []KeyVal // Any additional info not accommodated by above fields
    }
@@ -81,17 +83,24 @@ var (
)

// NewReqInfo returns new ReqInfo based on parameters.
-func NewReqInfo(w http.ResponseWriter, r *http.Request, req ObjectRequest) *ReqInfo {
-   return &ReqInfo{
+func NewReqInfo(w http.ResponseWriter, r *http.Request, req ObjectRequest, sourceIPHeader string) *ReqInfo {
+   reqInfo := &ReqInfo{
        API:          req.Method,
        BucketName:   req.Bucket,
        ObjectName:   req.Object,
        UserAgent:    r.UserAgent(),
-       RemoteHost:   getSourceIP(r),
        RequestID:    GetRequestID(w),
        DeploymentID: deploymentID.String(),
        URL:          r.URL,
    }

+   if sourceIPHeader != "" {
+       reqInfo.RemoteHost = r.Header.Get(sourceIPHeader)
+   } else {
+       reqInfo.RemoteHost = getSourceIP(r)
+   }
+
+   return reqInfo
}

// AppendTags -- appends key/val to ReqInfo.tags.
@ -190,13 +199,18 @@ func GetReqLog(ctx context.Context) *zap.Logger {
type RequestSettings interface { type RequestSettings interface {
NamespaceHeader() string NamespaceHeader() string
ResolveNamespaceAlias(string) string
SourceIPHeader() string
} }
func Request(log *zap.Logger, settings RequestSettings) Func { func Request(log *zap.Logger, settings RequestSettings) Func {
return func(h http.Handler) http.Handler { return func(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// generate random UUIDv4 // generate random UUIDv4
id, _ := uuid.NewRandom() id, err := uuid.NewRandom()
if err != nil {
log.Error(logs.FailedToGenerateRequestID, zap.Error(err))
}
// set request id into response header // set request id into response header
// also we have to set request id here // also we have to set request id here
@ -205,8 +219,8 @@ func Request(log *zap.Logger, settings RequestSettings) Func {
// set request info into context // set request info into context
// bucket name and object will be set in reqInfo later (limitation of go-chi) // bucket name and object will be set in reqInfo later (limitation of go-chi)
reqInfo := NewReqInfo(w, r, ObjectRequest{}) reqInfo := NewReqInfo(w, r, ObjectRequest{}, settings.SourceIPHeader())
reqInfo.Namespace = r.Header.Get(settings.NamespaceHeader()) reqInfo.Namespace = settings.ResolveNamespaceAlias(r.Header.Get(settings.NamespaceHeader()))
r = r.WithContext(SetReqInfo(r.Context(), reqInfo)) r = r.WithContext(SetReqInfo(r.Context(), reqInfo))
// set request id into gRPC meta header // set request id into gRPC meta header
@ -220,7 +234,7 @@ func Request(log *zap.Logger, settings RequestSettings) Func {
r = r.WithContext(SetReqLogger(r.Context(), reqLogger)) r = r.WithContext(SetReqLogger(r.Context(), reqLogger))
reqLogger.Info(logs.RequestStart, zap.String("host", r.Host), reqLogger.Info(logs.RequestStart, zap.String("host", r.Host),
zap.String("remote_host", reqInfo.RemoteHost)) zap.String("remote_host", reqInfo.RemoteHost), zap.String("namespace", reqInfo.Namespace))
// continue execution // continue execution
h.ServeHTTP(w, r) h.ServeHTTP(w, r)
@@ -237,8 +251,10 @@ func AddBucketName(l *zap.Logger) Func {
            reqInfo := GetReqInfo(ctx)
            reqInfo.BucketName = chi.URLParam(r, BucketURLPrm)

-           reqLogger := reqLogOrDefault(ctx, l)
-           r = r.WithContext(SetReqLogger(ctx, reqLogger.With(zap.String("bucket", reqInfo.BucketName))))
+           if reqInfo.BucketName != "" {
+               reqLogger := reqLogOrDefault(ctx, l)
+               r = r.WithContext(SetReqLogger(ctx, reqLogger.With(zap.String("bucket", reqInfo.BucketName))))
+           }

            h.ServeHTTP(w, r)
        })
@@ -268,7 +284,9 @@ func AddObjectName(l *zap.Logger) Func {
                }
            }

-           r = r.WithContext(SetReqLogger(ctx, reqLogger.With(zap.String("object", reqInfo.ObjectName))))
+           if reqInfo.ObjectName != "" {
+               r = r.WithContext(SetReqLogger(ctx, reqLogger.With(zap.String("object", reqInfo.ObjectName))))
+           }

            h.ServeHTTP(w, r)
        })
@@ -307,11 +325,14 @@ func getSourceIP(r *http.Request) string {
        }
    }

-   if addr != "" {
-       return addr
+   if addr == "" {
+       addr = r.RemoteAddr
    }

    // Default to remote address if headers not set.
-   addr, _, _ = net.SplitHostPort(r.RemoteAddr)
-   return addr
+   raddr, _, _ := net.SplitHostPort(addr)
+   if raddr == "" {
+       return addr
+   }
+   return raddr
}
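Taken together, the NewReqInfo and getSourceIP changes above make the reported remote host configurable: a non-empty source IP header wins, otherwise the connection's remote address is used, with the port stripped when possible. Below is a minimal, self-contained Go sketch of that selection logic; resolveClientIP and the Source-Ip header name are illustrative stand-ins, not the gateway's actual identifiers or defaults.

package main

import (
    "fmt"
    "net"
    "net/http"
)

// resolveClientIP prefers a configured trusted header (for example one set by
// an ingress); otherwise it falls back to the connection's remote address,
// stripping the port when the address parses as host:port.
func resolveClientIP(r *http.Request, sourceIPHeader string) string {
    if sourceIPHeader != "" {
        return r.Header.Get(sourceIPHeader)
    }
    host, _, err := net.SplitHostPort(r.RemoteAddr)
    if err != nil || host == "" {
        return r.RemoteAddr
    }
    return host
}

func main() {
    r, _ := http.NewRequest(http.MethodGet, "http://example.local/bucket", nil)
    r.RemoteAddr = "192.0.2.10:54321"
    r.Header.Set("Source-Ip", "198.51.100.7")

    fmt.Println(resolveClientIP(r, ""))          // 192.0.2.10
    fmt.Println(resolveClientIP(r, "Source-Ip")) // 198.51.100.7
}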


@@ -118,7 +118,8 @@ var s3ErrorResponseMap = map[string]string{
}

// WriteErrorResponse writes error headers.
-func WriteErrorResponse(w http.ResponseWriter, reqInfo *ReqInfo, err error) int {
+// returns http error code and error in case of failure of response writing.
+func WriteErrorResponse(w http.ResponseWriter, reqInfo *ReqInfo, err error) (int, error) {
    code := http.StatusInternalServerError
    if e, ok := err.(errors.Error); ok {
@@ -134,9 +135,14 @@ func WriteErrorResponse(w http.ResponseWriter, reqInfo *ReqInfo, err error) int
    // Generates error response.
    errorResponse := getAPIErrorResponse(reqInfo, err)
-   encodedErrorResponse := EncodeResponse(errorResponse)
-   WriteResponse(w, code, encodedErrorResponse, MimeXML)
+   encodedErrorResponse, err := EncodeResponse(errorResponse)
+   if err != nil {
+       return 0, fmt.Errorf("encode response: %w", err)
+   }
+
+   if err = WriteResponse(w, code, encodedErrorResponse, MimeXML); err != nil {
+       return 0, fmt.Errorf("write response: %w", err)
+   }

-   return code
+   return code, nil
}

// Write http common headers.
@@ -157,7 +163,7 @@ func removeSensitiveHeaders(h http.Header) {
}

// WriteResponse writes given statusCode and response into w (with mType header if set).
-func WriteResponse(w http.ResponseWriter, statusCode int, response []byte, mType mimeType) {
+func WriteResponse(w http.ResponseWriter, statusCode int, response []byte, mType mimeType) error {
    setCommonHeaders(w)
    if mType != MimeNone {
        w.Header().Set(hdrContentType, string(mType))
@@ -165,37 +171,46 @@ func WriteResponse(w http.ResponseWriter, statusCode int, response []byte, mType
    w.Header().Set(hdrContentLength, strconv.Itoa(len(response)))
    w.WriteHeader(statusCode)
    if response == nil {
-       return
+       return nil
    }

-   WriteResponseBody(w, response)
+   return WriteResponseBody(w, response)
}

// WriteResponseBody writes response into w.
-func WriteResponseBody(w http.ResponseWriter, response []byte) {
-   _, _ = w.Write(response)
+func WriteResponseBody(w http.ResponseWriter, response []byte) error {
+   if _, err := w.Write(response); err != nil {
+       return err
+   }

    if flusher, ok := w.(http.Flusher); ok {
        flusher.Flush()
    }
+
+   return nil
}

// EncodeResponse encodes the response headers into XML format.
-func EncodeResponse(response interface{}) []byte {
+func EncodeResponse(response interface{}) ([]byte, error) {
    var bytesBuffer bytes.Buffer
    bytesBuffer.WriteString(xml.Header)
-   _ = xml.
-       NewEncoder(&bytesBuffer).
-       Encode(response)
+   if err := xml.NewEncoder(&bytesBuffer).Encode(response); err != nil {
+       return nil, err
+   }

-   return bytesBuffer.Bytes()
+   return bytesBuffer.Bytes(), nil
}

// EncodeResponseNoHeader encodes response without setting xml.Header.
// Should be used with periodicXMLWriter which sends xml.Header to the client
// with whitespaces to keep connection alive.
-func EncodeResponseNoHeader(response interface{}) []byte {
+func EncodeResponseNoHeader(response interface{}) ([]byte, error) {
    var bytesBuffer bytes.Buffer
-   _ = xml.NewEncoder(&bytesBuffer).Encode(response)
-   return bytesBuffer.Bytes()
+   if err := xml.NewEncoder(&bytesBuffer).Encode(response); err != nil {
+       return nil, err
+   }
+
+   return bytesBuffer.Bytes(), nil
}

// EncodeToResponse encodes the response into ResponseWriter.
@@ -227,8 +242,8 @@ func EncodeToResponseNoHeader(w http.ResponseWriter, response interface{}) error
// WriteSuccessResponseHeadersOnly writes HTTP (200) OK response with no data
// to the client.
-func WriteSuccessResponseHeadersOnly(w http.ResponseWriter) {
-   WriteResponse(w, http.StatusOK, nil, MimeNone)
+func WriteSuccessResponseHeadersOnly(w http.ResponseWriter) error {
+   return WriteResponse(w, http.StatusOK, nil, MimeNone)
}

// Error -- Returns S3 error string.
@@ -312,12 +327,20 @@ func LogSuccessResponse(l *zap.Logger) Func {
            reqLogger := reqLogOrDefault(ctx, l)
            reqInfo := GetReqInfo(ctx)

-           fields := []zap.Field{
-               zap.String("method", reqInfo.API),
-               zap.String("bucket", reqInfo.BucketName),
-               zap.String("object", reqInfo.ObjectName),
+           fields := make([]zap.Field, 0, 6)
+           fields = append(fields,
                zap.Int("status", lw.statusCode),
-               zap.String("description", http.StatusText(lw.statusCode))}
+               zap.String("description", http.StatusText(lw.statusCode)),
+               zap.String("method", reqInfo.API),
+           )
+
+           if reqInfo.BucketName != "" {
+               fields = append(fields, zap.String("bucket", reqInfo.BucketName))
+           }
+           if reqInfo.ObjectName != "" {
+               fields = append(fields, zap.String("object", reqInfo.ObjectName))
+           }

            if traceID, err := trace.TraceIDFromHex(reqInfo.TraceID); err == nil && traceID.IsValid() {
                fields = append(fields, zap.String("trace_id", reqInfo.TraceID))
            }
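The response.go changes above stop swallowing encode and write failures: EncodeResponse and WriteResponse now return errors and WriteErrorResponse propagates them. A self-contained sketch of the same pattern follows; encodeResponse, writeResponse and the errorResponse type are local stand-ins that only mirror the shape of the gateway's helpers.

package main

import (
    "bytes"
    "encoding/xml"
    "fmt"
    "log"
    "net/http"
    "net/http/httptest"
    "strconv"
)

type errorResponse struct {
    XMLName xml.Name `xml:"Error"`
    Code    string   `xml:"Code"`
    Message string   `xml:"Message"`
}

// encodeResponse builds the XML body first so that an encoding failure can be
// reported before any status code is written.
func encodeResponse(v interface{}) ([]byte, error) {
    var buf bytes.Buffer
    buf.WriteString(xml.Header)
    if err := xml.NewEncoder(&buf).Encode(v); err != nil {
        return nil, err
    }
    return buf.Bytes(), nil
}

// writeResponse surfaces the write error to the caller instead of ignoring it.
func writeResponse(w http.ResponseWriter, status int, body []byte) error {
    w.Header().Set("Content-Type", "application/xml")
    w.Header().Set("Content-Length", strconv.Itoa(len(body)))
    w.WriteHeader(status)
    _, err := w.Write(body)
    return err
}

func handler(w http.ResponseWriter, _ *http.Request) {
    body, err := encodeResponse(errorResponse{Code: "UnknownAPIRequest", Message: "unmatched request"})
    if err != nil {
        log.Printf("encode response: %v", err)
        return
    }
    if err := writeResponse(w, http.StatusBadRequest, body); err != nil {
        log.Printf("write response: %v", err)
    }
}

func main() {
    w := httptest.NewRecorder()
    handler(w, httptest.NewRequest(http.MethodGet, "/unknown", nil))
    fmt.Println(w.Code)          // 400
    fmt.Println(w.Body.String()) // <?xml ...?><Error>...</Error>
}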


@@ -6,29 +6,27 @@ import (
    "time"

    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
+   "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
)

// keyWrapper is wrapper for context keys.
type keyWrapper string

-// authHeaders is a wrapper for authentication headers of a request.
-var authHeadersKey = keyWrapper("__context_auth_headers_key")
-
-// boxData is an ID used to store accessbox.Box in a context.
-var boxDataKey = keyWrapper("__context_box_key")
-
-// clientTime is an ID used to store client time.Time in a context.
-var clientTimeKey = keyWrapper("__context_client_time")
+// boxKey is an ID used to store Box in a context.
+var boxKey = keyWrapper("__context_box_key")

// GetBoxData extracts accessbox.Box from context.
func GetBoxData(ctx context.Context) (*accessbox.Box, error) {
-   var box *accessbox.Box
-   data, ok := ctx.Value(boxDataKey).(*accessbox.Box)
+   data, ok := ctx.Value(boxKey).(*Box)
    if !ok || data == nil {
+       return nil, fmt.Errorf("couldn't get box from context")
+   }
+
+   if data.AccessBox == nil {
        return nil, fmt.Errorf("couldn't get box data from context")
    }

-   box = data
+   box := data.AccessBox
    if box.Gate == nil {
        box.Gate = &accessbox.GateData{}
    }
@@ -37,35 +35,39 @@ func GetBoxData(ctx context.Context) (*accessbox.Box, error) {
// GetAuthHeaders extracts auth.AuthHeader from context.
func GetAuthHeaders(ctx context.Context) (*AuthHeader, error) {
-   authHeaders, ok := ctx.Value(authHeadersKey).(*AuthHeader)
-   if !ok {
-       return nil, fmt.Errorf("couldn't get auth headers from context")
+   data, ok := ctx.Value(boxKey).(*Box)
+   if !ok || data == nil {
+       return nil, fmt.Errorf("couldn't get box from context")
    }

-   return authHeaders, nil
+   return data.AuthHeaders, nil
}

// GetClientTime extracts time.Time from context.
func GetClientTime(ctx context.Context) (time.Time, error) {
-   clientTime, ok := ctx.Value(clientTimeKey).(time.Time)
-   if !ok {
+   data, ok := ctx.Value(boxKey).(*Box)
+   if !ok || data == nil {
+       return time.Time{}, fmt.Errorf("couldn't get box from context")
+   }
+
+   if data.ClientTime.IsZero() {
        return time.Time{}, fmt.Errorf("couldn't get client time from context")
    }

-   return clientTime, nil
+   return data.ClientTime, nil
}

-// SetBoxData sets accessbox.Box in the context.
-func SetBoxData(ctx context.Context, box *accessbox.Box) context.Context {
-   return context.WithValue(ctx, boxDataKey, box)
-}
-
-// SetAuthHeaders sets auth.AuthHeader in the context.
-func SetAuthHeaders(ctx context.Context, header *AuthHeader) context.Context {
-   return context.WithValue(ctx, authHeadersKey, header)
+// GetAccessBoxAttrs extracts []object.Attribute from context.
+func GetAccessBoxAttrs(ctx context.Context) ([]object.Attribute, error) {
+   data, ok := ctx.Value(boxKey).(*Box)
+   if !ok || data == nil {
+       return nil, fmt.Errorf("couldn't get box from context")
+   }
+
+   return data.Attributes, nil
}

-// SetClientTime sets time.Time in the context.
-func SetClientTime(ctx context.Context, newTime time.Time) context.Context {
-   return context.WithValue(ctx, clientTimeKey, newTime)
+// SetBox sets Box in the context.
+func SetBox(ctx context.Context, box *Box) context.Context {
+   return context.WithValue(ctx, boxKey, box)
}
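The hunks above collapse three separate context keys (access box, auth headers, client time) into a single Box value stored under one key, with the getters reading fields off it. A minimal, self-contained illustration of that consolidation follows; the Box fields here are simplified stand-ins for the middleware's actual types (*AuthHeader, []object.Attribute and so on).

package main

import (
    "context"
    "fmt"
    "time"
)

// keyWrapper keeps the context key unexported and collision-free.
type keyWrapper string

var boxKey = keyWrapper("__context_box_key")

// Box bundles everything the auth layer hands to later middleware.
type Box struct {
    AuthHeaders string // stand-in for *AuthHeader
    ClientTime  time.Time
}

func SetBox(ctx context.Context, box *Box) context.Context {
    return context.WithValue(ctx, boxKey, box)
}

func GetClientTime(ctx context.Context) (time.Time, error) {
    data, ok := ctx.Value(boxKey).(*Box)
    if !ok || data == nil {
        return time.Time{}, fmt.Errorf("couldn't get box from context")
    }
    if data.ClientTime.IsZero() {
        return time.Time{}, fmt.Errorf("couldn't get client time from context")
    }
    return data.ClientTime, nil
}

func main() {
    ctx := SetBox(context.Background(), &Box{ClientTime: time.Now()})
    fmt.Println(GetClientTime(ctx))                  // time value, <nil>
    fmt.Println(GetClientTime(context.Background())) // zero time, error
}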


@@ -136,10 +136,11 @@ func (c *Controller) Subscribe(_ context.Context, topic string, handler layer.Ms
    ch := make(chan *nats.Msg, 1)

    c.mu.RLock()
-   if _, ok := c.handlers[topic]; ok {
+   _, ok := c.handlers[topic]
+   c.mu.RUnlock()
+   if ok {
        return fmt.Errorf("already subscribed to topic '%s'", topic)
    }
-   c.mu.RUnlock()

    if _, err := c.jsClient.AddStream(&nats.StreamConfig{Name: topic}); err != nil {
        return fmt.Errorf("add stream: %w", err)


@ -10,6 +10,7 @@ import (
s3middleware "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware" s3middleware "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/metrics" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/metrics"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine" "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
"github.com/go-chi/chi/v5" "github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware" "github.com/go-chi/chi/v5/middleware"
@ -36,6 +37,7 @@ type (
PutObjectHandler(http.ResponseWriter, *http.Request) PutObjectHandler(http.ResponseWriter, *http.Request)
DeleteObjectHandler(http.ResponseWriter, *http.Request) DeleteObjectHandler(http.ResponseWriter, *http.Request)
GetBucketLocationHandler(http.ResponseWriter, *http.Request) GetBucketLocationHandler(http.ResponseWriter, *http.Request)
GetBucketPolicyStatusHandler(http.ResponseWriter, *http.Request)
GetBucketPolicyHandler(http.ResponseWriter, *http.Request) GetBucketPolicyHandler(http.ResponseWriter, *http.Request)
GetBucketLifecycleHandler(http.ResponseWriter, *http.Request) GetBucketLifecycleHandler(http.ResponseWriter, *http.Request)
GetBucketEncryptionHandler(http.ResponseWriter, *http.Request) GetBucketEncryptionHandler(http.ResponseWriter, *http.Request)
@ -87,6 +89,7 @@ type (
ListMultipartUploadsHandler(http.ResponseWriter, *http.Request) ListMultipartUploadsHandler(http.ResponseWriter, *http.Request)
ResolveBucket(ctx context.Context, bucket string) (*data.BucketInfo, error) ResolveBucket(ctx context.Context, bucket string) (*data.BucketInfo, error)
ResolveCID(ctx context.Context, bucket string) (cid.ID, error)
} }
) )
@ -118,6 +121,9 @@ type Config struct {
FrostFSIDValidation bool FrostFSIDValidation bool
PolicyChecker engine.ChainRouter PolicyChecker engine.ChainRouter
XMLDecoder s3middleware.XMLDecoder
Tagging s3middleware.ResourceTagging
} }
func NewRouter(cfg Config) *chi.Mux { func NewRouter(cfg Config) *chi.Mux {
@ -127,7 +133,7 @@ func NewRouter(cfg Config) *chi.Mux {
middleware.ThrottleWithOpts(cfg.Throttle), middleware.ThrottleWithOpts(cfg.Throttle),
middleware.Recoverer, middleware.Recoverer,
s3middleware.Tracing(), s3middleware.Tracing(),
s3middleware.Metrics(cfg.Log, cfg.Handler.ResolveBucket, cfg.Metrics, cfg.MiddlewareSettings), s3middleware.Metrics(cfg.Log, cfg.Handler.ResolveCID, cfg.Metrics, cfg.MiddlewareSettings),
s3middleware.LogSuccessResponse(cfg.Log), s3middleware.LogSuccessResponse(cfg.Log),
s3middleware.Auth(cfg.Center, cfg.Log), s3middleware.Auth(cfg.Center, cfg.Log),
) )
@ -136,11 +142,21 @@ func NewRouter(cfg Config) *chi.Mux {
api.Use(s3middleware.FrostfsIDValidation(cfg.FrostfsID, cfg.Log)) api.Use(s3middleware.FrostfsIDValidation(cfg.FrostfsID, cfg.Log))
} }
api.Use(s3middleware.PolicyCheck(cfg.PolicyChecker, cfg.FrostfsID, cfg.MiddlewareSettings, cfg.Domains, cfg.Log)) api.Use(s3middleware.PolicyCheck(s3middleware.PolicyConfig{
Storage: cfg.PolicyChecker,
FrostfsID: cfg.FrostfsID,
Settings: cfg.MiddlewareSettings,
Domains: cfg.Domains,
Log: cfg.Log,
BucketResolver: cfg.Handler.ResolveBucket,
Decoder: cfg.XMLDecoder,
Tagging: cfg.Tagging,
}))
defaultRouter := chi.NewRouter() defaultRouter := chi.NewRouter()
defaultRouter.Mount(fmt.Sprintf("/{%s}", s3middleware.BucketURLPrm), bucketRouter(cfg.Handler, cfg.Log)) defaultRouter.Mount(fmt.Sprintf("/{%s}", s3middleware.BucketURLPrm), bucketRouter(cfg.Handler, cfg.Log))
defaultRouter.Get("/", named("ListBuckets", cfg.Handler.ListBucketsHandler)) defaultRouter.Get("/", named("ListBuckets", cfg.Handler.ListBucketsHandler))
attachErrorHandler(defaultRouter)
hr := NewHostBucketRouter("bucket") hr := NewHostBucketRouter("bucket")
hr.Default(defaultRouter) hr.Default(defaultRouter)
@ -168,14 +184,24 @@ func errorResponseHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := s3middleware.GetReqInfo(ctx) reqInfo := s3middleware.GetReqInfo(ctx)
desc := fmt.Sprintf("Unknown API request at %s", r.URL.Path) desc := fmt.Sprintf("Unknown API request at %s", r.URL.Path)
s3middleware.WriteErrorResponse(w, reqInfo, errors.Error{ _, wrErr := s3middleware.WriteErrorResponse(w, reqInfo, errors.Error{
Code: "UnknownAPIRequest", Code: "UnknownAPIRequest",
Description: desc, Description: desc,
HTTPStatusCode: http.StatusBadRequest, HTTPStatusCode: http.StatusBadRequest,
}) })
if log := s3middleware.GetReqLog(ctx); log != nil { if log := s3middleware.GetReqLog(ctx); log != nil {
log.Error(logs.RequestUnmatched, zap.String("method", reqInfo.API)) fields := []zap.Field{
zap.String("method", reqInfo.API),
zap.String("http method", r.Method),
zap.String("url", r.RequestURI),
}
if wrErr != nil {
fields = append(fields, zap.NamedError("write_response_error", wrErr))
}
log.Error(logs.RequestUnmatched, fields...)
} }
} }
@ -210,6 +236,9 @@ func bucketRouter(h Handler, log *zap.Logger) chi.Router {
Add(NewFilter(). Add(NewFilter().
Queries(s3middleware.LocationQuery). Queries(s3middleware.LocationQuery).
Handler(named(s3middleware.GetBucketLocationOperation, h.GetBucketLocationHandler))). Handler(named(s3middleware.GetBucketLocationOperation, h.GetBucketLocationHandler))).
Add(NewFilter().
Queries(s3middleware.PolicyStatusQuery).
Handler(named(s3middleware.GetBucketPolicyStatusOperation, h.GetBucketPolicyStatusHandler))).
Add(NewFilter(). Add(NewFilter().
Queries(s3middleware.PolicyQuery). Queries(s3middleware.PolicyQuery).
Handler(named(s3middleware.GetBucketPolicyOperation, h.GetBucketPolicyHandler))). Handler(named(s3middleware.GetBucketPolicyOperation, h.GetBucketPolicyHandler))).
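The bucketRouter hunk above registers the new policyStatus filter ahead of the policy one, so a GET on a bucket is dispatched by which query parameter is present. The sketch below emulates that dispatch with plain net/http; it is not the gateway's NewFilter implementation, and the query keys "policyStatus" and "policy" are assumed values for the s3middleware.PolicyStatusQuery and PolicyQuery constants, which are not shown in this diff.

package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"
)

// queryDispatch picks the handler by the query parameter present on the
// bucket URL, mirroring the filter ordering above: policyStatus is checked
// before policy, and anything unmatched falls through.
func queryDispatch(w http.ResponseWriter, r *http.Request) {
    switch {
    case r.URL.Query().Has("policyStatus"):
        fmt.Fprintln(w, "GetBucketPolicyStatus")
    case r.URL.Query().Has("policy"):
        fmt.Fprintln(w, "GetBucketPolicy")
    default:
        fmt.Fprintln(w, "UnknownAPIRequest")
    }
}

func main() {
    for _, target := range []string{"/bucket?policyStatus", "/bucket?policy", "/bucket"} {
        w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, target, nil)
        queryDispatch(w, r)
        fmt.Print(target, " -> ", w.Body.String())
    }
}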


@ -3,25 +3,70 @@ package api
import ( import (
"context" "context"
"encoding/json" "encoding/json"
"encoding/xml"
"io"
"net/http" "net/http"
"testing" "testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
bearertest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer/test"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
const FrostfsNamespaceHeader = "X-Frostfs-Namespace" const FrostfsNamespaceHeader = "X-Frostfs-Namespace"
type poolStatisticMock struct {
}
func (p *poolStatisticMock) Statistic() pool.Statistic {
return pool.Statistic{}
}
type centerMock struct { type centerMock struct {
t *testing.T
anon bool
attrs []object.Attribute
} }
func (c *centerMock) Authenticate(*http.Request) (*middleware.Box, error) { func (c *centerMock) Authenticate(*http.Request) (*middleware.Box, error) {
return &middleware.Box{}, nil var token *bearer.Token
if !c.anon {
bt := bearertest.Token()
token = &bt
key, err := keys.NewPrivateKey()
require.NoError(c.t, err)
require.NoError(c.t, token.Sign(key.PrivateKey))
}
return &middleware.Box{
AuthHeaders: &middleware.AuthHeader{},
AccessBox: &accessbox.Box{
Gate: &accessbox.GateData{
BearerToken: token,
},
},
Attributes: c.attrs,
}, nil
} }
type middlewareSettingsMock struct { type middlewareSettingsMock struct {
denyByDefault bool denyByDefault bool
aclEnabled bool
sourceIPHeader string
}
func (r *middlewareSettingsMock) SourceIPHeader() string {
return r.sourceIPHeader
} }
func (r *middlewareSettingsMock) NamespaceHeader() string { func (r *middlewareSettingsMock) NamespaceHeader() string {
@ -36,8 +81,50 @@ func (r *middlewareSettingsMock) PolicyDenyByDefault() bool {
return r.denyByDefault return r.denyByDefault
} }
func (r *middlewareSettingsMock) ACLEnabled() bool {
return r.aclEnabled
}
type frostFSIDMock struct {
tags map[string]string
}
func (f *frostFSIDMock) ValidatePublicKey(*keys.PublicKey) error {
return nil
}
func (f *frostFSIDMock) GetUserGroupIDsAndClaims(util.Uint160) ([]string, map[string]string, error) {
return []string{}, f.tags, nil
}
type xmlMock struct {
}
func (m *xmlMock) NewXMLDecoder(r io.Reader) *xml.Decoder {
return xml.NewDecoder(r)
}
type resourceTaggingMock struct {
bucketTags map[string]string
objectTags map[string]string
noSuchKey bool
}
func (m *resourceTaggingMock) GetBucketTagging(context.Context, *data.BucketInfo) (map[string]string, error) {
return m.bucketTags, nil
}
func (m *resourceTaggingMock) GetObjectTagging(context.Context, *data.GetObjectTaggingParams) (string, map[string]string, error) {
if m.noSuchKey {
return "", nil, apiErrors.GetAPIError(apiErrors.ErrNoSuchKey)
}
return "", m.objectTags, nil
}
type handlerMock struct { type handlerMock struct {
t *testing.T t *testing.T
cfg *middlewareSettingsMock
buckets map[string]*data.BucketInfo
} }
type handlerResult struct { type handlerResult struct {
@ -90,9 +177,13 @@ func (h *handlerMock) GetObjectLegalHoldHandler(http.ResponseWriter, *http.Reque
panic("implement me") panic("implement me")
} }
func (h *handlerMock) GetObjectHandler(http.ResponseWriter, *http.Request) { func (h *handlerMock) GetObjectHandler(w http.ResponseWriter, r *http.Request) {
//TODO implement me res := &handlerResult{
panic("implement me") Method: middleware.GetObjectOperation,
ReqInfo: middleware.GetReqInfo(r.Context()),
}
h.writeResponse(w, res)
} }
func (h *handlerMock) GetObjectAttributesHandler(http.ResponseWriter, *http.Request) { func (h *handlerMock) GetObjectAttributesHandler(http.ResponseWriter, *http.Request) {
@ -134,6 +225,11 @@ func (h *handlerMock) GetBucketLocationHandler(http.ResponseWriter, *http.Reques
panic("implement me") panic("implement me")
} }
func (h *handlerMock) GetBucketPolicyStatusHandler(http.ResponseWriter, *http.Request) {
//TODO implement me
panic("implement me")
}
func (h *handlerMock) GetBucketPolicyHandler(http.ResponseWriter, *http.Request) { func (h *handlerMock) GetBucketPolicyHandler(http.ResponseWriter, *http.Request) {
//TODO implement me //TODO implement me
panic("implement me") panic("implement me")
@ -282,9 +378,13 @@ func (h *handlerMock) PutBucketObjectLockConfigHandler(http.ResponseWriter, *htt
panic("implement me") panic("implement me")
} }
func (h *handlerMock) PutBucketTaggingHandler(http.ResponseWriter, *http.Request) { func (h *handlerMock) PutBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
//TODO implement me res := &handlerResult{
panic("implement me") Method: middleware.PutBucketTaggingOperation,
ReqInfo: middleware.GetReqInfo(r.Context()),
}
h.writeResponse(w, res)
} }
func (h *handlerMock) PutBucketVersioningHandler(http.ResponseWriter, *http.Request) { func (h *handlerMock) PutBucketVersioningHandler(http.ResponseWriter, *http.Request) {
@ -297,9 +397,20 @@ func (h *handlerMock) PutBucketNotificationHandler(http.ResponseWriter, *http.Re
panic("implement me") panic("implement me")
} }
func (h *handlerMock) CreateBucketHandler(http.ResponseWriter, *http.Request) { func (h *handlerMock) CreateBucketHandler(w http.ResponseWriter, r *http.Request) {
//TODO implement me reqInfo := middleware.GetReqInfo(r.Context())
panic("implement me")
h.buckets[reqInfo.Namespace+reqInfo.BucketName] = &data.BucketInfo{
Name: reqInfo.BucketName,
APEEnabled: !h.cfg.ACLEnabled(),
}
res := &handlerResult{
Method: middleware.CreateBucketOperation,
ReqInfo: middleware.GetReqInfo(r.Context()),
}
h.writeResponse(w, res)
} }
func (h *handlerMock) HeadBucketHandler(w http.ResponseWriter, r *http.Request) { func (h *handlerMock) HeadBucketHandler(w http.ResponseWriter, r *http.Request) {
@ -401,8 +512,21 @@ func (h *handlerMock) ListMultipartUploadsHandler(w http.ResponseWriter, r *http
h.writeResponse(w, res) h.writeResponse(w, res)
} }
func (h *handlerMock) ResolveBucket(context.Context, string) (*data.BucketInfo, error) { func (h *handlerMock) ResolveBucket(ctx context.Context, name string) (*data.BucketInfo, error) {
return &data.BucketInfo{}, nil reqInfo := middleware.GetReqInfo(ctx)
bktInfo, ok := h.buckets[reqInfo.Namespace+name]
if !ok {
return nil, apiErrors.GetAPIError(apiErrors.ErrNoSuchBucket)
}
return bktInfo, nil
}
func (h *handlerMock) ResolveCID(ctx context.Context, bucket string) (cid.ID, error) {
bktInfo, err := h.ResolveBucket(ctx, bucket)
if err != nil {
return cid.ID{}, err
}
return bktInfo.CID, nil
} }
func (h *handlerMock) writeResponse(w http.ResponseWriter, resp *handlerResult) { func (h *handlerMock) writeResponse(w http.ResponseWriter, resp *handlerResult) {


@ -1,6 +1,7 @@
package api package api
import ( import (
"bytes"
"encoding/json" "encoding/json"
"encoding/xml" "encoding/xml"
"fmt" "fmt"
@ -8,24 +9,30 @@ import (
"net/http" "net/http"
"net/http/httptest" "net/http/httptest"
"net/url" "net/url"
"strconv"
"testing" "testing"
"time" "time"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors" apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
s3middleware "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware" s3middleware "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/metrics" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
engineiam "git.frostfs.info/TrueCloudLab/policy-engine/iam" engineiam "git.frostfs.info/TrueCloudLab/policy-engine/iam"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain" "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine" "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine/inmemory" "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine/inmemory"
"git.frostfs.info/TrueCloudLab/policy-engine/schema/common"
"git.frostfs.info/TrueCloudLab/policy-engine/schema/s3" "git.frostfs.info/TrueCloudLab/policy-engine/schema/s3"
"github.com/go-chi/chi/v5" "github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware" "github.com/go-chi/chi/v5/middleware"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"go.uber.org/zap/zaptest" "go.uber.org/zap/zaptest"
) )
type routerMock struct { type routerMock struct {
t *testing.T
router *chi.Mux router *chi.Mux
cfg Config cfg Config
middlewareSettings *middlewareSettingsMock middlewareSettings *middlewareSettingsMock
@ -40,20 +47,33 @@ func prepareRouter(t *testing.T) *routerMock {
middlewareSettings := &middlewareSettingsMock{} middlewareSettings := &middlewareSettingsMock{}
policyChecker := inmemory.NewInMemoryLocalOverrides() policyChecker := inmemory.NewInMemoryLocalOverrides()
logger := zaptest.NewLogger(t)
metricsConfig := metrics.AppMetricsConfig{
Logger: logger,
PoolStatistics: &poolStatisticMock{},
Registerer: prometheus.NewRegistry(),
Enabled: true,
}
cfg := Config{ cfg := Config{
Throttle: middleware.ThrottleOpts{ Throttle: middleware.ThrottleOpts{
Limit: 10, Limit: 10,
BacklogTimeout: 30 * time.Second, BacklogTimeout: 30 * time.Second,
}, },
Handler: &handlerMock{t: t}, Handler: &handlerMock{t: t, cfg: middlewareSettings, buckets: map[string]*data.BucketInfo{}},
Center: &centerMock{}, Center: &centerMock{t: t},
Log: zaptest.NewLogger(t), Log: logger,
Metrics: &metrics.AppMetrics{}, Metrics: metrics.NewAppMetrics(metricsConfig),
MiddlewareSettings: middlewareSettings, MiddlewareSettings: middlewareSettings,
PolicyChecker: policyChecker, PolicyChecker: policyChecker,
Domains: []string{"domain1", "domain2"}, Domains: []string{"domain1", "domain2"},
FrostfsID: &frostFSIDMock{},
XMLDecoder: &xmlMock{},
Tagging: &resourceTaggingMock{},
} }
return &routerMock{ return &routerMock{
t: t,
router: NewRouter(cfg), router: NewRouter(cfg),
cfg: cfg, cfg: cfg,
middlewareSettings: middlewareSettings, middlewareSettings: middlewareSettings,
@ -64,6 +84,8 @@ func prepareRouter(t *testing.T) *routerMock {
func TestRouterUploadPart(t *testing.T) { func TestRouterUploadPart(t *testing.T) {
chiRouter := prepareRouter(t) chiRouter := prepareRouter(t)
createBucket(chiRouter, "", "dkirillov")
w := httptest.NewRecorder() w := httptest.NewRecorder()
r := httptest.NewRequest(http.MethodPut, "/dkirillov/fix-object", nil) r := httptest.NewRequest(http.MethodPut, "/dkirillov/fix-object", nil)
query := make(url.Values) query := make(url.Values)
@ -79,6 +101,8 @@ func TestRouterUploadPart(t *testing.T) {
func TestRouterListMultipartUploads(t *testing.T) { func TestRouterListMultipartUploads(t *testing.T) {
chiRouter := prepareRouter(t) chiRouter := prepareRouter(t)
createBucket(chiRouter, "", "test-bucket")
w := httptest.NewRecorder() w := httptest.NewRecorder()
r := httptest.NewRequest(http.MethodGet, "/test-bucket", nil) r := httptest.NewRequest(http.MethodGet, "/test-bucket", nil)
query := make(url.Values) query := make(url.Values)
@ -93,22 +117,18 @@ func TestRouterListMultipartUploads(t *testing.T) {
func TestRouterObjectWithSlashes(t *testing.T) { func TestRouterObjectWithSlashes(t *testing.T) {
chiRouter := prepareRouter(t) chiRouter := prepareRouter(t)
bktName, objName := "dkirillov", "/fix/object" ns, bktName, objName := "", "dkirillov", "/fix/object"
target := fmt.Sprintf("/%s/%s", bktName, objName)
w := httptest.NewRecorder() createBucket(chiRouter, ns, bktName)
r := httptest.NewRequest(http.MethodPut, target, nil) resp := putObject(chiRouter, ns, bktName, objName, nil)
chiRouter.ServeHTTP(w, r)
resp := readResponse(t, w)
require.Equal(t, "PutObject", resp.Method)
require.Equal(t, objName, resp.ReqInfo.ObjectName) require.Equal(t, objName, resp.ReqInfo.ObjectName)
} }
func TestRouterObjectEscaping(t *testing.T) { func TestRouterObjectEscaping(t *testing.T) {
chiRouter := prepareRouter(t) chiRouter := prepareRouter(t)
bktName := "dkirillov" ns, bktName := "", "dkirillov"
createBucket(chiRouter, ns, bktName)
for _, tc := range []struct { for _, tc := range []struct {
name string name string
@ -142,14 +162,7 @@ func TestRouterObjectEscaping(t *testing.T) {
}, },
} { } {
t.Run(tc.name, func(t *testing.T) { t.Run(tc.name, func(t *testing.T) {
target := fmt.Sprintf("/%s/%s", bktName, tc.objName) resp := putObject(chiRouter, ns, bktName, tc.objName, nil)
w := httptest.NewRecorder()
r := httptest.NewRequest(http.MethodPut, target, nil)
chiRouter.ServeHTTP(w, r)
resp := readResponse(t, w)
require.Equal(t, "PutObject", resp.Method)
require.Equal(t, tc.expectedObjName, resp.ReqInfo.ObjectName) require.Equal(t, tc.expectedObjName, resp.ReqInfo.ObjectName)
}) })
} }
@ -157,47 +170,42 @@ func TestRouterObjectEscaping(t *testing.T) {
func TestPolicyChecker(t *testing.T) { func TestPolicyChecker(t *testing.T) {
chiRouter := prepareRouter(t) chiRouter := prepareRouter(t)
namespace := "custom-ns" ns1, bktName1, objName1 := "", "bucket", "object"
bktName, objName := "bucket", "object" ns2, bktName2, objName2 := "custom-ns", "other-bucket", "object"
target := fmt.Sprintf("/%s/%s", bktName, objName)
createBucket(chiRouter, ns1, bktName1)
createBucket(chiRouter, ns2, bktName1)
createBucket(chiRouter, ns2, bktName2)
ruleChain := &chain.Chain{ ruleChain := &chain.Chain{
ID: chain.ID("id"), ID: chain.ID("id"),
Rules: []chain.Rule{{ Rules: []chain.Rule{{
Status: chain.AccessDenied, Status: chain.AccessDenied,
Actions: chain.Actions{Names: []string{"*"}}, Actions: chain.Actions{Names: []string{"*"}},
Resources: chain.Resources{Names: []string{fmt.Sprintf(s3.ResourceFormatS3BucketObjects, bktName)}}, Resources: chain.Resources{Names: []string{fmt.Sprintf(s3.ResourceFormatS3BucketObjects, bktName1)}},
}}, }},
} }
_, _, err := chiRouter.policyChecker.MorphRuleChainStorage().AddMorphRuleChain(chain.S3, engine.NamespaceTarget(namespace), ruleChain) _, _, err := chiRouter.policyChecker.MorphRuleChainStorage().AddMorphRuleChain(chain.S3, engine.NamespaceTarget(ns2), ruleChain)
require.NoError(t, err) require.NoError(t, err)
// check we can access 'bucket' in default namespace // check we can access 'bucket' in default namespace
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, target, nil) putObject(chiRouter, ns1, bktName1, objName1, nil)
chiRouter.ServeHTTP(w, r)
resp := readResponse(t, w)
require.Equal(t, s3middleware.PutObjectOperation, resp.Method)
// check we can access 'other-bucket' in custom namespace // check we can access 'other-bucket' in custom namespace
w, r = httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, "/other-bucket/object", nil) putObject(chiRouter, ns2, bktName2, objName2, nil)
r.Header.Set(FrostfsNamespaceHeader, namespace)
chiRouter.ServeHTTP(w, r)
resp = readResponse(t, w)
require.Equal(t, s3middleware.PutObjectOperation, resp.Method)
// check we cannot access 'bucket' in custom namespace // check we cannot access 'bucket' in custom namespace
w, r = httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, target, nil) putObjectErr(chiRouter, ns2, bktName1, objName2, nil, apiErrors.ErrAccessDenied)
r.Header.Set(FrostfsNamespaceHeader, namespace)
chiRouter.ServeHTTP(w, r)
assertAPIError(t, w, apiErrors.ErrAccessDenied)
} }
func TestPolicyCheckerReqTypeDetermination(t *testing.T) { func TestPolicyCheckerReqTypeDetermination(t *testing.T) {
chiRouter := prepareRouter(t) chiRouter := prepareRouter(t)
bktName, objName := "bucket", "object" bktName, objName := "bucket", "object"
createBucket(chiRouter, "", bktName)
policy := engineiam.Policy{ policy := engineiam.Policy{
Version: "2012-10-17",
Statement: []engineiam.Statement{{ Statement: []engineiam.Statement{{
Principal: map[engineiam.PrincipalType][]string{engineiam.Wildcard: {}}, Principal: map[engineiam.PrincipalType][]string{engineiam.Wildcard: {}},
Effect: engineiam.AllowEffect, Effect: engineiam.AllowEffect,
@ -212,6 +220,8 @@ func TestPolicyCheckerReqTypeDetermination(t *testing.T) {
_, _, err = chiRouter.policyChecker.MorphRuleChainStorage().AddMorphRuleChain(chain.S3, engine.NamespaceTarget(""), ruleChain) _, _, err = chiRouter.policyChecker.MorphRuleChainStorage().AddMorphRuleChain(chain.S3, engine.NamespaceTarget(""), ruleChain)
require.NoError(t, err) require.NoError(t, err)
createBucket(chiRouter, "", bktName)
chiRouter.middlewareSettings.denyByDefault = true chiRouter.middlewareSettings.denyByDefault = true
t.Run("can list buckets", func(t *testing.T) { t.Run("can list buckets", func(t *testing.T) {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/", nil) w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/", nil)
@ -237,20 +247,610 @@ func TestPolicyCheckerReqTypeDetermination(t *testing.T) {
func TestDefaultBehaviorPolicyChecker(t *testing.T) { func TestDefaultBehaviorPolicyChecker(t *testing.T) {
chiRouter := prepareRouter(t) chiRouter := prepareRouter(t)
bktName, objName := "bucket", "object" ns, bktName := "", "bucket"
target := fmt.Sprintf("/%s/%s", bktName, objName)
// check we can access bucket if rules not found // check we can access bucket if rules not found
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, target, nil) createBucket(chiRouter, ns, bktName)
chiRouter.ServeHTTP(w, r)
resp := readResponse(t, w)
require.Equal(t, s3middleware.PutObjectOperation, resp.Method)
// check we cannot access if rules not found when settings is enabled // check we cannot access if rules not found when settings is enabled
chiRouter.middlewareSettings.denyByDefault = true chiRouter.middlewareSettings.denyByDefault = true
w, r = httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, target, nil) createBucketErr(chiRouter, ns, bktName, nil, apiErrors.ErrAccessDenied)
chiRouter.ServeHTTP(w, r) }
assertAPIError(t, w, apiErrors.ErrAccessDenied)
func TestDefaultPolicyCheckerWithUserTags(t *testing.T) {
router := prepareRouter(t)
ns, bktName := "", "bucket"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(common.PropertyKeyFormatFrostFSIDUserClaim, "tag-test"): []string{"test"}},
})
createBucketErr(router, ns, bktName, nil, apiErrors.ErrAccessDenied)
tags := make(map[string]string)
tags["tag-test"] = "test"
router.cfg.FrostfsID.(*frostFSIDMock).tags = tags
createBucket(router, ns, bktName)
}
func TestACLAPE(t *testing.T) {
t.Run("acl disabled, ape deny by default", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName := "", "bucket", "object"
bktNameOld, bktNameNew := "old-bucket", "new-bucket"
createOldBucket(router, bktNameOld)
createNewBucket(router, bktNameNew)
router.middlewareSettings.aclEnabled = false
router.middlewareSettings.denyByDefault = true
// Allow because of using old bucket
putObject(router, ns, bktNameOld, objName, nil)
// Deny because of deny by default
putObjectErr(router, ns, bktNameNew, objName, nil, apiErrors.ErrAccessDenied)
// Deny because of deny by default
createBucketErr(router, ns, bktName, nil, apiErrors.ErrAccessDenied)
listBucketsErr(router, ns, apiErrors.ErrAccessDenied)
// Allow operations and check
allowOperations(router, ns, []string{"s3:CreateBucket", "s3:ListAllMyBuckets"}, nil)
createBucket(router, ns, bktName)
listBuckets(router, ns)
})
t.Run("acl disabled, ape allow by default", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName := "", "bucket", "object"
bktNameOld, bktNameNew := "old-bucket", "new-bucket"
createOldBucket(router, bktNameOld)
createNewBucket(router, bktNameNew)
router.middlewareSettings.aclEnabled = false
router.middlewareSettings.denyByDefault = false
// Allow because of using old bucket
putObject(router, ns, bktNameOld, objName, nil)
// Allow because of allow by default
putObject(router, ns, bktNameNew, objName, nil)
// Allow because of allow by default
createBucket(router, ns, bktName)
listBuckets(router, ns)
// Deny operations and check
denyOperations(router, ns, []string{"s3:CreateBucket", "s3:ListAllMyBuckets"}, nil)
createBucketErr(router, ns, bktName, nil, apiErrors.ErrAccessDenied)
listBucketsErr(router, ns, apiErrors.ErrAccessDenied)
})
t.Run("acl enabled, ape deny by default", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName := "", "bucket", "object"
bktNameOld, bktNameNew := "old-bucket", "new-bucket"
createOldBucket(router, bktNameOld)
createNewBucket(router, bktNameNew)
router.middlewareSettings.aclEnabled = true
router.middlewareSettings.denyByDefault = true
// Allow because of using old bucket
putObject(router, ns, bktNameOld, objName, nil)
// Deny because of deny by default
putObjectErr(router, ns, bktNameNew, objName, nil, apiErrors.ErrAccessDenied)
// Allow because of old behavior
createBucket(router, ns, bktName)
listBuckets(router, ns)
})
t.Run("acl enabled, ape allow by default", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName := "", "bucket", "object"
bktNameOld, bktNameNew := "old-bucket", "new-bucket"
createOldBucket(router, bktNameOld)
createNewBucket(router, bktNameNew)
router.middlewareSettings.aclEnabled = true
router.middlewareSettings.denyByDefault = false
// Allow because of using old bucket
putObject(router, ns, bktNameOld, objName, nil)
// Allow because of allow by default
putObject(router, ns, bktNameNew, objName, nil)
// Allow because of old behavior
createBucket(router, ns, bktName)
listBuckets(router, ns)
})
}
func TestRequestParametersCheck(t *testing.T) {
t.Run("prefix parameter, allow specific value", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, prefix := "", "bucket", "prefix"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{s3.PropertyKeyPrefix: []string{prefix}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{s3.PropertyKeyPrefix: []string{prefix}},
})
listObjectsV1(router, ns, bktName, prefix, "", "")
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrAccessDenied)
listObjectsV1Err(router, ns, bktName, "invalid", "", "", apiErrors.ErrAccessDenied)
})
t.Run("delimiter parameter, prohibit specific value", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, delimiter := "", "bucket", "delimiter"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{s3.PropertyKeyDelimiter: []string{delimiter}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{s3.PropertyKeyDelimiter: []string{delimiter}},
})
listObjectsV1(router, ns, bktName, "", "", "")
listObjectsV1(router, ns, bktName, "", "some-delimiter", "")
listObjectsV1Err(router, ns, bktName, "", delimiter, "", apiErrors.ErrAccessDenied)
})
t.Run("max-keys parameter, allow specific value", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, maxKeys := "", "bucket", 10
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericNotEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
listObjectsV1(router, ns, bktName, "", "", strconv.Itoa(maxKeys))
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrAccessDenied)
listObjectsV1Err(router, ns, bktName, "", "", strconv.Itoa(maxKeys-1), apiErrors.ErrAccessDenied)
listObjectsV1Err(router, ns, bktName, "", "", "invalid", apiErrors.ErrAccessDenied)
})
t.Run("max-keys parameter, allow range of values", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, maxKeys := "", "bucket", 10
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericGreaterThan: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericLessThanEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
listObjectsV1(router, ns, bktName, "", "", strconv.Itoa(maxKeys))
listObjectsV1(router, ns, bktName, "", "", strconv.Itoa(maxKeys-1))
listObjectsV1Err(router, ns, bktName, "", "", strconv.Itoa(maxKeys+1), apiErrors.ErrAccessDenied)
})
t.Run("max-keys parameter, prohibit specific value", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, maxKeys := "", "bucket", 10
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericNotEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
listObjectsV1(router, ns, bktName, "", "", "")
listObjectsV1(router, ns, bktName, "", "", strconv.Itoa(maxKeys-1))
listObjectsV1Err(router, ns, bktName, "", "", strconv.Itoa(maxKeys), apiErrors.ErrAccessDenied)
})
}
func TestRequestTagsCheck(t *testing.T) {
t.Run("put bucket tagging", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, tagKey, tagValue := "", "bucket", "tag", "value"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
allowOperations(router, ns, []string{"s3:PutBucketTagging"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tagKey): []string{tagValue}},
})
denyOperations(router, ns, []string{"s3:PutBucketTagging"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tagKey): []string{tagValue}},
})
tagging, err := xml.Marshal(data.Tagging{TagSet: []data.Tag{{Key: tagKey, Value: tagValue}}})
require.NoError(t, err)
putBucketTagging(router, ns, bktName, tagging)
tagging, err = xml.Marshal(data.Tagging{TagSet: []data.Tag{{Key: "key", Value: tagValue}}})
require.NoError(t, err)
putBucketTaggingErr(router, ns, bktName, tagging, apiErrors.ErrAccessDenied)
})
t.Run("put object with tag", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName, tagKey, tagValue := "", "bucket", "object", "tag", "value"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
allowOperations(router, ns, []string{"s3:PutObject"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tagKey): []string{tagValue}},
})
denyOperations(router, ns, []string{"s3:PutObject"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tagKey): []string{tagValue}},
})
putObject(router, ns, bktName, objName, &data.Tag{Key: tagKey, Value: tagValue})
putObjectErr(router, ns, bktName, objName, &data.Tag{Key: "key", Value: tagValue}, apiErrors.ErrAccessDenied)
})
}
func TestResourceTagsCheck(t *testing.T) {
t.Run("bucket tagging", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, tagKey, tagValue := "", "bucket", "tag", "value"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatResourceTag, tagKey): []string{tagValue}},
})
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatResourceTag, tagKey): []string{tagValue}},
})
router.cfg.Tagging.(*resourceTaggingMock).bucketTags = map[string]string{tagKey: tagValue}
listObjectsV1(router, ns, bktName, "", "", "")
router.cfg.Tagging.(*resourceTaggingMock).bucketTags = map[string]string{}
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrAccessDenied)
})
t.Run("object tagging", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName, tagKey, tagValue := "", "bucket", "object", "tag", "value"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket", "s3:PutObject"}, nil)
createBucket(router, ns, bktName)
putObject(router, ns, bktName, objName, nil)
// Add policies and check
allowOperations(router, ns, []string{"s3:GetObject"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatResourceTag, tagKey): []string{tagValue}},
})
denyOperations(router, ns, []string{"s3:GetObject"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatResourceTag, tagKey): []string{tagValue}},
})
router.cfg.Tagging.(*resourceTaggingMock).objectTags = map[string]string{tagKey: tagValue}
getObject(router, ns, bktName, objName)
router.cfg.Tagging.(*resourceTaggingMock).objectTags = map[string]string{}
getObjectErr(router, ns, bktName, objName, apiErrors.ErrAccessDenied)
})
t.Run("non-existent resources", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName := "", "bucket", "object"
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrNoSuchBucket)
router.cfg.Tagging.(*resourceTaggingMock).noSuchKey = true
createBucket(router, ns, bktName)
getObjectErr(router, ns, bktName, objName, apiErrors.ErrNoSuchKey)
})
}
func TestAccessBoxAttributesCheck(t *testing.T) {
router := prepareRouter(t)
ns, bktName, attrKey, attrValue := "", "bucket", "key", "true"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policy and check
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondBool: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatAccessBoxAttr, attrKey): []string{attrValue}},
})
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrAccessDenied)
var attr object.Attribute
attr.SetKey(attrKey)
attr.SetValue(attrValue)
router.cfg.Center.(*centerMock).attrs = []object.Attribute{attr}
listObjectsV1(router, ns, bktName, "", "", "")
}
func TestSourceIPCheck(t *testing.T) {
router := prepareRouter(t)
ns, bktName, hdr := "", "bucket", "Source-Ip"
router.middlewareSettings.denyByDefault = true
// Add policy and check
allowOperations(router, ns, []string{"s3:CreateBucket"}, engineiam.Conditions{
engineiam.CondIPAddress: engineiam.Condition{"aws:SourceIp": []string{"192.0.2.0/24"}},
})
router.middlewareSettings.sourceIPHeader = hdr
header := map[string][]string{hdr: {"192.0.3.0"}}
createBucketErr(router, ns, bktName, header, apiErrors.ErrAccessDenied)
router.middlewareSettings.sourceIPHeader = ""
createBucket(router, ns, bktName)
}
func allowOperations(router *routerMock, ns string, operations []string, conditions engineiam.Conditions) {
addPolicy(router, ns, "allow", engineiam.AllowEffect, operations, conditions)
}
func denyOperations(router *routerMock, ns string, operations []string, conditions engineiam.Conditions) {
addPolicy(router, ns, "deny", engineiam.DenyEffect, operations, conditions)
}
func addPolicy(router *routerMock, ns string, id string, effect engineiam.Effect, operations []string, conditions engineiam.Conditions) {
policy := engineiam.Policy{
Version: "2012-10-17",
Statement: []engineiam.Statement{{
Principal: map[engineiam.PrincipalType][]string{engineiam.Wildcard: {}},
Effect: effect,
Action: engineiam.Action(operations),
Resource: engineiam.Resource{fmt.Sprintf(s3.ResourceFormatS3All)},
Conditions: conditions,
}},
}
ruleChain, err := engineiam.ConvertToS3Chain(policy, nil)
require.NoError(router.t, err)
ruleChain.ID = chain.ID(id)
_, _, err = router.policyChecker.MorphRuleChainStorage().AddMorphRuleChain(chain.S3, engine.NamespaceTarget(ns), ruleChain)
require.NoError(router.t, err)
}
func createOldBucket(router *routerMock, bktName string) {
createSpecificBucket(router, bktName, true)
}
func createNewBucket(router *routerMock, bktName string) {
createSpecificBucket(router, bktName, false)
}
func createSpecificBucket(router *routerMock, bktName string, old bool) {
aclEnabled := router.middlewareSettings.ACLEnabled()
router.middlewareSettings.aclEnabled = old
createBucket(router, "", bktName)
router.middlewareSettings.aclEnabled = aclEnabled
}
func createBucket(router *routerMock, namespace, bktName string) {
w := createBucketBase(router, namespace, bktName, nil)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.CreateBucketOperation, resp.Method)
}
func createBucketErr(router *routerMock, namespace, bktName string, header http.Header, errCode apiErrors.ErrorCode) {
w := createBucketBase(router, namespace, bktName, header)
assertAPIError(router.t, w, errCode)
}
func createBucketBase(router *routerMock, namespace, bktName string, header http.Header) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, "/"+bktName, nil)
r.Header.Set(FrostfsNamespaceHeader, namespace)
for key := range header {
r.Header.Set(key, header.Get(key))
}
router.ServeHTTP(w, r)
return w
}
func listBuckets(router *routerMock, namespace string) {
w := listBucketsBase(router, namespace)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.ListBucketsOperation, resp.Method)
}
func listBucketsErr(router *routerMock, namespace string, errCode apiErrors.ErrorCode) {
w := listBucketsBase(router, namespace)
assertAPIError(router.t, w, errCode)
}
func listBucketsBase(router *routerMock, namespace string) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/", nil)
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func putObject(router *routerMock, namespace, bktName, objName string, tag *data.Tag) handlerResult {
w := putObjectBase(router, namespace, bktName, objName, tag)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.PutObjectOperation, resp.Method)
return resp
}
func putObjectErr(router *routerMock, namespace, bktName, objName string, tag *data.Tag, errCode apiErrors.ErrorCode) {
w := putObjectBase(router, namespace, bktName, objName, tag)
assertAPIError(router.t, w, errCode)
}
func putObjectBase(router *routerMock, namespace, bktName, objName string, tag *data.Tag) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, "/"+bktName+"/"+objName, nil)
if tag != nil {
queries := url.Values{
tag.Key: []string{tag.Value},
}
r.Header.Set(AmzTagging, queries.Encode())
}
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func putBucketTagging(router *routerMock, namespace, bktName string, tagging []byte) handlerResult {
w := putBucketTaggingBase(router, namespace, bktName, tagging)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.PutBucketTaggingOperation, resp.Method)
return resp
}
func putBucketTaggingErr(router *routerMock, namespace, bktName string, tagging []byte, errCode apiErrors.ErrorCode) {
w := putBucketTaggingBase(router, namespace, bktName, tagging)
assertAPIError(router.t, w, errCode)
}
func putBucketTaggingBase(router *routerMock, namespace, bktName string, tagging []byte) *httptest.ResponseRecorder {
queries := url.Values{}
queries.Add(s3middleware.TaggingQuery, "")
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, "/"+bktName, bytes.NewBuffer(tagging))
r.URL.RawQuery = queries.Encode()
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func getObject(router *routerMock, namespace, bktName, objName string) handlerResult {
w := getObjectBase(router, namespace, bktName, objName)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.GetObjectOperation, resp.Method)
return resp
}
func getObjectErr(router *routerMock, namespace, bktName, objName string, errCode apiErrors.ErrorCode) {
w := getObjectBase(router, namespace, bktName, objName)
assertAPIError(router.t, w, errCode)
}
func getObjectBase(router *routerMock, namespace, bktName, objName string) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/"+bktName+"/"+objName, nil)
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func listObjectsV1(router *routerMock, namespace, bktName, prefix, delimiter, maxKeys string) handlerResult {
w := listObjectsV1Base(router, namespace, bktName, prefix, delimiter, maxKeys)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.ListObjectsV1Operation, resp.Method)
return resp
}
func listObjectsV1Err(router *routerMock, namespace, bktName, prefix, delimiter, maxKeys string, errCode apiErrors.ErrorCode) {
w := listObjectsV1Base(router, namespace, bktName, prefix, delimiter, maxKeys)
assertAPIError(router.t, w, errCode)
}
func listObjectsV1Base(router *routerMock, namespace, bktName, prefix, delimiter, maxKeys string) *httptest.ResponseRecorder {
queries := url.Values{}
if len(prefix) > 0 {
queries.Add(s3middleware.QueryPrefix, prefix)
}
if len(delimiter) > 0 {
queries.Add(s3middleware.QueryDelimiter, delimiter)
}
if len(maxKeys) > 0 {
queries.Add(s3middleware.QueryMaxKeys, maxKeys)
}
encoded := queries.Encode()
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/"+bktName, nil)
r.URL.RawQuery = encoded
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func TestOwnerIDRetrieving(t *testing.T) {
chiRouter := prepareRouter(t)
ns, bktName, objName := "", "test-bucket", "test-object"
createBucket(chiRouter, ns, bktName)
resp := putObject(chiRouter, ns, bktName, objName, nil)
require.NotEqual(t, "anon", resp.ReqInfo.User)
chiRouter.cfg.Center.(*centerMock).anon = true
resp = putObject(chiRouter, ns, bktName, objName, nil)
require.Equal(t, "anon", resp.ReqInfo.User)
}
func TestBillingMetrics(t *testing.T) {
chiRouter := prepareRouter(t)
ns, bktName, objName := "", "test-bucket", "test-object"
createBucket(chiRouter, ns, bktName)
dump := chiRouter.cfg.Metrics.UsersAPIStats().DumpMetrics()
require.Len(t, dump.Requests, 1)
require.NotEqual(t, "anon", dump.Requests[0].User)
require.Equal(t, metrics.PUTRequest, dump.Requests[0].Operation)
require.Equal(t, bktName, dump.Requests[0].Bucket)
require.Equal(t, 1, dump.Requests[0].Requests)
chiRouter.cfg.Center.(*centerMock).anon = true
putObject(chiRouter, ns, bktName, objName, nil)
dump = chiRouter.cfg.Metrics.UsersAPIStats().DumpMetrics()
require.Len(t, dump.Requests, 1)
require.Equal(t, "anon", dump.Requests[0].User)
}
func readResponse(t *testing.T, w *httptest.ResponseRecorder) handlerResult {

View file

@ -82,11 +82,6 @@ type FrostFS interface {
TimeToEpoch(context.Context, time.Time) (uint64, uint64, error)
}
- // FrostFSID represents interface to interact with frostfsid contract.
- type FrostFSID interface {
- RegisterPublicKey(ns string, key *keys.PublicKey) error
- }
// Agent contains client communicating with FrostFS and logger.
type Agent struct {
frostFS FrostFS
@ -344,7 +339,7 @@ func (a *Agent) UpdateSecret(ctx context.Context, w io.Writer, options *UpdateSe
creds := tokens.New(cfg)
- box, err := creds.GetBox(ctx, options.Address)
+ box, _, err := creds.GetBox(ctx, options.Address)
if err != nil {
return fmt.Errorf("get accessbox: %w", err)
}
@ -431,7 +426,7 @@ func (a *Agent) ObtainSecret(ctx context.Context, w io.Writer, options *ObtainSe
return fmt.Errorf("failed to parse secret address: %w", err)
}
- box, err := bearerCreds.GetBox(ctx, addr)
+ box, _, err := bearerCreds.GetBox(ctx, addr)
if err != nil {
return fmt.Errorf("failed to get tokens: %w", err)
}

View file

@ -8,7 +8,7 @@ import (
"time"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/authmate"
- "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid"
+ "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid/contract"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/wallet"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
@ -170,7 +170,7 @@ func runIssueSecretCmd(cmd *cobra.Command, _ []string) error {
if rpcAddress == "" {
return wrapPreparationError(fmt.Errorf("you can use '%s' flag only along with '%s'", frostfsIDFlag, rpcEndpointFlag))
}
- cfg := frostfsid.Config{
+ cfg := contract.Config{
RPCAddress: rpcAddress,
Contract: frostFSID,
ProxyContract: viper.GetString(frostfsIDProxyFlag),
@ -182,7 +182,7 @@ func runIssueSecretCmd(cmd *cobra.Command, _ []string) error {
return wrapFrostFSIDInitError(err)
}
- if err = frostfsIDClient.RegisterPublicKey(viper.GetString(frostfsIDNamespaceFlag), key.PublicKey()); err != nil {
+ if err = registerPublicKey(frostfsIDClient, viper.GetString(frostfsIDNamespaceFlag), key.PublicKey()); err != nil {
return wrapBusinessLogicError(fmt.Errorf("failed to register key in frostfsid: %w", err))
}
}

View file

@ -7,7 +7,7 @@ import (
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/authmate"
- "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid"
+ "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid/contract"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/wallet"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
@ -106,7 +106,7 @@ func runUpdateSecretCmd(cmd *cobra.Command, _ []string) error {
if rpcAddress == "" {
return wrapPreparationError(fmt.Errorf("you can use '%s' flag only along with '%s'", frostfsIDFlag, rpcEndpointFlag))
}
- cfg := frostfsid.Config{
+ cfg := contract.Config{
RPCAddress: rpcAddress,
Contract: frostFSID,
ProxyContract: viper.GetString(frostfsIDProxyFlag),
@ -118,7 +118,7 @@ func runUpdateSecretCmd(cmd *cobra.Command, _ []string) error {
return wrapFrostFSIDInitError(err)
}
- if err = frostfsIDClient.RegisterPublicKey(viper.GetString(frostfsIDNamespaceFlag), key.PublicKey()); err != nil {
+ if err = registerPublicKey(frostfsIDClient, viper.GetString(frostfsIDNamespaceFlag), key.PublicKey()); err != nil {
return wrapBusinessLogicError(fmt.Errorf("failed to register key in frostfsid: %w", err))
}
}

View file

@ -11,7 +11,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/authmate"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs"
- "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid"
+ "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid/contract"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
@ -30,7 +30,7 @@ type PoolConfig struct {
RebalanceInterval time.Duration
}
- func createFrostFS(ctx context.Context, log *zap.Logger, cfg PoolConfig) (authmate.FrostFS, error) {
+ func createFrostFS(ctx context.Context, log *zap.Logger, cfg PoolConfig) (*frostfs.AuthmateFrostFS, error) {
log.Debug(logs.PrepareConnectionPool)
var prm pool.InitParameters
@ -51,7 +51,7 @@ func createFrostFS(ctx context.Context, log *zap.Logger, cfg PoolConfig) (authma
return nil, fmt.Errorf("dial pool: %w", err)
}
- return frostfs.NewAuthmateFrostFS(p, cfg.Key), nil
+ return frostfs.NewAuthmateFrostFS(frostfs.NewFrostFS(p, cfg.Key)), nil
}
func parsePolicies(val string) (authmate.ContainerPolicies, error) {
@ -145,10 +145,10 @@ func getLogger() *zap.Logger {
return log
}
- func createFrostFSID(ctx context.Context, log *zap.Logger, cfg frostfsid.Config) (authmate.FrostFSID, error) {
+ func createFrostFSID(ctx context.Context, log *zap.Logger, cfg contract.Config) (*contract.FrostFSID, error) {
log.Debug(logs.PrepareFrostfsIDClient)
- cli, err := frostfsid.New(ctx, cfg)
+ cli, err := contract.New(ctx, cfg)
if err != nil {
return nil, fmt.Errorf("create frostfsid client: %w", err)
}
@ -156,6 +156,15 @@ func createFrostFSID(ctx context.Context, log *zap.Logger, cfg frostfsid.Config)
return cli, nil
}
+ func registerPublicKey(cli *contract.FrostFSID, namespace string, key *keys.PublicKey) error {
+ err := cli.Wait(cli.CreateSubject(namespace, key))
+ if err != nil && !strings.Contains(err.Error(), "subject already exists") {
+ return err
+ }
+ return nil
+ }
func parseObjectAttrs(attributes string) ([]object.Attribute, error) {
if len(attributes) == 0 {
return nil, nil

View file

@ -30,6 +30,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/tokens" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid"
ffidcontract "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid/contract"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/policy" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/policy"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/policy/contract" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/policy/contract"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/services" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/services"
@ -49,6 +50,7 @@ import (
"github.com/spf13/viper" "github.com/spf13/viper"
"go.uber.org/zap" "go.uber.org/zap"
"golang.org/x/exp/slices" "golang.org/x/exp/slices"
"golang.org/x/text/encoding/ianaindex"
"google.golang.org/grpc" "google.golang.org/grpc"
) )
@ -72,6 +74,8 @@ type (
policyStorage *policy.Storage policyStorage *policy.Storage
servers []Server servers []Server
unbindServers []ServerInfo
mu sync.RWMutex
controlAPI *grpc.Server controlAPI *grpc.Server
@ -88,6 +92,7 @@ type (
logLevel zap.AtomicLevel logLevel zap.AtomicLevel
maxClient maxClientsConfig maxClient maxClientsConfig
defaultMaxAge int defaultMaxAge int
reconnectInterval time.Duration
notificatorEnabled bool notificatorEnabled bool
resolveZoneList []string resolveZoneList []string
isResolveListAllow bool // True if ResolveZoneList contains allowed zones isResolveListAllow bool // True if ResolveZoneList contains allowed zones
@ -100,10 +105,12 @@ type (
clientCut bool clientCut bool
maxBufferSizeForPut uint64 maxBufferSizeForPut uint64
md5Enabled bool md5Enabled bool
aclEnabled bool
namespaceHeader string namespaceHeader string
defaultNamespaces []string defaultNamespaces []string
authorizedControlAPIKeys [][]byte authorizedControlAPIKeys [][]byte
policyDenyByDefault bool policyDenyByDefault bool
sourceIPHeader string
} }
maxClientsConfig struct { maxClientsConfig struct {
@ -121,7 +128,7 @@ func newApp(ctx context.Context, log *Logger, v *viper.Viper) *App {
objPool, treePool, key := getPools(ctx, log.logger, v)
cfg := tokens.Config{
- FrostFS: frostfs.NewAuthmateFrostFS(objPool, key),
+ FrostFS: frostfs.NewAuthmateFrostFS(frostfs.NewFrostFS(objPool, key)),
Key: key,
CacheConfig: getAccessBoxCacheConfig(v, log.logger),
RemovingCheckAfterDurations: fetchRemovingCheckInterval(v, log.logger),
@ -204,6 +211,7 @@ func newAppSettings(log *Logger, v *viper.Viper, key *keys.PrivateKey) *appSetti
logLevel: log.lvl,
maxClient: newMaxClients(v),
defaultMaxAge: fetchDefaultMaxAge(v, log.logger),
+ reconnectInterval: fetchReconnectInterval(v),
notificatorEnabled: v.GetBool(cfgEnableNATS),
frostfsidValidation: v.GetBool(cfgFrostfsIDValidationEnabled),
}
@ -220,16 +228,28 @@ func newAppSettings(log *Logger, v *viper.Viper, key *keys.PrivateKey) *appSetti
}
func (s *appSettings) update(v *viper.Viper, log *zap.Logger, key *keys.PrivateKey) {
- s.setNamespaceHeader(v.GetString(cfgResolveNamespaceHeader))
- s.initPlacementPolicy(log, v)
+ // should be updated before placement policies
+ s.updateNamespacesSettings(v, log)
s.useDefaultXMLNamespace(v.GetBool(cfgKludgeUseDefaultXMLNS))
+ s.setACLEnabled(v.GetBool(cfgKludgeACLEnabled))
s.setBypassContentEncodingInChunks(v.GetBool(cfgKludgeBypassContentEncodingCheckInChunks))
s.setClientCut(v.GetBool(cfgClientCut))
s.setBufferMaxSizeForPut(v.GetUint64(cfgBufferMaxSizeForPut))
s.setMD5Enabled(v.GetBool(cfgMD5Enabled))
- s.setDefaultNamespaces(fetchDefaultNamespaces(log, v))
s.setAuthorizedControlAPIKeys(append(fetchAuthorizedKeys(log, v), key.PublicKey()))
s.setPolicyDenyByDefault(v.GetBool(cfgPolicyDenyByDefault))
+ s.setSourceIPHeader(v.GetString(cfgSourceIPHeader))
+ }
+ func (s *appSettings) updateNamespacesSettings(v *viper.Viper, log *zap.Logger) {
+ nsHeader := v.GetString(cfgResolveNamespaceHeader)
+ nsConfig, defaultNamespaces := fetchNamespacesConfig(log, v)
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ s.namespaceHeader = nsHeader
+ s.defaultNamespaces = defaultNamespaces
+ s.namespaces = nsConfig.Namespaces
}
func (s *appSettings) BypassContentEncodingInChunks() bool { func (s *appSettings) BypassContentEncodingInChunks() bool {
@ -268,15 +288,6 @@ func (s *appSettings) setBufferMaxSizeForPut(size uint64) {
s.mu.Unlock() s.mu.Unlock()
} }
func (s *appSettings) initPlacementPolicy(l *zap.Logger, v *viper.Viper) {
nsConfig := fetchNamespacesConfig(l, v)
s.mu.Lock()
defer s.mu.Unlock()
s.namespaces = nsConfig.Namespaces
}
func (s *appSettings) DefaultPlacementPolicy(namespace string) netmap.PlacementPolicy { func (s *appSettings) DefaultPlacementPolicy(namespace string) netmap.PlacementPolicy {
s.mu.RLock() s.mu.RLock()
defer s.mu.RUnlock() defer s.mu.RUnlock()
@ -307,6 +318,13 @@ func (s *appSettings) DefaultCopiesNumbers(namespace string) []uint32 {
func (s *appSettings) NewXMLDecoder(r io.Reader) *xml.Decoder { func (s *appSettings) NewXMLDecoder(r io.Reader) *xml.Decoder {
dec := xml.NewDecoder(r) dec := xml.NewDecoder(r)
dec.CharsetReader = func(charset string, reader io.Reader) (io.Reader, error) {
enc, err := ianaindex.IANA.Encoding(charset)
if err != nil {
return nil, fmt.Errorf("charset %s: %w", charset, err)
}
return enc.NewDecoder().Reader(reader), nil
}
s.mu.RLock() s.mu.RLock()
if s.defaultXMLNS { if s.defaultXMLNS {
@ -351,39 +369,39 @@ func (s *appSettings) setMD5Enabled(md5Enabled bool) {
s.mu.Unlock() s.mu.Unlock()
} }
func (s *appSettings) setACLEnabled(enableACL bool) {
s.mu.Lock()
s.aclEnabled = enableACL
s.mu.Unlock()
}
func (s *appSettings) ACLEnabled() bool {
s.mu.RLock()
defer s.mu.RUnlock()
return s.aclEnabled
}
func (s *appSettings) NamespaceHeader() string { func (s *appSettings) NamespaceHeader() string {
s.mu.RLock() s.mu.RLock()
defer s.mu.RUnlock() defer s.mu.RUnlock()
return s.namespaceHeader return s.namespaceHeader
} }
func (s *appSettings) setNamespaceHeader(nsHeader string) {
s.mu.Lock()
s.namespaceHeader = nsHeader
s.mu.Unlock()
}
func (s *appSettings) FormContainerZone(ns string) (zone string, isDefault bool) {
- if s.IsDefaultNamespace(ns) {
+ if len(ns) == 0 {
return v2container.SysAttributeZoneDefault, true
}
return ns + ".ns", false
}
- func (s *appSettings) IsDefaultNamespace(ns string) bool {
+ func (s *appSettings) isDefaultNamespace(ns string) bool {
s.mu.RLock()
namespaces := s.defaultNamespaces
s.mu.RUnlock()
return slices.Contains(namespaces, ns)
}
- func (s *appSettings) setDefaultNamespaces(namespaces []string) {
- s.mu.Lock()
- s.defaultNamespaces = namespaces
- s.mu.Unlock()
- }
func (s *appSettings) FetchRawKeys() [][]byte {
s.mu.RLock()
defer s.mu.RUnlock()
@ -402,7 +420,7 @@ func (s *appSettings) setAuthorizedControlAPIKeys(keys keys.PublicKeys) {
}
func (s *appSettings) ResolveNamespaceAlias(namespace string) string {
- if s.IsDefaultNamespace(namespace) {
+ if s.isDefaultNamespace(namespace) {
return defaultNamespace
}
@ -421,6 +439,18 @@ func (s *appSettings) setPolicyDenyByDefault(policyDenyByDefault bool) {
s.mu.Unlock() s.mu.Unlock()
} }
func (s *appSettings) setSourceIPHeader(header string) {
s.mu.Lock()
s.sourceIPHeader = header
s.mu.Unlock()
}
func (s *appSettings) SourceIPHeader() string {
s.mu.RLock()
defer s.mu.RUnlock()
return s.sourceIPHeader
}
func (a *App) initAPI(ctx context.Context) {
a.initLayer(ctx)
a.initHandler()
@ -439,13 +469,18 @@ func (a *App) initControlAPI() {
}
func (a *App) initMetrics() {
- a.metrics = metrics.NewAppMetrics(a.log, frostfs.NewPoolStatistic(a.pool), a.cfg.GetBool(cfgPrometheusEnabled))
+ cfg := metrics.AppMetricsConfig{
+ Logger: a.log,
+ PoolStatistics: frostfs.NewPoolStatistic(a.pool),
+ Enabled: a.cfg.GetBool(cfgPrometheusEnabled),
+ }
+ a.metrics = metrics.NewAppMetrics(cfg)
a.metrics.State().SetHealth(metrics.HealthStatusStarting)
}
func (a *App) initFrostfsID(ctx context.Context) {
- var err error
- a.frostfsid, err = frostfsid.New(ctx, frostfsid.Config{
+ cli, err := ffidcontract.New(ctx, ffidcontract.Config{
RPCAddress: a.cfg.GetString(cfgRPCEndpoint),
Contract: a.cfg.GetString(cfgFrostfsIDContract),
ProxyContract: a.cfg.GetString(cfgProxyContract),
@ -454,16 +489,19 @@ func (a *App) initFrostfsID(ctx context.Context) {
if err != nil {
a.log.Fatal(logs.InitFrostfsIDContractFailed, zap.Error(err))
}
+ a.frostfsid, err = frostfsid.NewFrostFSID(frostfsid.Config{
+ Cache: cache.NewFrostfsIDCache(getFrostfsIDCacheConfig(a.cfg, a.log)),
+ FrostFSID: cli,
+ Logger: a.log,
+ })
+ if err != nil {
+ a.log.Fatal(logs.InitFrostfsIDContractFailed, zap.Error(err))
+ }
}
func (a *App) initPolicyStorage(ctx context.Context) {
- var (
- err error
- policyContract policy.Contract
- )
- if a.cfg.GetBool(cfgPolicyEnabled) {
- policyContract, err = contract.New(ctx, contract.Config{
+ policyContract, err := contract.New(ctx, contract.Config{
RPCAddress: a.cfg.GetString(cfgRPCEndpoint),
Contract: a.cfg.GetString(cfgPolicyContract),
ProxyContract: a.cfg.GetString(cfgProxyContract),
@ -472,9 +510,6 @@ func (a *App) initPolicyStorage(ctx context.Context) {
if err != nil {
a.log.Fatal(logs.InitPolicyContractFailed, zap.Error(err))
}
- } else {
- policyContract = contract.NewInMemoryContract()
- }
a.policyStorage = policy.NewStorage(policy.StorageConfig{
Contract: policyContract,
@ -684,6 +719,9 @@ func (a *App) Serve(ctx context.Context) {
FrostfsID: a.frostfsid,
FrostFSIDValidation: a.settings.frostfsidValidation,
+ XMLDecoder: a.settings,
+ Tagging: a.obj,
}
chiRouter := api.NewRouter(cfg)
@ -699,17 +737,23 @@ func (a *App) Serve(ctx context.Context) {
a.startServices()
- for i := range a.servers {
- go func(i int) {
- a.log.Info(logs.StartingServer, zap.String("address", a.servers[i].Address()))
- if err := srv.Serve(a.servers[i].Listener()); err != nil && err != http.ErrServerClosed {
- a.metrics.MarkUnhealthy(a.servers[i].Address())
+ servs := a.getServers()
+ for i := range servs {
+ go func(i int) {
+ a.log.Info(logs.StartingServer, zap.String("address", servs[i].Address()))
+ if err := srv.Serve(servs[i].Listener()); err != nil && err != http.ErrServerClosed {
+ a.metrics.MarkUnhealthy(servs[i].Address())
a.log.Fatal(logs.ListenAndServe, zap.Error(err))
}
}(i)
}
+ if len(a.unbindServers) != 0 {
+ a.scheduleReconnect(ctx, srv)
+ }
go func() {
address := a.cfg.GetString(cfgControlGRPCEndpoint)
a.log.Info(logs.StartingControlAPI, zap.String("address", address))
@ -826,7 +870,7 @@ func (a *App) startServices() {
}
func (a *App) initServers(ctx context.Context) {
- serversInfo := fetchServers(a.cfg)
+ serversInfo := fetchServers(a.cfg, a.log)
a.servers = make([]Server, 0, len(serversInfo))
for _, serverInfo := range serversInfo {
@ -836,6 +880,7 @@ func (a *App) initServers(ctx context.Context) {
}
srv, err := newServer(ctx, serverInfo)
if err != nil {
+ a.unbindServers = append(a.unbindServers, serverInfo)
a.metrics.MarkUnhealthy(serverInfo.Address)
a.log.Warn(logs.FailedToAddServer, append(fields, zap.Error(err))...)
continue
@ -852,22 +897,25 @@
}
func (a *App) updateServers() error {
- serversInfo := fetchServers(a.cfg)
+ serversInfo := fetchServers(a.cfg, a.log)
+ a.mu.Lock()
+ defer a.mu.Unlock()
var found bool
for _, serverInfo := range serversInfo {
- index := a.serverIndex(serverInfo.Address)
- if index == -1 {
- continue
- }
+ ser := a.getServer(serverInfo.Address)
+ if ser != nil {
if serverInfo.TLS.Enabled {
- if err := a.servers[index].UpdateCert(serverInfo.TLS.CertFile, serverInfo.TLS.KeyFile); err != nil {
+ if err := ser.UpdateCert(serverInfo.TLS.CertFile, serverInfo.TLS.KeyFile); err != nil {
return fmt.Errorf("failed to update tls certs: %w", err)
}
+ }
found = true
}
+ } else if unbind := a.updateUnbindServerInfo(serverInfo); unbind {
+ found = true
+ }
+ }
if !found {
return fmt.Errorf("invalid servers configuration: no known server found")
@ -876,15 +924,6 @@
return nil
}
- func (a *App) serverIndex(address string) int {
- for i := range a.servers {
- if a.servers[i].Address() == address {
- return i
- }
- }
- return -1
- }
func (a *App) stopServices() {
ctx, cancel := shutdownContext()
defer cancel()
@ -950,22 +989,49 @@ func getMorphPolicyCacheConfig(v *viper.Viper, l *zap.Logger) *cache.Config {
return cacheCfg
}
- func (a *App) initHandler() {
- var (
- err error
- ffsid handler.FrostFSID
- )
- if a.frostfsid != nil {
- ffsid = a.frostfsid
- }
- a.api, err = handler.New(a.log, a.obj, a.nc, a.settings, a.policyStorage, ffsid)
+ func getFrostfsIDCacheConfig(v *viper.Viper, l *zap.Logger) *cache.Config {
+ cacheCfg := cache.DefaultFrostfsIDConfig(l)
+ cacheCfg.Lifetime = fetchCacheLifetime(v, l, cfgFrostfsIDCacheLifetime, cacheCfg.Lifetime)
+ cacheCfg.Size = fetchCacheSize(v, l, cfgFrostfsIDCacheSize, cacheCfg.Size)
+ return cacheCfg
+ }
+ func (a *App) initHandler() {
+ var err error
+ a.api, err = handler.New(a.log, a.obj, a.nc, a.settings, a.policyStorage, a.frostfsid)
if err != nil {
a.log.Fatal(logs.CouldNotInitializeAPIHandler, zap.Error(err))
}
}
func (a *App) getServer(address string) Server {
for i := range a.servers {
if a.servers[i].Address() == address {
return a.servers[i]
}
}
return nil
}
func (a *App) updateUnbindServerInfo(info ServerInfo) bool {
for i := range a.unbindServers {
if a.unbindServers[i].Address == info.Address {
a.unbindServers[i] = info
return true
}
}
return false
}
func (a *App) getServers() []Server {
a.mu.RLock()
defer a.mu.RUnlock()
return a.servers
}
func (a *App) setRuntimeParameters() { func (a *App) setRuntimeParameters() {
if len(os.Getenv("GOMEMLIMIT")) != 0 { if len(os.Getenv("GOMEMLIMIT")) != 0 {
// default limit < yaml limit < app env limit < GOMEMLIMIT // default limit < yaml limit < app env limit < GOMEMLIMIT
@ -981,3 +1047,60 @@ func (a *App) setRuntimeParameters() {
zap.Int64("old_value", previous)) zap.Int64("old_value", previous))
} }
} }
func (a *App) scheduleReconnect(ctx context.Context, srv *http.Server) {
go func() {
t := time.NewTicker(a.settings.reconnectInterval)
defer t.Stop()
for {
select {
case <-t.C:
if a.tryReconnect(ctx, srv) {
return
}
t.Reset(a.settings.reconnectInterval)
case <-ctx.Done():
return
}
}
}()
}
func (a *App) tryReconnect(ctx context.Context, sr *http.Server) bool {
a.mu.Lock()
defer a.mu.Unlock()
a.log.Info(logs.ServerReconnecting)
var failedServers []ServerInfo
for _, serverInfo := range a.unbindServers {
fields := []zap.Field{
zap.String("address", serverInfo.Address), zap.Bool("tls enabled", serverInfo.TLS.Enabled),
zap.String("tls cert", serverInfo.TLS.CertFile), zap.String("tls key", serverInfo.TLS.KeyFile),
}
srv, err := newServer(ctx, serverInfo)
if err != nil {
a.log.Warn(logs.ServerReconnectFailed, zap.Error(err))
failedServers = append(failedServers, serverInfo)
a.metrics.MarkUnhealthy(serverInfo.Address)
continue
}
go func() {
a.log.Info(logs.StartingServer, zap.String("address", srv.Address()))
a.metrics.MarkHealthy(serverInfo.Address)
if err = sr.Serve(srv.Listener()); err != nil && !errors.Is(err, http.ErrServerClosed) {
a.log.Warn(logs.ListenAndServe, zap.Error(err))
a.metrics.MarkUnhealthy(serverInfo.Address)
}
}()
a.servers = append(a.servers, srv)
a.log.Info(logs.ServerReconnectedSuccessfully, fields...)
}
a.unbindServers = failedServers
return len(a.unbindServers) == 0
}

View file

@ -59,6 +59,8 @@ const (
defaultConstraintName = "default" defaultConstraintName = "default"
defaultNamespace = "" defaultNamespace = ""
defaultReconnectInterval = time.Minute
) )
var ( var (
@ -114,6 +116,8 @@ const ( // Settings.
cfgAccessControlCacheSize = "cache.accesscontrol.size" cfgAccessControlCacheSize = "cache.accesscontrol.size"
cfgMorphPolicyCacheLifetime = "cache.morph_policy.lifetime" cfgMorphPolicyCacheLifetime = "cache.morph_policy.lifetime"
cfgMorphPolicyCacheSize = "cache.morph_policy.size" cfgMorphPolicyCacheSize = "cache.morph_policy.size"
cfgFrostfsIDCacheLifetime = "cache.frostfsid.lifetime"
cfgFrostfsIDCacheSize = "cache.frostfsid.size"
cfgAccessBoxCacheRemovingCheckInterval = "cache.accessbox.removing_check_interval" cfgAccessBoxCacheRemovingCheckInterval = "cache.accessbox.removing_check_interval"
@ -166,6 +170,7 @@ const ( // Settings.
cfgKludgeUseDefaultXMLNS = "kludge.use_default_xmlns" cfgKludgeUseDefaultXMLNS = "kludge.use_default_xmlns"
cfgKludgeBypassContentEncodingCheckInChunks = "kludge.bypass_content_encoding_check_in_chunks" cfgKludgeBypassContentEncodingCheckInChunks = "kludge.bypass_content_encoding_check_in_chunks"
cfgKludgeDefaultNamespaces = "kludge.default_namespaces" cfgKludgeDefaultNamespaces = "kludge.default_namespaces"
cfgKludgeACLEnabled = "kludge.acl_enabled"
// Web. // Web.
cfgWebReadTimeout = "web.read_timeout" cfgWebReadTimeout = "web.read_timeout"
@ -176,6 +181,8 @@ const ( // Settings.
// Namespaces. // Namespaces.
cfgNamespacesConfig = "namespaces.config" cfgNamespacesConfig = "namespaces.config"
cfgSourceIPHeader = "source_ip_header"
// Command line args. // Command line args.
cmdHelp = "help" cmdHelp = "help"
cmdVersion = "version" cmdVersion = "version"
@ -216,12 +223,14 @@ const ( // Settings.
cfgFrostfsIDValidationEnabled = "frostfsid.validation.enabled" cfgFrostfsIDValidationEnabled = "frostfsid.validation.enabled"
// Policy. // Policy.
cfgPolicyEnabled = "policy.enabled"
cfgPolicyContract = "policy.contract" cfgPolicyContract = "policy.contract"
// Proxy. // Proxy.
cfgProxyContract = "proxy.contract" cfgProxyContract = "proxy.contract"
// Server.
cfgReconnectInterval = "reconnect_interval"
// envPrefix is an environment variables prefix used for configuration. // envPrefix is an environment variables prefix used for configuration.
envPrefix = "S3_GW" envPrefix = "S3_GW"
) )
@ -244,6 +253,15 @@ func fetchConnectTimeout(cfg *viper.Viper) time.Duration {
return connTimeout return connTimeout
} }
func fetchReconnectInterval(cfg *viper.Viper) time.Duration {
reconnect := cfg.GetDuration(cfgReconnectInterval)
if reconnect <= 0 {
reconnect = defaultReconnectInterval
}
return reconnect
}
func fetchStreamTimeout(cfg *viper.Viper) time.Duration {
streamTimeout := cfg.GetDuration(cfgStreamTimeout)
if streamTimeout <= 0 {
@ -515,7 +533,7 @@ func fetchDefaultNamespaces(l *zap.Logger, v *viper.Viper) []string {
return defaultNamespaces
}
- func fetchNamespacesConfig(l *zap.Logger, v *viper.Viper) NamespacesConfig {
+ func fetchNamespacesConfig(l *zap.Logger, v *viper.Viper) (NamespacesConfig, []string) {
defaultNSRegionMap := fetchRegionMappingPolicies(l, v)
defaultNSRegionMap[defaultConstraintName] = fetchDefaultPolicy(l, v)
@ -551,15 +569,13 @@
}
}
- for _, name := range defaultNamespacesNames {
- nsConfig.Namespaces[name] = Namespace{
- Name: name,
+ nsConfig.Namespaces[defaultNamespace] = Namespace{
+ Name: defaultNamespace,
LocationConstraints: defaultNSValue.LocationConstraints,
CopiesNumbers: defaultNSValue.CopiesNumbers,
}
- }
- return nsConfig
+ return nsConfig, defaultNamespacesNames
}
func readNamespacesConfig(filepath string) (NamespacesConfig, error) {
@ -613,8 +629,9 @@ func fetchPeers(l *zap.Logger, v *viper.Viper) []pool.NodeParam {
return nodes
}
- func fetchServers(v *viper.Viper) []ServerInfo {
+ func fetchServers(v *viper.Viper, log *zap.Logger) []ServerInfo {
var servers []ServerInfo
+ seen := make(map[string]struct{})
for i := 0; ; i++ {
key := cfgServer + "." + strconv.Itoa(i) + "."
@ -629,6 +646,11 @@ func fetchServers(v *viper.Viper) []ServerInfo {
break
}
+ if _, ok := seen[serverInfo.Address]; ok {
+ log.Warn(logs.WarnDuplicateAddress, zap.String("address", serverInfo.Address))
+ continue
+ }
+ seen[serverInfo.Address] = struct{}{}
servers = append(servers, serverInfo)
}
@ -719,6 +741,7 @@ func newSettings() *viper.Viper {
v.SetDefault(cfgKludgeUseDefaultXMLNS, false)
v.SetDefault(cfgKludgeBypassContentEncodingCheckInChunks, false)
v.SetDefault(cfgKludgeDefaultNamespaces, defaultDefaultNamespaces)
+ v.SetDefault(cfgKludgeACLEnabled, false)
// web
v.SetDefault(cfgWebReadHeaderTimeout, defaultReadHeaderTimeout)
@ -729,7 +752,6 @@ func newSettings() *viper.Viper {
// policy
v.SetDefault(cfgPolicyContract, "policy.frostfs")
- v.SetDefault(cfgPolicyEnabled, true)
// proxy
v.SetDefault(cfgProxyContract, "proxy.frostfs")
@ -979,7 +1001,7 @@ func newJournaldLogger(lvl zapcore.Level) *Logger {
encoder := zapjournald.NewPartialEncoder(zapcore.NewConsoleEncoder(c.EncoderConfig), zapjournald.SyslogFields)
- core := zapjournald.NewCore(zap.NewAtomicLevelAt(lvl), encoder, &journald.Journal{}, zapjournald.SyslogFields)
+ core := zapjournald.NewCore(c.Level, encoder, &journald.Journal{}, zapjournald.SyslogFields)
coreWithContext := core.With([]zapcore.Field{
zapjournald.SyslogFacility(zapjournald.LogDaemon),
zapjournald.SyslogIdentifier(),

View file

@ -34,6 +34,15 @@ func TestDefaultNamespace(t *testing.T) {
</Part> </Part>
</CompleteMultipartUpload> </CompleteMultipartUpload>
` `
xmlASCII := `<?xml version="1.0" encoding="US-ASCII"?>
<CompleteMultipartUpload>
<Part>
<PartNumber>1</PartNumber>
<ETag>
8b73814bee405ec32b5d1fc88cd5d97a
</ETag>
</Part>
</CompleteMultipartUpload>`
for _, tc := range []struct { for _, tc := range []struct {
settings *appSettings settings *appSettings
@ -82,6 +91,13 @@ func TestDefaultNamespace(t *testing.T) {
input: xmlBodyWithInvalidNamespace, input: xmlBodyWithInvalidNamespace,
err: true, err: true,
}, },
{
settings: &appSettings{
defaultXMLNS: true,
},
input: xmlASCII,
err: false,
},
} { } {
t.Run("", func(t *testing.T) { t.Run("", func(t *testing.T) {
model := new(handler.CompleteMultipartUpload) model := new(handler.CompleteMultipartUpload)
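
The US-ASCII fixture above exercises the CharsetReader that the app.go hunk wires into NewXMLDecoder. For readers who want to try the same mechanism outside the gateway, here is a minimal standalone sketch (not part of this changeset): it resolves the charset declared in the XML prolog through golang.org/x/text/encoding/ianaindex and transcodes the body before decoding. The ISO-8859-1 sample and the nil-encoding fallback are illustrative assumptions, not gateway behaviour.

    package main

    import (
    	"encoding/xml"
    	"fmt"
    	"io"
    	"strings"

    	"golang.org/x/text/encoding/ianaindex"
    )

    // charsetReader resolves the declared charset via the IANA index and
    // returns a reader that transcodes the body to UTF-8.
    func charsetReader(charset string, input io.Reader) (io.Reader, error) {
    	enc, err := ianaindex.IANA.Encoding(charset)
    	if err != nil {
    		return nil, fmt.Errorf("charset %s: %w", charset, err)
    	}
    	if enc == nil {
    		// the index knows the name but has no converter; assume ASCII-compatible input
    		return input, nil
    	}
    	return enc.NewDecoder().Reader(input), nil
    }

    func main() {
    	doc := `<?xml version="1.0" encoding="ISO-8859-1"?><Part><PartNumber>1</PartNumber></Part>`

    	var part struct {
    		PartNumber int `xml:"PartNumber"`
    	}

    	dec := xml.NewDecoder(strings.NewReader(doc))
    	dec.CharsetReader = charsetReader
    	if err := dec.Decode(&part); err != nil {
    		panic(err)
    	}
    	fmt.Println(part.PartNumber) // prints 1
    }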

View file

@ -68,11 +68,13 @@ func newServer(ctx context.Context, serverInfo ServerInfo) (*server, error) {
if serverInfo.TLS.Enabled {
if err = tlsProvider.UpdateCert(serverInfo.TLS.CertFile, serverInfo.TLS.KeyFile); err != nil {
- return nil, fmt.Errorf("failed to update cert: %w", err)
+ lnErr := ln.Close()
+ return nil, fmt.Errorf("failed to update cert (listener close: %v): %w", lnErr, err)
}
ln = tls.NewListener(ln, &tls.Config{
GetCertificate: tlsProvider.GetCertificate,
+ NextProtos: []string{"h2"}, // required to enable HTTP/2 requests in `http.Serve`
})
}

119
cmd/s3-gw/server_test.go Normal file
View file

@ -0,0 +1,119 @@
package main
import (
"context"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"fmt"
"math/big"
"net"
"net/http"
"os"
"path"
"testing"
"time"
"github.com/stretchr/testify/require"
"golang.org/x/net/http2"
)
const (
expHeaderKey = "Foo"
expHeaderValue = "Bar"
)
func TestHTTP2TLS(t *testing.T) {
ctx := context.Background()
certPath, keyPath := prepareTestCerts(t)
srv := &http.Server{
Handler: http.HandlerFunc(testHandler),
}
tlsListener, err := newServer(ctx, ServerInfo{
Address: ":0",
TLS: ServerTLSInfo{
Enabled: true,
CertFile: certPath,
KeyFile: keyPath,
},
})
require.NoError(t, err)
port := tlsListener.Listener().Addr().(*net.TCPAddr).Port
addr := fmt.Sprintf("https://localhost:%d", port)
go func() {
_ = srv.Serve(tlsListener.Listener())
}()
// Server is running, now send HTTP/2 request
tlsClientConfig := &tls.Config{
InsecureSkipVerify: true,
}
cliHTTP1 := http.Client{Transport: &http.Transport{TLSClientConfig: tlsClientConfig}}
cliHTTP2 := http.Client{Transport: &http2.Transport{TLSClientConfig: tlsClientConfig}}
req, err := http.NewRequest("GET", addr, nil)
require.NoError(t, err)
req.Header[expHeaderKey] = []string{expHeaderValue}
resp, err := cliHTTP1.Do(req)
require.NoError(t, err)
require.Equal(t, http.StatusOK, resp.StatusCode)
resp, err = cliHTTP2.Do(req)
require.NoError(t, err)
require.Equal(t, http.StatusOK, resp.StatusCode)
}
func testHandler(resp http.ResponseWriter, req *http.Request) {
hdr, ok := req.Header[expHeaderKey]
if !ok || len(hdr) != 1 || hdr[0] != expHeaderValue {
resp.WriteHeader(http.StatusBadRequest)
} else {
resp.WriteHeader(http.StatusOK)
}
}
func prepareTestCerts(t *testing.T) (certPath, keyPath string) {
privateKey, err := rsa.GenerateKey(rand.Reader, 2048)
require.NoError(t, err)
template := x509.Certificate{
SerialNumber: big.NewInt(1),
Subject: pkix.Name{CommonName: "localhost"},
NotBefore: time.Now(),
NotAfter: time.Now().Add(time.Hour * 24 * 365),
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
BasicConstraintsValid: true,
}
derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &privateKey.PublicKey, privateKey)
require.NoError(t, err)
dir := t.TempDir()
certPath = path.Join(dir, "cert.pem")
keyPath = path.Join(dir, "key.pem")
certFile, err := os.Create(certPath)
require.NoError(t, err)
defer certFile.Close()
keyFile, err := os.Create(keyPath)
require.NoError(t, err)
defer keyFile.Close()
err = pem.Encode(certFile, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes})
require.NoError(t, err)
err = pem.Encode(keyFile, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(privateKey)})
require.NoError(t, err)
return certPath, keyPath
}

View file

@ -33,6 +33,9 @@ S3_GW_SERVER_1_TLS_ENABLED=true
S3_GW_SERVER_1_TLS_CERT_FILE=/path/to/tls/cert
S3_GW_SERVER_1_TLS_KEY_FILE=/path/to/tls/key
+ # How often to reconnect to the servers
+ S3_GW_RECONNECT_INTERVAL: 1m
# Control API
# List of hex-encoded public keys that have rights to use the Control Service
S3_GW_CONTROL_AUTHORIZED_KEYS=035839e45d472a3b7769a2a1bd7d54c4ccd4943c3b40f547870e83a8fcbfb3ce11 028f42cfcb74499d7b15b35d9bff260a1c8d27de4f446a627406a382d8961486d6
@ -104,6 +107,9 @@ S3_GW_CACHE_ACCESSCONTROL_SIZE=100000
# Cache which stores list of policy chains
S3_GW_CACHE_MORPH_POLICY_LIFETIME=1m
S3_GW_CACHE_MORPH_POLICY_SIZE=10000
+ # Cache which stores frostfsid subject info
+ S3_GW_CACHE_FROSTFSID_LIFETIME=1m
+ S3_GW_CACHE_FROSTFSID_SIZE=10000
# NATS
S3_GW_NATS_ENABLED=true
@ -162,6 +168,8 @@ S3_GW_KLUDGE_USE_DEFAULT_XMLNS=false
S3_GW_KLUDGE_BYPASS_CONTENT_ENCODING_CHECK_IN_CHUNKS=false
# Namespaces that should be handled as default
S3_GW_KLUDGE_DEFAULT_NAMESPACES="" "root"
+ # Enable bucket/object ACL support for newly created buckets.
+ S3_GW_KLUDGE_ACL_ENABLED=false
S3_GW_TRACING_ENABLED=false
S3_GW_TRACING_ENDPOINT="localhost:4318"
@ -203,8 +211,6 @@ S3_GW_FROSTFSID_CONTRACT=frostfsid.frostfs
S3_GW_FROSTFSID_VALIDATION_ENABLED=true
# Policy contract configuration. To enable this functionality the `rpc_endpoint` param must be also set.
- # Enables using policies from Policy contract.
- S3_GW_POLICY_ENABLED=true
# Policy contract hash (LE) or name in NNS.
S3_GW_POLICY_CONTRACT=policy.frostfs
@ -214,3 +220,6 @@ S3_GW_PROXY_CONTRACT=proxy.frostfs
# Namespaces configuration
S3_GW_NAMESPACES_CONFIG=namespaces.json
+ # Custom header to retrieve Source IP
+ S3_GW_SOURCE_IP_HEADER=Source-Ip

View file

@ -25,6 +25,8 @@ peers:
priority: 2
weight: 0.9
+ reconnect_interval: 1m
server:
- address: 0.0.0.0:8080
tls:
@ -129,6 +131,10 @@ cache:
morph_policy:
lifetime: 1m
size: 10000
+ # Cache which stores frostfsid subject info
+ frostfsid:
+ lifetime: 1m
+ size: 10000
nats:
enabled: true
@ -193,6 +199,8 @@ kludge:
bypass_content_encoding_check_in_chunks: false
# Namespaces that should be handled as default
default_namespaces: [ "", "root" ]
+ # Enable bucket/object ACL support for newly created buckets.
+ acl_enabled: false
runtime:
soft_memory_limit: 1gb
@ -241,8 +249,6 @@ frostfsid:
# Policy contract configuration. To enable this functionality the `rpc_endpoint` param must be also set.
policy:
- # Enables using policies from Policy contract.
- enabled: true
# Policy contract hash (LE) or name in NNS.
contract: policy.frostfs
@ -253,3 +259,6 @@ proxy:
namespaces:
config: namespaces.json
+ # Custom header to retrieve Source IP
+ source_ip_header: "Source-Ip"

View file

@ -59,6 +59,14 @@ func (g *GateData) SessionTokenForSetEACL() *session.Container {
return g.containerSessionToken(session.VerbContainerSetEACL) return g.containerSessionToken(session.VerbContainerSetEACL)
} }
// SessionToken returns the first container session context.
func (g *GateData) SessionToken() *session.Container {
if len(g.SessionTokens) != 0 {
return g.SessionTokens[0]
}
return nil
}
func (g *GateData) containerSessionToken(verb session.ContainerVerb) *session.Container { func (g *GateData) containerSessionToken(verb session.ContainerVerb) *session.Container {
for _, sessionToken := range g.SessionTokens { for _, sessionToken := range g.SessionTokens {
if isAppropriateContainerContext(sessionToken, verb) { if isAppropriateContainerContext(sessionToken, verb) {
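
A small usage sketch, not taken from this comparison: pickContainerSessionToken is a hypothetical helper showing how a caller of the accessbox package might prefer a verb-specific token and fall back to the new generic SessionToken accessor added above.

    package main

    import (
    	"fmt"

    	"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
    	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
    )

    // pickContainerSessionToken prefers a PUT-scoped container session token
    // and otherwise returns the first token in the box, whatever its verb.
    func pickContainerSessionToken(g *accessbox.GateData) *session.Container {
    	if tkn := g.SessionTokenForPut(); tkn != nil {
    		return tkn
    	}
    	return g.SessionToken()
    }

    func main() {
    	putTkn := new(session.Container)
    	putTkn.ForVerb(session.VerbContainerPut)

    	g := &accessbox.GateData{SessionTokens: []*session.Container{putTkn}}
    	fmt.Println(pickContainerSessionToken(g) != nil) // true
    }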

View file

@ -1,6 +1,7 @@
package accessbox package accessbox
import ( import (
"encoding/hex"
"testing" "testing"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
@ -170,3 +171,148 @@ func TestUnknownKey(t *testing.T) {
_, err = box.GetTokens(wrongCred) _, err = box.GetTokens(wrongCred)
require.Error(t, err) require.Error(t, err)
} }
func TestGateDataSessionToken(t *testing.T) {
cred, err := keys.NewPrivateKey()
require.NoError(t, err)
var tkn bearer.Token
gate := NewGateData(cred.PublicKey(), &tkn)
require.Equal(t, cred.PublicKey(), gate.GateKey)
assertBearerToken(t, tkn, *gate.BearerToken)
t.Run("session token for put", func(t *testing.T) {
gate.SessionTokens = []*session.Container{}
sessionTkn := gate.SessionTokenForPut()
require.Nil(t, sessionTkn)
sessionTknPut := new(session.Container)
sessionTknPut.ForVerb(session.VerbContainerPut)
gate.SessionTokens = []*session.Container{sessionTknPut}
sessionTkn = gate.SessionTokenForPut()
require.Equal(t, sessionTknPut, sessionTkn)
})
t.Run("session token for delete", func(t *testing.T) {
gate.SessionTokens = []*session.Container{}
sessionTkn := gate.SessionTokenForDelete()
require.Nil(t, sessionTkn)
sessionTknDelete := new(session.Container)
sessionTknDelete.ForVerb(session.VerbContainerDelete)
gate.SessionTokens = []*session.Container{sessionTknDelete}
sessionTkn = gate.SessionTokenForDelete()
require.Equal(t, sessionTknDelete, sessionTkn)
})
t.Run("session token for set eACL", func(t *testing.T) {
gate.SessionTokens = []*session.Container{}
sessionTkn := gate.SessionTokenForSetEACL()
require.Nil(t, sessionTkn)
sessionTknSetEACL := new(session.Container)
sessionTknSetEACL.ForVerb(session.VerbContainerSetEACL)
gate.SessionTokens = []*session.Container{sessionTknSetEACL}
sessionTkn = gate.SessionTokenForSetEACL()
require.Equal(t, sessionTknSetEACL, sessionTkn)
})
t.Run("session token", func(t *testing.T) {
gate.SessionTokens = []*session.Container{}
sessionTkn := gate.SessionToken()
require.Nil(t, sessionTkn)
sessionTknPut := new(session.Container)
sessionTknPut.ForVerb(session.VerbContainerPut)
gate.SessionTokens = []*session.Container{sessionTknPut}
sessionTkn = gate.SessionToken()
require.Equal(t, sessionTkn, sessionTknPut)
})
}
func TestGetBox(t *testing.T) {
cred, err := keys.NewPrivateKey()
require.NoError(t, err)
var tkn bearer.Token
gate := NewGateData(cred.PublicKey(), &tkn)
secret := []byte("secret")
accessBox, _, err := PackTokens([]*GateData{gate}, secret)
require.NoError(t, err)
box, err := accessBox.GetBox(cred)
require.NoError(t, err)
require.Equal(t, hex.EncodeToString(secret), box.Gate.SecretKey)
}
func TestAccessBox(t *testing.T) {
cred, err := keys.NewPrivateKey()
require.NoError(t, err)
var tkn bearer.Token
gate := NewGateData(cred.PublicKey(), &tkn)
accessBox, _, err := PackTokens([]*GateData{gate}, nil)
require.NoError(t, err)
t.Run("invalid owner", func(t *testing.T) {
randomKey, err := keys.NewPrivateKey()
require.NoError(t, err)
_, err = accessBox.GetTokens(randomKey)
require.Error(t, err)
_, err = accessBox.GetBox(randomKey)
require.Error(t, err)
})
t.Run("empty placement policy", func(t *testing.T) {
policy, err := accessBox.GetPlacementPolicy()
require.NoError(t, err)
require.Nil(t, policy)
})
t.Run("get correct placement policy", func(t *testing.T) {
policy := &AccessBox_ContainerPolicy{LocationConstraint: "locationConstraint"}
accessBox.ContainerPolicy = []*AccessBox_ContainerPolicy{policy}
policies, err := accessBox.GetPlacementPolicy()
require.NoError(t, err)
require.Len(t, policies, 1)
require.Equal(t, policy.LocationConstraint, policies[0].LocationConstraint)
})
t.Run("get incorrect placement policy", func(t *testing.T) {
policy := &AccessBox_ContainerPolicy{
LocationConstraint: "locationConstraint",
Policy: []byte("policy"),
}
accessBox.ContainerPolicy = []*AccessBox_ContainerPolicy{policy}
_, err = accessBox.GetPlacementPolicy()
require.Error(t, err)
_, err = accessBox.GetBox(cred)
require.Error(t, err)
})
t.Run("empty seed key", func(t *testing.T) {
accessBox.SeedKey = nil
_, err = accessBox.GetTokens(cred)
require.Error(t, err)
_, err = accessBox.GetBox(cred)
require.Error(t, err)
})
t.Run("invalid gate key", func(t *testing.T) {
gate = &GateData{
BearerToken: &tkn,
GateKey: &keys.PublicKey{},
}
_, _, err = PackTokens([]*GateData{gate}, nil)
require.Error(t, err)
})
}

View file

@ -22,7 +22,7 @@ import (
type (
// Credentials is a bearer token get/put interface.
Credentials interface {
- GetBox(context.Context, oid.Address) (*accessbox.Box, error)
+ GetBox(context.Context, oid.Address) (*accessbox.Box, []object.Attribute, error)
Put(context.Context, cid.ID, CredentialsParam) (oid.Address, error)
Update(context.Context, oid.Address, CredentialsParam) (oid.Address, error)
}
@ -86,13 +86,13 @@ type FrostFS interface {
// prevented the object from being created.
CreateObject(context.Context, PrmObjectCreate) (oid.ID, error)
- // GetCredsPayload gets payload of the credential object from FrostFS network.
+ // GetCredsObject gets the credential object from FrostFS network.
// It uses search by system name and select using CRDT 2PSet. In case of absence CRDT header
// it heads object by address.
//
// It returns exactly one non-nil value. It returns any error encountered which
// prevented the object payload from being read.
- GetCredsPayload(context.Context, oid.Address) ([]byte, error)
+ GetCredsObject(context.Context, oid.Address) (*object.Object, error)
}
var (
@ -115,72 +115,72 @@ func New(cfg Config) Credentials {
}
}
- func (c *cred) GetBox(ctx context.Context, addr oid.Address) (*accessbox.Box, error) {
+ func (c *cred) GetBox(ctx context.Context, addr oid.Address) (*accessbox.Box, []object.Attribute, error) {
cachedBoxValue := c.cache.Get(addr)
if cachedBoxValue != nil {
return c.checkIfCredentialsAreRemoved(ctx, addr, cachedBoxValue)
}
- box, err := c.getAccessBox(ctx, addr)
+ box, attrs, err := c.getAccessBox(ctx, addr)
if err != nil {
- return nil, fmt.Errorf("get access box: %w", err)
+ return nil, nil, fmt.Errorf("get access box: %w", err)
}
cachedBox, err := box.GetBox(c.key)
if err != nil {
- return nil, fmt.Errorf("get gate box: %w", err)
+ return nil, nil, fmt.Errorf("get gate box: %w", err)
}
- c.putBoxToCache(addr, cachedBox)
+ c.putBoxToCache(addr, cachedBox, attrs)
- return cachedBox, nil
+ return cachedBox, attrs, nil
}
- func (c *cred) checkIfCredentialsAreRemoved(ctx context.Context, addr oid.Address, cachedBoxValue *cache.AccessBoxCacheValue) (*accessbox.Box, error) {
+ func (c *cred) checkIfCredentialsAreRemoved(ctx context.Context, addr oid.Address, cachedBoxValue *cache.AccessBoxCacheValue) (*accessbox.Box, []object.Attribute, error) {
if time.Since(cachedBoxValue.PutTime) < c.removingCheckDuration {
- return cachedBoxValue.Box, nil
+ return cachedBoxValue.Box, cachedBoxValue.Attributes, nil
}
- box, err := c.getAccessBox(ctx, addr)
+ box, attrs, err := c.getAccessBox(ctx, addr)
if err != nil {
if client.IsErrObjectAlreadyRemoved(err) {
c.cache.Delete(addr)
- return nil, fmt.Errorf("get access box: %w", err)
+ return nil, nil, fmt.Errorf("get access box: %w", err)
}
- return cachedBoxValue.Box, nil
+ return cachedBoxValue.Box, cachedBoxValue.Attributes, nil
}
cachedBox, err := box.GetBox(c.key)
if err != nil {
c.cache.Delete(addr)
- return nil, fmt.Errorf("get gate box: %w", err)
+ return nil, nil, fmt.Errorf("get gate box: %w", err)
}
// we need this to reset PutTime
// to don't check for removing each time after removingCheckDuration interval
- c.putBoxToCache(addr, cachedBox)
+ c.putBoxToCache(addr, cachedBox, attrs)
- return cachedBoxValue.Box, nil
+ return cachedBoxValue.Box, attrs, nil
}
- func (c *cred) putBoxToCache(addr oid.Address, box *accessbox.Box) {
+ func (c *cred) putBoxToCache(addr oid.Address, box *accessbox.Box, attrs []object.Attribute) {
- if err := c.cache.Put(addr, box); err != nil {
+ if err := c.cache.Put(addr, box, attrs); err != nil {
c.log.Warn(logs.CouldntPutAccessBoxIntoCache, zap.String("address", addr.EncodeToString()))
}
}
- func (c *cred) getAccessBox(ctx context.Context, addr oid.Address) (*accessbox.AccessBox, error) {
+ func (c *cred) getAccessBox(ctx context.Context, addr oid.Address) (*accessbox.AccessBox, []object.Attribute, error) {
- data, err := c.frostFS.GetCredsPayload(ctx, addr)
+ obj, err := c.frostFS.GetCredsObject(ctx, addr)
if err != nil {
- return nil, fmt.Errorf("read payload: %w", err)
+ return nil, nil, fmt.Errorf("read payload and attributes: %w", err)
}
// decode access box
var box accessbox.AccessBox
- if err = box.Unmarshal(data); err != nil {
+ if err = box.Unmarshal(obj.Payload()); err != nil {
- return nil, fmt.Errorf("unmarhal access box: %w", err)
+ return nil, nil, fmt.Errorf("unmarhal access box: %w", err)
}
- return &box, nil
+ return &box, obj.Attributes(), nil
}
func (c *cred) Put(ctx context.Context, idCnr cid.ID, prm CredentialsParam) (oid.Address, error) {
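
For call sites outside this diff, adapting to the new three-value GetBox looks roughly like the sketch below. printBoxAttributes is a hypothetical helper, shown only to illustrate consuming the returned attributes alongside the box (compare the authmate changes to UpdateSecret and ObtainSecret earlier in this comparison).

    package main

    import (
    	"context"
    	"fmt"

    	"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/tokens"
    	oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
    )

    // printBoxAttributes fetches an access box and prints the credential
    // object's attributes, which GetBox now returns alongside the box.
    func printBoxAttributes(ctx context.Context, creds tokens.Credentials, addr oid.Address) error {
    	box, attrs, err := creds.GetBox(ctx, addr)
    	if err != nil {
    		return fmt.Errorf("get accessbox: %w", err)
    	}
    	fmt.Println("secret key length:", len(box.Gate.SecretKey))
    	for _, a := range attrs {
    		fmt.Printf("attribute %s=%s\n", a.Key(), a.Value())
    	}
    	return nil
    }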

View file

@ -11,6 +11,8 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox" "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status" apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id" oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test" oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys" "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
@ -19,24 +21,68 @@ import (
) )
type frostfsMock struct {
- objects map[oid.Address][]byte
+ objects map[oid.Address][]*object.Object
errors map[oid.Address]error
}
- func (f *frostfsMock) CreateObject(context.Context, PrmObjectCreate) (oid.ID, error) {
- panic("implement me for test")
- }
- func (f *frostfsMock) GetCredsPayload(_ context.Context, address oid.Address) ([]byte, error) {
+ func newFrostfsMock() *frostfsMock {
+ return &frostfsMock{
+ objects: map[oid.Address][]*object.Object{},
+ errors: map[oid.Address]error{},
+ }
+ }
+ func (f *frostfsMock) CreateObject(_ context.Context, prm PrmObjectCreate) (oid.ID, error) {
+ var obj object.Object
+ obj.SetPayload(prm.Payload)
+ obj.SetOwnerID(prm.Creator)
+ obj.SetContainerID(prm.Container)
+ a := object.NewAttribute()
+ a.SetKey(object.AttributeFilePath)
+ a.SetValue(prm.Filepath)
+ prm.CustomAttributes = append(prm.CustomAttributes, *a)
+ obj.SetAttributes(prm.CustomAttributes...)
+ if prm.NewVersionFor != nil {
+ var addr oid.Address
+ addr.SetObject(*prm.NewVersionFor)
+ addr.SetContainer(prm.Container)
+ _, ok := f.objects[addr]
+ if !ok {
+ return oid.ID{}, errors.New("not found")
+ }
+ objID := oidtest.ID()
+ obj.SetID(objID)
+ f.objects[addr] = append(f.objects[addr], &obj)
+ return objID, nil
+ }
+ objID := oidtest.ID()
+ obj.SetID(objID)
+ var addr oid.Address
+ addr.SetObject(objID)
+ addr.SetContainer(prm.Container)
+ f.objects[addr] = []*object.Object{&obj}
+ return objID, nil
+ }
+ func (f *frostfsMock) GetCredsObject(_ context.Context, address oid.Address) (*object.Object, error) {
if err := f.errors[address]; err != nil {
return nil, err
}
- data, ok := f.objects[address]
+ objects, ok := f.objects[address]
if !ok {
return nil, errors.New("not found")
}
- return data, nil
+ return objects[len(objects)-1], nil
}
func TestRemovingAccessBox(t *testing.T) {
@ -59,9 +105,14 @@ func TestRemovingAccessBox(t *testing.T) {
data, err := accessBox.Marshal()
require.NoError(t, err)
+ var obj object.Object
+ obj.SetPayload(data)
addr := oidtest.Address()
+ obj.SetID(addr.Object())
+ obj.SetContainerID(addr.Container())
frostfs := &frostfsMock{
- objects: map[oid.Address][]byte{addr: data},
+ objects: map[oid.Address][]*object.Object{addr: {&obj}},
errors: map[oid.Address]error{},
}
@ -78,14 +129,201 @@ func TestRemovingAccessBox(t *testing.T) {
creds := New(cfg)
- _, err = creds.GetBox(ctx, addr)
+ _, _, err = creds.GetBox(ctx, addr)
require.NoError(t, err)
frostfs.errors[addr] = errors.New("network error")
- _, err = creds.GetBox(ctx, addr)
+ _, _, err = creds.GetBox(ctx, addr)
require.NoError(t, err)
frostfs.errors[addr] = &apistatus.ObjectAlreadyRemoved{}
- _, err = creds.GetBox(ctx, addr)
+ _, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
}
func TestGetBox(t *testing.T) {
ctx := context.Background()
key, err := keys.NewPrivateKey()
require.NoError(t, err)
gateData := []*accessbox.GateData{{
BearerToken: &bearer.Token{},
GateKey: key.PublicKey(),
}}
secret := []byte("secret")
accessBox, _, err := accessbox.PackTokens(gateData, secret)
require.NoError(t, err)
data, err := accessBox.Marshal()
require.NoError(t, err)
var attr object.Attribute
attr.SetKey("key")
attr.SetValue("value")
attrs := []object.Attribute{attr}
cfg := Config{
CacheConfig: &cache.Config{
Size: 10,
Lifetime: 24 * time.Hour,
Logger: zaptest.NewLogger(t),
},
}
t.Run("no removing check, accessbox from cache", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = time.Hour
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
addr, err := creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox})
require.NoError(t, err)
_, _, err = creds.GetBox(ctx, addr)
require.NoError(t, err)
frostfs.errors[addr] = &apistatus.ObjectAlreadyRemoved{}
_, _, err = creds.GetBox(ctx, addr)
require.NoError(t, err)
})
t.Run("error while getting box from frostfs", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
addr, err := creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox})
require.NoError(t, err)
frostfs.errors[addr] = errors.New("network error")
_, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
})
t.Run("invalid key", func(t *testing.T) {
frostfs := newFrostfsMock()
var obj object.Object
obj.SetPayload(data)
addr := oidtest.Address()
frostfs.objects[addr] = []*object.Object{&obj}
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = &keys.PrivateKey{}
creds := New(cfg)
_, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
})
t.Run("invalid payload", func(t *testing.T) {
frostfs := newFrostfsMock()
var obj object.Object
obj.SetPayload([]byte("invalid"))
addr := oidtest.Address()
frostfs.objects[addr] = []*object.Object{&obj}
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
_, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
})
t.Run("check attributes update", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
addr, err := creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox})
require.NoError(t, err)
_, boxAttrs, err := creds.GetBox(ctx, addr)
require.NoError(t, err)
_, err = creds.Update(ctx, addr, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox, CustomAttributes: attrs})
require.NoError(t, err)
_, newBoxAttrs, err := creds.GetBox(ctx, addr)
require.NoError(t, err)
require.Equal(t, len(boxAttrs)+1, len(newBoxAttrs))
})
t.Run("check accessbox update", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
addr, err := creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox})
require.NoError(t, err)
box, _, err := creds.GetBox(ctx, addr)
require.NoError(t, err)
require.Equal(t, hex.EncodeToString(secret), box.Gate.SecretKey)
newKey, err := keys.NewPrivateKey()
require.NoError(t, err)
newGateData := []*accessbox.GateData{{
BearerToken: &bearer.Token{},
GateKey: newKey.PublicKey(),
}}
newSecret := []byte("new-secret")
newAccessBox, _, err := accessbox.PackTokens(newGateData, newSecret)
require.NoError(t, err)
_, err = creds.Update(ctx, addr, CredentialsParam{Keys: keys.PublicKeys{newKey.PublicKey()}, AccessBox: newAccessBox})
require.NoError(t, err)
_, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
cfg.Key = newKey
newCreds := New(cfg)
box, _, err = newCreds.GetBox(ctx, addr)
require.NoError(t, err)
require.Equal(t, hex.EncodeToString(newSecret), box.Gate.SecretKey)
})
t.Run("empty keys", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
_, err = creds.Put(ctx, cnrID, CredentialsParam{AccessBox: accessBox})
require.ErrorIs(t, err, ErrEmptyPublicKeys)
})
t.Run("empty accessbox", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
_, err = creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}})
require.ErrorIs(t, err, ErrEmptyBearerToken)
})
}

313
docs/authentication.md Normal file
View file

@ -0,0 +1,313 @@
# Authentication and authorization scheme
This document describes the s3-gw authentication and authorization mechanism.
## General overview
Basic provisions:
* A request to s3-gw can be signed or unsigned (an unsigned request is called anonymous, or just anon)
* To manage resources (buckets/objects) using s3-gw you must have appropriate access rights
Each request must be authenticated (at least as anonymous) and authorized. The following scheme shows the components
involved in this process.
<a>
<img src="images/authentication/auth-overview.svg" alt="Auth general overview"/>
</a>
There are several participants in this process:
1. The user that makes a request
2. S3-GW that accepts the request
3. FrostFS Storage that stores AccessObjects (objects needed for authentication)
4. Blockchain smart contracts (`frostfsid`, `policy`) that store user info and access rules.
## Data auth process
Let's look at the process in more detail:
<a>
<img src="images/authentication/auth-sequence.svg" alt="Auth sequence diagram"/>
</a>
* First of all, someone makes a request. If the request is signed, we check its signature (`Authentication`) and after that
we check access rights using policies (`Authorization`). For anonymous requests only authorization is performed.
* **Authentication steps**:
  * Each signed request carries an `AccessKeyId` and a signature, so for a signed request we must verify the
    signature. To do this we need the `AccessKeyId`/`SecretAccessKey` pair (how the signature is calculated from
    this pair is described in [signing](#aws-signing); client and server (s3-gw) use the same credentials and
    algorithm to compute it). The `AccessKeyId` is the public part of the credentials and is passed to the gateway
    in the request. The private part is the `SecretAccessKey`, which is encrypted and stored in an
    [AccessBox](#accessbox). So at this step we must find the appropriate `AccessBox` in a FrostFS storage node
    (how to find it knowing the `AccessKeyId` is described in the [search algorithm](#search-algorithm)). At this
    stage we can get `AccessDenied` from the FrostFS storage node if the s3-gw doesn't have permission to read this
    `AccessBox` object.
  * After the object is retrieved we must extract the `SecretAccessKey` from it. Since it's encrypted, the s3-gw
    must decrypt it (see [encryption](#encryption)) using its own private key and the `SeedKey` from the `AccessBox`
    (see [AccessBox inner structure](#accessbox)). Once the s3-gw has the `AccessKeyId`/`SecretAccessKey` pair it
    [calculates the signature](#aws-signing) and compares it with the one provided in the request. If the signatures
    don't match, `AccessDenied` is returned.
  * The `AccessBox` also contains the `OwnerID` related to the provided `AccessKeyId`. We check that this `OwnerID`
    exists in the `frostfsid` contract (which stores all registered valid users). If the user doesn't exist in the
    contract, `AccessDenied` is returned.
* **Authorization steps**:
  * To know whether the user is allowed to do what they are trying to do, we must find the applicable access
    policies. Such policies are stored in the `policy` contract and locally (local ones can be managed using the
    [control api](#control-auth-process)). So we get the policies from the contract and [check them](#policies)
    along with the local ones to decide whether the user has the required access right. If not, `AccessDenied` is
    returned.
* After successful authentication and authorization the request is processed by the s3-gw business logic and finally
propagated to a FrostFS storage node, which also performs its own auth checks and can return `AccessDenied`. If this
happens, the s3-gw returns `AccessDenied` as the response.
### AWS Signing
Every interaction with FrostFS S3 gateway is either authenticated or anonymous. This section explains request
authentication with the AWS Signature Version 4 algorithm. More info in AWS documentation:
* [Authenticating Requests (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html)
* [Signing AWS API requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-signing.html)
#### Authentication Methods
You can express authentication information by using one of the following methods:
* **HTTP Authorization header** - Using the HTTP Authorization header is the most common method of authenticating a
FrostFS S3 request. All the FrostFS S3 REST operations (except for browser-based uploads using POST requests) require
this header. For more information about the Authorization header value, and how to calculate signature and related
options,
see [Authenticating Requests: Using the Authorization Header (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html).
* **Query string parameters** - You can use a query string to express a request entirely in a URL. In this case, you use
query parameters to provide request information, including the authentication information. Because the request
signature is part of the URL, this type of URL is often referred to as a presigned URL. You can use presigned URLs to
embed clickable links, which can be valid for up to seven days, in HTML. For more information,
see [Authenticating Requests: Using Query Parameters (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html).
FrostFS S3 also supports browser-based uploads that use HTTP POST requests. With an HTTP POST request, you can upload
content to FrostFS S3 directly from the browser. For information about authenticating POST requests,
see [Browser-Based Uploads Using POST (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-UsingHTTPPOST.html).
#### Introduction to Signing Requests
Authentication information that you send in a request must include a signature. To calculate a signature, you first
concatenate select request elements to form a string, referred to as the string to sign. You then use a signing key to
calculate the hash-based message authentication code (HMAC) of the string to sign.
In AWS Signature Version 4, you don't use your secret access key to sign the request. Instead, you first use your secret
access key to derive a signing key. The derived signing key is specific to the date, service, and Region. For more
information about how to derive a signing key in different programming languages, see Examples of how to derive a
signing key for Signature Version 4.
The following diagram illustrates the general process of computing a signature.
<a>
<img src="images/authentication/aws-signing.png" alt="AWS Signing"/>
</a>
The string to sign depends on the request type. For example, when you use the HTTP Authorization header or the query
parameters for authentication, you use a varying combination of request elements to create the string to sign. For an
HTTP POST request, the POST policy in the request is the string you sign. For more information about computing string to
sign, follow links provided at the end of this section.
For the signing key, the diagram shows a series of calculations, where the result of each step is fed into the next
step. The final result is the signing key.
Upon receiving an authenticated request, FrostFS S3 servers re-create the signature by using the authentication
information that is contained in the request. If the signatures match, FrostFS S3 processes your request; otherwise, the
request is rejected.
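
To make the derivation chain above concrete, here is a minimal Go sketch of SigV4 signing-key derivation and the final
signature step. The secret key, date, region and string to sign are placeholder values, and the canonical-request
construction is elided; this illustrates the algorithm only, not the gateway's implementation.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hmacSHA256 performs one step of the SigV4 key derivation chain.
func hmacSHA256(key, data []byte) []byte {
	h := hmac.New(sha256.New, key)
	h.Write(data)
	return h.Sum(nil)
}

// deriveSigningKey follows the chain secret -> date -> region -> service -> "aws4_request".
func deriveSigningKey(secret, date, region, service string) []byte {
	kDate := hmacSHA256([]byte("AWS4"+secret), []byte(date))
	kRegion := hmacSHA256(kDate, []byte(region))
	kService := hmacSHA256(kRegion, []byte(service))
	return hmacSHA256(kService, []byte("aws4_request"))
}

func main() {
	// Placeholder inputs for illustration only.
	signingKey := deriveSigningKey("EXAMPLE_SECRET_KEY", "20240415", "us-east-1", "s3")
	stringToSign := "AWS4-HMAC-SHA256\n..." // built from the canonical request (elided here)
	signature := hex.EncodeToString(hmacSHA256(signingKey, []byte(stringToSign)))
	fmt.Println(signature)
}
```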
##### Signature Calculations for the Authorization Header
To calculate a signature, you first need a string to sign. You then calculate an HMAC-SHA256 hash of the string to sign
by using a signing key. The following diagram illustrates the process, including the various components of the string
that you create for signing.
When FrostFS S3 receives an authenticated request, it computes the signature and then compares it with the signature
that you provided in the request. For that reason, you must compute the signature by using the same method that is used
by FrostFS S3. The process of putting a request in an agreed-upon form for signing is called canonicalization.
<a>
<img src="images/authentication/auth-header-signing.png" alt="Signature Calculations for the Authorization Header"/>
</a>
See details in the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html).
#### s3-gw
s3-gw supports the following ways to provide a signed request:
* [HTTP Authorization header](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html)
* [Query string parameters](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html)
* [Browser-Based Uploads Using POST](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-UsingHTTPPOST.html)
All these methods provide an `AccessKeyId` and a signature. Using the `AccessKeyId` the s3-gw can get the `SecretAccessKey`
(see [data auth](#data-auth-process)) and compute the signature using exactly the same mechanics
as the [client does](#introduction-to-signing-requests). After the signature calculation the s3-gw simply compares the
signatures, and if they don't match, access denied is returned.
### AccessBox
`AccessBox` is an ordinary object in FrostFS storage. It contains all the information the s3-gw needs to successfully
authenticate a request, as well as the data required for successful authentication in the FrostFS storage node.
Based on this object, s3 credentials are formed:
* `AccessKeyId` - the concatenated container id and object id (`<cid>0<oid>`) of the `AccessBox`
  (e.g. `2XGRML5EW3LMHdf64W2DkBy1Nkuu4y4wGhUj44QjbXBi05ZNvs8WVwy1XTmSEkcVkydPKzCgtmR7U3zyLYTj3Snxf`); see the parsing sketch after the note below
* `SecretAccessKey` - hex-encoded randomly generated 32 bytes (encrypted and stored in the object payload)
> **Note**: sensitive info in `AccessBox` is [encrypted](#encryption), so only someone who possesses a specific private key
> can decrypt such info.
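
As an illustration of the `<cid>0<oid>` format, the sketch below splits an `AccessKeyId` back into its container and
object IDs. It assumes the `DecodeString` methods of the SDK's `cid.ID`/`oid.ID` types; the split on `0` is unambiguous
because base58 strings never contain that character.

```go
import (
	"fmt"
	"strings"

	cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
	oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
)

// parseAccessKeyID is a hypothetical helper that splits "<cid>0<oid>" into the AccessBox address parts.
func parseAccessKeyID(accessKeyID string) (cid.ID, oid.ID, error) {
	var cnrID cid.ID
	var objID oid.ID

	parts := strings.SplitN(accessKeyID, "0", 2)
	if len(parts) != 2 {
		return cnrID, objID, fmt.Errorf("unexpected AccessKeyId format: %s", accessKeyID)
	}
	if err := cnrID.DecodeString(parts[0]); err != nil {
		return cnrID, objID, fmt.Errorf("decode container id: %w", err)
	}
	if err := objID.DecodeString(parts[1]); err != nil {
		return cnrID, objID, fmt.Errorf("decode object id: %w", err)
	}
	return cnrID, objID, nil
}
```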
`AccessBox` has the following structure:
<a>
<img src="images/authentication/accessbox-object.svg" alt="AccessBox object structure"/>
</a>
**Headers:**
`AccessBox` objects have at least the following attributes (they can also contain custom ones):
* `Timestamp` - unix timestamp at which the object was created
* `__SYSTEM__EXPIRATION_EPOCH` - epoch after which the object is no longer available
* `S3-CRDT-Versions-Add` - comma-separated list of previous versions of the `AccessBox`
  (see [AccessBox versions](#accessbox-versions))
* `S3-Access-Box-CRDT-Name` - `AccessKeyId` of the credentials to which the current `AccessBox` is related
  (see [AccessBox versions](#accessbox-versions))
* `FilePath` - just the object name
**Payload:**
The `AccessBox` payload is an encoded [AccessBox protobuf type](../creds/accessbox/accessbox.proto).
It contains:
* Seed key - hex-encoded public seed key used to compute the shared secret via ECDH (see [encryption](#encryption))
* List of gate data:
  * Gate public key (so that the gate, when it decrypts the data later, knows which item from the list it should process)
  * Encrypted tokens:
    * `SecretAccessKey` - hex-encoded randomly generated 32 bytes
    * Marshaled bearer token - more details
      in the [spec](https://git.frostfs.info/TrueCloudLab/frostfs-api/src/commit/4c68d92468503b10282c8a92af83a56f170c8a3a/acl/types.proto#L189)
    * Marshaled session token - more details
      in the [spec](https://git.frostfs.info/TrueCloudLab/frostfs-api/src/commit/4c68d92468503b10282c8a92af83a56f170c8a3a/session/types.proto#L89)
* Container placement policies:
  * `LocationsConstraint` - name of the location constraint that can be used to create a bucket/container using the s3
    credentials related to this `AccessBox`
  * Marshaled placement policy - more details
    in the [spec](https://git.frostfs.info/TrueCloudLab/frostfs-api/src/commit/4c68d92468503b10282c8a92af83a56f170c8a3a/netmap/types.proto#L111)
#### AccessBox versions
Imagine the following scenario:
* There is a system with only one s3-gw
* There is an `AccessBox` that can be used by this s3-gw
* A user has s3 credentials (`AccessKeyId`/`SecretAccessKey`) related to the corresponding `AccessBox` and can successfully
  make requests to the s3-gw
* The system is expanded and a new s3-gw is added
* The user must be able to use the credentials they already have to make requests to the new s3-gw
Since an `AccessBox` object is immutable and the `SecretAccessKey` is encrypted only for a restricted list of keys (it can
be decrypted only by a limited number of s3-gw instances), we have to create a new `AccessBox` that holds secrets encrypted
for the new list of s3-gw instances and is related to the initial s3 credentials (`AccessKeyId`/`SecretAccessKey`). This
relationship is established via `S3-Access-Box-CRDT-Name`.
##### Search algorithm
To support the scenario from the previous section and find the appropriate version of the `AccessBox` (the one that
contains the most recent and relevant data) the following sequence is used:
<a>
<img src="images/authentication/accessbox-search.svg" alt="AccessBox search process"/>
</a>
* Search for all objects whose attribute `S3-Access-Box-CRDT-Name` equals the `AccessKeyId` (the container id is extracted
  from the `AccessKeyId`, which has the format `<cid>0<oid>`).
* Get metadata for these objects using `HEAD` requests (not `Get`, to reduce network traffic)
* Sort these objects by creation epoch and object id (see the sketch after this list)
* Pick the last object id (if no object is found, extract the object id from the `AccessKeyId`, which has the format
  `<cid>0<oid>`; we need to do this because versions of the `AccessBox` can miss the `S3-Access-Box-CRDT-Name` attribute.)
* Get the appropriate object from FrostFS storage
* Decrypt `AccessBox` (see [encryption](#encryption))
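
A minimal sketch of the "sort and pick last" step above, assuming the object IDs and creation epochs have already been
collected from the `HEAD` responses (the `boxMeta` type here is hypothetical):

```go
import (
	"sort"

	oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
)

// boxMeta is a hypothetical subset of the header data returned by HEAD requests.
type boxMeta struct {
	ID            oid.ID
	CreationEpoch uint64
}

// latestBoxID sorts candidate AccessBox versions by creation epoch, then by
// object ID, and returns the last one (the most recent version).
func latestBoxID(metas []boxMeta) (oid.ID, bool) {
	if len(metas) == 0 {
		return oid.ID{}, false
	}
	sort.Slice(metas, func(i, j int) bool {
		if metas[i].CreationEpoch != metas[j].CreationEpoch {
			return metas[i].CreationEpoch < metas[j].CreationEpoch
		}
		return metas[i].ID.EncodeToString() < metas[j].ID.EncodeToString()
	})
	return metas[len(metas)-1].ID, true
}
```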
#### Encryption
Each `AccessBox` contains sensitive information (`SecretAccessKey`, bearer/session tokens etc.) that must be protected
and available only to trusted parties (in our case, the s3-gw).
To encrypt/decrypt data the authenticated encryption with associated
data ([AEAD](https://en.wikipedia.org/wiki/Authenticated_encryption)) is used. The encryption algorithm
is [ChaCha20-Poly1305](https://en.wikipedia.org/wiki/ChaCha20-Poly1305) ([RFC](https://datatracker.ietf.org/doc/html/rfc7905)).
In the following algorithm, ECDSA keys on the NIST P-256 curve (FIPS 186-3, section D.2.3), also known as secp256r1 or
prime256v1, are used (unless otherwise stated). A simplified sketch of both operations is given after the decryption
steps below.
**Encryption:**
* Create an ephemeral key (`SeedKey`); it's needed to generate the shared secret
* Generate 32 random bytes (which, hex-encoded, become the `SecretAccessKey`) or use the existing secret access key
  (if the `AccessBox` is being updated rather than created from scratch)
* Generate a shared secret using [ECDH](https://en.wikipedia.org/wiki/Elliptic-curve_Diffie%E2%80%93Hellman)
* Derive a 32-byte key from the shared secret of the previous step using the HMAC-SHA256-based key derivation function
  [HKDF](https://en.wikipedia.org/wiki/HKDF)
* Encrypt the marshaled [Tokens](../creds/accessbox) using the derived key
  with the [ChaCha20-Poly1305](https://en.wikipedia.org/wiki/ChaCha20-Poly1305) algorithm without additional data.
**Decryption:**
* Get the public part of the `SeedKey` from the `AccessBox`
* Generate the shared secret as follows:
  * Perform scalar curve multiplication of the public part of the `SeedKey` and the private part of the s3-gw key
  * Use the `X` part of the multiplication (with zero padding at the beginning to fit 32 bytes)
* Derive a 32-byte key from the shared secret of the previous step using the HMAC-SHA256-based key derivation function
  [HKDF](https://en.wikipedia.org/wiki/HKDF)
* Decrypt the encrypted marshaled [Tokens](../creds/accessbox) using the derived key
  with the [ChaCha20-Poly1305](https://en.wikipedia.org/wiki/ChaCha20-Poly1305) algorithm without additional data.
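
The sketch below shows the shape of this scheme using Go's standard `crypto/ecdh` plus the `x/crypto` HKDF and
ChaCha20-Poly1305 packages. It is deliberately simplified and is not byte-compatible with the gateway's actual format
(HKDF salt/info parameters, nonce handling and key serialization differ); see the `creds/accessbox` package for the real
implementation.

```go
import (
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"errors"
	"io"

	"golang.org/x/crypto/chacha20poly1305"
	"golang.org/x/crypto/hkdf"
)

// deriveKey turns the ECDH shared secret (the X coordinate) into a 32-byte
// symmetric key with HKDF-SHA256 (salt/info omitted in this sketch).
func deriveKey(secret []byte) ([]byte, error) {
	key := make([]byte, 32)
	_, err := io.ReadFull(hkdf.New(sha256.New, secret, nil, nil), key)
	return key, err
}

// seal encrypts marshaled tokens for one gate public key using an ephemeral
// (seed) P-256 key and ChaCha20-Poly1305 without additional data.
func seal(gatePub *ecdh.PublicKey, plaintext []byte) (seedPub, ciphertext []byte, err error) {
	seed, err := ecdh.P256().GenerateKey(rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	secret, err := seed.ECDH(gatePub)
	if err != nil {
		return nil, nil, err
	}
	key, err := deriveKey(secret)
	if err != nil {
		return nil, nil, err
	}
	aead, err := chacha20poly1305.New(key)
	if err != nil {
		return nil, nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	// The nonce is prepended to the ciphertext so open() can recover it.
	return seed.PublicKey().Bytes(), aead.Seal(nonce, nonce, plaintext, nil), nil
}

// open is the reverse operation performed by the gate with its private key.
func open(gatePriv *ecdh.PrivateKey, seedPub, ciphertext []byte) ([]byte, error) {
	pub, err := ecdh.P256().NewPublicKey(seedPub)
	if err != nil {
		return nil, err
	}
	secret, err := gatePriv.ECDH(pub)
	if err != nil {
		return nil, err
	}
	key, err := deriveKey(secret)
	if err != nil {
		return nil, err
	}
	aead, err := chacha20poly1305.New(key)
	if err != nil {
		return nil, err
	}
	if len(ciphertext) < aead.NonceSize() {
		return nil, errors.New("ciphertext too short")
	}
	nonce, data := ciphertext[:aead.NonceSize()], ciphertext[aead.NonceSize():]
	return aead.Open(nil, nonce, data, nil)
}
```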
### Policies
The main repository that contains policy implementation is https://git.frostfs.info/TrueCloudLab/policy-engine.
Policies can be stored locally (managed using the [control api](#control-auth-process)) or in the `policy` contract. When a
policy check is performed, the following algorithm is applied:
* Check local policies:
* If any rule was matched return checking result.
* Check contract policies:
* If any rule was matched return checking result.
* If no rules were matched return `deny` status.
Within both local and contract policies a `deny first` scheme is applied. This means that if several rules match a
request (with both `allow` and `deny` statuses), the resulting status is `deny`.
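
The following sketch illustrates only the resolution order described above (local rules first, then contract rules,
deny wins over allow, deny by default). It does not use the policy-engine library's actual types or API.

```go
// Status is a hypothetical result of matching one rule set against a request.
type Status int

const (
	NoRuleFound Status = iota
	Allow
	Deny
)

// resolve applies "deny first" within a rule set: any matched deny wins over allow.
func resolve(matched []Status) Status {
	result := NoRuleFound
	for _, s := range matched {
		if s == Deny {
			return Deny
		}
		if s == Allow {
			result = Allow
		}
	}
	return result
}

// check follows the order above: local rules first, then contract rules,
// and deny by default when nothing matched.
func check(local, contract []Status) bool {
	if st := resolve(local); st != NoRuleFound {
		return st == Allow
	}
	if st := resolve(contract); st != NoRuleFound {
		return st == Allow
	}
	return false // no rules matched -> deny
}
```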
Policy rules validate whether the specified request can be performed on the specific resource. Requests and resources can
carry properties, and rules can contain conditions on some of these properties.
In s3-gw a resource is `/bucket/object`, `/bucket` or just `/` (if the request is trying to list buckets).
Currently, the request being checked contains the following properties (so policy rules can contain conditions on them):
* `Owner` - address of owner that is performing request (this is taken from bearer token from `AccessBox`)
* `frostfsid:groupID` - groups to which the owner belongs (this is taken from `frostfsid` contract)
## Control auth process
There is a control path [grpc api](../pkg/service/control/service.proto) in s3-gw that also has its own authentication
and authorization process. This process is quite straightforward (see the sketch below):
* Get grpc request
* Check if signing key belongs to [allowed key list](configuration.md#control-section) (that is located in config file)
* Validate signature
For the signing process, asymmetric encryption based on elliptic curves (`ECDSA_SHA512`) is used.
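
A rough sketch of these checks using only the standard library is shown below. The 64-byte `r||s` signature layout and
the SHA-512 digest are assumptions made for illustration; see the linked frostfs-api definitions for the authoritative
signature format.

```go
import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/sha512"
	"encoding/hex"
	"math/big"
)

// verifyControlRequest illustrates the checks above: the signer's compressed public
// key must be in the allowed list from the config, and the signature over the
// digest of the signed payload must be valid. The signature layout (assumed here
// to be 64-byte r||s) and the exact signed payload are simplifications.
func verifyControlRequest(allowedHexKeys []string, pub *ecdsa.PublicKey, payload, sig []byte) bool {
	compressed := hex.EncodeToString(elliptic.MarshalCompressed(pub.Curve, pub.X, pub.Y))
	allowed := false
	for _, k := range allowedHexKeys {
		if k == compressed {
			allowed = true
			break
		}
	}
	if !allowed || len(sig) != 64 {
		return false
	}
	digest := sha512.Sum512(payload)
	r := new(big.Int).SetBytes(sig[:32])
	s := new(big.Int).SetBytes(sig[32:])
	return ecdsa.Verify(pub, digest[:], r, s)
}
```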
For more details see the appropriate code
in [frostfs-api](https://git.frostfs.info/TrueCloudLab/frostfs-api/src/commit/4c68d92468503b10282c8a92af83a56f170c8a3a/refs/types.proto#L94)
and [frostfs-api-go](https://git.frostfs.info/TrueCloudLab/frostfs-api-go/src/commit/a85146250b312fcdd6da9a71285527fed544234f/refs/types.go#L38).

View file

@ -1,10 +1,11 @@
# S3 API support # S3 API support
Reference: Reference:
* [AWS S3 API Reference](https://docs.aws.amazon.com/AmazonS3/latest/API/s3-api.pdf) * [AWS S3 API Reference](https://docs.aws.amazon.com/AmazonS3/latest/API/s3-api.pdf)
| | Legend | | | Legend |
|----|-------------------------------------------| |-----|-------------------------------------------|
| 🟢 | Supported | | 🟢 | Supported |
| 🟡 | Partially supported | | 🟡 | Partially supported |
| 🔵 | Not supported yet, but will be in future | | 🔵 | Not supported yet, but will be in future |
@ -13,7 +14,7 @@ Reference:
## Object ## Object
| | Method | Comments | | | Method | Comments |
|----|------------------------|-----------------------------------------| |-----|------------------------|-----------------------------------------|
| 🟢 | CopyObject | Done on gateway side | | 🟢 | CopyObject | Done on gateway side |
| 🟢 | DeleteObject | | | 🟢 | DeleteObject | |
| 🟢 | DeleteObjects | aka DeleteMultipleObjects | | 🟢 | DeleteObjects | aka DeleteMultipleObjects |
@ -31,42 +32,26 @@ Reference:
## ACL ## ACL
For now there are some limitations: For now there are some limitations:
* [Bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html) supports only one `Principal` per `Statement`.
Principal must be `"AWS": "*"` (to refer all users) or `"CanonicalUser": "0313b1ac3a8076e155a7e797b24f0b650cccad5941ea59d7cfd51a024a8b2a06bf"` (hex encoded public key of desired user).
* Resource in bucket policy is an array. Each item MUST contain bucket name, CAN contain object name (wildcards are not supported):
```json
{
"Statement": [
{
"Resource": [
"arn:aws:s3:::bucket",
"arn:aws:s3:::bucket/some/object"
]
}
]
}
```
* AWS conditions and wildcard are not supported in [resources](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-arn-format.html)
* Only `CanonicalUser` (with hex encoded public key) and `All Users Group` are supported in [ACL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html)
| | Method | Comments | | | Method | Comments |
|----|--------------|-----------------| |-----|--------------|-----------------------------------|
| 🟡 | GetObjectAcl | See Limitations | | 🟢 | GetObjectAcl | Objects can have only private acl |
| 🟡 | PutObjectAcl | See Limitations | | 🔴 | PutObjectAcl | Use PutBucketPolicy instead |
## Locking ## Locking
For now there are some limitations: For now there are some limitations:
* Retention period can't be shortened, only extended. * Retention period can't be shortened, only extended.
* You can't delete locks or object with unexpired lock. * You can't delete locks or object with unexpired lock.
| | Method | Comments | | | Method | Comments |
|-----|----------------------------|---------------------------| |-----|----------------------------|-------------------------------|
| 🟡 | GetObjectLegalHold | | | 🟡 | GetObjectLegalHold | |
| 🟢 | GetObjectLockConfiguration | GetBucketObjectLockConfig | | 🟢 | GetObjectLockConfiguration | aka GetBucketObjectLockConfig |
| 🟡 | GetObjectRetention | | | 🟡 | GetObjectRetention | |
| 🟡 | PutObjectLegalHold | | | 🟡 | PutObjectLegalHold | |
| 🟢 | PutObjectLockConfiguration | PutBucketObjectLockConfig | | 🟢 | PutObjectLockConfiguration | aka PutBucketObjectLockConfig |
| 🟡 | PutObjectRetention | | | 🟡 | PutObjectRetention | |
## Multipart ## Multipart
@ -76,7 +61,7 @@ sends whitespace characters to keep connection with the client alive. In this
case, gateway is unable to set proper HTTP headers like `X-Amz-Version-Id`. case, gateway is unable to set proper HTTP headers like `X-Amz-Version-Id`.
| | Method | Comments | | | Method | Comments |
|----|-------------------------|----------| |-----|-------------------------|----------|
| 🟢 | AbortMultipartUpload | | | 🟢 | AbortMultipartUpload | |
| 🟢 | CompleteMultipartUpload | | | 🟢 | CompleteMultipartUpload | |
| 🟢 | CreateMultipartUpload | | | 🟢 | CreateMultipartUpload | |
@ -88,7 +73,7 @@ case, gateway is unable to set proper HTTP headers like `X-Amz-Version-Id`.
## Tagging ## Tagging
| | Method | Comments | | | Method | Comments |
|----|---------------------|----------| |-----|---------------------|----------|
| 🟢 | DeleteObjectTagging | | | 🟢 | DeleteObjectTagging | |
| 🟢 | GetObjectTagging | | | 🟢 | GetObjectTagging | |
| 🟢 | PutObjectTagging | | | 🟢 | PutObjectTagging | |
@ -98,14 +83,14 @@ case, gateway is unable to set proper HTTP headers like `X-Amz-Version-Id`.
See also `GetObject` and other method parameters. See also `GetObject` and other method parameters.
| | Method | Comments | | | Method | Comments |
|----|--------------------|--------------------------| |-----|--------------------|--------------------------|
| 🟢 | ListObjectVersions | ListBucketObjectVersions | | 🟢 | ListObjectVersions | ListBucketObjectVersions |
| 🔵 | RestoreObject | | | 🔵 | RestoreObject | |
## Bucket ## Bucket
| | Method | Comments | | | Method | Comments |
|----|----------------------|-----------| |-----|----------------------|-----------|
| 🟢 | CreateBucket | PutBucket | | 🟢 | CreateBucket | PutBucket |
| 🟢 | DeleteBucket | | | 🟢 | DeleteBucket | |
| 🟢 | GetBucketLocation | | | 🟢 | GetBucketLocation | |
@ -116,21 +101,21 @@ See also `GetObject` and other method parameters.
## Acceleration ## Acceleration
| | Method | Comments | | | Method | Comments |
|----|----------------------------------|---------------------| |-----|----------------------------------|---------------------|
| 🔴 | GetBucketAccelerateConfiguration | GetBucketAccelerate | | 🔴 | GetBucketAccelerateConfiguration | GetBucketAccelerate |
| 🔴 | PutBucketAccelerateConfiguration | | | 🔴 | PutBucketAccelerateConfiguration | |
## ACL ## ACL
| | Method | Comments | | | Method | Comments |
|----|--------------|---------------------| |-----|--------------|------------------------------|
| 🟡 | GetBucketAcl | See ACL limitations | | 🟡 | GetBucketAcl | Only canned acl is supported |
| 🟡 | PutBucketAcl | See ACL Limitations | | 🟡 | PutBucketAcl | Only canned acl is supported |
## Analytics ## Analytics
| | Method | Comments | | | Method | Comments |
|----|------------------------------------|----------| |-----|------------------------------------|----------|
| 🔵 | DeleteBucketAnalyticsConfiguration | | | 🔵 | DeleteBucketAnalyticsConfiguration | |
| 🔵 | GetBucketAnalyticsConfiguration | | | 🔵 | GetBucketAnalyticsConfiguration | |
| 🔵 | ListBucketAnalyticsConfigurations | | | 🔵 | ListBucketAnalyticsConfigurations | |
@ -139,7 +124,7 @@ See also `GetObject` and other method parameters.
## CORS ## CORS
| | Method | Comments | | | Method | Comments |
|----|------------------|----------| |-----|------------------|----------|
| 🟢 | DeleteBucketCors | | | 🟢 | DeleteBucketCors | |
| 🟢 | GetBucketCors | | | 🟢 | GetBucketCors | |
| 🟢 | PutBucketCors | | | 🟢 | PutBucketCors | |
@ -147,7 +132,7 @@ See also `GetObject` and other method parameters.
## Encryption ## Encryption
| | Method | Comments | | | Method | Comments |
|----|------------------------|----------| |-----|------------------------|----------|
| 🔵 | DeleteBucketEncryption | | | 🔵 | DeleteBucketEncryption | |
| 🔵 | GetBucketEncryption | | | 🔵 | GetBucketEncryption | |
| 🔵 | PutBucketEncryption | | | 🔵 | PutBucketEncryption | |
@ -155,7 +140,7 @@ See also `GetObject` and other method parameters.
## Inventory ## Inventory
| | Method | Comments | | | Method | Comments |
|----|------------------------------------|----------| |-----|------------------------------------|----------|
| 🔵 | DeleteBucketInventoryConfiguration | | | 🔵 | DeleteBucketInventoryConfiguration | |
| 🔵 | GetBucketInventoryConfiguration | | | 🔵 | GetBucketInventoryConfiguration | |
| 🔵 | ListBucketInventoryConfigurations | | | 🔵 | ListBucketInventoryConfigurations | |
@ -164,7 +149,7 @@ See also `GetObject` and other method parameters.
## Lifecycle ## Lifecycle
| | Method | Comments | | | Method | Comments |
|----|---------------------------------|----------| |-----|---------------------------------|----------|
| 🔵 | DeleteBucketLifecycle | | | 🔵 | DeleteBucketLifecycle | |
| 🔵 | GetBucketLifecycle | | | 🔵 | GetBucketLifecycle | |
| 🔵 | GetBucketLifecycleConfiguration | | | 🔵 | GetBucketLifecycleConfiguration | |
@ -174,14 +159,14 @@ See also `GetObject` and other method parameters.
## Logging ## Logging
| | Method | Comments | | | Method | Comments |
|----|------------------|----------| |-----|------------------|----------|
| 🔵 | GetBucketLogging | | | 🔵 | GetBucketLogging | |
| 🔵 | PutBucketLogging | | | 🔵 | PutBucketLogging | |
## Metrics ## Metrics
| | Method | Comments | | | Method | Comments |
|----|----------------------------------|----------| |-----|----------------------------------|----------|
| 🔵 | DeleteBucketMetricsConfiguration | | | 🔵 | DeleteBucketMetricsConfiguration | |
| 🔵 | GetBucketMetricsConfiguration | | | 🔵 | GetBucketMetricsConfiguration | |
| 🔵 | ListBucketMetricsConfigurations | | | 🔵 | ListBucketMetricsConfigurations | |
@ -190,7 +175,7 @@ See also `GetObject` and other method parameters.
## Notifications ## Notifications
| | Method | Comments | | | Method | Comments |
|----|------------------------------------|---------------| |-----|------------------------------------|---------------|
| 🔵 | GetBucketNotification | | | 🔵 | GetBucketNotification | |
| 🔵 | GetBucketNotificationConfiguration | | | 🔵 | GetBucketNotificationConfiguration | |
| 🔵 | ListenBucketNotification | non-standard? | | 🔵 | ListenBucketNotification | non-standard? |
@ -200,36 +185,70 @@ See also `GetObject` and other method parameters.
## Ownership controls ## Ownership controls
| | Method | Comments | | | Method | Comments |
|----|-------------------------------|----------| |-----|-------------------------------|----------|
| 🔵 | DeleteBucketOwnershipControls | | | 🔵 | DeleteBucketOwnershipControls | |
| 🔵 | GetBucketOwnershipControls | | | 🔵 | GetBucketOwnershipControls | |
| 🔵 | PutBucketOwnershipControls | | | 🔵 | PutBucketOwnershipControls | |
## Policy and replication ## Policy and replication
Bucket policy has the following limitations:
* Supports only AWS principals in the format `arn:aws:iam::<namespace>:user/<user>` or the wildcard `*`.
* No complex conditions (only conditions for groups are supported for now)
Simple valid policy example:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Principal": {
"AWS": [
"arn:aws:iam::111122223333:role/JohnDoe"
]
},
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
]
}
]
}
```
Bucket policy status is determined using the following scheme:
* If the policy has a statement whose principal is the wildcard (`*`), the policy is considered public
| | Method | Comments | | | Method | Comments |
|----|-------------------------|-----------------------------| |-----|-------------------------|---------------------------------------------------|
| 🔵 | DeleteBucketPolicy | | | 🟢 | DeleteBucketPolicy | See Policy limitations |
| 🔵 | DeleteBucketReplication | | | 🔵 | DeleteBucketReplication | |
| 🔵 | DeletePublicAccessBlock | | | 🔵 | DeletePublicAccessBlock | |
| 🟡 | GetBucketPolicy | See ACL limitations | | 🟢 | GetBucketPolicy | See Policy limitations |
| 🔵 | GetBucketPolicyStatus | | | 🟢 | GetBucketPolicyStatus | See rule determining status in Policy limitations |
| 🔵 | GetBucketReplication | | | 🔵 | GetBucketReplication | |
| 🟢 | PostPolicyBucket | Upload file using POST form | | 🟢 | PostPolicyBucket | Upload file using POST form |
| 🟡 | PutBucketPolicy | See ACL limitations | | 🟡 | PutBucketPolicy | See Policy limitations |
| 🔵 | PutBucketReplication | | | 🔵 | PutBucketReplication | |
## Request payment ## Request payment
| | Method | Comments | | | Method | Comments |
|----|-------------------------|----------| |-----|-------------------------|----------|
| 🔴 | GetBucketRequestPayment | | | 🔴 | GetBucketRequestPayment | |
| 🔴 | PutBucketRequestPayment | | | 🔴 | PutBucketRequestPayment | |
## Tagging ## Tagging
| | Method | Comments | | | Method | Comments |
|----|---------------------|----------| |-----|---------------------|----------|
| 🟢 | DeleteBucketTagging | | | 🟢 | DeleteBucketTagging | |
| 🟢 | GetBucketTagging | | | 🟢 | GetBucketTagging | |
| 🟢 | PutBucketTagging | | | 🟢 | PutBucketTagging | |
@ -237,7 +256,7 @@ See also `GetObject` and other method parameters.
## Tiering ## Tiering
| | Method | Comments | | | Method | Comments |
|----|---------------------------------------------|----------| |-----|---------------------------------------------|----------|
| 🔵 | DeleteBucketIntelligentTieringConfiguration | | | 🔵 | DeleteBucketIntelligentTieringConfiguration | |
| 🔵 | GetBucketIntelligentTieringConfiguration | | | 🔵 | GetBucketIntelligentTieringConfiguration | |
| 🔵 | ListBucketIntelligentTieringConfigurations | | | 🔵 | ListBucketIntelligentTieringConfigurations | |
@ -246,14 +265,14 @@ See also `GetObject` and other method parameters.
## Versioning ## Versioning
| | Method | Comments | | | Method | Comments |
|----|---------------------|----------| |-----|---------------------|----------|
| 🟢 | GetBucketVersioning | | | 🟢 | GetBucketVersioning | |
| 🟢 | PutBucketVersioning | | | 🟢 | PutBucketVersioning | |
## Website ## Website
| | Method | Comments | | | Method | Comments |
|----|---------------------|----------| |-----|---------------------|----------|
| 🔵 | DeleteBucketWebsite | | | 🔵 | DeleteBucketWebsite | |
| 🔵 | GetBucketWebsite | | | 🔵 | GetBucketWebsite | |
| 🔵 | PutBucketWebsite | | | 🔵 | PutBucketWebsite | |

131
docs/bucket_policy.md Normal file
View file

@ -0,0 +1,131 @@
# Bucket policy
A bucket policy is a resource-based policy that you can use to grant access permissions to your S3 bucket and the
objects in it: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html.
## Conditions
In AWS there are a lot of condition
keys (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.htm),
but s3-gw currently supports only the following conditions in bucket policy:
> Note: all condition keys and values must be formatted as strings in the JSON policy (even if they are numbers).
| Condition key | Description |
|-------------------------------|---------------------------------------------------------------------------|
| [s3:max-keys](#s3-max-keys) | Filters access by maximum number of keys returned in a ListBucket request |
| [s3:delimiter](#s3-delimiter) | Filters access by delimiter parameter |
| [s3:prefix](#s3-prefix) | Filters access by key name prefix |
| [s3:VersionId](#s3-versionid) | Filters access by a specific object version |
Each key can be used only with a specific set of
operators (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html),
depending on the key type.
### s3 max-keys
**Key:** `s3:max-keys`
**Type:** `Numeric`
**Description:** Filters access by maximum number of keys returned in a ListBucket request
```json
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": "*",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::example_bucket",
"Condition": {
"NumericLessThanEquals": {
"s3:max-keys": "10"
}
}
}
}
```
### s3 delimiter
**Key:** `s3:delimiter`
**Type:** `String`
**Description:** Filters access by delimiter parameter
```json
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": "*",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::example_bucket",
"Condition": {
"StringEquals": {
"s3:delimiter": "/"
}
}
}
}
```
### s3 prefix
**Key:** `s3:prefix`
**Type:** `String`
**Description:** Filters access by key name prefix
```json
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::111122223333:user/JohnDoe"
]
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::example_bucket",
"Condition": {
"StringEquals": {
"s3:prefix": "home/JohnDoe"
}
}
}
}
```
### s3 VersionId
**Key:** `s3:VersionId`
**Type:** `String`
**Description:** Filters access by a specific object version
```json
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::111122223333:user/JohnDoe"
]
},
"Action": "s3:GetObjectVersion",
"Resource": "arn:aws:s3:::example_bucket/some-file.txt",
"Condition": {
"StringEquals": {
"s3:VersionId": "AT2L3qER7CHGk4TDooocEzkz2RyqTm4Zh2b1QLzAhLbH"
}
}
}
}
```

View file

@ -218,6 +218,10 @@ max_clients_deadline: 30s
allowed_access_key_id_prefixes: allowed_access_key_id_prefixes:
- Ck9BHsgKcnwfCTUSFm6pxhoNS4cBqgN2NQ8zVgPjqZDX - Ck9BHsgKcnwfCTUSFm6pxhoNS4cBqgN2NQ8zVgPjqZDX
- 3stjWenX15YwYzczMr88gy3CQr4NYFBQ8P7keGzH5QFn - 3stjWenX15YwYzczMr88gy3CQr4NYFBQ8P7keGzH5QFn
reconnect_interval: 1m
source_ip_header: "Source-Ip"
``` ```
| Parameter | Type | SIGHUP reload | Default value | Description | | Parameter | Type | SIGHUP reload | Default value | Description |
@ -233,6 +237,8 @@ allowed_access_key_id_prefixes:
| `max_clients_count` | `int` | no | `100` | Limits for processing of clients' requests. | | `max_clients_count` | `int` | no | `100` | Limits for processing of clients' requests. |
| `max_clients_deadline` | `duration` | no | `30s` | Deadline after which the gate sends error `RequestTimeout` to a client. | | `max_clients_deadline` | `duration` | no | `30s` | Deadline after which the gate sends error `RequestTimeout` to a client. |
| `allowed_access_key_id_prefixes` | `[]string` | no | | List of allowed `AccessKeyID` prefixes which S3 GW serve. If the parameter is omitted, all `AccessKeyID` will be accepted. | | `allowed_access_key_id_prefixes` | `[]string` | no | | List of allowed `AccessKeyID` prefixes which S3 GW serve. If the parameter is omitted, all `AccessKeyID` will be accepted. |
| `reconnect_interval` | `duration` | no | `1m` | Listeners reconnection interval. |
| `source_ip_header` | `string` | yes | | Custom header to retrieve Source IP. |
### `wallet` section ### `wallet` section
@ -418,6 +424,9 @@ cache:
morph_policy: morph_policy:
lifetime: 30s lifetime: 30s
size: 10000 size: 10000
frostfsid:
lifetime: 1m
size: 10000
``` ```
| Parameter | Type | Default value | Description | | Parameter | Type | Default value | Description |
@ -431,6 +440,7 @@ cache:
| `accessbox` | [Accessbox cache config](#accessbox-subsection) | `lifetime: 10m`<br>`size: 100` | Cache which stores access box with tokens by its address. | | `accessbox` | [Accessbox cache config](#accessbox-subsection) | `lifetime: 10m`<br>`size: 100` | Cache which stores access box with tokens by its address. |
| `accesscontrol` | [Cache config](#cache-subsection) | `lifetime: 1m`<br>`size: 100000` | Cache which stores owner to cache operation mapping. | | `accesscontrol` | [Cache config](#cache-subsection) | `lifetime: 1m`<br>`size: 100000` | Cache which stores owner to cache operation mapping. |
| `morph_policy` | [Cache config](#cache-subsection) | `lifetime: 1m`<br>`size: 10000` | Cache which stores list of policy chains. | | `morph_policy` | [Cache config](#cache-subsection) | `lifetime: 1m`<br>`size: 10000` | Cache which stores list of policy chains. |
| `frostfsid` | [Cache config](#cache-subsection) | `lifetime: 1m`<br>`size: 10000` | Cache which stores FrostfsID subject info. |
#### `cache` subsection #### `cache` subsection
@ -597,13 +607,15 @@ kludge:
use_default_xmlns: false use_default_xmlns: false
bypass_content_encoding_check_in_chunks: false bypass_content_encoding_check_in_chunks: false
default_namespaces: [ "", "root" ] default_namespaces: [ "", "root" ]
acl_enabled: false
``` ```
| Parameter | Type | SIGHUP reload | Default value | Description | | Parameter | Type | SIGHUP reload | Default value | Description |
|-------------------------------------------|------------|---------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |-------------------------------------------|------------|---------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `use_default_xmlns` | `bool` | yes | false | Enable using default xml namespace `http://s3.amazonaws.com/doc/2006-03-01/` when parse xml bodies. | | `use_default_xmlns` | `bool` | yes | `false` | Enable using default xml namespace `http://s3.amazonaws.com/doc/2006-03-01/` when parse xml bodies. |
| `bypass_content_encoding_check_in_chunks` | `bool` | yes | false | Use this flag to be able to use [chunked upload approach](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html) without having `aws-chunked` value in `Content-Encoding` header. | | `bypass_content_encoding_check_in_chunks` | `bool` | yes | `false` | Use this flag to be able to use [chunked upload approach](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html) without having `aws-chunked` value in `Content-Encoding` header. |
| `default_namespaces` | `[]string` | n/d | ["","root"] | Namespaces that should be handled as default. | | `default_namespaces` | `[]string` | yes | `["","root"]` | Namespaces that should be handled as default. |
| `acl_enabled` | `bool` | yes | `false` | Enable bucket/object ACL support for newly created buckets. |
# `runtime` section # `runtime` section
Contains runtime parameters. Contains runtime parameters.
@ -673,13 +685,11 @@ Policy contract configuration. To enable this functionality the `rpc_endpoint` p
```yaml ```yaml
policy: policy:
enabled: false
contract: policy.frostfs contract: policy.frostfs
``` ```
| Parameter | Type | SIGHUP reload | Default value | Description | | Parameter | Type | SIGHUP reload | Default value | Description |
|------------|----------|---------------|----------------|-------------------------------------------------------------------| |------------|----------|---------------|----------------|-------------------------------------------|
| `enabled` | `bool` | no | true | Enables using policies from Policy contract to check permissions. |
| `contract` | `string` | no | policy.frostfs | Policy contract hash (LE) or name in NNS. | | `contract` | `string` | no | policy.frostfs | Policy contract hash (LE) or name in NNS. |
# `proxy` section # `proxy` section

View file

@ -0,0 +1,44 @@
@startuml
package AccessBox {
map Tokens {
SecretKey => Private key
BearerToken => Encoded bearer token
SessionTokens => List of encoded session tokens
}
map Gate {
GateKey => Encoded public gate key
Encrypted tokens *--> Tokens
}
map ContainerPolicy {
LocationConstraint => Policy name
PlacementPolicy => Encoded placement policy
}
map Box {
SeedKey => Encoded public seed key
List of Gates *--> Gate
List of container policies *--> ContainerPolicy
}
map ObjectAttributes {
Timestamp => 1710418478
_~_SYSTEM_~_EXPIRATION_EPOCH => 10801
S3-CRDT-Versions-Add => 5ZNvs8WVwy1XTmSEkcVkydPKzCgtmR7U3zyLYTj3Snxf,9bLtL1EsUpuSiqmHnqFf6RuT6x5QMLMNBqx7vCcCcNhy
S3-Access-Box-CRDT-Name => 2XGRML5EW3LMHdf64W2DkBy1Nkuu4y4wGhUj44QjbXBi05ZNvs8WVwy1XTmSEkcVkydPKzCgtmR7U3zyLYTj3Snxf
FilePath => 1710418478_access.box
}
map FrostFSObject {
Header *-> ObjectAttributes
Payload *--> Box
}
}
@enduml

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 13 KiB

View file

@ -0,0 +1,29 @@
@startuml
User -> "S3-GW": AccessKey
"S3-GW" -> "FrostFS Node": Search objects
note right
Search by exact attribute matching:
**S3-Access-Box-CRDT-Name:** //AccessKey//
end note
"FrostFS Node" --> "S3-GW": AccessBox objects ids
"S3-GW" -> "FrostFS Node" : Head AccessBox objects
"FrostFS Node" --> "S3-GW": AccessBox object headers
"S3-GW" -> "S3-GW": Choose latest AccessBox
note left
Sort AccessBox headers by creation epoch
and then by ObjectID
Pick last
end note
"S3-GW" -> "FrostFS Node" : Get AccessBox object
"FrostFS Node" --> "S3-GW": AccessBox object
"S3-GW" -> "S3-GW": Decrypt and validate AccessBox
@enduml

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 8.3 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 133 KiB

View file

@ -0,0 +1,25 @@
@startuml
!include <c4/C4_Container.puml>
AddElementTag("smart-contract", $bgColor=#0abab5)
Person(user, "User", "User with or without credentials")
System_Boundary(c1, "FrostFS") {
Container(s3, "S3 Gateway", $descr="AWS S3 compatible gate")
Container(stor, "FrostFS Storage", $descr="Storage service")
}
System_Boundary(c3, "Blockchain") {
Interface "NeoGo"
Container(ffsid, "FrostFS ID", $tags="smart-contract", $descr="Stores namespaces and users")
Container(policy, "Policy", $tags="smart-contract", $descr="Stores APE rules")
}
Rel_R(user, s3, "Requests", "HTTP")
Rel_R(s3, stor, "Get data to validate request, store objects")
Rel_D(s3, NeoGo, "Get data to validate request")
Rel("NeoGo", ffsid, "Fetch users")
Rel("NeoGo", policy, "Fetch policies")
SHOW_LEGEND(true)
@enduml

View file

@ -0,0 +1,611 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" contentStyleType="text/css" height="660px" preserveAspectRatio="none" style="width:851px;height:660px;background:#FFFFFF;" version="1.1" viewBox="0 0 851 660" width="851px" zoomAndPan="magnify"><defs/><g><!--MD5=[84dda40acb3410cad7262261daba2aaf]
cluster c1--><g id="cluster_c1"><rect fill="none" height="152" rx="2.5" ry="2.5" style="stroke:#444444;stroke-width:1.0;stroke-dasharray:7.0,7.0;" width="587" x="258" y="7"/><text fill="#444444" font-family="sans-serif" font-size="16" font-weight="bold" lengthAdjust="spacing" textLength="71" x="516" y="23.8516">FrostFS</text><text fill="#444444" font-family="sans-serif" font-size="12" font-weight="bold" lengthAdjust="spacing" textLength="61" x="521" y="38.7637">[System]</text></g><!--MD5=[fb252dd5a834d4be8567d0df3f6bbec4]
cluster c3--><g id="cluster_c3"><rect fill="none" height="301" rx="2.5" ry="2.5" style="stroke:#444444;stroke-width:1.0;stroke-dasharray:7.0,7.0;" width="405" x="259" y="243.5"/><text fill="#444444" font-family="sans-serif" font-size="16" font-weight="bold" lengthAdjust="spacing" textLength="95" x="414" y="260.3516">Blockchain</text><text fill="#444444" font-family="sans-serif" font-size="12" font-weight="bold" lengthAdjust="spacing" textLength="61" x="431" y="275.2637">[System]</text></g><!--MD5=[b165ca7cce796f881c879adda4a6bef9]
entity s3--><g id="elem_s3"><rect fill="#438DD5" height="85.1875" rx="2.5" ry="2.5" style="stroke:#3C7FC0;stroke-width:0.5;" width="199" x="274.5" y="58"/><text fill="#FFFFFF" font-family="sans-serif" font-size="16" font-weight="bold" lengthAdjust="spacing" textLength="108" x="320" y="82.8516">S3 Gateway</text><text fill="#FFFFFF" font-family="sans-serif" font-size="12" font-style="italic" lengthAdjust="spacing" textLength="10" x="369" y="97.7637">[]</text><text fill="#FFFFFF" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="4" x="372" y="113.5889"> </text><text fill="#FFFFFF" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="175" x="288.5" y="129.8857">AWS S3 compatible gate</text></g><!--MD5=[b631bc93683c8d3c6bcd86869bd62c2d]
entity stor--><g id="elem_stor"><rect fill="#438DD5" height="85.1875" rx="2.5" ry="2.5" style="stroke:#3C7FC0;stroke-width:0.5;" width="169" x="660.5" y="58"/><text fill="#FFFFFF" font-family="sans-serif" font-size="16" font-weight="bold" lengthAdjust="spacing" textLength="149" x="670.5" y="82.8516">FrostFS Storage</text><text fill="#FFFFFF" font-family="sans-serif" font-size="12" font-style="italic" lengthAdjust="spacing" textLength="10" x="740" y="97.7637">[]</text><text fill="#FFFFFF" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="4" x="743" y="113.5889"> </text><text fill="#FFFFFF" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="111" x="691.5" y="129.8857">Storage service</text></g><!--MD5=[d75780de534459f9083ff96c63e26824]
entity NeoGo--><g id="elem_NeoGo"><ellipse cx="374" cy="323.5" fill="#F1F1F1" rx="8" ry="8" style="stroke:#181818;stroke-width:0.5;"/><text fill="#000000" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="48" x="350" y="353.4951">NeoGo</text></g><!--MD5=[a1c7fbed12783ec305c3357d72c64f9e]
entity ffsid--><g id="elem_ffsid"><rect fill="#0ABAB5" height="101.4844" rx="2.5" ry="2.5" style="stroke:#3C7FC0;stroke-width:0.5;" width="198" x="275" y="427.5"/><text fill="#FFFFFF" font-family="sans-serif" font-size="16" font-weight="bold" lengthAdjust="spacing" textLength="96" x="326" y="452.3516">FrostFS ID</text><text fill="#FFFFFF" font-family="sans-serif" font-size="12" font-style="italic" lengthAdjust="spacing" textLength="10" x="369" y="467.2637">[]</text><text fill="#FFFFFF" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="4" x="372" y="483.0889"> </text><text fill="#FFFFFF" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="170" x="289" y="499.3857">Stores namespaces and</text><text fill="#FFFFFF" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="38" x="355" y="515.6826">users</text></g><!--MD5=[4361443624774238dacd0e01c3165ecf]
entity policy--><g id="elem_policy"><rect fill="#0ABAB5" height="85.1875" rx="2.5" ry="2.5" style="stroke:#3C7FC0;stroke-width:0.5;" width="139" x="508.5" y="435.5"/><text fill="#FFFFFF" font-family="sans-serif" font-size="16" font-weight="bold" lengthAdjust="spacing" textLength="52" x="552" y="460.3516">Policy</text><text fill="#FFFFFF" font-family="sans-serif" font-size="12" font-style="italic" lengthAdjust="spacing" textLength="10" x="573" y="475.2637">[]</text><text fill="#FFFFFF" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="4" x="576" y="491.0889"> </text><text fill="#FFFFFF" font-family="sans-serif" font-size="14" lengthAdjust="spacing" textLength="115" x="522.5" y="507.3857">Stores APE rules</text></g><!--MD5=[8fc3522a43f8c7199df5e09e5bb0188e]
[Rendered SVG omitted: a C4 container diagram in which the User sends HTTP requests to the S3 Gateway, the gateway stores objects in FrostFS Storage, and a NeoGo node is queried to fetch users from the FrostFS ID contract and APE rules from the Policy contract. The PlantUML source embedded in the image follows.]
@startuml
!include <c4/C4_Container.puml>
AddElementTag("smart-contract", $bgColor=#0abab5)
Person(user, "User", "User with or without credentials")
System_Boundary(c1, "FrostFS") {
Container(s3, "S3 Gateway", $descr="AWS S3 compatible gate")
Container(stor, "FrostFS Storage", $descr="Storage service")
}
System_Boundary(c3, "Blockchain") {
Interface "NeoGo"
Container(ffsid, "FrostFS ID", $tags="smart-contract", $descr="Stores namespaces and users")
Container(policy, "Policy", $tags="smart-contract", $descr="Stores APE rules")
}
Rel_R(user, s3, "Requests", "HTTP")
Rel_R(s3, stor, "Get data to validate request, store objects")
Rel_D(s3, NeoGo, "Get data to validate request")
Rel("NeoGo", ffsid, "Fetch users")
Rel("NeoGo", policy, "Fetch policies")
SHOW_LEGEND(true)
@enduml
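Taken together, the diagram says the gateway talks to FrostFS Storage directly, while the two smart contracts (FrostFS ID for users and namespaces, Policy for APE rules) are reached only through a NeoGo node. A minimal Go sketch of that wiring could look like the following; every interface, type, and method name in it is invented for illustration and is not the gateway's real API.

// Hypothetical wiring that mirrors the containers and arrows in the diagram
// above; none of these types exist in frostfs-s3-gw under these names.
package main

import (
	"context"
	"fmt"
)

// FrostFSID stands in for the "Fetch users" relation (FrostFS ID contract behind NeoGo).
type FrostFSID interface {
	UserNamespace(ctx context.Context, publicKey string) (string, error)
}

// PolicyStore stands in for the "Fetch policies" relation (Policy contract behind NeoGo).
type PolicyStore interface {
	ChainsFor(ctx context.Context, namespace string) ([]string, error)
}

// ObjectStorage stands in for the FrostFS Storage container.
type ObjectStorage interface {
	Put(ctx context.Context, bucket string, payload []byte) error
}

// Gateway composes the three dependencies the same way the arrows do.
type Gateway struct {
	id      FrostFSID
	policy  PolicyStore
	storage ObjectStorage
}

// PutObject resolves the caller, loads the applicable rule chains and only then
// writes to storage. The "deny when no chains" rule is a deliberate
// oversimplification of real APE evaluation.
func (g *Gateway) PutObject(ctx context.Context, publicKey, bucket string, payload []byte) error {
	ns, err := g.id.UserNamespace(ctx, publicKey)
	if err != nil {
		return fmt.Errorf("resolve user: %w", err)
	}
	chains, err := g.policy.ChainsFor(ctx, ns)
	if err != nil {
		return fmt.Errorf("fetch policies: %w", err)
	}
	if len(chains) == 0 {
		return fmt.Errorf("access denied for namespace %q", ns)
	}
	return g.storage.Put(ctx, bucket, payload)
}

func main() {
	fmt.Println("wiring sketch; see Gateway.PutObject")
}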

After

Width:  |  Height:  |  Size: 25 KiB

View file

@ -0,0 +1,60 @@
@startuml
participant User
participant "S3-GW"
collections "FrostFS Storage"
User -> "S3-GW": Request
group signed request
"S3-GW" -> "FrostFS Storage": Find Access Box
"FrostFS Storage" -> "FrostFS Storage": Check request
alt #pink Check failure
"FrostFS Storage" -->> "S3-GW": Access Denied
"S3-GW" -->> User: Access Denied
end
"FrostFS Storage" -->> "S3-GW": Access Box
"S3-GW" -> "S3-GW": Check sign
alt #pink Check failure
"S3-GW" -->> User: Access Denied
end
"S3-GW" -> "frostfsid contract": Find user
"frostfsid contract" -->> "S3-GW": User info
"S3-GW" -> "S3-GW": Check user info
alt #pink Check failure
"S3-GW" -->> User: Access Denied
end
end
"S3-GW" -> "policy contract": Get policies
"policy contract" -->> "S3-GW": Policies
"S3-GW" -> "S3-GW": Check policy
alt #pink Check failure
"S3-GW" -->> User: Access Denied
end
"S3-GW" -> "FrostFS Storage": User Request
"FrostFS Storage" -> "FrostFS Storage": Check request
alt #pink Check failure
"FrostFS Storage" -->> "S3-GW": Access Denied
"S3-GW" -->> User: Access Denied
end
"FrostFS Storage" -->> "S3-GW": Response
"S3-GW" -->> User: Response
box "Neo Go"
participant "frostfsid contract"
participant "policy contract"
end box
@enduml
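Read top to bottom, the sequence is a chain of checks that short-circuits to Access Denied: find the access box in FrostFS Storage, verify the request signature, resolve the user through the frostfsid contract, evaluate the rules fetched from the policy contract, and only then forward the request to storage. A compact Go sketch of that control flow follows; every helper in it is a stub named purely for illustration, not the gateway's actual handler.

// Control-flow sketch of the checks in the sequence diagram above.
// Only the ordering is meaningful; every check is stubbed out.
package main

import (
	"errors"
	"fmt"
)

var errAccessDenied = errors.New("access denied")

// request stands in for an incoming S3 call; only the field the sketch needs is present.
type request struct {
	signed bool
}

// authorize mirrors the diagram: any failed check returns Access Denied to the
// user, and only a fully validated request reaches FrostFS Storage.
func authorize(r *request) error {
	if r.signed { // the "signed request" group
		if !findAccessBox(r) { // S3-GW -> FrostFS Storage: Find Access Box
			return errAccessDenied
		}
		if !checkSignature(r) { // S3-GW: Check sign
			return errAccessDenied
		}
		if !checkUser(r) { // S3-GW -> frostfsid contract: Find user / Check user info
			return errAccessDenied
		}
	}
	if !checkPolicies(r) { // S3-GW -> policy contract: Get policies / Check policy
		return errAccessDenied
	}
	return forwardToStorage(r) // S3-GW -> FrostFS Storage: User Request
}

// Stubs: a real gateway would perform storage lookups, signature verification
// and contract calls here.
func findAccessBox(*request) bool     { return true }
func checkSignature(*request) bool    { return true }
func checkUser(*request) bool         { return true }
func checkPolicies(*request) bool     { return true }
func forwardToStorage(*request) error { return nil }

func main() {
	fmt.Println(authorize(&request{signed: true})) // <nil> when every check passes
}

Note that the policy check sits outside the signed-request group in both the diagram and the sketch, so anonymous requests are still evaluated against the policy contract before they reach storage.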

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 19 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 35 KiB

25
go.mod
View file

@ -3,11 +3,11 @@ module git.frostfs.info/TrueCloudLab/frostfs-s3-gw
go 1.20

require (
- git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20231121085847-241a9f1ad0a4
+ git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20240327095603-491a47e7fe24
- git.frostfs.info/TrueCloudLab/frostfs-contract v0.18.1-0.20231218084346-bce7ef18c83b
+ git.frostfs.info/TrueCloudLab/frostfs-contract v0.19.3-0.20240409115729-6eb492025bdd
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6
- git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20231107114540-ab75edd70939
+ git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20240402141549-3790142b10c7
- git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240206111236-8354a074c4df
+ git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240416071728-04a79f57ef1f
git.frostfs.info/TrueCloudLab/zapjournald v0.0.0-20240124114243-cb2e66427d02
github.com/aws/aws-sdk-go v1.44.6
github.com/bluele/gcache v0.0.2
@ -15,7 +15,7 @@ require (
github.com/google/uuid v1.3.1
github.com/minio/sio v0.3.0
github.com/nats-io/nats.go v1.13.1-0.20220121202836-972a071d373d
- github.com/nspcc-dev/neo-go v0.104.1-0.20231206061802-441eb8aa86be
+ github.com/nspcc-dev/neo-go v0.105.0
github.com/panjf2000/ants/v2 v2.5.0
github.com/prometheus/client_golang v1.15.1
github.com/prometheus/client_model v0.3.0
@ -28,10 +28,12 @@ require (
go.opentelemetry.io/otel v1.16.0
go.opentelemetry.io/otel/trace v1.16.0
go.uber.org/zap v1.26.0
- golang.org/x/crypto v0.14.0
+ golang.org/x/crypto v0.21.0
golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63
+ golang.org/x/net v0.23.0
+ golang.org/x/text v0.14.0
google.golang.org/grpc v1.59.0
- google.golang.org/protobuf v1.31.0
+ google.golang.org/protobuf v1.33.0
)

require (
@ -50,6 +52,7 @@ require (
github.com/go-logr/logr v1.2.4 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/golang/protobuf v1.5.3 // indirect
+ github.com/golang/snappy v0.0.1 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.3 // indirect
github.com/hashicorp/golang-lru v0.6.0 // indirect
@ -78,8 +81,10 @@ require (
github.com/spf13/cast v1.5.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/subosito/gotenv v1.4.2 // indirect
+ github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954 // indirect
github.com/twmb/murmur3 v1.1.8 // indirect
github.com/urfave/cli v1.22.5 // indirect
+ go.etcd.io/bbolt v1.3.8 // indirect
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.16.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.16.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.16.0 // indirect
@ -88,11 +93,9 @@ require (
go.opentelemetry.io/otel/sdk v1.16.0 // indirect
go.opentelemetry.io/proto/otlp v0.19.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
- golang.org/x/net v0.17.0 // indirect
golang.org/x/sync v0.3.0 // indirect
- golang.org/x/sys v0.13.0 // indirect
+ golang.org/x/sys v0.18.0 // indirect
- golang.org/x/term v0.13.0 // indirect
+ golang.org/x/term v0.18.0 // indirect
- golang.org/x/text v0.13.0 // indirect
google.golang.org/genproto v0.0.0-20231120223509-83a465c0220f // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20231106174013-bbf56f31fb17 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20231127180814-3a041ad873d4 // indirect

82
go.sum
View file

@ -36,20 +36,20 @@ cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RX
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0= cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo= cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20231121085847-241a9f1ad0a4 h1:wjLfZ3WCt7qNGsQv+Jl0TXnmtg0uVk/jToKPFTBc/jo= git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20240327095603-491a47e7fe24 h1:uIkl0mKWwDICUZTbNWZ38HLYDBI9rMgdAhYQWZ0C9iQ=
git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20231121085847-241a9f1ad0a4/go.mod h1:uY0AYmCznjZdghDnAk7THFIe1Vlg531IxUcus7ZfUJI= git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20240327095603-491a47e7fe24/go.mod h1:OBDSr+DqV1z4VDouoX3YMleNc4DPBVBWTG3WDT2PK1o=
git.frostfs.info/TrueCloudLab/frostfs-contract v0.18.1-0.20231218084346-bce7ef18c83b h1:zdbOxyqkxRyOLc7/2oNFu5tBwwg0Q6+0tJM3RkAxHlE= git.frostfs.info/TrueCloudLab/frostfs-contract v0.19.3-0.20240409115729-6eb492025bdd h1:fujTUMMn0wnpEKNDWLejFL916EPuaYD1MdZpk1ZokU8=
git.frostfs.info/TrueCloudLab/frostfs-contract v0.18.1-0.20231218084346-bce7ef18c83b/go.mod h1:YMFtNZy2MgeiSwt0t8lqk8dYBGzlbhmV1cbbstJJ6oY= git.frostfs.info/TrueCloudLab/frostfs-contract v0.19.3-0.20240409115729-6eb492025bdd/go.mod h1:F/fe1OoIDKr5Bz99q4sriuHDuf3aZefZy9ZsCqEtgxc=
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0 h1:FxqFDhQYYgpe41qsIHVOcdzSVCB8JNSfPG7Uk4r2oSk= git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0 h1:FxqFDhQYYgpe41qsIHVOcdzSVCB8JNSfPG7Uk4r2oSk=
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0/go.mod h1:RUIKZATQLJ+TaYQa60X2fTDwfuhMfm8Ar60bQ5fr+vU= git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0/go.mod h1:RUIKZATQLJ+TaYQa60X2fTDwfuhMfm8Ar60bQ5fr+vU=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6 h1:aGQ6QaAnTerQ5Dq5b2/f9DUQtSqPkZZ/bkMx/HKuLCo= git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6 h1:aGQ6QaAnTerQ5Dq5b2/f9DUQtSqPkZZ/bkMx/HKuLCo=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6/go.mod h1:W8Nn08/l6aQ7UlIbpF7FsQou7TVpcRD1ZT1KG4TrFhE= git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6/go.mod h1:W8Nn08/l6aQ7UlIbpF7FsQou7TVpcRD1ZT1KG4TrFhE=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20231107114540-ab75edd70939 h1:jZEepi9yWmqrWgLRQcHQu4YPJaudmd7d2AEhpmM3m4U= git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20240402141549-3790142b10c7 h1:sjvYXV0WJAF4iNF3l0uhcN8zhXmpY1gYI0WyJpeFe6s=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20231107114540-ab75edd70939/go.mod h1:t1akKcUH7iBrFHX8rSXScYMP17k2kYQXMbZooiL5Juw= git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20240402141549-3790142b10c7/go.mod h1:i0RKqiF4z3UOxLSNwhHw+cUz/JyYWuTRpnn9ere4Y3w=
git.frostfs.info/TrueCloudLab/hrw v1.2.1 h1:ccBRK21rFvY5R1WotI6LNoPlizk7qSvdfD8lNIRudVc= git.frostfs.info/TrueCloudLab/hrw v1.2.1 h1:ccBRK21rFvY5R1WotI6LNoPlizk7qSvdfD8lNIRudVc=
git.frostfs.info/TrueCloudLab/hrw v1.2.1/go.mod h1:C1Ygde2n843yTZEQ0FP69jYiuaYV0kriLvP4zm8JuvM= git.frostfs.info/TrueCloudLab/hrw v1.2.1/go.mod h1:C1Ygde2n843yTZEQ0FP69jYiuaYV0kriLvP4zm8JuvM=
git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240206111236-8354a074c4df h1:FLk850Ti+aj9vdJTUPvtS4KDIpISze9vTNKV15WIbME= git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240416071728-04a79f57ef1f h1:hP1Q/MJvRsHSBIWXn48C+hVsRHfPWWLhdOg6IxjaWBs=
git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240206111236-8354a074c4df/go.mod h1:YVL7yFaT0QNSpA0z+RHudLvrLwT+lsFYGyBSVc1ustI= git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240416071728-04a79f57ef1f/go.mod h1:H/AW85RtYxVTbcgwHW76DqXeKlsiCIOeNXHPqyDBrfQ=
git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0 h1:M2KR3iBj7WpY3hP10IevfIB9MURr4O9mwVfJ+SjT3HA= git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0 h1:M2KR3iBj7WpY3hP10IevfIB9MURr4O9mwVfJ+SjT3HA=
git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0/go.mod h1:okpbKfVYf/BpejtfFTfhZqFP+sZ8rsHrP8Rr/jYPNRc= git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0/go.mod h1:okpbKfVYf/BpejtfFTfhZqFP+sZ8rsHrP8Rr/jYPNRc=
git.frostfs.info/TrueCloudLab/tzhash v1.8.0 h1:UFMnUIk0Zh17m8rjGHJMqku2hCgaXDqjqZzS4gsb4UA= git.frostfs.info/TrueCloudLab/tzhash v1.8.0 h1:UFMnUIk0Zh17m8rjGHJMqku2hCgaXDqjqZzS4gsb4UA=
@ -66,6 +66,7 @@ github.com/aws/aws-sdk-go v1.44.6 h1:Y+uHxmZfhRTLX2X3khkdxCoTZAyGEX21aOUHe1U6geg
github.com/aws/aws-sdk-go v1.44.6/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= github.com/aws/aws-sdk-go v1.44.6/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bits-and-blooms/bitset v1.8.0 h1:FD+XqgOZDUxxZ8hzoBFuV9+cGWY9CslN6d5MS5JVb4c=
github.com/bluele/gcache v0.0.2 h1:WcbfdXICg7G/DGBh1PFfcirkWOQV+v077yF1pSy3DGw= github.com/bluele/gcache v0.0.2 h1:WcbfdXICg7G/DGBh1PFfcirkWOQV+v077yF1pSy3DGw=
github.com/bluele/gcache v0.0.2/go.mod h1:m15KV+ECjptwSPxKhOhQoAFQVtUFjTVkc3H8o0t/fp0= github.com/bluele/gcache v0.0.2/go.mod h1:m15KV+ECjptwSPxKhOhQoAFQVtUFjTVkc3H8o0t/fp0=
github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM= github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM=
@ -87,6 +88,8 @@ github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWH
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/consensys/bavard v0.1.13 h1:oLhMLOFGTLdlda/kma4VOJazblc7IM5y5QPd2A/YjhQ=
github.com/consensys/gnark-crypto v0.12.2-0.20231013160410-1f65e75b6dfb h1:f0BMgIjhZy4lSRHCXFbQst85f5agZAjtDMixQqBWNpc=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w= github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
@ -104,6 +107,8 @@ github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.m
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0= github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/frankban/quicktest v1.14.5 h1:dfYrrRyLtiqT9GyKXgdh+k4inNeTvmGbuSgZ3lx3GhA= github.com/frankban/quicktest v1.14.5 h1:dfYrrRyLtiqT9GyKXgdh+k4inNeTvmGbuSgZ3lx3GhA=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY= github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw= github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
@ -149,6 +154,7 @@ github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiu
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg= github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4= github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
@ -197,6 +203,8 @@ github.com/hashicorp/golang-lru/v2 v2.0.2 h1:Dwmkdr5Nc/oBiXgJS3CDHNhJtIHkuZ3DZF5
github.com/hashicorp/golang-lru/v2 v2.0.2/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= github.com/hashicorp/golang-lru/v2 v2.0.2/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4= github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/holiman/uint256 v1.2.0 h1:gpSYcPLWGv4sG43I2mVLiDZCNDh/EpGjSk8tmtxitHM=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
@ -228,6 +236,7 @@ github.com/minio/sio v0.3.0 h1:syEFBewzOMOYVzSTFpp1MqpSZk8rUNbz8VIIc+PNzus=
github.com/minio/sio v0.3.0/go.mod h1:8b0yPp2avGThviy/+OCJBI6OMpvxoUuiLvE6F1lebhw= github.com/minio/sio v0.3.0/go.mod h1:8b0yPp2avGThviy/+OCJBI6OMpvxoUuiLvE6F1lebhw=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mmcloughlin/addchain v0.4.0 h1:SobOdjm2xLj1KkXN5/n0xTIWyZA2+s99UCY1iPfkHRY=
github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o= github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc= github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/nats-io/jwt/v2 v2.2.1-0.20220113022732-58e87895b296 h1:vU9tpM3apjYlLLeY23zRWJ9Zktr5jp+mloR942LEOpY= github.com/nats-io/jwt/v2 v2.2.1-0.20220113022732-58e87895b296 h1:vU9tpM3apjYlLLeY23zRWJ9Zktr5jp+mloR942LEOpY=
@ -241,12 +250,21 @@ github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c= github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/nspcc-dev/go-ordered-json v0.0.0-20231123160306-3374ff1e7a3c h1:OOQeE613BH93ICPq3eke5N78gWNeMjcBWkmD2NKyXVg= github.com/nspcc-dev/go-ordered-json v0.0.0-20231123160306-3374ff1e7a3c h1:OOQeE613BH93ICPq3eke5N78gWNeMjcBWkmD2NKyXVg=
github.com/nspcc-dev/go-ordered-json v0.0.0-20231123160306-3374ff1e7a3c/go.mod h1:79bEUDEviBHJMFV6Iq6in57FEOCMcRhfQnfaf0ETA5U= github.com/nspcc-dev/go-ordered-json v0.0.0-20231123160306-3374ff1e7a3c/go.mod h1:79bEUDEviBHJMFV6Iq6in57FEOCMcRhfQnfaf0ETA5U=
github.com/nspcc-dev/neo-go v0.104.1-0.20231206061802-441eb8aa86be h1:nZ2Hi5JSXdq3JXDi/8lms1UXQDAA5LVGpOpcrf2bRVA= github.com/nspcc-dev/neo-go v0.105.0 h1:vtNZYFEFySK8zRDhLzQYha849VzWrcKezlnq/oNQg/w=
github.com/nspcc-dev/neo-go v0.104.1-0.20231206061802-441eb8aa86be/go.mod h1:dsu8+VDMgGF7QNtPFBU4seE3pxSq8fYCuk3A6he4+ZQ= github.com/nspcc-dev/neo-go v0.105.0/go.mod h1:6pchIHg5okeZO955RxpTh5q0sUI0vtpgPM6Q+no1rlI=
github.com/nspcc-dev/neo-go/pkg/interop v0.0.0-20231127165613-b35f351f0ba0 h1:N+dMIBmteXjJpkH6UZ7HmNftuFxkqszfGLbhsEctnv0= github.com/nspcc-dev/neo-go/pkg/interop v0.0.0-20231127165613-b35f351f0ba0 h1:N+dMIBmteXjJpkH6UZ7HmNftuFxkqszfGLbhsEctnv0=
github.com/nspcc-dev/neo-go/pkg/interop v0.0.0-20231127165613-b35f351f0ba0/go.mod h1:J/Mk6+nKeKSW4wygkZQFLQ6SkLOSGX5Ga0RuuuktEag= github.com/nspcc-dev/neo-go/pkg/interop v0.0.0-20231127165613-b35f351f0ba0/go.mod h1:J/Mk6+nKeKSW4wygkZQFLQ6SkLOSGX5Ga0RuuuktEag=
github.com/nspcc-dev/rfc6979 v0.2.0 h1:3e1WNxrN60/6N0DW7+UYisLeZJyfqZTNOjeV/toYvOE= github.com/nspcc-dev/rfc6979 v0.2.0 h1:3e1WNxrN60/6N0DW7+UYisLeZJyfqZTNOjeV/toYvOE=
github.com/nspcc-dev/rfc6979 v0.2.0/go.mod h1:exhIh1PdpDC5vQmyEsGvc4YDM/lyQp/452QxGq/UEso= github.com/nspcc-dev/rfc6979 v0.2.0/go.mod h1:exhIh1PdpDC5vQmyEsGvc4YDM/lyQp/452QxGq/UEso=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.0 h1:2mOpI4JVVPBN+WQRa0WKH2eXR+Ey+uK4n7Zj0aYpIQA=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1 h1:o0+MgICZLuZ7xjH7Vx6zS/zcu93/BEp1VwkIW1mEXCE=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/panjf2000/ants/v2 v2.5.0 h1:1rWGWSnxCsQBga+nQbA4/iY6VMeNoOIAM0ZWh9u3q2Q= github.com/panjf2000/ants/v2 v2.5.0 h1:1rWGWSnxCsQBga+nQbA4/iY6VMeNoOIAM0ZWh9u3q2Q=
github.com/panjf2000/ants/v2 v2.5.0/go.mod h1:cU93usDlihJZ5CfRGNDYsiBYvoilLvBF5Qp/BT2GNRE= github.com/panjf2000/ants/v2 v2.5.0/go.mod h1:cU93usDlihJZ5CfRGNDYsiBYvoilLvBF5Qp/BT2GNRE=
github.com/pelletier/go-toml/v2 v2.0.6 h1:nrzqCb7j9cDFj2coyLNLaZuJTLjWjlaz6nvTvIwycIU= github.com/pelletier/go-toml/v2 v2.0.6 h1:nrzqCb7j9cDFj2coyLNLaZuJTLjWjlaz6nvTvIwycIU=
@ -301,6 +319,7 @@ github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXl
github.com/subosito/gotenv v1.4.2 h1:X1TuBLAMDFbaTAChgCBLu3DU3UPyELpnF2jjJ2cz/S8= github.com/subosito/gotenv v1.4.2 h1:X1TuBLAMDFbaTAChgCBLu3DU3UPyELpnF2jjJ2cz/S8=
github.com/subosito/gotenv v1.4.2/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0= github.com/subosito/gotenv v1.4.2/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0=
github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954 h1:xQdMZ1WLrgkkvOZ/LDQxjVxMLdby7osSh4ZEVa5sIjs= github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954 h1:xQdMZ1WLrgkkvOZ/LDQxjVxMLdby7osSh4ZEVa5sIjs=
github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954/go.mod h1:u2MKkTVTVJWe5D1rCvame8WqhBd88EuIwODJZ1VHCPM=
github.com/twmb/murmur3 v1.1.8 h1:8Yt9taO/WN3l08xErzjeschgZU2QSrwm1kclYq+0aRg= github.com/twmb/murmur3 v1.1.8 h1:8Yt9taO/WN3l08xErzjeschgZU2QSrwm1kclYq+0aRg=
github.com/twmb/murmur3 v1.1.8/go.mod h1:Qq/R7NUyOfr65zD+6Q5IHKsJLwP7exErjN6lyyq3OSQ= github.com/twmb/murmur3 v1.1.8/go.mod h1:Qq/R7NUyOfr65zD+6Q5IHKsJLwP7exErjN6lyyq3OSQ=
github.com/urfave/cli v1.22.5 h1:lNq9sAHXK2qfdI8W+GRItjCEkI+2oR4d+MEHy1CKXoU= github.com/urfave/cli v1.22.5 h1:lNq9sAHXK2qfdI8W+GRItjCEkI+2oR4d+MEHy1CKXoU=
@ -314,6 +333,7 @@ github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.8 h1:xs88BrvEv273UsB79e0hcVrlUWmS0a8upikMFhSyAtA= go.etcd.io/bbolt v1.3.8 h1:xs88BrvEv273UsB79e0hcVrlUWmS0a8upikMFhSyAtA=
go.etcd.io/bbolt v1.3.8/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@ -353,8 +373,8 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20210314154223-e6e6c4f2bb5b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= golang.org/x/crypto v0.0.0-20210314154223-e6e6c4f2bb5b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc= golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@ -390,8 +410,10 @@ golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@ -412,9 +434,11 @@ golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200813134508-3edf25e44fcc/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
@ -423,8 +447,8 @@ golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM= golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@ -449,6 +473,7 @@ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E= golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -457,7 +482,10 @@ golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -471,8 +499,10 @@ golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200814200057-3d37ad5750ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -487,12 +517,12 @@ golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
-golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4=
+golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
-golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek=
-golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
+golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8=
+golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
 golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -501,8 +531,8 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
-golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
-golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
+golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
+golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
 golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -554,10 +584,12 @@ golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4f
 golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
 golang.org/x/tools v0.0.0-20210108195828-e2f9c7f1fc8e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
 golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
+golang.org/x/tools v0.12.1-0.20230815132531-74c255bcf846 h1:Vve/L0v7CXXuxUmaMGIEK/dEeq7uiqb5qBgQrZzIE7E=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
 google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
 google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
 google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -663,17 +695,22 @@ google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlba
 google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
 google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
 google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
-google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
-google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
+google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
 gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
 gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
 gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
 gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
@@ -689,3 +726,4 @@ honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9
 rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
 rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
 rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
+rsc.io/tmplfunc v0.0.3 h1:53XFQh69AfOa8Tw0Jm7t+GV7KZhOi6jzsCzTtKbMvzU=


@@ -4,7 +4,6 @@ import (
     "bytes"
     "context"
     "fmt"
-    "io"
     "strconv"
     "time"
@@ -15,9 +14,8 @@ import (
     "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/crdt"
     "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
     cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
+    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
     oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
-    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
-    "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
 )

 const (
@@ -26,17 +24,17 @@ const (
 // AuthmateFrostFS is a mediator which implements authmate.FrostFS through pool.Pool.
 type AuthmateFrostFS struct {
-    frostFS *FrostFS
+    frostFS layer.FrostFS
 }

 // NewAuthmateFrostFS creates new AuthmateFrostFS using provided pool.Pool.
-func NewAuthmateFrostFS(p *pool.Pool, key *keys.PrivateKey) *AuthmateFrostFS {
-    return &AuthmateFrostFS{frostFS: NewFrostFS(p, key)}
+func NewAuthmateFrostFS(frostFS layer.FrostFS) *AuthmateFrostFS {
+    return &AuthmateFrostFS{frostFS: frostFS}
 }

 // ContainerExists implements authmate.FrostFS interface method.
 func (x *AuthmateFrostFS) ContainerExists(ctx context.Context, idCnr cid.ID) error {
-    _, err := x.frostFS.Container(ctx, idCnr)
+    _, err := x.frostFS.Container(ctx, layer.PrmContainer{ContainerID: idCnr})
     if err != nil {
         return fmt.Errorf("get container via connection pool: %w", err)
     }
@@ -69,8 +67,8 @@ func (x *AuthmateFrostFS) CreateContainer(ctx context.Context, prm authmate.PrmC
     return res.ContainerID, nil
 }

-// GetCredsPayload implements authmate.FrostFS interface method.
-func (x *AuthmateFrostFS) GetCredsPayload(ctx context.Context, addr oid.Address) ([]byte, error) {
+// GetCredsObject implements authmate.FrostFS interface method.
+func (x *AuthmateFrostFS) GetCredsObject(ctx context.Context, addr oid.Address) (*object.Object, error) {
     versions, err := x.getCredVersions(ctx, addr)
     if err != nil {
         return nil, err
@@ -85,14 +83,13 @@ func (x *AuthmateFrostFS) GetCredsPayload(ctx context.Context, addr oid.Address)
         Container:   addr.Container(),
         Object:      credObjID,
         WithPayload: true,
+        WithHeader:  true,
     })
     if err != nil {
         return nil, err
     }

-    defer res.Payload.Close()
-    return io.ReadAll(res.Payload)
+    return res.Head, err
 }

 // CreateObject implements authmate.FrostFS interface method.


@@ -0,0 +1,70 @@
package frostfs

import (
    "context"
    "testing"

    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/authmate"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/tokens"
    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
    oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
    "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
    "github.com/stretchr/testify/require"
)

func TestGetCredsObject(t *testing.T) {
    ctx, bktName, payload, newPayload := context.Background(), "bucket", []byte("payload"), []byte("new-payload")

    key, err := keys.NewPrivateKey()
    require.NoError(t, err)

    var userID user.ID
    userID.SetScriptHash(key.PublicKey().GetScriptHash())

    var token bearer.Token
    err = token.Sign(key.PrivateKey)
    require.NoError(t, err)

    ctx = middleware.SetBox(ctx, &middleware.Box{AccessBox: &accessbox.Box{
        Gate: &accessbox.GateData{
            BearerToken: &token,
        },
    }})

    frostfs := NewAuthmateFrostFS(layer.NewTestFrostFS(key))

    cid, err := frostfs.CreateContainer(ctx, authmate.PrmContainerCreate{
        FriendlyName: bktName,
        Owner:        userID,
    })
    require.NoError(t, err)

    objID, err := frostfs.CreateObject(ctx, tokens.PrmObjectCreate{
        Container: cid,
        Payload:   payload,
    })
    require.NoError(t, err)

    var addr oid.Address
    addr.SetContainer(cid)
    addr.SetObject(objID)

    obj, err := frostfs.GetCredsObject(ctx, addr)
    require.NoError(t, err)
    require.Equal(t, payload, obj.Payload())

    _, err = frostfs.CreateObject(ctx, tokens.PrmObjectCreate{
        Container:     cid,
        Payload:       newPayload,
        NewVersionFor: &objID,
    })
    require.NoError(t, err)

    obj, err = frostfs.GetCredsObject(ctx, addr)
    require.NoError(t, err)
    require.Equal(t, newPayload, obj.Payload())
}


@@ -14,7 +14,6 @@ import (
     errorsFrost "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/errors"
     "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
     "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
-    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
     cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
     "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
     "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
@@ -90,8 +89,11 @@ func (x *FrostFS) TimeToEpoch(ctx context.Context, now, futureTime time.Time) (u
 }

 // Container implements frostfs.FrostFS interface method.
-func (x *FrostFS) Container(ctx context.Context, idCnr cid.ID) (*container.Container, error) {
-    prm := pool.PrmContainerGet{ContainerID: idCnr}
+func (x *FrostFS) Container(ctx context.Context, layerPrm layer.PrmContainer) (*container.Container, error) {
+    prm := pool.PrmContainerGet{
+        ContainerID: layerPrm.ContainerID,
+        Session:     layerPrm.SessionToken,
+    }

     res, err := x.pool.GetContainer(ctx, prm)
     if err != nil {
@@ -101,16 +103,8 @@ func (x *FrostFS) Container(ctx context.Context, idCnr cid.ID) (*container.Conta
     return &res, nil
 }

-var basicACLZero acl.Basic
-
 // CreateContainer implements frostfs.FrostFS interface method.
-//
-// If prm.BasicACL is zero, 'eacl-public-read-write' is used.
 func (x *FrostFS) CreateContainer(ctx context.Context, prm layer.PrmContainerCreate) (*layer.ContainerCreateResult, error) {
-    if prm.BasicACL == basicACLZero {
-        prm.BasicACL = acl.PublicRWExtended
-    }
-
     var cnr container.Container
     cnr.Init()
     cnr.SetPlacementPolicy(prm.Policy)
@@ -158,9 +152,11 @@ func (x *FrostFS) CreateContainer(ctx context.Context, prm layer.PrmContainerCre
 }

 // UserContainers implements frostfs.FrostFS interface method.
-func (x *FrostFS) UserContainers(ctx context.Context, id user.ID) ([]cid.ID, error) {
-    var prm pool.PrmContainerList
-    prm.SetOwnerID(id)
+func (x *FrostFS) UserContainers(ctx context.Context, layerPrm layer.PrmUserContainers) ([]cid.ID, error) {
+    prm := pool.PrmContainerList{
+        OwnerID: layerPrm.UserID,
+        Session: layerPrm.SessionToken,
+    }

     r, err := x.pool.ListContainers(ctx, prm)
     return r, handleObjectError("list user containers via connection pool", err)
@@ -175,9 +171,11 @@ func (x *FrostFS) SetContainerEACL(ctx context.Context, table eacl.Table, sessio
 }

 // ContainerEACL implements frostfs.FrostFS interface method.
-func (x *FrostFS) ContainerEACL(ctx context.Context, id cid.ID) (*eacl.Table, error) {
-    var prm pool.PrmContainerEACL
-    prm.SetContainerID(id)
+func (x *FrostFS) ContainerEACL(ctx context.Context, layerPrm layer.PrmContainerEACL) (*eacl.Table, error) {
+    prm := pool.PrmContainerEACL{
+        ContainerID: layerPrm.ContainerID,
+        Session:     layerPrm.SessionToken,
+    }

     res, err := x.pool.GetEACL(ctx, prm)
     if err != nil {
@@ -233,10 +231,16 @@ func (x *FrostFS) CreateObject(ctx context.Context, prm layer.PrmObjectCreate) (
     obj := object.New()
     obj.SetContainerID(prm.Container)
-    obj.SetOwnerID(&x.owner)
+    obj.SetOwnerID(x.owner)
     obj.SetAttributes(attrs...)
     obj.SetPayloadSize(prm.PayloadSize)

+    if prm.BearerToken == nil && prm.PrivateKey != nil {
+        var owner user.ID
+        user.IDFromKey(&owner, prm.PrivateKey.PublicKey)
+        obj.SetOwnerID(owner)
+    }
+
     if len(prm.Locks) > 0 {
         lock := new(object.Lock)
         lock.WriteMembers(prm.Locks)
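For orientation, a minimal sketch of a call site after this change, where container lookups go through layer.PrmContainer instead of a bare cid.ID. The helper name is hypothetical, and it assumes the layer.FrostFS interface mirrors the concrete signature shown above; imports are the same packages used in this file.

func bucketContainer(ctx context.Context, fs layer.FrostFS, idCnr cid.ID) (*container.Container, error) {
    // PrmContainer can additionally carry a SessionToken, which the
    // implementation above forwards to pool.PrmContainerGet.Session.
    return fs.Container(ctx, layer.PrmContainer{ContainerID: idCnr})
}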


@@ -38,7 +38,7 @@ func TestErrorTimeoutChecking(t *testing.T) {
     t.Run("deadline exceeded", func(t *testing.T) {
         ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
         defer cancel()
-        time.Sleep(50 * time.Millisecond)
+        <-ctx.Done()
         require.True(t, errorsFrost.IsTimeoutError(ctx.Err()))
     })


@@ -0,0 +1,84 @@
package contract

import (
    "context"
    "fmt"

    "git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client"
    frostfsutil "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/util"
    "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
    "github.com/nspcc-dev/neo-go/pkg/rpcclient"
    "github.com/nspcc-dev/neo-go/pkg/util"
    "github.com/nspcc-dev/neo-go/pkg/wallet"
)

type FrostFSID struct {
    cli *client.Client
}

type Config struct {
    // RPCAddress is an endpoint to connect to neo rpc.
    RPCAddress string

    // Contract is hash of contract or its name in NNS.
    Contract string

    // ProxyContract is hash of proxy contract or its name in NNS to interact with frostfsid.
    ProxyContract string

    // Key is used to interact with frostfsid contract.
    // If this is nil than random key will be generated.
    Key *keys.PrivateKey
}

// New creates new FrostfsID contract wrapper that implements auth.FrostFSID interface.
func New(ctx context.Context, cfg Config) (*FrostFSID, error) {
    contractHash, err := frostfsutil.ResolveContractHash(cfg.Contract, cfg.RPCAddress)
    if err != nil {
        return nil, fmt.Errorf("resolve frostfs contract hash: %w", err)
    }

    key := cfg.Key
    if key == nil {
        if key, err = keys.NewPrivateKey(); err != nil {
            return nil, fmt.Errorf("generate anon private key for frostfsid: %w", err)
        }
    }

    rpcCli, err := rpcclient.New(ctx, cfg.RPCAddress, rpcclient.Options{})
    if err != nil {
        return nil, fmt.Errorf("init rpc client: %w", err)
    }

    var opt client.Options
    opt.ProxyContract, err = frostfsutil.ResolveContractHash(cfg.ProxyContract, cfg.RPCAddress)
    if err != nil {
        return nil, fmt.Errorf("resolve frostfs contract hash: %w", err)
    }

    cli, err := client.New(rpcCli, wallet.NewAccountFromPrivateKey(key), contractHash, opt)
    if err != nil {
        return nil, fmt.Errorf("init frostfsid client: %w", err)
    }

    return &FrostFSID{
        cli: cli,
    }, nil
}

func (f *FrostFSID) GetSubjectExtended(userHash util.Uint160) (*client.SubjectExtended, error) {
    return f.cli.GetSubjectExtended(userHash)
}

func (f *FrostFSID) GetSubjectKeyByName(namespace, name string) (*keys.PublicKey, error) {
    return f.cli.GetSubjectKeyByName(namespace, name)
}

func (f *FrostFSID) CreateSubject(namespace string, key *keys.PublicKey) (util.Uint256, uint32, error) {
    return f.cli.CreateSubject(namespace, key)
}

func (f *FrostFSID) Wait(tx util.Uint256, vub uint32, err error) error {
    _, err = f.cli.Wait(tx, vub, err)
    return err
}
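As a quick illustration of the Config described above, here is a minimal construction sketch. The RPC endpoint and NNS names are hypothetical placeholders, not values taken from this repository; the Config fields and the New signature are the ones shown in the file, and leaving Key nil makes New generate a throwaway key (imports elided).

func newFrostFSIDContract(ctx context.Context) (*contract.FrostFSID, error) {
    return contract.New(ctx, contract.Config{
        RPCAddress:    "http://morph-chain:30333", // hypothetical neo-go RPC endpoint
        Contract:      "frostfsid.frostfs",        // contract hash or its NNS name (placeholder)
        ProxyContract: "proxy.frostfs",            // proxy contract hash or its NNS name (placeholder)
        Key:           nil,                        // nil: New generates a random key
    })
}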


@@ -1,118 +1,128 @@
 package frostfsid

 import (
-    "context"
-    "fmt"
+    "encoding/hex"
+    "errors"
     "strconv"
     "strings"

     "git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client"
     "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
+    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/cache"
     "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/handler"
-    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/authmate"
-    frostfsutil "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/util"
+    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid/contract"
+    "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
     "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
-    "github.com/nspcc-dev/neo-go/pkg/rpcclient"
     "github.com/nspcc-dev/neo-go/pkg/util"
-    "github.com/nspcc-dev/neo-go/pkg/wallet"
+    "go.uber.org/zap"
 )

 type FrostFSID struct {
-    cli *client.Client
+    frostfsid *contract.FrostFSID
+    cache     *cache.FrostfsIDCache
+    log       *zap.Logger
 }

 type Config struct {
-    // RPCAddress is an endpoint to connect to neo rpc.
-    RPCAddress string
-
-    // Contract is hash of contract or its name in NNS.
-    Contract string
-
-    // ProxyContract is hash of proxy contract or its name in NNS to interact with frostfsid.
-    ProxyContract string
-
-    // Key is used to interact with frostfsid contract.
-    // If this is nil than random key will be generated.
-    Key *keys.PrivateKey
+    Cache     *cache.FrostfsIDCache
+    FrostFSID *contract.FrostFSID
+    Logger    *zap.Logger
 }

 var (
     _ api.FrostFSID = (*FrostFSID)(nil)
-    _ authmate.FrostFSID = (*FrostFSID)(nil)
     _ handler.FrostFSID = (*FrostFSID)(nil)
 )

-// New creates new FrostfsID contract wrapper that implements auth.FrostFSID interface.
-func New(ctx context.Context, cfg Config) (*FrostFSID, error) {
-    contractHash, err := frostfsutil.ResolveContractHash(cfg.Contract, cfg.RPCAddress)
-    if err != nil {
-        return nil, fmt.Errorf("resolve frostfs contract hash: %w", err)
-    }
-
-    key := cfg.Key
-    if key == nil {
-        if key, err = keys.NewPrivateKey(); err != nil {
-            return nil, fmt.Errorf("generate anon private key for frostfsid: %w", err)
-        }
-    }
-
-    rpcCli, err := rpcclient.New(ctx, cfg.RPCAddress, rpcclient.Options{})
-    if err != nil {
-        return nil, fmt.Errorf("init rpc client: %w", err)
-    }
-
-    var opt client.Options
-    opt.ProxyContract, err = frostfsutil.ResolveContractHash(cfg.ProxyContract, cfg.RPCAddress)
-    if err != nil {
-        return nil, fmt.Errorf("resolve frostfs contract hash: %w", err)
-    }
-
-    cli, err := client.New(rpcCli, wallet.NewAccountFromPrivateKey(key), contractHash, opt)
-    if err != nil {
-        return nil, fmt.Errorf("init frostfsid client: %w", err)
+// NewFrostFSID creates new FrostfsID wrapper.
+func NewFrostFSID(cfg Config) (*FrostFSID, error) {
+    switch {
+    case cfg.FrostFSID == nil:
+        return nil, errors.New("missing frostfsid client")
+    case cfg.Cache == nil:
+        return nil, errors.New("missing frostfsid cache")
+    case cfg.Logger == nil:
+        return nil, errors.New("missing frostfsid logger")
     }

     return &FrostFSID{
-        cli: cli,
+        frostfsid: cfg.FrostFSID,
+        cache:     cfg.Cache,
+        log:       cfg.Logger,
     }, nil
 }

 func (f *FrostFSID) ValidatePublicKey(key *keys.PublicKey) error {
-    _, err := f.cli.GetSubjectByKey(key)
+    _, err := f.getSubject(key.GetScriptHash())
     return err
 }

-func (f *FrostFSID) RegisterPublicKey(ns string, key *keys.PublicKey) error {
-    _, err := f.cli.Wait(f.cli.CreateSubject(ns, key))
-    if err != nil && !strings.Contains(err.Error(), "subject already exists") {
-        return err
+func (f *FrostFSID) GetUserGroupIDsAndClaims(userHash util.Uint160) ([]string, map[string]string, error) {
+    subj, err := f.getSubject(userHash)
+    if err != nil {
+        if strings.Contains(err.Error(), "not found") {
+            f.log.Debug(logs.UserGroupsListIsEmpty, zap.Error(err))
+            return nil, nil, nil
+        }
+        return nil, nil, err
     }

-    return nil
+    res := make([]string, len(subj.Groups))
+    for i, group := range subj.Groups {
+        res[i] = strconv.FormatInt(group.ID, 10)
+    }
+    return res, subj.KV, nil
+}
+
+func (f *FrostFSID) getSubject(addr util.Uint160) (*client.SubjectExtended, error) {
+    if subj := f.cache.GetSubject(addr); subj != nil {
+        return subj, nil
+    }
+
+    subj, err := f.frostfsid.GetSubjectExtended(addr)
+    if err != nil {
+        return nil, err
+    }
+
+    if err = f.cache.PutSubject(addr, subj); err != nil {
+        f.log.Warn(logs.CouldntCacheSubject, zap.Error(err))
+    }
+
+    return subj, nil
 }

 func (f *FrostFSID) GetUserAddress(namespace, name string) (string, error) {
-    key, err := f.cli.GetSubjectKeyByName(namespace, name)
+    userKey, err := f.getUserKey(namespace, name)
     if err != nil {
         return "", err
     }

-    return key.Address(), nil
+    return userKey.Address(), nil
 }

-func (f *FrostFSID) GetUserGroupIDs(userHash util.Uint160) ([]string, error) {
-    subjExt, err := f.cli.GetSubjectExtended(userHash)
-    if err != nil {
-        if strings.Contains(err.Error(), "not found") {
-            return nil, nil
-        }
-        return nil, err
-    }
-
-    res := make([]string, len(subjExt.Groups))
-    for i, group := range subjExt.Groups {
-        res[i] = strconv.FormatInt(group.ID, 10)
-    }
-
-    return res, nil
+func (f *FrostFSID) GetUserKey(namespace, name string) (string, error) {
+    userKey, err := f.getUserKey(namespace, name)
+    if err != nil {
+        return "", err
+    }
+
+    return hex.EncodeToString(userKey.Bytes()), nil
+}
+
+func (f *FrostFSID) getUserKey(namespace, name string) (*keys.PublicKey, error) {
+    if userKey := f.cache.GetUserKey(namespace, name); userKey != nil {
+        return userKey, nil
+    }
+
+    userKey, err := f.frostfsid.GetSubjectKeyByName(namespace, name)
+    if err != nil {
+        return nil, err
+    }
+
+    if err = f.cache.PutUserKey(namespace, name, userKey); err != nil {
+        f.log.Warn(logs.CouldntCacheUserKey, zap.Error(err))
+    }
+
+    return userKey, nil
 }
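A rough wiring sketch for the new cached wrapper above, assuming the contract client, cache, and logger are constructed elsewhere (this diff does not show the cache constructor, so it is taken here as a ready value; the helper name is hypothetical):

func newCachedFrostFSID(contractCli *contract.FrostFSID, fsidCache *cache.FrostfsIDCache, log *zap.Logger) (*frostfsid.FrostFSID, error) {
    // NewFrostFSID validates that all three dependencies are present and
    // returns a wrapper that serves subjects and user keys from the cache
    // before falling back to the frostfsid contract.
    return frostfsid.NewFrostFSID(frostfsid.Config{
        FrostFSID: contractCli,
        Cache:     fsidCache,
        Logger:    log,
    })
}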


@@ -2,9 +2,11 @@ package contract
 import (
     "context"
+    "errors"
     "fmt"
     "math/big"

+    "git.frostfs.info/TrueCloudLab/frostfs-contract/commonclient"
     policycontract "git.frostfs.info/TrueCloudLab/frostfs-contract/policy"
     policyclient "git.frostfs.info/TrueCloudLab/frostfs-contract/rpcclient/policy"
     "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/policy"
@@ -21,6 +23,7 @@ import (
 type Client struct {
     actor          *actor.Actor
     policyContract *policyclient.Contract
+    contractHash   util.Uint160
 }

 type Config struct {
@@ -38,6 +41,11 @@ type Config struct {
     Key *keys.PrivateKey
 }

+const (
+    batchSize = 100
+
+    iteratorChainsByPrefix = "iteratorChainsByPrefix"
+)
+
 var _ policy.Contract = (*Client)(nil)

 // New creates new Policy contract wrapper.
@@ -72,6 +80,7 @@ func New(ctx context.Context, cfg Config) (*Client, error) {
     return &Client{
         actor:          act,
         policyContract: policyclient.New(act, contractHash),
+        contractHash:   contractHash,
     }, nil
 }

@@ -110,7 +119,8 @@ func (c *Client) RemoveChain(kind policycontract.Kind, entity string, name []byt
 }

 func (c *Client) ListChains(kind policycontract.Kind, entity string, name []byte) ([][]byte, error) {
-    items, err := c.policyContract.ListChainsByPrefix(big.NewInt(int64(kind)), entity, name)
+    items, err := commonclient.ReadIteratorItems(c.actor, batchSize, c.contractHash, iteratorChainsByPrefix,
+        big.NewInt(int64(kind)), entity, name)
     if err != nil {
         return nil, err
     }
@@ -130,3 +140,87 @@ func (c *Client) Wait(tx util.Uint256, vub uint32, err error) error {
     _, err = c.actor.Wait(tx, vub, err)
     return err
 }
+
+type multiTX struct {
+    contractHash util.Uint160
+    txs          []*commonclient.Transaction
+    err          error
+}
+
+func (m *multiTX) AddChain(entity policycontract.Kind, entityName string, name []byte, chain []byte) {
+    m.wrapCall("addChain", []any{big.NewInt(int64(entity)), entityName, name, chain})
+}
+
+func (m *multiTX) RemoveChain(entity policycontract.Kind, entityName string, name []byte) {
+    m.wrapCall("removeChain", []any{big.NewInt(int64(entity)), entityName, name})
+}
+
+func (m *multiTX) Scripts() ([][]byte, error) {
+    if m.err != nil {
+        return nil, m.err
+    }
+
+    if len(m.txs) == 0 {
+        return nil, errors.New("tx isn't initialized")
+    }
+
+    res := make([][]byte, 0, len(m.txs))
+    for _, tx := range m.txs {
+        script, err := tx.Bytes()
+        if err != nil {
+            return nil, err
+        }
+        res = append(res, script)
+    }
+
+    return res, nil
+}
+
+func (m *multiTX) wrapCall(method string, args []any) {
+    if m.err != nil {
+        return
+    }
+
+    if len(m.txs) == 0 {
+        m.err = errors.New("multi tx isn't initialized")
+        return
+    }
+
+    err := m.txs[len(m.txs)-1].WrapCall(method, args)
+    if err == nil {
+        return
+    }
+
+    if !errors.Is(commonclient.ErrTransactionTooLarge, err) {
+        m.err = err
+        return
+    }
+
+    tx := commonclient.NewTransaction(m.contractHash)
+    m.err = tx.WrapCall(method, args)
+    if m.err == nil {
+        m.txs = append(m.txs, tx)
+    }
+}
+
+func (c *Client) StartTx() policy.MultiTransaction {
+    return &multiTX{
+        txs:          []*commonclient.Transaction{commonclient.NewTransaction(c.contractHash)},
+        contractHash: c.contractHash,
+    }
+}
+
+func (c *Client) SendTx(mtx policy.MultiTransaction) error {
+    var err error
+
+    scripts, err := mtx.Scripts()
+    if err != nil {
+        return err
+    }
+
+    for i := range scripts {
+        if _, err = c.actor.Wait(c.actor.SendRun(scripts[i])); err != nil {
+            return err
+        }
+    }
+
+    return nil
+}
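A hedged usage sketch of the batching helpers above: StartTx opens a multi-transaction, queued calls spill into an additional transaction once the current one reports commonclient.ErrTransactionTooLarge, and SendTx runs every accumulated script. The helper name and the policy kind parameter are illustrative, and it assumes the policy.MultiTransaction interface exposes AddChain and RemoveChain, which this diff does not show.

func replaceChains(cli *Client, kind policycontract.Kind, entity string, name []byte, chains [][]byte) error {
    tx := cli.StartTx()
    tx.RemoveChain(kind, entity, name) // queue removal of the old rule set
    for _, chain := range chains {
        tx.AddChain(kind, entity, name, chain) // queue new rules; overflow starts a new tx
    }
    return cli.SendTx(tx) // send every accumulated script and wait for each one
}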

Some files were not shown because too many files have changed in this diff.