Compare commits

...

76 commits

Author SHA1 Message Date
a031777a1b Release v0.30.0
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-07-19 17:09:58 +03:00
b2a5da8247 [#430] Bump frostfs-api-go for latest stable marshaler
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-07-19 16:42:36 +03:00
ec349e4523 [#430] Adopt compatibility workarounds in Tree API
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-07-19 14:47:47 +03:00
977a20760b [#430] Delete all split version at once
Previously, after a split we could get two `null`-versioned objects with the same key,
and deleting such a key removed only one node/object.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-07-19 11:26:51 +03:00
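A minimal Go sketch of the idea behind this fix; the treeNode type and the remove callback are hypothetical stand-ins for the gateway's tree-service code, not its real API. After a split there may be several null-versioned nodes for the same key, so deletion has to remove all of them rather than stop at the first match.

package main

import "fmt"

// treeNode is a hypothetical stand-in for a tree-service node.
type treeNode struct {
	ID      uint64
	Key     string
	Version string // "null" for unversioned objects
}

// removeAllNullVersions deletes every null-versioned node with the given key.
// The pre-fix behaviour effectively stopped after the first match, leaving duplicates behind.
func removeAllNullVersions(nodes []treeNode, key string, remove func(id uint64) error) error {
	for _, n := range nodes {
		if n.Key == key && n.Version == "null" {
			if err := remove(n.ID); err != nil {
				return fmt.Errorf("remove node %d: %w", n.ID, err)
			}
		}
	}
	return nil
}

func main() {
	// Two null versions of the same key, as can happen after a tree split.
	nodes := []treeNode{{ID: 1, Key: "obj", Version: "null"}, {ID: 2, Key: "obj", Version: "null"}}
	_ = removeAllNullVersions(nodes, "obj", func(id uint64) error {
		fmt.Println("removed node", id)
		return nil
	})
}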
2948d1f942 [#430] ci: Update go version
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-07-19 11:24:50 +03:00
c0011ebb8d [#430] tree: Fix multipart having system name
Previously, if a multipart key had the same name as a system node
(e.g. bucket-settings, bucket-cors, etc.), it shadowed the real system node
and, for example, the bucket became unversioned again.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-07-19 11:24:50 +03:00
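A hedged sketch of the collision described above, with a hypothetical node type and a hypothetical "system" attribute: if lookup goes purely by node name, a multipart upload called bucket-settings can shadow the real settings node, so candidates should also be filtered by a marker that only system nodes carry.

package tree

// node is a hypothetical tree node with free-form attributes.
type node struct {
	Name  string
	Attrs map[string]string
}

// findSystemNode returns the node that is actually a system node, skipping
// user multipart uploads that merely reuse a reserved name such as
// "bucket-settings" or "bucket-cors".
func findSystemNode(candidates []node, name string) *node {
	for i := range candidates {
		n := &candidates[i]
		if n.Name != name {
			continue
		}
		// A name match alone is not enough: require a (hypothetical)
		// attribute that only real system nodes carry.
		if n.Attrs["system"] == "true" {
			return n
		}
	}
	return nil
}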
456319d2f1 [#430] Fix split tree
Update the tree service to fix the split-tree problem.
Intermediate tree nodes can be duplicated, so we must handle this.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-07-19 11:24:46 +03:00
1d965b23ab [#432] doc: Fix grammar mistakes in authentication
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2024-07-17 17:08:48 +03:00
3b83de31d2 [#419] Update SDK version
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-07-08 12:15:24 +00:00
70eedfc077 [#414] authmate: Add register-user command
The new command allows registering a user in frostfsid and
setting allowed rules in the policy contract.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-07-08 14:13:00 +03:00
f86b82351a [#398] Fix parameter parsing in bucket retryer
RetryStrategyExponential should use a jittered backoff
instead of a constant delay function.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-07-03 13:42:24 +03:00
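A short sketch of the difference this fix is about, not the gateway's actual retryer code: an exponential delay with jitter versus a constant delay, both bounded by a maxBackoff parameter.

package retry

import (
	"math/rand"
	"time"
)

// jitterDelay grows exponentially with the attempt number, is capped by
// maxBackoff, and is randomized so that concurrent retries do not align.
func jitterDelay(attempt int, maxBackoff time.Duration) time.Duration {
	if attempt > 16 { // avoid shift overflow in this sketch
		attempt = 16
	}
	d := time.Duration(1<<uint(attempt)) * 100 * time.Millisecond
	if d > maxBackoff {
		d = maxBackoff
	}
	return time.Duration(rand.Int63n(int64(d) + 1))
}

// constantDelay always waits the same maxBackoff interval between attempts.
func constantDelay(_ int, maxBackoff time.Duration) time.Duration {
	return maxBackoff
}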
465eaa816a [#372] Drop [e]ACL related code
Always consider buckets as APE compatible

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-07-01 16:58:44 +03:00
9241954496 [#372] authmate: Don't create creds with eacl table
Allow only impersonate flag.
Don't allow SetEACL container session token.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-07-01 16:26:21 +03:00
77f8bdac58 [#372] Drop kludge.acl_enabled flag
Now only APE containers can be created using s3-gw.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-07-01 16:26:19 +03:00
91541a432d [#411] Check uniqueness in DeleteMultipleObjects
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-26 16:39:06 +03:00
943b30d9f4 [#411] Don't check object tags on deletion
According to the specification (https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html),
we shouldn't check object tags on PUT and DELETE.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-26 16:38:56 +03:00
414f3943e2 [#410] Drop layer.Client interface
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-25 15:57:55 +03:00
9432782ce6 [#401] Drop notifications
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-25 15:49:37 +03:00
2b04fcb5ec [#406] Remove control api
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-21 06:36:56 +00:00
280d11c794 [#407] Don't set full_control for bucket owner
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-19 10:55:24 +03:00
ed34b2cae4 [#402] auth: Extend test coverage
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-06-14 10:06:00 +00:00
76f553d292 [#403] Set resource tags into resource properties
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-13 11:12:40 +03:00
1513a9252b [#403] go.mod: Update APE
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-13 11:12:35 +03:00
bb81afc14a [#398] Support retryer
Add two strategies for the PutBucketSettings request retryer (see the sketch after this entry):
* exponential backoff (delays increasing up to `max_backoff`, with jitter)
* constant backoff (always the same `max_backoff` delay between requests)

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-06-06 13:02:17 +00:00
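A hedged sketch of how a retryer could drive a PutBucketSettings-style call using either strategy; the delay helpers are the ones from the sketch after commit f86b82351a above, and the signatures here are assumptions for illustration, not the gateway's API.

package retry

import (
	"context"
	"time"
)

// do retries op until it succeeds, the context is cancelled, or maxAttempts
// is reached. delay is either the jittered-exponential or the constant strategy.
func do(ctx context.Context, maxAttempts int, maxBackoff time.Duration,
	delay func(attempt int, maxBackoff time.Duration) time.Duration, op func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay(attempt, maxBackoff)):
		}
	}
	return err
}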
58850f590e [#335] Improve determining AccessBox latest version
Signed-off-by: Anoke <rustamgta1011@gmail.com>
2024-06-06 12:35:48 +00:00
e25dc90c20 [#399] Add OPTIONS method for object operations
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-06-04 12:59:45 +00:00
71bae5cd9a [#400] Update frostfs-sdk-go version with support EC
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-06-04 12:22:06 +00:00
b5fae316cf [#396] Add user to response
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-06-04 09:37:55 +00:00
9f3ea470e6 [#395] Port changelog and prepare it for next release
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-05-27 14:27:11 +03:00
9787b29542 [#392] go.mod: Update APE to drop private IPs checking
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-05-27 11:26:15 +03:00
9152b084ec [#387] Fix typo
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-05-22 15:06:02 +00:00
21dbe3ea8e [#387] api: Add tests for middleware
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-05-22 15:06:02 +00:00
f4d174e740 [#387] middleware: Extend test coverage
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-05-22 15:06:02 +00:00
8a758293b9 [#387] middleware: Delete unused code
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-05-22 15:06:02 +00:00
fb521c7ac6 [#367] policy: Set IAM-MFA property to false by default
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-05-22 12:05:42 +03:00
87b9e97a80 [#354] Do not proceed on bucket remove error
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-05-17 20:38:39 +03:00
d62d8f3874 [#385] Support the renaming of ObjectRequest and ObjectContainer
Signed-off-by: Artem Tataurov <a.tataurov@yadro.com>
2024-05-14 16:51:36 +03:00
6bf6a3b1a3 [#362] Check user and groups during policy check
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-05-08 15:25:14 +03:00
2f108c9951 [#362] Expand control service
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-05-08 15:15:49 +03:00
c43ef040dc [#382] Fix request type determination
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-05-07 15:17:22 +03:00
2ab655b909 [#380] Add test for credentials versioning
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-05-03 07:24:13 +00:00
1c398551e5 [#380] creds: Increase test coverage
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-05-03 07:24:13 +00:00
db05021786 [#379] Add Iana CharsetReader for Oracle integration
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-04-25 17:44:38 +03:00
034396d554 [#377] Add check of Source IP
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-22 15:29:18 +03:00
3c436d8de9 [#365] Include iam user tags in query
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-04-22 10:47:43 +03:00
45f77de8c8 [#371] Add custom Source IP header configuration
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-22 07:42:45 +00:00
d903de2457 [#370] Fix fetching attributes from tree
Port #374

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-19 17:33:55 +03:00
e22ff52165 [#367] Add check of AccessBox attributes
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-19 06:25:26 +00:00
5315f7b733 [#269] Create frostfsid wrapper with cache
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-18 09:32:30 +03:00
43a687b572 [#269] authmate: Update frostfsid using
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-17 12:11:23 +03:00
29a2dae40c [#269] Move frostfsid client to separate package
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-17 12:11:23 +03:00
fec3b3f31e [#269] Add frostfsid cache configuration
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-17 12:11:23 +03:00
7db89c840b [#368] Update vulnerable dependencies
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-17 11:29:09 +03:00
3ff027587c [#357] Add check of request and resource tags
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-17 07:06:58 +00:00
9f29fcbd52 [#353] docs: Add bucket policy docs
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-15 11:41:19 +03:00
8307c73fef [#364] Fix removing combined object
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-12 14:56:38 +03:00
d8889fca56 [#340] Fix encode object acl
When encoding the ACL of an object, we used a map. As a result,
traversing the map could yield a different sequence of permissions
each time. Therefore, a list is used instead of a map.

Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-04-11 09:28:30 +00:00
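The underlying cause is a general Go property: map iteration order is not deterministic, so encoding grants from a map can change their order between runs, while a slice preserves an explicit order. A minimal illustration (the permission/grantee values are made up):

package main

import "fmt"

func main() {
	grants := map[string]string{"READ": "alice", "WRITE": "bob", "FULL_CONTROL": "owner"}

	// Map iteration order may differ from run to run, which is what made
	// the encoded ACL (and the test comparing it) flaky.
	for perm, grantee := range grants {
		fmt.Println(perm, grantee)
	}

	// A slice keeps a stable, explicit order, which is why the fix switched to a list.
	ordered := []struct{ Perm, Grantee string }{
		{"FULL_CONTROL", "owner"},
		{"READ", "alice"},
		{"WRITE", "bob"},
	}
	for _, g := range ordered {
		fmt.Println(g.Perm, g.Grantee)
	}
}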
61ff4702a2 [#360] Reuse single target during policy check
The policy engine library is able to manage multiple
targets and resolve different status results.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-10 17:56:47 +03:00
6da1acc554 [#360] Use 'c' prefix for bucket policies instead of 'n'
With the 'c' prefix, ACL chains become shorter, so the gateway
receives shorter results and avoids sessions to neo-go.

There is still an issue with many IAM rules.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-10 17:56:47 +03:00
3ea3f971e1 [#359] Update APE to allow put tombstone on delete object
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-10 15:12:30 +03:00
cb83f7646f [#347] port: Explicitly specify sorting order of subtree for object listing
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-09 18:57:47 +03:00
9c012d0a66 [#355] Remove policies when delete bucket
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-09 15:49:46 +00:00
bda014b7b4 [#355] Update frostfs-contract to terminate session iterator
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-09 15:49:46 +00:00
37d05dcefd [#353] Add check of listing parameters and versionID
Add properties to the policy check (an example policy sketch follows this entry):
* s3:delimiter
* s3:prefix
* s3:max-keys
* s3:VersionId

Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2024-04-08 17:57:55 +03:00
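To illustrate, a bucket policy can now condition on these listing parameters. The statement below is a hypothetical example (the bucket name, principal, and values are made up; the condition keys match the properties listed above), written as a Go string constant to match the rest of the codebase.

package policy

// examplePolicy restricts ListBucket to a given prefix and page size using
// the s3:prefix and s3:max-keys condition keys added to the policy check.
const examplePolicy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::example-bucket",
    "Condition": {
      "StringEquals": {"s3:prefix": "reports/"},
      "NumericLessThanEquals": {"s3:max-keys": "100"}
    }
  }]
}`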
8407b3ea4c [#352] policy: Use iterators to list chains
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-04 12:51:12 +00:00
e537675223 [#341] Update CHANGELOG
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-03 12:04:48 +00:00
789464e134 [#341] Add "h2" as next proto to allow HTTP/2 requests in http.Serve
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-03 12:04:48 +00:00
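For context, advertising HTTP/2 on a Go TLS listener amounts to adding "h2" to the ALPN protocol list; a minimal sketch (address and certificate paths are placeholders, not the gateway's configuration):

package main

import (
	"crypto/tls"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr:    ":8443",
		Handler: http.NewServeMux(),
		TLSConfig: &tls.Config{
			// Advertise HTTP/2 first, falling back to HTTP/1.1.
			NextProtos: []string{"h2", "http/1.1"},
		},
	}
	// Certificate and key paths are placeholders for this sketch.
	_ = srv.ListenAndServeTLS("cert.pem", "key.pem")
}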
a138f4954b [#341] Test HTTP/2 requests
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2024-04-03 12:04:48 +00:00
8669bf6b50 [#346] acl: Update APE and fix using
* Remove the native policy when removing the bucket policy
* Allow policies that contain only S3-compatible statements
(currently deny rules cannot be converted to native rules)

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-02 12:43:04 +00:00
6b8095182e [#343] docs: Actualize s3 compatibility table
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-04-02 15:02:51 +03:00
348126b3b8 [#301] go.mod: Update sdk-go
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-28 09:13:27 +03:00
fbe7a784e8 [#301] Support GetBucketPolicyStatus
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-28 09:13:25 +03:00
bfcde09f07 [#291] server auto re-binding
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-03-27 14:28:50 +03:00
94bd1dfe28 [#334] Add auth doc
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-21 12:12:29 +03:00
80c7b73eb9 [#306] In APE buckets forbid canned acl except private
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-19 16:57:26 +03:00
62cc5a04a7 [#328] Log error on failed response writing
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2024-03-15 11:02:26 +03:00
141 changed files with 7597 additions and 10091 deletions


@ -6,7 +6,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.20', '1.21' ]
go_versions: [ '1.21', '1.22' ]
fail-fast: false
steps:
- uses: actions/checkout@v3


@ -12,7 +12,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
go-version: '1.22'
- name: Run commit format checker
uses: https://git.frostfs.info/TrueCloudLab/dco-go@v3


@ -10,7 +10,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
go-version: '1.22'
cache: true
- name: Install linters
@ -24,7 +24,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.20', '1.21' ]
go_versions: [ '1.21', '1.22' ]
fail-fast: false
steps:
- uses: actions/checkout@v3


@ -12,7 +12,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
go-version: '1.22'
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest


@ -4,13 +4,72 @@ This document outlines major changes between releases.
## [Unreleased]
## [0.30.0] - Kangshung - 2024-07-19
### Fixed
- Fix HTTP/2 requests (#341)
- Fix Decoder.CharsetReader is nil (#379)
- Fix flaky ACL encode test (#340)
- Docs grammar (#432)
### Added
- Add new `reconnect_interval` config param for server rebinding (#291)
- Support `GetBucketPolicyStatus` (#301)
- Support request IP filter with policy (#371, #377)
- Support tag checks in policies (#357, #365, #392, #403, #411)
- Support IAM-MFA checks (#367)
- More docs (#334, #353)
- Add `register-user` command to `authmate` (#414)
- `User` field in request log (#396)
- Erasure coding support in placement policy (#400)
- Improved test coverage (#402)
### Changed
- Update dependencies noted by govulncheck (#368)
- Improve test coverage (#380, #387)
- Support updated naming in native policy JSON (#385)
- Improve determining AccessBox latest version (#335)
- Don't set full_control policy for bucket owner (#407)
### Removed
- Remove control api (#406)
- Remove notifications (#401)
- Remove `layer.Client` interface (#410)
- Remove extended ACL related code (#372)
## [0.29.3] - 2024-07-19
### Fixed
- Support tree split environment when multiple nodes
may be part of the same sub path (#430)
- Collision of multipart name and system data in the tree (#430)
- Workaround for removal of multiple null versions in unversioned bucket (#430)
## [0.29.2] - 2024-07-03
### Fixed
- Parsing of put-bucket-setting retry configuration (#398)
## [0.29.1] - 2024-06-20
### Fixed
- OPTIONS request processing for object operations (#399)
### Added
- Retries of put-bucket-setting operation during container creation (#398)
## [0.29.0] - Zemu - 2024-05-27
### Fixed
- Fix marshaling errors in `DeleteObjects` method (#222)
- Fix status code in GET/HEAD delete marker (#226)
- Fix `NextVersionIDMarker` in `list-object-versions` (#248)
- Fix possibility of panic during SIGHUP (#288)
- Fix flaky `TestErrorTimeoutChecking` (`make test` sometimes failed) (#290)
- Fix user owner ID in billing metrics (#321)
- Fix log-level change on SIGHUP (#313)
- Fix anonymous put request (#311)
- Fix routine leak from nns resolver (#324)
- Fix svace errors (#325, #328)
### Added
- Add new `frostfs.buffer_max_size_for_put` config param and sync TZ hash for PUT operations (#197)
@ -22,10 +81,10 @@ This document outlines major changes between releases.
- Support per namespace placement policies configuration (see `namespaces.config` config param) (#266)
- Support control api to manage policies. See `control` config section (#258)
- Add `namespace` label to billing metrics (#271)
- Support policy-engine (#257)
- Support `policy` contract (#259)
- Support policy-engine (#257, #259, #282, #283, #302, #307, #345, #351, #358, #360, #362, #383, #354)
- Support `proxy` contract (#287)
- Authmate: support custom attributes (#292)
- Add FrostfsID cache (#269)
### Changed
- Generalise config param `use_default_xmlns_for_complete_multipart` to `use_default_xmlns` so that the default xmlns is used for all requests (#221)
@ -34,9 +93,23 @@ This document outlines major changes between releases.
- Use tombstone when delete multipart upload (#275)
- Support new parameter `cache.accessbox.removing_check_interval` (#305)
- Use APE rules instead of eACL in container creation (#306)
- Rework bucket policy with policy-engine (#261)
- Improved object listing speed (#165, #347)
- Logging improvement (#300, #318)
### Removed
- Drop sending whitespace characters during complete multipart upload and related config param `kludge.complete_multipart_keepalive` (#227)
- Unused legacy minio related code (#299)
- Redundant output with journald logging (#298)
## [0.28.2] - 2024-05-27
### Fixed
- `anon` user in billing metrics (#321)
- Parts are not removed when multipart object removed (#370)
### Added
- Put request in duration metrics (#280)
## [0.28.1] - 2024-01-24
@ -154,4 +227,10 @@ To see CHANGELOG for older versions, refer to https://github.com/nspcc-dev/neofs
[0.27.0]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/b2148cc3...v0.27.0
[0.28.0]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.27.0...v0.28.0
[0.28.1]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.28.0...v0.28.1
[Unreleased]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.28.1...master
[0.28.2]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.28.1...v0.28.2
[0.29.0]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.28.2...v0.29.0
[0.29.1]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.29.0...v0.29.1
[0.29.2]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.29.1...v0.29.2
[0.29.3]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.29.2...v0.29.3
[0.30.0]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.29.3...v0.30.0
[Unreleased]: https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/compare/v0.30.0...master


@ -3,7 +3,7 @@
# Common variables
REPO ?= $(shell go list -m)
VERSION ?= $(shell git describe --tags --dirty --match "v*" --always --abbrev=8 2>/dev/null || cat VERSION 2>/dev/null || echo "develop")
GO_VERSION ?= 1.20
GO_VERSION ?= 1.22
LINT_VERSION ?= 1.56.1
TRUECLOUDLAB_LINT_VERSION ?= 0.0.5
BINDIR = bin


@ -1 +1 @@
v0.28.1
v0.30.0


@ -186,7 +186,7 @@ func (c *Center) Authenticate(r *http.Request) (*middleware.Box, error) {
return nil, err
}
box, err := c.cli.GetBox(r.Context(), addr)
box, attrs, err := c.cli.GetBox(r.Context(), addr)
if err != nil {
return nil, fmt.Errorf("get box '%s': %w", addr, err)
}
@ -207,6 +207,7 @@ func (c *Center) Authenticate(r *http.Request) (*middleware.Box, error) {
Region: authHdr.Region,
SignatureV4: authHdr.SignatureV4,
},
Attributes: attrs,
}
if needClientTime {
result.ClientTime = signatureDateTime
@ -274,7 +275,7 @@ func (c *Center) checkFormData(r *http.Request) (*middleware.Box, error) {
return nil, err
}
box, err := c.cli.GetBox(r.Context(), addr)
box, attrs, err := c.cli.GetBox(r.Context(), addr)
if err != nil {
return nil, fmt.Errorf("get box '%s': %w", addr, err)
}
@ -289,7 +290,7 @@ func (c *Center) checkFormData(r *http.Request) (*middleware.Box, error) {
reqSignature, signature)
}
return &middleware.Box{AccessBox: box}, nil
return &middleware.Box{AccessBox: box, Attributes: attrs}, nil
}
func cloneRequest(r *http.Request, authHeader *AuthHeader) *http.Request {


@ -1,12 +1,31 @@
package auth
import (
"bytes"
"context"
"fmt"
"mime/multipart"
"net/http"
"net/http/httptest"
"net/url"
"strings"
"testing"
"time"
v4 "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/auth/signer/v4"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/cache"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/tokens"
frostfsErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/errors"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/stretchr/testify/require"
"go.uber.org/zap/zaptest"
)
func TestAuthHeaderParse(t *testing.T) {
@ -123,6 +142,11 @@ func TestCheckFormatContentSHA256(t *testing.T) {
hash: "ed7002b439e9ac845f22357d822bac1444730fbdb6016d3ec9432297b9ec9f7s",
error: defaultErr,
},
{
name: "invalid hash format: hash size",
hash: "5aadb45520dcd8726b2822a7a78bb53d794f557199d5d4abdedd2c55a4bd6ca73607605c558de3db80c8e86c3196484566163ed1327e82e8b6757d1932113cb8",
error: defaultErr,
},
{
name: "unsigned payload",
hash: "UNSIGNED-PAYLOAD",
@ -145,3 +169,466 @@ func TestCheckFormatContentSHA256(t *testing.T) {
})
}
}
type frostFSMock struct {
objects map[oid.Address]*object.Object
}
func newFrostFSMock() *frostFSMock {
return &frostFSMock{
objects: map[oid.Address]*object.Object{},
}
}
func (f *frostFSMock) GetCredsObject(_ context.Context, address oid.Address) (*object.Object, error) {
obj, ok := f.objects[address]
if !ok {
return nil, fmt.Errorf("not found")
}
return obj, nil
}
func (f *frostFSMock) CreateObject(context.Context, tokens.PrmObjectCreate) (oid.ID, error) {
return oid.ID{}, fmt.Errorf("the mock method is not implemented")
}
func TestAuthenticate(t *testing.T) {
key, err := keys.NewPrivateKey()
require.NoError(t, err)
cfg := &cache.Config{
Size: 10,
Lifetime: 24 * time.Hour,
Logger: zaptest.NewLogger(t),
}
gateData := []*accessbox.GateData{{
BearerToken: &bearer.Token{},
GateKey: key.PublicKey(),
}}
accessBox, secret, err := accessbox.PackTokens(gateData, []byte("secret"))
require.NoError(t, err)
data, err := accessBox.Marshal()
require.NoError(t, err)
var obj object.Object
obj.SetPayload(data)
addr := oidtest.Address()
obj.SetContainerID(addr.Container())
obj.SetID(addr.Object())
frostfs := newFrostFSMock()
frostfs.objects[addr] = &obj
accessKeyID := addr.Container().String() + "0" + addr.Object().String()
awsCreds := credentials.NewStaticCredentials(accessKeyID, secret.SecretKey, "")
defaultSigner := v4.NewSigner(awsCreds)
service, region := "s3", "default"
invalidValue := "invalid-value"
bigConfig := tokens.Config{
FrostFS: frostfs,
Key: key,
CacheConfig: cfg,
}
for _, tc := range []struct {
name string
prefixes []string
request *http.Request
err bool
errCode errors.ErrorCode
}{
{
name: "valid sign",
prefixes: []string{addr.Container().String()},
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
_, err = defaultSigner.Sign(r, nil, service, region, time.Now())
require.NoError(t, err)
return r
}(),
},
{
name: "no authorization header",
request: func() *http.Request {
return httptest.NewRequest(http.MethodPost, "/", nil)
}(),
err: true,
},
{
name: "invalid authorization header",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
r.Header.Set(AuthorizationHdr, invalidValue)
return r
}(),
err: true,
errCode: errors.ErrAuthorizationHeaderMalformed,
},
{
name: "invalid access key id format",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
signer := v4.NewSigner(credentials.NewStaticCredentials(addr.Object().String(), secret.SecretKey, ""))
_, err = signer.Sign(r, nil, service, region, time.Now())
require.NoError(t, err)
return r
}(),
err: true,
errCode: errors.ErrInvalidAccessKeyID,
},
{
name: "not allowed access key id",
prefixes: []string{addr.Object().String()},
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
_, err = defaultSigner.Sign(r, nil, service, region, time.Now())
require.NoError(t, err)
return r
}(),
err: true,
errCode: errors.ErrAccessDenied,
},
{
name: "invalid access key id value",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
signer := v4.NewSigner(credentials.NewStaticCredentials(accessKeyID[:len(accessKeyID)-4], secret.SecretKey, ""))
_, err = signer.Sign(r, nil, service, region, time.Now())
require.NoError(t, err)
return r
}(),
err: true,
errCode: errors.ErrInvalidAccessKeyID,
},
{
name: "unknown access key id",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
signer := v4.NewSigner(credentials.NewStaticCredentials(addr.Object().String()+"0"+addr.Container().String(), secret.SecretKey, ""))
_, err = signer.Sign(r, nil, service, region, time.Now())
require.NoError(t, err)
return r
}(),
err: true,
},
{
name: "invalid signature",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
signer := v4.NewSigner(credentials.NewStaticCredentials(accessKeyID, "secret", ""))
_, err = signer.Sign(r, nil, service, region, time.Now())
require.NoError(t, err)
return r
}(),
err: true,
errCode: errors.ErrSignatureDoesNotMatch,
},
{
name: "invalid signature - AmzDate",
prefixes: []string{addr.Container().String()},
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
_, err = defaultSigner.Sign(r, nil, service, region, time.Now())
r.Header.Set(AmzDate, invalidValue)
require.NoError(t, err)
return r
}(),
err: true,
},
{
name: "invalid AmzContentSHA256",
prefixes: []string{addr.Container().String()},
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
_, err = defaultSigner.Sign(r, nil, service, region, time.Now())
r.Header.Set(AmzContentSHA256, invalidValue)
require.NoError(t, err)
return r
}(),
err: true,
},
{
name: "valid presign",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
_, err = defaultSigner.Presign(r, nil, service, region, time.Minute, time.Now())
require.NoError(t, err)
return r
}(),
},
{
name: "presign, bad X-Amz-Credential",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
query := url.Values{
AmzAlgorithm: []string{"AWS4-HMAC-SHA256"},
AmzCredential: []string{invalidValue},
}
r.URL.RawQuery = query.Encode()
return r
}(),
err: true,
},
{
name: "presign, bad X-Amz-Expires",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
_, err = defaultSigner.Presign(r, nil, service, region, time.Minute, time.Now())
queryParams := r.URL.Query()
queryParams.Set("X-Amz-Expires", invalidValue)
r.URL.RawQuery = queryParams.Encode()
require.NoError(t, err)
return r
}(),
err: true,
},
{
name: "presign, expired",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
_, err = defaultSigner.Presign(r, nil, service, region, time.Minute, time.Now().Add(-time.Minute))
require.NoError(t, err)
return r
}(),
err: true,
errCode: errors.ErrExpiredPresignRequest,
},
{
name: "presign, signature from future",
request: func() *http.Request {
r := httptest.NewRequest(http.MethodPost, "/", nil)
_, err = defaultSigner.Presign(r, nil, service, region, time.Minute, time.Now().Add(time.Minute))
require.NoError(t, err)
return r
}(),
err: true,
errCode: errors.ErrBadRequest,
},
} {
t.Run(tc.name, func(t *testing.T) {
creds := tokens.New(bigConfig)
cntr := New(creds, tc.prefixes)
box, err := cntr.Authenticate(tc.request)
if tc.err {
require.Error(t, err)
if tc.errCode > 0 {
err = frostfsErrors.UnwrapErr(err)
require.Equal(t, errors.GetAPIError(tc.errCode), err)
}
} else {
require.NoError(t, err)
require.Equal(t, accessKeyID, box.AuthHeaders.AccessKeyID)
require.Equal(t, region, box.AuthHeaders.Region)
require.Equal(t, secret.SecretKey, box.AccessBox.Gate.SecretKey)
}
})
}
}
func TestHTTPPostAuthenticate(t *testing.T) {
const (
policyBase64 = "eyAiZXhwaXJhdGlvbiI6ICIyMDA3LTEyLTAxVDEyOjAwOjAwLjAwMFoiLAogICJjb25kaXRpb25zIjogWwogICAgeyJhY2wiOiAicHVibGljLXJlYWQiIH0sCiAgICB7ImJ1Y2tldCI6ICJqb2huc21pdGgiIH0sCiAgICBbInN0YXJ0cy13aXRoIiwgIiRrZXkiLCAidXNlci9lcmljLyJdLAogIF0KfQ=="
invalidValue = "invalid-value"
defaultFieldName = "file"
service = "s3"
region = "default"
)
key, err := keys.NewPrivateKey()
require.NoError(t, err)
cfg := &cache.Config{
Size: 10,
Lifetime: 24 * time.Hour,
Logger: zaptest.NewLogger(t),
}
gateData := []*accessbox.GateData{{
BearerToken: &bearer.Token{},
GateKey: key.PublicKey(),
}}
accessBox, secret, err := accessbox.PackTokens(gateData, []byte("secret"))
require.NoError(t, err)
data, err := accessBox.Marshal()
require.NoError(t, err)
var obj object.Object
obj.SetPayload(data)
addr := oidtest.Address()
obj.SetContainerID(addr.Container())
obj.SetID(addr.Object())
frostfs := newFrostFSMock()
frostfs.objects[addr] = &obj
accessKeyID := addr.Container().String() + "0" + addr.Object().String()
invalidAccessKeyID := oidtest.Address().String() + "0" + oidtest.Address().Object().String()
timeToSign := time.Now()
timeToSignStr := timeToSign.Format("20060102T150405Z")
bigConfig := tokens.Config{
FrostFS: frostfs,
Key: key,
CacheConfig: cfg,
}
for _, tc := range []struct {
name string
prefixes []string
request *http.Request
err bool
errCode errors.ErrorCode
}{
{
name: "HTTP POST valid",
request: func() *http.Request {
creds := getCredsStr(accessKeyID, timeToSignStr, region, service)
sign := signStr(secret.SecretKey, service, region, timeToSign, policyBase64)
return getRequestWithMultipartForm(t, policyBase64, creds, timeToSignStr, sign, defaultFieldName)
}(),
},
{
name: "HTTP POST valid with custom field name",
request: func() *http.Request {
creds := getCredsStr(accessKeyID, timeToSignStr, region, service)
sign := signStr(secret.SecretKey, service, region, timeToSign, policyBase64)
return getRequestWithMultipartForm(t, policyBase64, creds, timeToSignStr, sign, "files")
}(),
},
{
name: "HTTP POST valid with field name with a capital letter",
request: func() *http.Request {
creds := getCredsStr(accessKeyID, timeToSignStr, region, service)
sign := signStr(secret.SecretKey, service, region, timeToSign, policyBase64)
return getRequestWithMultipartForm(t, policyBase64, creds, timeToSignStr, sign, "File")
}(),
},
{
name: "HTTP POST invalid multipart form",
request: func() *http.Request {
req := httptest.NewRequest(http.MethodPost, "/", nil)
req.Header.Set(ContentTypeHdr, "multipart/form-data")
return req
}(),
err: true,
errCode: errors.ErrInvalidArgument,
},
{
name: "HTTP POST invalid signature date time",
request: func() *http.Request {
creds := getCredsStr(accessKeyID, timeToSignStr, region, service)
sign := signStr(secret.SecretKey, service, region, timeToSign, policyBase64)
return getRequestWithMultipartForm(t, policyBase64, creds, invalidValue, sign, defaultFieldName)
}(),
err: true,
},
{
name: "HTTP POST invalid creds",
request: func() *http.Request {
sign := signStr(secret.SecretKey, service, region, timeToSign, policyBase64)
return getRequestWithMultipartForm(t, policyBase64, invalidValue, timeToSignStr, sign, defaultFieldName)
}(),
err: true,
errCode: errors.ErrAuthorizationHeaderMalformed,
},
{
name: "HTTP POST missing policy",
request: func() *http.Request {
creds := getCredsStr(accessKeyID, timeToSignStr, region, service)
sign := signStr(secret.SecretKey, service, region, timeToSign, policyBase64)
return getRequestWithMultipartForm(t, "", creds, timeToSignStr, sign, defaultFieldName)
}(),
err: true,
},
{
name: "HTTP POST invalid accessKeyId",
request: func() *http.Request {
creds := getCredsStr(invalidValue, timeToSignStr, region, service)
sign := signStr(secret.SecretKey, service, region, timeToSign, policyBase64)
return getRequestWithMultipartForm(t, policyBase64, creds, timeToSignStr, sign, defaultFieldName)
}(),
err: true,
},
{
name: "HTTP POST invalid accessKeyId - a non-existent box",
request: func() *http.Request {
creds := getCredsStr(invalidAccessKeyID, timeToSignStr, region, service)
sign := signStr(secret.SecretKey, service, region, timeToSign, policyBase64)
return getRequestWithMultipartForm(t, policyBase64, creds, timeToSignStr, sign, defaultFieldName)
}(),
err: true,
},
{
name: "HTTP POST invalid signature",
request: func() *http.Request {
creds := getCredsStr(accessKeyID, timeToSignStr, region, service)
sign := signStr(secret.SecretKey, service, region, timeToSign, invalidValue)
return getRequestWithMultipartForm(t, policyBase64, creds, timeToSignStr, sign, defaultFieldName)
}(),
err: true,
errCode: errors.ErrSignatureDoesNotMatch,
},
} {
t.Run(tc.name, func(t *testing.T) {
creds := tokens.New(bigConfig)
cntr := New(creds, tc.prefixes)
box, err := cntr.Authenticate(tc.request)
if tc.err {
require.Error(t, err)
if tc.errCode > 0 {
err = frostfsErrors.UnwrapErr(err)
require.Equal(t, errors.GetAPIError(tc.errCode), err)
}
} else {
require.NoError(t, err)
require.Equal(t, secret.SecretKey, box.AccessBox.Gate.SecretKey)
}
})
}
}
func getCredsStr(accessKeyID, timeToSign, region, service string) string {
return accessKeyID + "/" + timeToSign + "/" + region + "/" + service + "/aws4_request"
}
func getRequestWithMultipartForm(t *testing.T, policy, creds, date, sign, fieldName string) *http.Request {
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
defer writer.Close()
err := writer.WriteField("Policy", policy)
require.NoError(t, err)
err = writer.WriteField(AmzCredential, creds)
require.NoError(t, err)
err = writer.WriteField(AmzDate, date)
require.NoError(t, err)
err = writer.WriteField(AmzSignature, sign)
require.NoError(t, err)
_, err = writer.CreateFormFile(fieldName, "test.txt")
require.NoError(t, err)
req := httptest.NewRequest(http.MethodPost, "/", body)
req.Header.Set(ContentTypeHdr, writer.FormDataContentType())
return req
}


@ -10,6 +10,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/tokens"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/stretchr/testify/require"
@ -31,13 +32,13 @@ func (m credentialsMock) addBox(addr oid.Address, box *accessbox.Box) {
m.boxes[addr.String()] = box
}
func (m credentialsMock) GetBox(_ context.Context, addr oid.Address) (*accessbox.Box, error) {
func (m credentialsMock) GetBox(_ context.Context, addr oid.Address) (*accessbox.Box, []object.Attribute, error) {
box, ok := m.boxes[addr.String()]
if !ok {
return nil, &apistatus.ObjectNotFound{}
return nil, nil, &apistatus.ObjectNotFound{}
}
return box, nil
return box, nil, nil
}
func (m credentialsMock) Put(context.Context, cid.ID, tokens.CredentialsParam) (oid.Address, error) {


@ -6,6 +6,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/bluele/gcache"
"go.uber.org/zap"
@ -26,8 +27,9 @@ type (
}
AccessBoxCacheValue struct {
Box *accessbox.Box
PutTime time.Time
Box *accessbox.Box
Attributes []object.Attribute
PutTime time.Time
}
)
@ -72,10 +74,11 @@ func (o *AccessBoxCache) Get(address oid.Address) *AccessBoxCacheValue {
}
// Put stores an accessbox to cache.
func (o *AccessBoxCache) Put(address oid.Address, box *accessbox.Box) error {
func (o *AccessBoxCache) Put(address oid.Address, box *accessbox.Box, attrs []object.Attribute) error {
val := &AccessBoxCacheValue{
Box: box,
PutTime: time.Now(),
Box: box,
Attributes: attrs,
PutTime: time.Now(),
}
return o.cache.Set(address, val)
}


@ -3,10 +3,14 @@ package cache
import (
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/stretchr/testify/require"
"go.uber.org/zap"
"go.uber.org/zap/zaptest/observer"
@ -18,11 +22,13 @@ func TestAccessBoxCacheType(t *testing.T) {
addr := oidtest.Address()
box := &accessbox.Box{}
var attrs []object.Attribute
err := cache.Put(addr, box)
err := cache.Put(addr, box, attrs)
require.NoError(t, err)
val := cache.Get(addr)
require.Equal(t, box, val.Box)
require.Equal(t, attrs, val.Attributes)
require.Equal(t, 0, observedLog.Len())
err = cache.cache.Set(addr, "tmp")
@ -176,22 +182,42 @@ func TestSettingsCacheType(t *testing.T) {
assertInvalidCacheEntry(t, cache.GetSettings(key), observedLog)
}
func TestNotificationConfigurationCacheType(t *testing.T) {
func TestFrostFSIDSubjectCacheType(t *testing.T) {
logger, observedLog := getObservedLogger()
cache := NewSystemCache(DefaultSystemConfig(logger))
cache := NewFrostfsIDCache(DefaultFrostfsIDConfig(logger))
key := "key"
notificationConfig := &data.NotificationConfiguration{}
err := cache.PutNotificationConfiguration(key, notificationConfig)
key, err := util.Uint160DecodeStringLE("4ea976429703418ef00fc4912a409b6a0b973034")
require.NoError(t, err)
val := cache.GetNotificationConfiguration(key)
require.Equal(t, notificationConfig, val)
value := &client.SubjectExtended{}
err = cache.PutSubject(key, value)
require.NoError(t, err)
val := cache.GetSubject(key)
require.Equal(t, value, val)
require.Equal(t, 0, observedLog.Len())
err = cache.cache.Set(key, "tmp")
require.NoError(t, err)
assertInvalidCacheEntry(t, cache.GetNotificationConfiguration(key), observedLog)
assertInvalidCacheEntry(t, cache.GetSubject(key), observedLog)
}
func TestFrostFSIDUserKeyCacheType(t *testing.T) {
logger, observedLog := getObservedLogger()
cache := NewFrostfsIDCache(DefaultFrostfsIDConfig(logger))
ns, name := "ns", "name"
value, err := keys.NewPrivateKey()
require.NoError(t, err)
err = cache.PutUserKey(ns, name, value.PublicKey())
require.NoError(t, err)
val := cache.GetUserKey(ns, name)
require.Equal(t, value.PublicKey(), val)
require.Equal(t, 0, observedLog.Len())
err = cache.cache.Set(ns+"/"+name, "tmp")
require.NoError(t, err)
assertInvalidCacheEntry(t, cache.GetUserKey(ns, name), observedLog)
}
func assertInvalidCacheEntry(t *testing.T, val interface{}, observedLog *observer.ObservedLogs) {

api/cache/frostfsid.go (new file, 77 lines)

@ -0,0 +1,77 @@
package cache
import (
"fmt"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"github.com/bluele/gcache"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/util"
"go.uber.org/zap"
)
// FrostfsIDCache provides lru cache for frostfsid contract.
type FrostfsIDCache struct {
cache gcache.Cache
logger *zap.Logger
}
const (
// DefaultFrostfsIDCacheSize is a default maximum number of entries in cache.
DefaultFrostfsIDCacheSize = 1e4
// DefaultFrostfsIDCacheLifetime is a default lifetime of entries in cache.
DefaultFrostfsIDCacheLifetime = time.Minute
)
// DefaultFrostfsIDConfig returns new default cache expiration values.
func DefaultFrostfsIDConfig(logger *zap.Logger) *Config {
return &Config{
Size: DefaultFrostfsIDCacheSize,
Lifetime: DefaultFrostfsIDCacheLifetime,
Logger: logger,
}
}
// NewFrostfsIDCache creates an object of FrostfsIDCache.
func NewFrostfsIDCache(config *Config) *FrostfsIDCache {
gc := gcache.New(config.Size).LRU().Expiration(config.Lifetime).Build()
return &FrostfsIDCache{cache: gc, logger: config.Logger}
}
// GetSubject returns a cached client.SubjectExtended. Returns nil if value is missing.
func (c *FrostfsIDCache) GetSubject(key util.Uint160) *client.SubjectExtended {
return get[client.SubjectExtended](c, key)
}
// PutSubject puts a client.SubjectExtended to cache.
func (c *FrostfsIDCache) PutSubject(key util.Uint160, subject *client.SubjectExtended) error {
return c.cache.Set(key, subject)
}
// GetUserKey returns a cached *keys.PublicKey. Returns nil if value is missing.
func (c *FrostfsIDCache) GetUserKey(ns, name string) *keys.PublicKey {
return get[keys.PublicKey](c, ns+"/"+name)
}
// PutUserKey puts a *keys.PublicKey to cache.
func (c *FrostfsIDCache) PutUserKey(ns, name string, userKey *keys.PublicKey) error {
return c.cache.Set(ns+"/"+name, userKey)
}
func get[T any](c *FrostfsIDCache, key any) *T {
entry, err := c.cache.Get(key)
if err != nil {
return nil
}
result, ok := entry.(*T)
if !ok {
c.logger.Warn(logs.InvalidCacheEntryType, zap.String("actual", fmt.Sprintf("%T", entry)),
zap.String("expected", fmt.Sprintf("%T", result)))
return nil
}
return result
}
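A brief usage sketch of the cache introduced in this file, relying only on the API shown above (DefaultFrostfsIDConfig, NewFrostfsIDCache, PutSubject, GetSubject); the key value is arbitrary and the logger is a no-op.

package main

import (
	"git.frostfs.info/TrueCloudLab/frostfs-contract/frostfsid/client"
	"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/cache"
	"github.com/nspcc-dev/neo-go/pkg/util"
	"go.uber.org/zap"
)

func main() {
	c := cache.NewFrostfsIDCache(cache.DefaultFrostfsIDConfig(zap.NewNop()))

	key, _ := util.Uint160DecodeStringLE("4ea976429703418ef00fc4912a409b6a0b973034")
	subj := &client.SubjectExtended{}

	_ = c.PutSubject(key, subj) // cache the subject for up to DefaultFrostfsIDCacheLifetime
	cached := c.GetSubject(key) // returns nil on a miss or an unexpected entry type
	_ = cached
}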

api/cache/system.go (20 lines changed)

@ -104,22 +104,6 @@ func (o *SystemCache) GetSettings(key string) *data.BucketSettings {
return result
}
func (o *SystemCache) GetNotificationConfiguration(key string) *data.NotificationConfiguration {
entry, err := o.cache.Get(key)
if err != nil {
return nil
}
result, ok := entry.(*data.NotificationConfiguration)
if !ok {
o.logger.Warn(logs.InvalidCacheEntryType, zap.String("actual", fmt.Sprintf("%T", entry)),
zap.String("expected", fmt.Sprintf("%T", result)))
return nil
}
return result
}
// GetTagging returns tags of a bucket or an object.
func (o *SystemCache) GetTagging(key string) map[string]string {
entry, err := o.cache.Get(key)
@ -153,10 +137,6 @@ func (o *SystemCache) PutSettings(key string, settings *data.BucketSettings) err
return o.cache.Set(key, settings)
}
func (o *SystemCache) PutNotificationConfiguration(key string, obj *data.NotificationConfiguration) error {
return o.cache.Set(key, obj)
}
// PutTagging puts tags of a bucket or an object.
func (o *SystemCache) PutTagging(key string, tagSet map[string]string) error {
return o.cache.Set(key, tagSet)


@ -12,9 +12,8 @@ import (
)
const (
bktSettingsObject = ".s3-settings"
bktCORSConfigurationObject = ".s3-cors"
bktNotificationConfigurationObject = ".s3-notifications"
bktSettingsObject = ".s3-settings"
bktCORSConfigurationObject = ".s3-cors"
VersioningUnversioned = "Unversioned"
VersioningEnabled = "Enabled"
@ -32,7 +31,6 @@ type (
LocationConstraint string
ObjectLockEnabled bool
HomomorphicHashDisabled bool
APEEnabled bool
}
// ObjectInfo holds S3 object data.
@ -52,14 +50,6 @@ type (
Headers map[string]string
}
// NotificationInfo store info to send s3 notification.
NotificationInfo struct {
Name string
Version string
Size uint64
HashSum string
}
// BucketSettings stores settings such as versioning.
BucketSettings struct {
Versioning string
@ -83,17 +73,15 @@ type (
ExposeHeaders []string `xml:"ExposeHeader" json:"ExposeHeaders"`
MaxAgeSeconds int `xml:"MaxAgeSeconds,omitempty" json:"MaxAgeSeconds,omitempty"`
}
)
// NotificationInfoFromObject creates new NotificationInfo from ObjectInfo.
func NotificationInfoFromObject(objInfo *ObjectInfo, md5Enabled bool) *NotificationInfo {
return &NotificationInfo{
Name: objInfo.Name,
Version: objInfo.VersionID(),
Size: objInfo.Size,
HashSum: Quote(objInfo.ETag(md5Enabled)),
// ObjectVersion stores object version info.
ObjectVersion struct {
BktInfo *BucketInfo
ObjectName string
VersionID string
NoErrorOnDeleteMarker bool
}
}
)
// SettingsObjectName is a system name for a bucket settings file.
func (b *BucketInfo) SettingsObjectName() string { return bktSettingsObject }
@ -101,10 +89,6 @@ func (b *BucketInfo) SettingsObjectName() string { return bktSettingsObject }
// CORSObjectName returns a system name for a bucket CORS configuration file.
func (b *BucketInfo) CORSObjectName() string { return bktCORSConfigurationObject }
func (b *BucketInfo) NotificationConfigurationObjectName() string {
return bktNotificationConfigurationObject
}
// VersionID returns object version from ObjectInfo.
func (o *ObjectInfo) VersionID() string { return o.ID.EncodeToString() }


@ -1,42 +0,0 @@
package data
import "encoding/xml"
type (
NotificationConfiguration struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ NotificationConfiguration" json:"-"`
QueueConfigurations []QueueConfiguration `xml:"QueueConfiguration" json:"QueueConfigurations"`
// Topics are not supported.
TopicConfigurations []TopicConfiguration `xml:"TopicConfiguration" json:"TopicConfigurations"`
LambdaFunctionConfigurations []LambdaFunctionConfiguration `xml:"CloudFunctionConfiguration" json:"CloudFunctionConfigurations"`
}
QueueConfiguration struct {
ID string `xml:"Id" json:"Id"`
QueueArn string `xml:"Queue" json:"Queue"`
Events []string `xml:"Event" json:"Events"`
Filter Filter `xml:"Filter" json:"Filter"`
}
Filter struct {
Key Key `xml:"S3Key" json:"S3Key"`
}
Key struct {
FilterRules []FilterRule `xml:"FilterRule" json:"FilterRules"`
}
FilterRule struct {
Name string `xml:"Name" json:"Name"`
Value string `xml:"Value" json:"Value"`
}
// TopicConfiguration and LambdaFunctionConfiguration are not supported,
// but we need them to detect such configurations in incoming requests.
TopicConfiguration struct{}
LambdaFunctionConfiguration struct{}
)
func (n NotificationConfiguration) IsEmpty() bool {
return len(n.QueueConfigurations) == 0 && len(n.TopicConfigurations) == 0 && len(n.LambdaFunctionConfigurations) == 0
}

api/data/tagging.go

@ -0,0 +1,30 @@
package data
import "encoding/xml"
// Tagging contains a tag set.
type Tagging struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Tagging"`
TagSet []Tag `xml:"TagSet>Tag"`
}
// Tag is an AWS key-value tag.
type Tag struct {
Key string
Value string
}
type GetObjectTaggingParams struct {
ObjectVersion *ObjectVersion
// NodeVersion can be nil. If it is set, we save one request to the tree service.
NodeVersion *NodeVersion // optional
}
type PutObjectTaggingParams struct {
ObjectVersion *ObjectVersion
TagSet map[string]string
// NodeVersion can be nil. If it is set, we save one request to the tree service.
NodeVersion *NodeVersion // optional
}
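As a quick illustration of the new data.Tagging shape, encoding/xml produces the S3-style Tagging document straight from the struct tags. A self-contained sketch (the local type copies and sample values are assumptions, not part of the diff):

package main

import (
	"encoding/xml"
	"fmt"
)

// Local copies of the Tag/Tagging types added in api/data/tagging.go,
// so the example compiles on its own.
type Tag struct {
	Key   string
	Value string
}

type Tagging struct {
	XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Tagging"`
	TagSet  []Tag    `xml:"TagSet>Tag"`
}

func main() {
	t := Tagging{TagSet: []Tag{{Key: "env", Value: "dev"}}}
	out, err := xml.MarshalIndent(t, "", "  ")
	if err != nil {
		panic(err)
	}
	// Prints a <Tagging xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> document
	// with one <TagSet><Tag><Key>env</Key><Value>dev</Value></Tag></TagSet>.
	fmt.Println(string(out))
}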


@ -91,6 +91,7 @@ const (
ErrBucketNotEmpty
ErrAllAccessDisabled
ErrMalformedPolicy
ErrMalformedPolicyNotPrincipal
ErrMissingFields
ErrMissingCredTag
ErrCredMalformed
@ -665,6 +666,12 @@ var errorCodes = errorCodeMap{
Description: "Policy has invalid resource.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMalformedPolicyNotPrincipal: {
ErrCode: ErrMalformedPolicyNotPrincipal,
Code: "MalformedPolicy",
Description: "Allow with NotPrincipal is not allowed.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingFields: {
ErrCode: ErrMissingFields,
Code: "MissingFields",

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -11,7 +11,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
@ -20,17 +19,11 @@ import (
type (
handler struct {
log *zap.Logger
obj layer.Client
notificator Notificator
cfg Config
ape APE
frostfsid FrostFSID
}
Notificator interface {
SendNotifications(topics map[string]string, p *SendNotificationParams) error
SendTestNotification(topic, bucketName, requestID, HostID string, now time.Time) error
log *zap.Logger
obj *layer.Layer
cfg Config
ape APE
frostfsid FrostFSID
}
// Config contains data that the handler needs to keep.
@ -41,12 +34,13 @@ type (
DefaultCopiesNumbers(namespace string) []uint32
NewXMLDecoder(io.Reader) *xml.Decoder
DefaultMaxAge() int
NotificatorEnabled() bool
ResolveZoneList() []string
IsResolveListAllow() bool
BypassContentEncodingInChunks() bool
MD5Enabled() bool
ACLEnabled() bool
RetryMaxAttempts() int
RetryMaxBackoff() time.Duration
RetryStrategy() RetryStrategy
}
FrostFSID interface {
@ -57,16 +51,23 @@ type (
// APE is the Access Policy Engine that saves policy and ACL info to different places.
APE interface {
PutBucketPolicy(ns string, cnrID cid.ID, policy []byte, chains []*chain.Chain) error
DeleteBucketPolicy(ns string, cnrID cid.ID, chainID chain.ID) error
DeleteBucketPolicy(ns string, cnrID cid.ID, chainIDs []chain.ID) error
GetBucketPolicy(ns string, cnrID cid.ID) ([]byte, error)
SaveACLChains(ns string, chains []*chain.Chain) error
SaveACLChains(cid string, chains []*chain.Chain) error
}
)
type RetryStrategy string
const (
RetryStrategyExponential = "exponential"
RetryStrategyConstant = "constant"
)
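The Config interface now exposes RetryMaxAttempts, RetryMaxBackoff and RetryStrategy, with the two strategies declared above. A minimal sketch of how such settings could translate into a backoff schedule (the 100ms base delay and the backoff helper are assumptions for illustration, not the gateway's actual retryer):

package main

import (
	"fmt"
	"time"
)

type RetryStrategy string

const (
	RetryStrategyExponential RetryStrategy = "exponential"
	RetryStrategyConstant    RetryStrategy = "constant"
)

// backoff returns the delay before retry attempt n (0-based), capped at maxBackoff.
// "constant" always waits maxBackoff; "exponential" doubles an assumed 100ms base.
func backoff(strategy RetryStrategy, maxBackoff time.Duration, attempt int) time.Duration {
	if strategy == RetryStrategyConstant {
		return maxBackoff
	}
	d := 100 * time.Millisecond << attempt
	if d > maxBackoff {
		d = maxBackoff
	}
	return d
}

func main() {
	for attempt := 0; attempt < 5; attempt++ {
		fmt.Println(backoff(RetryStrategyExponential, time.Second, attempt))
	}
	// 100ms 200ms 400ms 800ms 1s (capped at RetryMaxBackoff)
}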
var _ api.Handler = (*handler)(nil)
// New creates a new api.Handler using the given logger and client.
func New(log *zap.Logger, obj layer.Client, notificator Notificator, cfg Config, storage APE, ffsid FrostFSID) (api.Handler, error) {
func New(log *zap.Logger, obj *layer.Layer, cfg Config, storage APE, ffsid FrostFSID) (api.Handler, error) {
switch {
case obj == nil:
return nil, errors.New("empty FrostFS Object Layer")
@ -78,19 +79,12 @@ func New(log *zap.Logger, obj layer.Client, notificator Notificator, cfg Config,
return nil, errors.New("empty frostfsid")
}
if !cfg.NotificatorEnabled() {
log.Warn(logs.NotificatorIsDisabledS3WontProduceNotificationEvents)
} else if notificator == nil {
return nil, errors.New("empty notificator")
}
return &handler{
log: log,
obj: obj,
cfg: cfg,
ape: storage,
notificator: notificator,
frostfsid: ffsid,
log: log,
obj: obj,
cfg: cfg,
ape: storage,
frostfsid: ffsid,
}, nil
}


@ -13,7 +13,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"go.uber.org/zap"
)
@ -42,16 +41,15 @@ func path2BucketObject(path string) (string, string, error) {
func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
var (
err error
versionID string
metadata map[string]string
tagSet map[string]string
sessionTokenEACL *session.Container
err error
versionID string
metadata map[string]string
tagSet map[string]string
ctx = r.Context()
reqInfo = middleware.GetReqInfo(ctx)
containsACL = containsACLHeaders(r)
cannedACLStatus = aclHeadersStatus(r)
)
src := r.Header.Get(api.AmzCopySource)
@ -93,11 +91,9 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
if containsACL {
if sessionTokenEACL, err = getSessionTokenSetEACL(ctx); err != nil {
h.logAndSendError(w, "could not get eacl session token from a box", reqInfo, err)
return
}
if cannedACLStatus == aclStatusYes {
h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
return
}
extendedSrcObjInfo, err := h.obj.GetExtendedObjectInfo(ctx, srcObjPrm)
@ -161,8 +157,8 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
} else {
tagPrm := &layer.GetObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{
tagPrm := &data.GetObjectTaggingParams{
ObjectVersion: &data.ObjectVersion{
BktInfo: srcObjPrm.BktInfo,
ObjectName: srcObject,
VersionID: srcObjInfo.VersionID(),
@ -232,28 +228,9 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
if containsACL {
newEaclTable, err := h.getNewEAclTable(r, dstBktInfo, dstObjInfo)
if err != nil {
h.logAndSendError(w, "could not get new eacl table", reqInfo, err)
return
}
p := &layer.PutBucketACLParams{
BktInfo: dstBktInfo,
EACL: newEaclTable,
SessionToken: sessionTokenEACL,
}
if err = h.obj.PutBucketACL(ctx, p); err != nil {
h.logAndSendError(w, "could not put bucket acl", reqInfo, err)
return
}
}
if tagSet != nil {
tagPrm := &layer.PutObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{
tagPrm := &data.PutObjectTaggingParams{
ObjectVersion: &data.ObjectVersion{
BktInfo: dstBktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: dstObjInfo.VersionID(),
@ -261,7 +238,7 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
TagSet: tagSet,
NodeVersion: extendedDstObjInfo.NodeVersion,
}
if _, err = h.obj.PutObjectTagging(ctx, tagPrm); err != nil {
if err = h.obj.PutObjectTagging(ctx, tagPrm); err != nil {
h.logAndSendError(w, "could not upload object tagging", reqInfo, err)
return
}
@ -269,16 +246,6 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
h.reqLogger(ctx).Info(logs.ObjectIsCopied, zap.Stringer("object_id", dstObjInfo.ID))
s := &SendNotificationParams{
Event: EventObjectCreatedCopy,
NotificationInfo: data.NotificationInfoFromObject(dstObjInfo, h.cfg.MD5Enabled()),
BktInfo: dstBktInfo,
ReqInfo: reqInfo,
}
if err = h.sendNotifications(ctx, s); err != nil {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
if dstEncryptionParams.Enabled() {
addSSECHeaders(w.Header(), r.Header)
}


@ -11,9 +11,11 @@ import (
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"github.com/stretchr/testify/require"
)
@ -22,6 +24,7 @@ type CopyMeta struct {
Tags map[string]string
MetadataDirective string
Metadata map[string]string
Headers map[string]string
}
func TestCopyWithTaggingDirective(t *testing.T) {
@ -279,28 +282,33 @@ func copyObject(hc *handlerContext, bktName, fromObject, toObject string, copyMe
}
r.Header.Set(api.AmzTagging, tagsQuery.Encode())
for key, val := range copyMeta.Headers {
r.Header.Set(key, val)
}
hc.Handler().CopyObjectHandler(w, r)
assertStatus(hc.t, w, statusCode)
}
func putObjectTagging(t *testing.T, tc *handlerContext, bktName, objName string, tags map[string]string) {
body := &Tagging{
TagSet: make([]Tag, 0, len(tags)),
body := &data.Tagging{
TagSet: make([]data.Tag, 0, len(tags)),
}
for key, val := range tags {
body.TagSet = append(body.TagSet, Tag{
body.TagSet = append(body.TagSet, data.Tag{
Key: key,
Value: val,
})
}
w, r := prepareTestRequest(tc, bktName, objName, body)
middleware.GetReqInfo(r.Context()).Tagging = body
tc.Handler().PutObjectTaggingHandler(w, r)
assertStatus(t, w, http.StatusOK)
}
func getObjectTagging(t *testing.T, tc *handlerContext, bktName, objName, version string) *Tagging {
func getObjectTagging(t *testing.T, tc *handlerContext, bktName, objName, version string) *data.Tagging {
query := make(url.Values)
query.Add(api.QueryVersionID, version)
@ -308,7 +316,7 @@ func getObjectTagging(t *testing.T, tc *handlerContext, bktName, objName, versio
tc.Handler().GetObjectTaggingHandler(w, r)
assertStatus(t, w, http.StatusOK)
tagging := &Tagging{}
tagging := &data.Tagging{}
err := xml.NewDecoder(w.Result().Body).Decode(tagging)
require.NoError(t, err)
return tagging


@ -66,7 +66,10 @@ func (h *handler) PutBucketCorsHandler(w http.ResponseWriter, r *http.Request) {
return
}
middleware.WriteSuccessResponseHeadersOnly(w)
if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
}
func (h *handler) DeleteBucketCorsHandler(w http.ResponseWriter, r *http.Request) {
@ -184,8 +187,8 @@ func (h *handler) Preflight(w http.ResponseWriter, r *http.Request) {
if !checkSubslice(rule.AllowedHeaders, headers) {
continue
}
w.Header().Set(api.AccessControlAllowOrigin, o)
w.Header().Set(api.AccessControlAllowMethods, strings.Join(rule.AllowedMethods, ", "))
w.Header().Set(api.AccessControlAllowOrigin, origin)
w.Header().Set(api.AccessControlAllowMethods, method)
if headers != nil {
w.Header().Set(api.AccessControlAllowHeaders, requestHeaders)
}
@ -200,7 +203,10 @@ func (h *handler) Preflight(w http.ResponseWriter, r *http.Request) {
if o != wildcard {
w.Header().Set(api.AccessControlAllowCredentials, "true")
}
middleware.WriteSuccessResponseHeadersOnly(w)
if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
return
}
}


@ -7,6 +7,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"github.com/stretchr/testify/require"
)
func TestCORSOriginWildcard(t *testing.T) {
@ -23,14 +24,14 @@ func TestCORSOriginWildcard(t *testing.T) {
bktName := "bucket-for-cors"
box, _ := createAccessBox(t)
w, r := prepareTestRequest(hc, bktName, "", nil)
ctx := middleware.SetBoxData(r.Context(), box)
ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
r.Header.Add(api.AmzACL, "public-read")
hc.Handler().CreateBucketHandler(w, r)
assertStatus(t, w, http.StatusOK)
w, r = prepareTestPayloadRequest(hc, bktName, "", strings.NewReader(body))
ctx = middleware.SetBoxData(r.Context(), box)
ctx = middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
hc.Handler().PutBucketCorsHandler(w, r)
assertStatus(t, w, http.StatusOK)
@ -39,3 +40,181 @@ func TestCORSOriginWildcard(t *testing.T) {
hc.Handler().GetBucketCorsHandler(w, r)
assertStatus(t, w, http.StatusOK)
}
func TestPreflight(t *testing.T) {
body := `
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedMethod>GET</AllowedMethod>
<AllowedOrigin>http://www.example.com</AllowedOrigin>
<AllowedHeader>Authorization</AllowedHeader>
<ExposeHeader>x-amz-*</ExposeHeader>
<ExposeHeader>X-Amz-*</ExposeHeader>
<MaxAgeSeconds>600</MaxAgeSeconds>
</CORSRule>
</CORSConfiguration>
`
hc := prepareHandlerContext(t)
bktName := "bucket-preflight-test"
box, _ := createAccessBox(t)
w, r := prepareTestRequest(hc, bktName, "", nil)
ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
hc.Handler().CreateBucketHandler(w, r)
assertStatus(t, w, http.StatusOK)
w, r = prepareTestPayloadRequest(hc, bktName, "", strings.NewReader(body))
ctx = middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
hc.Handler().PutBucketCorsHandler(w, r)
assertStatus(t, w, http.StatusOK)
for _, tc := range []struct {
name string
origin string
method string
headers string
expectedStatus int
}{
{
name: "Valid",
origin: "http://www.example.com",
method: "GET",
headers: "Authorization",
expectedStatus: http.StatusOK,
},
{
name: "Empty origin",
method: "GET",
headers: "Authorization",
expectedStatus: http.StatusBadRequest,
},
{
name: "Empty request method",
origin: "http://www.example.com",
headers: "Authorization",
expectedStatus: http.StatusBadRequest,
},
{
name: "Not allowed method",
origin: "http://www.example.com",
method: "PUT",
headers: "Authorization",
expectedStatus: http.StatusForbidden,
},
{
name: "Not allowed headers",
origin: "http://www.example.com",
method: "GET",
headers: "Authorization, Last-Modified",
expectedStatus: http.StatusForbidden,
},
} {
t.Run(tc.name, func(t *testing.T) {
w, r = prepareTestPayloadRequest(hc, bktName, "", nil)
r.Header.Set(api.Origin, tc.origin)
r.Header.Set(api.AccessControlRequestMethod, tc.method)
r.Header.Set(api.AccessControlRequestHeaders, tc.headers)
hc.Handler().Preflight(w, r)
assertStatus(t, w, tc.expectedStatus)
if tc.expectedStatus == http.StatusOK {
require.Equal(t, tc.origin, w.Header().Get(api.AccessControlAllowOrigin))
require.Equal(t, tc.method, w.Header().Get(api.AccessControlAllowMethods))
require.Equal(t, tc.headers, w.Header().Get(api.AccessControlAllowHeaders))
require.Equal(t, "x-amz-*, X-Amz-*", w.Header().Get(api.AccessControlExposeHeaders))
require.Equal(t, "true", w.Header().Get(api.AccessControlAllowCredentials))
require.Equal(t, "600", w.Header().Get(api.AccessControlMaxAge))
}
})
}
}
func TestPreflightWildcardOrigin(t *testing.T) {
body := `
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedOrigin>*</AllowedOrigin>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
`
hc := prepareHandlerContext(t)
bktName := "bucket-preflight-wildcard-test"
box, _ := createAccessBox(t)
w, r := prepareTestRequest(hc, bktName, "", nil)
ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
hc.Handler().CreateBucketHandler(w, r)
assertStatus(t, w, http.StatusOK)
w, r = prepareTestPayloadRequest(hc, bktName, "", strings.NewReader(body))
ctx = middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
r = r.WithContext(ctx)
hc.Handler().PutBucketCorsHandler(w, r)
assertStatus(t, w, http.StatusOK)
for _, tc := range []struct {
name string
origin string
method string
headers string
expectedStatus int
}{
{
name: "Valid get",
origin: "http://www.example.com",
method: "GET",
headers: "Authorization, Last-Modified",
expectedStatus: http.StatusOK,
},
{
name: "Valid put",
origin: "http://example.com",
method: "PUT",
headers: "Authorization, Content-Type",
expectedStatus: http.StatusOK,
},
{
name: "Empty origin",
method: "GET",
headers: "Authorization, Last-Modified",
expectedStatus: http.StatusBadRequest,
},
{
name: "Empty request method",
origin: "http://www.example.com",
headers: "Authorization, Last-Modified",
expectedStatus: http.StatusBadRequest,
},
{
name: "Not allowed method",
origin: "http://www.example.com",
method: "DELETE",
headers: "Authorization, Last-Modified",
expectedStatus: http.StatusForbidden,
},
} {
t.Run(tc.name, func(t *testing.T) {
w, r = prepareTestPayloadRequest(hc, bktName, "", nil)
r.Header.Set(api.Origin, tc.origin)
r.Header.Set(api.AccessControlRequestMethod, tc.method)
r.Header.Set(api.AccessControlRequestHeaders, tc.headers)
hc.Handler().Preflight(w, r)
assertStatus(t, w, tc.expectedStatus)
if tc.expectedStatus == http.StatusOK {
require.Equal(t, tc.origin, w.Header().Get(api.AccessControlAllowOrigin))
require.Equal(t, tc.method, w.Header().Get(api.AccessControlAllowMethods))
require.Equal(t, tc.headers, w.Header().Get(api.AccessControlAllowHeaders))
require.Empty(t, w.Header().Get(api.AccessControlExposeHeaders))
require.Empty(t, w.Header().Get(api.AccessControlAllowCredentials))
require.Equal(t, "0", w.Header().Get(api.AccessControlMaxAge))
}
})
}
}


@ -2,20 +2,18 @@ package handler
import (
"encoding/xml"
"fmt"
"net/http"
"strconv"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"go.uber.org/zap"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
)
// limitation of AWS https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
@ -99,41 +97,6 @@ func (h *handler) DeleteObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
var m *SendNotificationParams
if bktSettings.VersioningEnabled() && len(versionID) == 0 {
m = &SendNotificationParams{
Event: EventObjectRemovedDeleteMarkerCreated,
NotificationInfo: &data.NotificationInfo{
Name: reqInfo.ObjectName,
HashSum: deletedObject.DeleteMarkerEtag,
},
BktInfo: bktInfo,
ReqInfo: reqInfo,
}
} else {
var objID oid.ID
if len(versionID) != 0 {
if err = objID.DecodeString(versionID); err != nil {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
}
m = &SendNotificationParams{
Event: EventObjectRemovedDelete,
NotificationInfo: &data.NotificationInfo{
Name: reqInfo.ObjectName,
Version: objID.EncodeToString(),
},
BktInfo: bktInfo,
ReqInfo: reqInfo,
}
}
if err = h.sendNotifications(ctx, m); err != nil {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
if deletedObject.VersionID != "" {
w.Header().Set(api.AmzVersionID, deletedObject.VersionID)
}
@ -178,7 +141,7 @@ func (h *handler) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Re
// Unmarshal list of keys to be deleted.
requested := &DeleteObjectsRequest{}
if err := h.cfg.NewXMLDecoder(r.Body).Decode(requested); err != nil {
h.logAndSendError(w, "couldn't decode body", reqInfo, errors.GetAPIError(errors.ErrMalformedXML))
h.logAndSendError(w, "couldn't decode body", reqInfo, fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error()))
return
}
@ -187,15 +150,18 @@ func (h *handler) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Re
return
}
removed := make(map[string]*layer.VersionedObject)
unique := make(map[string]struct{})
toRemove := make([]*layer.VersionedObject, 0, len(requested.Objects))
for _, obj := range requested.Objects {
versionedObj := &layer.VersionedObject{
Name: obj.ObjectName,
VersionID: obj.VersionID,
}
toRemove = append(toRemove, versionedObj)
removed[versionedObj.String()] = versionedObj
key := versionedObj.String()
if _, ok := unique[key]; !ok {
toRemove = append(toRemove, versionedObj)
unique[key] = struct{}{}
}
}
response := &DeleteObjectsResponse{
@ -276,6 +242,19 @@ func (h *handler) DeleteBucketHandler(w http.ResponseWriter, r *http.Request) {
SessionToken: sessionToken,
}); err != nil {
h.logAndSendError(w, "couldn't delete bucket", reqInfo, err)
return
}
chainIDs := []chain.ID{
getBucketChainID(chain.S3, bktInfo),
getBucketChainID(chain.Ingress, bktInfo),
getBucketCannedChainID(chain.S3, bktInfo.CID),
getBucketCannedChainID(chain.Ingress, bktInfo.CID),
}
if err = h.ape.DeleteBucketPolicy(reqInfo.Namespace, bktInfo.CID, chainIDs); err != nil {
h.logAndSendError(w, "failed to delete policy from storage", reqInfo, err)
return
}
w.WriteHeader(http.StatusNoContent)
}


@ -85,6 +85,19 @@ func TestDeleteBucketOnNotFoundError(t *testing.T) {
deleteBucket(t, hc, bktName, http.StatusNoContent)
}
func TestDeleteMultipleObjectCheckUniqueness(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "bucket", "object"
createTestBucket(hc, bktName)
putObject(hc, bktName, objName)
resp := deleteObjects(t, hc, bktName, [][2]string{{objName, emptyVersion}, {objName, emptyVersion}})
require.Empty(t, resp.Errors)
require.Len(t, resp.DeletedObjects, 1)
}
func TestDeleteObjectsError(t *testing.T) {
hc := prepareHandlerContext(t)
@ -458,6 +471,16 @@ func putBucketVersioning(t *testing.T, tc *handlerContext, bktName string, enabl
assertStatus(t, w, http.StatusOK)
}
func getBucketVersioning(hc *handlerContext, bktName string) *VersioningConfiguration {
w, r := prepareTestRequest(hc, bktName, "", nil)
hc.Handler().GetBucketVersioningHandler(w, r)
assertStatus(hc.t, w, http.StatusOK)
res := &VersioningConfiguration{}
parseTestResponse(hc.t, w, res)
return res
}
func deleteObject(t *testing.T, tc *handlerContext, bktName, objName, version string) (string, bool) {
query := make(url.Values)
query.Add(api.QueryVersionID, version)


@ -14,6 +14,7 @@ import (
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"github.com/stretchr/testify/require"
)
@ -234,24 +235,33 @@ func multipartUpload(hc *handlerContext, bktName, objName string, headers map[st
}
func createMultipartUploadEncrypted(hc *handlerContext, bktName, objName string, headers map[string]string) *InitiateMultipartUploadResponse {
return createMultipartUploadBase(hc, bktName, objName, true, headers)
return createMultipartUploadOkBase(hc, bktName, objName, true, headers)
}
func createMultipartUpload(hc *handlerContext, bktName, objName string, headers map[string]string) *InitiateMultipartUploadResponse {
return createMultipartUploadBase(hc, bktName, objName, false, headers)
return createMultipartUploadOkBase(hc, bktName, objName, false, headers)
}
func createMultipartUploadBase(hc *handlerContext, bktName, objName string, encrypted bool, headers map[string]string) *InitiateMultipartUploadResponse {
func createMultipartUploadOkBase(hc *handlerContext, bktName, objName string, encrypted bool, headers map[string]string) *InitiateMultipartUploadResponse {
w := createMultipartUploadBase(hc, bktName, objName, encrypted, headers)
multipartInitInfo := &InitiateMultipartUploadResponse{}
readResponse(hc.t, w, http.StatusOK, multipartInitInfo)
return multipartInitInfo
}
func createMultipartUploadAssertS3Error(hc *handlerContext, bktName, objName string, headers map[string]string, code errors.ErrorCode) {
w := createMultipartUploadBase(hc, bktName, objName, false, headers)
assertS3Error(hc.t, w, errors.GetAPIError(code))
}
func createMultipartUploadBase(hc *handlerContext, bktName, objName string, encrypted bool, headers map[string]string) *httptest.ResponseRecorder {
w, r := prepareTestRequest(hc, bktName, objName, nil)
if encrypted {
setEncryptHeaders(r)
}
setHeaders(r, headers)
hc.Handler().CreateMultipartUploadHandler(w, r)
multipartInitInfo := &InitiateMultipartUploadResponse{}
readResponse(hc.t, w, http.StatusOK, multipartInitInfo)
return multipartInitInfo
return w
}
func completeMultipartUpload(hc *handlerContext, bktName, objName, uploadID string, partsETags []string) {


@ -184,7 +184,7 @@ func (h *handler) GetObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
t := &layer.ObjectVersion{
t := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: info.Name,
VersionID: info.VersionID(),


@ -4,8 +4,10 @@ import (
"bytes"
"context"
"crypto/rand"
"encoding/hex"
"encoding/xml"
"errors"
"fmt"
"io"
"net/http"
"net/http/httptest"
@ -56,7 +58,7 @@ func (hc *handlerContext) MockedPool() *layer.TestFrostFS {
return hc.tp
}
func (hc *handlerContext) Layer() layer.Client {
func (hc *handlerContext) Layer() *layer.Layer {
return hc.h.obj
}
@ -70,7 +72,6 @@ type configMock struct {
defaultCopiesNumbers []uint32
bypassContentEncodingInChunks bool
md5Enabled bool
aclEnabled bool
}
func (c *configMock) DefaultPlacementPolicy(_ string) netmap.PlacementPolicy {
@ -102,10 +103,6 @@ func (c *configMock) DefaultMaxAge() int {
return 0
}
func (c *configMock) NotificatorEnabled() bool {
return false
}
func (c *configMock) ResolveZoneList() []string {
return []string{}
}
@ -122,14 +119,22 @@ func (c *configMock) MD5Enabled() bool {
return c.md5Enabled
}
func (c *configMock) ACLEnabled() bool {
return c.aclEnabled
}
func (c *configMock) ResolveNamespaceAlias(ns string) string {
return ns
}
func (c *configMock) RetryMaxAttempts() int {
return 1
}
func (c *configMock) RetryMaxBackoff() time.Duration {
return 0
}
func (c *configMock) RetryStrategy() RetryStrategy {
return RetryStrategyConstant
}
func prepareHandlerContext(t *testing.T) *handlerContext {
return prepareHandlerContextBase(t, layer.DefaultCachesConfigs(zap.NewExample()))
}
@ -177,10 +182,11 @@ func prepareHandlerContextBase(t *testing.T, cacheCfg *layer.CachesConfig) *hand
defaultPolicy: pp,
}
h := &handler{
log: l,
obj: layer.NewLayer(l, tp, layerCfg),
cfg: cfg,
ape: newAPEMock(),
log: l,
obj: layer.NewLayer(l, tp, layerCfg),
cfg: cfg,
ape: newAPEMock(),
frostfsid: newFrostfsIDMock(),
}
return &handlerContext{
@ -189,7 +195,7 @@ func prepareHandlerContextBase(t *testing.T, cacheCfg *layer.CachesConfig) *hand
h: h,
tp: tp,
tree: treeMock,
context: middleware.SetBoxData(context.Background(), newTestAccessBox(t, key)),
context: middleware.SetBox(context.Background(), &middleware.Box{AccessBox: newTestAccessBox(t, key)}),
config: cfg,
layerFeatures: features,
@ -267,7 +273,7 @@ func (a *apeMock) PutBucketPolicy(ns string, cnrID cid.ID, policy []byte, chain
}
for i := range chain {
if err := a.AddChain(engine.NamespaceTarget(ns), chain[i]); err != nil {
if err := a.AddChain(engine.ContainerTarget(cnrID.EncodeToString()), chain[i]); err != nil {
return err
}
}
@ -275,11 +281,17 @@ func (a *apeMock) PutBucketPolicy(ns string, cnrID cid.ID, policy []byte, chain
return nil
}
func (a *apeMock) DeleteBucketPolicy(ns string, cnrID cid.ID, chainID chain.ID) error {
func (a *apeMock) DeleteBucketPolicy(ns string, cnrID cid.ID, chainIDs []chain.ID) error {
if err := a.DeletePolicy(ns, cnrID); err != nil {
return err
}
return a.RemoveChain(engine.NamespaceTarget(ns), chainID)
for i := range chainIDs {
if err := a.RemoveChain(engine.ContainerTarget(cnrID.EncodeToString()), chainIDs[i]); err != nil {
return err
}
}
return nil
}
func (a *apeMock) GetBucketPolicy(ns string, cnrID cid.ID) ([]byte, error) {
@ -291,9 +303,9 @@ func (a *apeMock) GetBucketPolicy(ns string, cnrID cid.ID) ([]byte, error) {
return policy, nil
}
func (a *apeMock) SaveACLChains(ns string, chains []*chain.Chain) error {
func (a *apeMock) SaveACLChains(cid string, chains []*chain.Chain) error {
for i := range chains {
if err := a.AddChain(engine.NamespaceTarget(ns), chains[i]); err != nil {
if err := a.AddChain(engine.ContainerTarget(cid), chains[i]); err != nil {
return err
}
}
@ -301,6 +313,32 @@ func (a *apeMock) SaveACLChains(ns string, chains []*chain.Chain) error {
return nil
}
type frostfsidMock struct {
data map[string]*keys.PublicKey
}
func newFrostfsIDMock() *frostfsidMock {
return &frostfsidMock{data: map[string]*keys.PublicKey{}}
}
func (f *frostfsidMock) GetUserAddress(account, user string) (string, error) {
res, ok := f.data[account+user]
if !ok {
return "", fmt.Errorf("not found")
}
return res.Address(), nil
}
func (f *frostfsidMock) GetUserKey(account, user string) (string, error) {
res, ok := f.data[account+user]
if !ok {
return "", fmt.Errorf("not found")
}
return hex.EncodeToString(res.Bytes()), nil
}
func createTestBucket(hc *handlerContext, bktName string) *data.BucketInfo {
info := createBucket(hc, bktName)
return info.BktInfo
@ -380,7 +418,7 @@ func prepareTestRequestWithQuery(hc *handlerContext, bktName, objName string, qu
r := httptest.NewRequest(http.MethodPut, defaultURL, bytes.NewReader(body))
r.URL.RawQuery = query.Encode()
reqInfo := middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: bktName, Object: objName})
reqInfo := middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: bktName, Object: objName}, "")
r = r.WithContext(middleware.SetReqInfo(hc.Context(), reqInfo))
return w, r
@ -390,7 +428,7 @@ func prepareTestPayloadRequest(hc *handlerContext, bktName, objName string, payl
w := httptest.NewRecorder()
r := httptest.NewRequest(http.MethodPut, defaultURL, payload)
reqInfo := middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: bktName, Object: objName})
reqInfo := middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: bktName, Object: objName}, "")
r = r.WithContext(middleware.SetReqInfo(hc.Context(), reqInfo))
return w, r


@ -70,7 +70,7 @@ func (h *handler) HeadObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
t := &layer.ObjectVersion{
t := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: info.Name,
VersionID: info.VersionID(),
@ -140,7 +140,10 @@ func (h *handler) HeadBucketHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContainerZone, bktInfo.Zone)
}
middleware.WriteResponse(w, http.StatusOK, nil, middleware.MimeNone)
if err = middleware.WriteResponse(w, http.StatusOK, nil, middleware.MimeNone); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
}
func (h *handler) setLockingHeaders(bktInfo *data.BucketInfo, lockInfo data.LockInfo, header http.Header) error {


@ -7,12 +7,9 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
s3errors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/stretchr/testify/require"
)
@ -84,31 +81,6 @@ func headObject(t *testing.T, tc *handlerContext, bktName, objName string, heade
assertStatus(t, w, status)
}
func TestInvalidAccessThroughCache(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "bucket-for-cache", "obj-for-cache"
bktInfo, _ := createBucketAndObject(hc, bktName, objName)
setContainerEACL(hc, bktInfo.CID)
headObject(t, hc, bktName, objName, nil, http.StatusOK)
w, r := prepareTestRequest(hc, bktName, objName, nil)
hc.Handler().HeadObjectHandler(w, r.WithContext(middleware.SetBoxData(r.Context(), newTestAccessBox(t, nil))))
assertStatus(t, w, http.StatusForbidden)
}
func setContainerEACL(hc *handlerContext, cnrID cid.ID) {
table := eacl.NewTable()
table.SetCID(cnrID)
for _, op := range fullOps {
table.AddRecord(getOthersRecord(op, eacl.ActionDeny))
}
err := hc.MockedPool().SetContainerEACL(hc.Context(), *table, nil)
require.NoError(hc.t, err)
}
func TestHeadObject(t *testing.T) {
hc := prepareHandlerContextWithMinCache(t)
bktName, objName := "bucket", "obj"
@ -155,7 +127,7 @@ func newTestAccessBox(t *testing.T, key *keys.PrivateKey) *accessbox.Box {
}
var btoken bearer.Token
btoken.SetEACLTable(*eacl.NewTable())
btoken.SetImpersonate(true)
err = btoken.Sign(key.PrivateKey)
require.NoError(t, err)


@ -133,7 +133,7 @@ func (h *handler) PutObjectLegalHoldHandler(w http.ResponseWriter, r *http.Reque
}
p := &layer.PutLockInfoParams{
ObjVersion: &layer.ObjectVersion{
ObjVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@ -172,7 +172,7 @@ func (h *handler) GetObjectLegalHoldHandler(w http.ResponseWriter, r *http.Reque
return
}
p := &layer.ObjectVersion{
p := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@ -221,7 +221,7 @@ func (h *handler) PutObjectRetentionHandler(w http.ResponseWriter, r *http.Reque
}
p := &layer.PutLockInfoParams{
ObjVersion: &layer.ObjectVersion{
ObjVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@ -256,7 +256,7 @@ func (h *handler) GetObjectRetentionHandler(w http.ResponseWriter, r *http.Reque
return
}
p := &layer.ObjectVersion{
p := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),


@ -315,7 +315,7 @@ func TestPutBucketLockConfigurationHandler(t *testing.T) {
w := httptest.NewRecorder()
r := httptest.NewRequest(http.MethodPut, defaultURL, bytes.NewReader(body))
r = r.WithContext(middleware.SetReqInfo(r.Context(), middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: tc.bucket})))
r = r.WithContext(middleware.SetReqInfo(r.Context(), middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: tc.bucket}, "")))
hc.Handler().PutBucketObjectLockConfigHandler(w, r)
@ -388,7 +388,7 @@ func TestGetBucketLockConfigurationHandler(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
w := httptest.NewRecorder()
r := httptest.NewRequest(http.MethodPut, defaultURL, bytes.NewReader(nil))
r = r.WithContext(middleware.SetReqInfo(r.Context(), middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: tc.bucket})))
r = r.WithContext(middleware.SetReqInfo(r.Context(), middleware.NewReqInfo(w, r, middleware.ObjectRequest{Bucket: tc.bucket}, "")))
hc.Handler().GetBucketObjectLockConfigHandler(w, r)


@ -13,7 +13,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"github.com/google/uuid"
"go.uber.org/zap"
)
@ -103,6 +102,9 @@ const (
func (h *handler) CreateMultipartUploadHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context())
uploadID := uuid.New()
cannedACLStatus := aclHeadersStatus(r)
additional := []zap.Field{zap.String("uploadID", uploadID.String())}
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
@ -110,8 +112,10 @@ func (h *handler) CreateMultipartUploadHandler(w http.ResponseWriter, r *http.Re
return
}
uploadID := uuid.New()
additional := []zap.Field{zap.String("uploadID", uploadID.String())}
if cannedACLStatus == aclStatusYes {
h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
return
}
p := &layer.CreateMultipartParams{
Info: &layer.UploadInfoParams{
@ -122,19 +126,6 @@ func (h *handler) CreateMultipartUploadHandler(w http.ResponseWriter, r *http.Re
Data: &layer.UploadData{},
}
if containsACLHeaders(r) {
key, err := h.bearerTokenIssuerKey(r.Context())
if err != nil {
h.logAndSendError(w, "couldn't get gate key", reqInfo, err, additional...)
return
}
if _, err = parseACLHeaders(r.Header, key); err != nil {
h.logAndSendError(w, "could not parse acl", reqInfo, err, additional...)
return
}
p.Data.ACLHeaders = formACLHeadersForMultipart(r.Header)
}
if len(r.Header.Get(api.AmzTagging)) > 0 {
p.Data.TagSet, err = parseTaggingHeader(r.Header)
if err != nil {
@ -184,25 +175,6 @@ func (h *handler) CreateMultipartUploadHandler(w http.ResponseWriter, r *http.Re
}
}
func formACLHeadersForMultipart(header http.Header) map[string]string {
result := make(map[string]string)
if value := header.Get(api.AmzACL); value != "" {
result[api.AmzACL] = value
}
if value := header.Get(api.AmzGrantRead); value != "" {
result[api.AmzGrantRead] = value
}
if value := header.Get(api.AmzGrantFullControl); value != "" {
result[api.AmzGrantFullControl] = value
}
if value := header.Get(api.AmzGrantWrite); value != "" {
result[api.AmzGrantWrite] = value
}
return result
}
func (h *handler) UploadPartHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context())
@ -266,7 +238,10 @@ func (h *handler) UploadPartHandler(w http.ResponseWriter, r *http.Request) {
}
w.Header().Set(api.ETag, data.Quote(hash))
middleware.WriteSuccessResponseHeadersOnly(w)
if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
}
func (h *handler) UploadPartCopy(w http.ResponseWriter, r *http.Request) {
@ -425,7 +400,7 @@ func (h *handler) CompleteMultipartUploadHandler(w http.ResponseWriter, r *http.
reqBody := new(CompleteMultipartUpload)
if err = h.cfg.NewXMLDecoder(r.Body).Decode(reqBody); err != nil {
h.logAndSendError(w, "could not read complete multipart upload xml", reqInfo,
errors.GetAPIError(errors.ErrMalformedXML), additional...)
fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error()), additional...)
return
}
if len(reqBody.Parts) == 0 {
@ -440,7 +415,7 @@ func (h *handler) CompleteMultipartUploadHandler(w http.ResponseWriter, r *http.
// Start complete multipart upload which may take some time to fetch object
// and re-upload it part by part.
objInfo, err := h.completeMultipartUpload(r, c, bktInfo, reqInfo)
objInfo, err := h.completeMultipartUpload(r, c, bktInfo)
if err != nil {
h.logAndSendError(w, "complete multipart error", reqInfo, err, additional...)
@ -462,7 +437,7 @@ func (h *handler) CompleteMultipartUploadHandler(w http.ResponseWriter, r *http.
}
}
func (h *handler) completeMultipartUpload(r *http.Request, c *layer.CompleteMultipartParams, bktInfo *data.BucketInfo, reqInfo *middleware.ReqInfo) (*data.ObjectInfo, error) {
func (h *handler) completeMultipartUpload(r *http.Request, c *layer.CompleteMultipartParams, bktInfo *data.BucketInfo) (*data.ObjectInfo, error) {
ctx := r.Context()
uploadData, extendedObjInfo, err := h.obj.CompleteMultipartUpload(ctx, c)
if err != nil {
@ -471,8 +446,8 @@ func (h *handler) completeMultipartUpload(r *http.Request, c *layer.CompleteMult
objInfo := extendedObjInfo.ObjectInfo
if len(uploadData.TagSet) != 0 {
tagPrm := &layer.PutObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{
tagPrm := &data.PutObjectTaggingParams{
ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: objInfo.Name,
VersionID: objInfo.VersionID(),
@ -480,48 +455,11 @@ func (h *handler) completeMultipartUpload(r *http.Request, c *layer.CompleteMult
TagSet: uploadData.TagSet,
NodeVersion: extendedObjInfo.NodeVersion,
}
if _, err = h.obj.PutObjectTagging(ctx, tagPrm); err != nil {
if err = h.obj.PutObjectTagging(ctx, tagPrm); err != nil {
return nil, fmt.Errorf("could not put tagging file of completed multipart upload: %w", err)
}
}
if len(uploadData.ACLHeaders) != 0 {
sessionTokenSetEACL, err := getSessionTokenSetEACL(ctx)
if err != nil {
return nil, fmt.Errorf("couldn't get eacl token: %w", err)
}
key, err := h.bearerTokenIssuerKey(ctx)
if err != nil {
return nil, fmt.Errorf("couldn't get gate key: %w", err)
}
acl, err := parseACLHeaders(r.Header, key)
if err != nil {
return nil, fmt.Errorf("could not parse acl: %w", err)
}
resInfo := &resourceInfo{
Bucket: objInfo.Bucket,
Object: objInfo.Name,
}
astObject, err := aclToAst(acl, resInfo)
if err != nil {
return nil, fmt.Errorf("could not translate acl of completed multipart upload to ast: %w", err)
}
if _, err = h.updateBucketACL(r, astObject, bktInfo, sessionTokenSetEACL); err != nil {
return nil, fmt.Errorf("could not update bucket acl while completing multipart upload: %w", err)
}
}
s := &SendNotificationParams{
Event: EventObjectCreatedCompleteMultipartUpload,
NotificationInfo: data.NotificationInfoFromObject(objInfo, h.cfg.MD5Enabled()),
BktInfo: bktInfo,
ReqInfo: reqInfo,
}
if err = h.sendNotifications(ctx, s); err != nil {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
return objInfo, nil
}


@ -38,6 +38,49 @@ func TestMultipartUploadInvalidPart(t *testing.T) {
assertS3Error(hc.t, w, s3Errors.GetAPIError(s3Errors.ErrEntityTooSmall))
}
func TestDeleteMultipartAllParts(t *testing.T) {
hc := prepareHandlerContext(t)
partSize := layer.UploadMinSize
objLen := 6 * partSize
bktName, bktName2, objName := "bucket", "bucket2", "object"
// unversioned bucket
createTestBucket(hc, bktName)
multipartUpload(hc, bktName, objName, nil, objLen, partSize)
deleteObject(t, hc, bktName, objName, emptyVersion)
require.Empty(t, hc.tp.Objects())
// encrypted multipart
multipartUploadEncrypted(hc, bktName, objName, nil, objLen, partSize)
deleteObject(t, hc, bktName, objName, emptyVersion)
require.Empty(t, hc.tp.Objects())
// versioned bucket
createTestBucket(hc, bktName2)
putBucketVersioning(t, hc, bktName2, true)
multipartUpload(hc, bktName2, objName, nil, objLen, partSize)
_, hdr := getObject(hc, bktName2, objName)
versionID := hdr.Get("X-Amz-Version-Id")
deleteObject(t, hc, bktName2, objName, emptyVersion)
deleteObject(t, hc, bktName2, objName, versionID)
require.Empty(t, hc.tp.Objects())
}
func TestSpecialMultipartName(t *testing.T) {
hc := prepareHandlerContextWithMinCache(t)
bktName, objName := "bucket", "bucket-settings"
createTestBucket(hc, bktName)
putBucketVersioning(t, hc, bktName, true)
createMultipartUpload(hc, bktName, objName, nil)
res := getBucketVersioning(hc, bktName)
require.Equal(t, enabledValue, res.Status)
}
func TestMultipartReUploadPart(t *testing.T) {
hc := prepareHandlerContext(t)
@ -271,9 +314,9 @@ func TestMultipartUploadWithContentLanguage(t *testing.T) {
createTestBucket(hc, bktName)
partSize := 5 * 1024 * 1024
exceptedContentLanguage := "en"
expectedContentLanguage := "en"
headers := map[string]string{
api.ContentLanguage: exceptedContentLanguage,
api.ContentLanguage: expectedContentLanguage,
}
multipartUpload := createMultipartUpload(hc, bktName, objName, headers)
@ -284,7 +327,7 @@ func TestMultipartUploadWithContentLanguage(t *testing.T) {
w, r := prepareTestRequest(hc, bktName, objName, nil)
hc.Handler().HeadObjectHandler(w, r)
require.Equal(t, exceptedContentLanguage, w.Header().Get(api.ContentLanguage))
require.Equal(t, expectedContentLanguage, w.Header().Get(api.ContentLanguage))
}
func TestMultipartUploadEnabledMD5(t *testing.T) {


@ -1,274 +0,0 @@
package handler
import (
"context"
"fmt"
"net/http"
"strings"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"github.com/google/uuid"
)
type (
SendNotificationParams struct {
Event string
NotificationInfo *data.NotificationInfo
BktInfo *data.BucketInfo
ReqInfo *middleware.ReqInfo
User string
Time time.Time
}
)
const (
filterRuleSuffixName = "suffix"
filterRulePrefixName = "prefix"
EventObjectCreated = "s3:ObjectCreated:*"
EventObjectCreatedPut = "s3:ObjectCreated:Put"
EventObjectCreatedPost = "s3:ObjectCreated:Post"
EventObjectCreatedCopy = "s3:ObjectCreated:Copy"
EventReducedRedundancyLostObject = "s3:ReducedRedundancyLostObject"
EventObjectCreatedCompleteMultipartUpload = "s3:ObjectCreated:CompleteMultipartUpload"
EventObjectRemoved = "s3:ObjectRemoved:*"
EventObjectRemovedDelete = "s3:ObjectRemoved:Delete"
EventObjectRemovedDeleteMarkerCreated = "s3:ObjectRemoved:DeleteMarkerCreated"
EventObjectRestore = "s3:ObjectRestore:*"
EventObjectRestorePost = "s3:ObjectRestore:Post"
EventObjectRestoreCompleted = "s3:ObjectRestore:Completed"
EventReplication = "s3:Replication:*"
EventReplicationOperationFailedReplication = "s3:Replication:OperationFailedReplication"
EventReplicationOperationNotTracked = "s3:Replication:OperationNotTracked"
EventReplicationOperationMissedThreshold = "s3:Replication:OperationMissedThreshold"
EventReplicationOperationReplicatedAfterThreshold = "s3:Replication:OperationReplicatedAfterThreshold"
EventObjectRestoreDelete = "s3:ObjectRestore:Delete"
EventLifecycleTransition = "s3:LifecycleTransition"
EventIntelligentTiering = "s3:IntelligentTiering"
EventObjectACLPut = "s3:ObjectAcl:Put"
EventLifecycleExpiration = "s3:LifecycleExpiration:*"
EventLifecycleExpirationDelete = "s3:LifecycleExpiration:Delete"
EventLifecycleExpirationDeleteMarkerCreated = "s3:LifecycleExpiration:DeleteMarkerCreated"
EventObjectTagging = "s3:ObjectTagging:*"
EventObjectTaggingPut = "s3:ObjectTagging:Put"
EventObjectTaggingDelete = "s3:ObjectTagging:Delete"
)
var validEvents = map[string]struct{}{
EventReducedRedundancyLostObject: {},
EventObjectCreated: {},
EventObjectCreatedPut: {},
EventObjectCreatedPost: {},
EventObjectCreatedCopy: {},
EventObjectCreatedCompleteMultipartUpload: {},
EventObjectRemoved: {},
EventObjectRemovedDelete: {},
EventObjectRemovedDeleteMarkerCreated: {},
EventObjectRestore: {},
EventObjectRestorePost: {},
EventObjectRestoreCompleted: {},
EventReplication: {},
EventReplicationOperationFailedReplication: {},
EventReplicationOperationNotTracked: {},
EventReplicationOperationMissedThreshold: {},
EventReplicationOperationReplicatedAfterThreshold: {},
EventObjectRestoreDelete: {},
EventLifecycleTransition: {},
EventIntelligentTiering: {},
EventObjectACLPut: {},
EventLifecycleExpiration: {},
EventLifecycleExpirationDelete: {},
EventLifecycleExpirationDeleteMarkerCreated: {},
EventObjectTagging: {},
EventObjectTaggingPut: {},
EventObjectTaggingDelete: {},
}
func (h *handler) PutBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context())
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
h.logAndSendError(w, "could not get bucket info", reqInfo, err)
return
}
conf := &data.NotificationConfiguration{}
if err = h.cfg.NewXMLDecoder(r.Body).Decode(conf); err != nil {
h.logAndSendError(w, "couldn't decode notification configuration", reqInfo, errors.GetAPIError(errors.ErrMalformedXML))
return
}
if _, err = h.checkBucketConfiguration(r.Context(), conf, reqInfo); err != nil {
h.logAndSendError(w, "couldn't check bucket configuration", reqInfo, err)
return
}
p := &layer.PutBucketNotificationConfigurationParams{
RequestInfo: reqInfo,
BktInfo: bktInfo,
Configuration: conf,
}
p.CopiesNumbers, err = h.pickCopiesNumbers(parseMetadata(r), reqInfo.Namespace, bktInfo.LocationConstraint)
if err != nil {
h.logAndSendError(w, "invalid copies number", reqInfo, err)
return
}
if err = h.obj.PutBucketNotificationConfiguration(r.Context(), p); err != nil {
h.logAndSendError(w, "couldn't put bucket configuration", reqInfo, err)
return
}
}
func (h *handler) GetBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context())
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
h.logAndSendError(w, "could not get bucket info", reqInfo, err)
return
}
conf, err := h.obj.GetBucketNotificationConfiguration(r.Context(), bktInfo)
if err != nil {
h.logAndSendError(w, "could not get bucket notification configuration", reqInfo, err)
return
}
if err = middleware.EncodeToResponse(w, conf); err != nil {
h.logAndSendError(w, "could not encode bucket notification configuration to response", reqInfo, err)
return
}
}
func (h *handler) sendNotifications(ctx context.Context, p *SendNotificationParams) error {
if !h.cfg.NotificatorEnabled() {
return nil
}
conf, err := h.obj.GetBucketNotificationConfiguration(ctx, p.BktInfo)
if err != nil {
return fmt.Errorf("failed to get notification configuration: %w", err)
}
if conf.IsEmpty() {
return nil
}
box, err := middleware.GetBoxData(ctx)
if err == nil && box.Gate.BearerToken != nil {
p.User = bearer.ResolveIssuer(*box.Gate.BearerToken).EncodeToString()
}
p.Time = layer.TimeNow(ctx)
topics := filterSubjects(conf, p.Event, p.NotificationInfo.Name)
return h.notificator.SendNotifications(topics, p)
}
// checkBucketConfiguration checks notification configuration and generates an ID for configurations with empty ids.
func (h *handler) checkBucketConfiguration(ctx context.Context, conf *data.NotificationConfiguration, r *middleware.ReqInfo) (completed bool, err error) {
if conf == nil {
return
}
if conf.TopicConfigurations != nil || conf.LambdaFunctionConfigurations != nil {
return completed, errors.GetAPIError(errors.ErrNotificationTopicNotSupported)
}
for i, q := range conf.QueueConfigurations {
if err = checkEvents(q.Events); err != nil {
return
}
if err = checkRules(q.Filter.Key.FilterRules); err != nil {
return
}
if h.cfg.NotificatorEnabled() {
if err = h.notificator.SendTestNotification(q.QueueArn, r.BucketName, r.RequestID, r.Host, layer.TimeNow(ctx)); err != nil {
return
}
} else {
h.reqLogger(ctx).Warn(logs.FailedToSendTestEventBecauseNotificationsIsDisabled)
}
if q.ID == "" {
completed = true
conf.QueueConfigurations[i].ID = uuid.NewString()
}
}
return
}
func checkRules(rules []data.FilterRule) error {
names := make(map[string]struct{})
for _, r := range rules {
if r.Name != filterRuleSuffixName && r.Name != filterRulePrefixName {
return errors.GetAPIError(errors.ErrFilterNameInvalid)
}
if _, ok := names[r.Name]; ok {
if r.Name == filterRuleSuffixName {
return errors.GetAPIError(errors.ErrFilterNameSuffix)
}
return errors.GetAPIError(errors.ErrFilterNamePrefix)
}
names[r.Name] = struct{}{}
}
return nil
}
func checkEvents(events []string) error {
for _, e := range events {
if _, ok := validEvents[e]; !ok {
return errors.GetAPIError(errors.ErrEventNotification)
}
}
return nil
}
func filterSubjects(conf *data.NotificationConfiguration, eventType, objName string) map[string]string {
topics := make(map[string]string)
for _, t := range conf.QueueConfigurations {
event := false
for _, e := range t.Events {
// The second condition matches events that end with '*'
// (s3:ObjectCreated:*, s3:ObjectRemoved:*, etc.) by comparing against the event without its last character.
if eventType == e || strings.HasPrefix(eventType, e[:len(e)-1]) {
event = true
break
}
}
if !event {
continue
}
filter := true
for _, f := range t.Filter.Key.FilterRules {
if f.Name == filterRulePrefixName && !strings.HasPrefix(objName, f.Value) ||
f.Name == filterRuleSuffixName && !strings.HasSuffix(objName, f.Value) {
filter = false
break
}
}
if filter {
topics[t.ID] = t.QueueArn
}
}
return topics
}


@ -1,115 +0,0 @@
package handler
import (
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"github.com/stretchr/testify/require"
)
func TestFilterSubjects(t *testing.T) {
config := &data.NotificationConfiguration{
QueueConfigurations: []data.QueueConfiguration{
{
ID: "test1",
QueueArn: "test1",
Events: []string{EventObjectCreated, EventObjectRemovedDelete},
},
{
ID: "test2",
QueueArn: "test2",
Events: []string{EventObjectTagging},
Filter: data.Filter{Key: data.Key{FilterRules: []data.FilterRule{
{Name: "prefix", Value: "dir/"},
{Name: "suffix", Value: ".png"},
}}},
},
},
}
t.Run("no topics because suitable events not found", func(t *testing.T) {
topics := filterSubjects(config, EventObjectACLPut, "dir/a.png")
require.Empty(t, topics)
})
t.Run("no topics because of not suitable prefix", func(t *testing.T) {
topics := filterSubjects(config, EventObjectTaggingPut, "dirw/cat.png")
require.Empty(t, topics)
})
t.Run("no topics because of not suitable suffix", func(t *testing.T) {
topics := filterSubjects(config, EventObjectTaggingPut, "a.jpg")
require.Empty(t, topics)
})
t.Run("filter topics from queue configs without prefix suffix filter and exact event", func(t *testing.T) {
topics := filterSubjects(config, EventObjectCreatedPut, "dir/a.png")
require.Contains(t, topics, "test1")
require.Len(t, topics, 1)
require.Equal(t, topics["test1"], "test1")
})
t.Run("filter topics from queue configs with prefix suffix filter and '*' ending event", func(t *testing.T) {
topics := filterSubjects(config, EventObjectTaggingPut, "dir/a.png")
require.Contains(t, topics, "test2")
require.Len(t, topics, 1)
require.Equal(t, topics["test2"], "test2")
})
}
func TestCheckRules(t *testing.T) {
t.Run("correct rules with prefix and suffix", func(t *testing.T) {
rules := []data.FilterRule{
{Name: "prefix", Value: "asd"},
{Name: "suffix", Value: "asd"},
}
err := checkRules(rules)
require.NoError(t, err)
})
t.Run("correct rules with prefix", func(t *testing.T) {
rules := []data.FilterRule{
{Name: "prefix", Value: "asd"},
}
err := checkRules(rules)
require.NoError(t, err)
})
t.Run("correct rules with suffix", func(t *testing.T) {
rules := []data.FilterRule{
{Name: "suffix", Value: "asd"},
}
err := checkRules(rules)
require.NoError(t, err)
})
t.Run("incorrect rules with wrong name", func(t *testing.T) {
rules := []data.FilterRule{
{Name: "prefix", Value: "sdf"},
{Name: "sfx", Value: "asd"},
}
err := checkRules(rules)
require.ErrorIs(t, err, errors.GetAPIError(errors.ErrFilterNameInvalid))
})
t.Run("incorrect rules with repeating suffix", func(t *testing.T) {
rules := []data.FilterRule{
{Name: "suffix", Value: "asd"},
{Name: "suffix", Value: "asdf"},
{Name: "prefix", Value: "jk"},
}
err := checkRules(rules)
require.ErrorIs(t, err, errors.GetAPIError(errors.ErrFilterNameSuffix))
})
t.Run("incorrect rules with repeating prefix", func(t *testing.T) {
rules := []data.FilterRule{
{Name: "suffix", Value: "ds"},
{Name: "prefix", Value: "asd"},
{Name: "prefix", Value: "asdf"},
}
err := checkRules(rules)
require.ErrorIs(t, err, errors.GetAPIError(errors.ErrFilterNamePrefix))
})
}


@ -94,11 +94,11 @@ func TestListObjectsWithOldTreeNodes(t *testing.T) {
}
func makeAllTreeObjectsOld(hc *handlerContext, bktInfo *data.BucketInfo) {
nodes, err := hc.treeMock.GetSubTree(hc.Context(), bktInfo, "version", 0, 0)
nodes, err := hc.treeMock.GetSubTree(hc.Context(), bktInfo, "version", []uint64{0}, 0)
require.NoError(hc.t, err)
for _, node := range nodes {
if node.GetNodeID() == 0 {
if node.GetNodeID()[0] == 0 {
continue
}
meta := make(map[string]string, len(node.GetMeta()))
@ -108,7 +108,7 @@ func makeAllTreeObjectsOld(hc *handlerContext, bktInfo *data.BucketInfo) {
}
}
err = hc.treeMock.MoveNode(hc.Context(), bktInfo, "version", node.GetNodeID(), node.GetParentID(), meta)
err = hc.treeMock.MoveNode(hc.Context(), bktInfo, "version", node.GetNodeID()[0], node.GetParentID()[0], meta)
require.NoError(hc.t, err)
}
}


@ -4,7 +4,6 @@ import (
"bytes"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/json"
"encoding/xml"
stderrors "errors"
@ -26,12 +25,14 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/pkg/retryer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/pkg/service/tree"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
"git.frostfs.info/TrueCloudLab/policy-engine/schema/s3"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/aws/retry"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"go.uber.org/zap"
)
@ -170,10 +171,9 @@ func (p *policyCondition) UnmarshalJSON(data []byte) error {
// keywords of predefined basic ACL values.
const (
basicACLPrivate = "private"
basicACLReadOnly = "public-read"
basicACLPublic = "public-read-write"
cannedACLAuthRead = "authenticated-read"
basicACLPrivate = "private"
basicACLReadOnly = "public-read"
basicACLPublic = "public-read-write"
)
type createBucketParams struct {
@ -183,19 +183,27 @@ type createBucketParams struct {
func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
var (
err error
newEaclTable *eacl.Table
sessionTokenEACL *session.Container
containsACL = containsACLHeaders(r)
ctx = r.Context()
reqInfo = middleware.GetReqInfo(ctx)
err error
cannedACLStatus = aclHeadersStatus(r)
ctx = r.Context()
reqInfo = middleware.GetReqInfo(ctx)
)
if containsACL {
if sessionTokenEACL, err = getSessionTokenSetEACL(r.Context()); err != nil {
h.logAndSendError(w, "could not get eacl session token from a box", reqInfo, err)
return
}
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
h.logAndSendError(w, "could not get bucket objInfo", reqInfo, err)
return
}
settings, err := h.obj.GetBucketSettings(ctx, bktInfo)
if err != nil {
h.logAndSendError(w, "could not get bucket settings", reqInfo, err)
return
}
if cannedACLStatus == aclStatusYes {
h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
return
}
tagSet, err := parseTaggingHeader(r.Header)
@ -204,12 +212,6 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
h.logAndSendError(w, "could not get bucket objInfo", reqInfo, err)
return
}
metadata := parseMetadata(r)
if contentType := r.Header.Get(api.ContentType); len(contentType) > 0 {
metadata[api.ContentType] = contentType
@ -261,12 +263,6 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
settings, err := h.obj.GetBucketSettings(ctx, bktInfo)
if err != nil {
h.logAndSendError(w, "could not get bucket settings", reqInfo, err)
return
}
params.Lock, err = formObjectLock(ctx, bktInfo, settings.LockConfiguration, r.Header)
if err != nil {
h.logAndSendError(w, "could not form object lock", reqInfo, err)
@ -282,26 +278,9 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
}
objInfo := extendedObjInfo.ObjectInfo
s := &SendNotificationParams{
Event: EventObjectCreatedPut,
NotificationInfo: data.NotificationInfoFromObject(objInfo, h.cfg.MD5Enabled()),
BktInfo: bktInfo,
ReqInfo: reqInfo,
}
if err = h.sendNotifications(ctx, s); err != nil {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
if containsACL {
if newEaclTable, err = h.getNewEAclTable(r, bktInfo, objInfo); err != nil {
h.logAndSendError(w, "could not get new eacl table", reqInfo, err)
return
}
}
if tagSet != nil {
tagPrm := &layer.PutObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{
tagPrm := &data.PutObjectTaggingParams{
ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: objInfo.Name,
VersionID: objInfo.VersionID(),
@ -309,25 +288,12 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
TagSet: tagSet,
NodeVersion: extendedObjInfo.NodeVersion,
}
if _, err = h.obj.PutObjectTagging(r.Context(), tagPrm); err != nil {
if err = h.obj.PutObjectTagging(r.Context(), tagPrm); err != nil {
h.logAndSendError(w, "could not upload object tagging", reqInfo, err)
return
}
}
if newEaclTable != nil {
p := &layer.PutBucketACLParams{
BktInfo: bktInfo,
EACL: newEaclTable,
SessionToken: sessionTokenEACL,
}
if err = h.obj.PutBucketACL(r.Context(), p); err != nil {
h.logAndSendError(w, "could not put bucket acl", reqInfo, err)
return
}
}
if settings.VersioningEnabled() {
w.Header().Set(api.AmzVersionID, objInfo.VersionID())
}
@ -337,7 +303,10 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ETag, data.Quote(objInfo.ETag(h.cfg.MD5Enabled())))
middleware.WriteSuccessResponseHeadersOnly(w)
if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
}
func (h *handler) getBodyReader(r *http.Request) (io.ReadCloser, error) {
@ -456,13 +425,10 @@ func formEncryptionParamsBase(r *http.Request, isCopySource bool) (enc encryptio
func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
var (
newEaclTable *eacl.Table
tagSet map[string]string
sessionTokenEACL *session.Container
ctx = r.Context()
reqInfo = middleware.GetReqInfo(ctx)
metadata = make(map[string]string)
containsACL = containsACLHeaders(r)
tagSet map[string]string
ctx = r.Context()
reqInfo = middleware.GetReqInfo(ctx)
metadata = make(map[string]string)
)
policy, err := checkPostPolicy(r, reqInfo, metadata)
@ -473,18 +439,34 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
if tagging := auth.MultipartFormValue(r, "tagging"); tagging != "" {
buffer := bytes.NewBufferString(tagging)
tagSet, err = h.readTagSet(buffer)
tags := new(data.Tagging)
if err = h.cfg.NewXMLDecoder(buffer).Decode(tags); err != nil {
h.logAndSendError(w, "could not decode tag set", reqInfo,
fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error()))
return
}
tagSet, err = h.readTagSet(tags)
if err != nil {
h.logAndSendError(w, "could not read tag set", reqInfo, err)
return
}
}
if containsACL {
if sessionTokenEACL, err = getSessionTokenSetEACL(ctx); err != nil {
h.logAndSendError(w, "could not get eacl session token from a box", reqInfo, err)
return
}
bktInfo, err := h.getBucketAndCheckOwner(r, reqInfo.BucketName)
if err != nil {
h.logAndSendError(w, "could not get bucket objInfo", reqInfo, err)
return
}
settings, err := h.obj.GetBucketSettings(ctx, bktInfo)
if err != nil {
h.logAndSendError(w, "could not get bucket settings", reqInfo, err)
return
}
if acl := auth.MultipartFormValue(r, "acl"); acl != "" && acl != basicACLPrivate {
h.logAndSendError(w, "acl not supported for this bucket", reqInfo, errors.GetAPIError(errors.ErrAccessControlListNotSupported))
return
}
var contentReader io.Reader
@ -507,12 +489,6 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
return
}
bktInfo, err := h.obj.GetBucketInfo(ctx, reqInfo.BucketName)
if err != nil {
h.logAndSendError(w, "could not get bucket info", reqInfo, err)
return
}
params := &layer.PutObjectParams{
BktInfo: bktInfo,
Object: reqInfo.ObjectName,
@ -528,31 +504,9 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
}
objInfo := extendedObjInfo.ObjectInfo
s := &SendNotificationParams{
Event: EventObjectCreatedPost,
NotificationInfo: data.NotificationInfoFromObject(objInfo, h.cfg.MD5Enabled()),
BktInfo: bktInfo,
ReqInfo: reqInfo,
}
if err = h.sendNotifications(ctx, s); err != nil {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
if acl := auth.MultipartFormValue(r, "acl"); acl != "" {
r.Header.Set(api.AmzACL, acl)
r.Header.Set(api.AmzGrantFullControl, "")
r.Header.Set(api.AmzGrantWrite, "")
r.Header.Set(api.AmzGrantRead, "")
if newEaclTable, err = h.getNewEAclTable(r, bktInfo, objInfo); err != nil {
h.logAndSendError(w, "could not get new eacl table", reqInfo, err)
return
}
}
if tagSet != nil {
tagPrm := &layer.PutObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{
tagPrm := &data.PutObjectTaggingParams{
ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: objInfo.Name,
VersionID: objInfo.VersionID(),
@ -560,28 +514,13 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
NodeVersion: extendedObjInfo.NodeVersion,
}
if _, err = h.obj.PutObjectTagging(ctx, tagPrm); err != nil {
if err = h.obj.PutObjectTagging(ctx, tagPrm); err != nil {
h.logAndSendError(w, "could not upload object tagging", reqInfo, err)
return
}
}
if newEaclTable != nil {
p := &layer.PutBucketACLParams{
BktInfo: bktInfo,
EACL: newEaclTable,
SessionToken: sessionTokenEACL,
}
if err = h.obj.PutBucketACL(ctx, p); err != nil {
h.logAndSendError(w, "could not put bucket acl", reqInfo, err)
return
}
}
if settings, err := h.obj.GetBucketSettings(ctx, bktInfo); err != nil {
h.reqLogger(ctx).Warn(logs.CouldntGetBucketVersioning, zap.String("bucket name", reqInfo.BucketName), zap.Error(err))
} else if settings.VersioningEnabled() {
if settings.VersioningEnabled() {
w.Header().Set(api.AmzVersionID, objInfo.VersionID())
}
@ -602,7 +541,11 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
ETag: data.Quote(objInfo.ETag(h.cfg.MD5Enabled())),
}
w.WriteHeader(status)
if _, err = w.Write(middleware.EncodeResponse(resp)); err != nil {
respData, err := middleware.EncodeResponse(resp)
if err != nil {
h.logAndSendError(w, "encode response", reqInfo, err)
}
if _, err = w.Write(respData); err != nil {
h.logAndSendError(w, "something went wrong", reqInfo, err)
}
return
@ -673,59 +616,33 @@ func checkPostPolicy(r *http.Request, reqInfo *middleware.ReqInfo, metadata map[
return policy, nil
}
func containsACLHeaders(r *http.Request) bool {
return r.Header.Get(api.AmzACL) != "" || r.Header.Get(api.AmzGrantRead) != "" ||
r.Header.Get(api.AmzGrantFullControl) != "" || r.Header.Get(api.AmzGrantWrite) != ""
}
type aclStatus int
func (h *handler) getNewEAclTable(r *http.Request, bktInfo *data.BucketInfo, objInfo *data.ObjectInfo) (*eacl.Table, error) {
var newEaclTable *eacl.Table
key, err := h.bearerTokenIssuerKey(r.Context())
if err != nil {
return nil, fmt.Errorf("get bearer token issuer: %w", err)
}
objectACL, err := parseACLHeaders(r.Header, key)
if err != nil {
return nil, fmt.Errorf("could not parse object acl: %w", err)
const (
// aclStatusNo means no ACL headers are present at all.
aclStatusNo aclStatus = iota
// aclStatusYesAPECompatible means that only X-Amz-Acl is present and equals "private".
aclStatusYesAPECompatible
// aclStatusYes means any other combination of ACL headers.
aclStatusYes
)
func aclHeadersStatus(r *http.Request) aclStatus {
if r.Header.Get(api.AmzGrantRead) != "" ||
r.Header.Get(api.AmzGrantFullControl) != "" ||
r.Header.Get(api.AmzGrantWrite) != "" {
return aclStatusYes
}
resInfo := &resourceInfo{
Bucket: objInfo.Bucket,
Object: objInfo.Name,
Version: objInfo.VersionID(),
}
bktPolicy, err := aclToPolicy(objectACL, resInfo)
if err != nil {
return nil, fmt.Errorf("could not translate object acl to bucket policy: %w", err)
}
astChild, err := policyToAst(bktPolicy)
if err != nil {
return nil, fmt.Errorf("could not translate policy to ast: %w", err)
}
bacl, err := h.obj.GetBucketACL(r.Context(), bktInfo)
if err != nil {
return nil, fmt.Errorf("could not get bucket eacl: %w", err)
}
parentAst := tableToAst(bacl.EACL, objInfo.Bucket)
strCID := bacl.Info.CID.EncodeToString()
for _, resource := range parentAst.Resources {
if resource.Bucket == strCID {
resource.Bucket = objInfo.Bucket
cannedACL := r.Header.Get(api.AmzACL)
if cannedACL != "" {
if cannedACL == basicACLPrivate {
return aclStatusYesAPECompatible
}
return aclStatusYes
}
if resAst, updated := mergeAst(parentAst, astChild); updated {
if newEaclTable, err = astToTable(resAst); err != nil {
return nil, fmt.Errorf("could not translate ast to table: %w", err)
}
}
return newEaclTable, nil
return aclStatusNo
}
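A rough standalone illustration of the classification above, using the literal X-Amz-Acl and X-Amz-Grant-* header names (the handler itself reads them through the api package constants), so this is an approximation rather than the gateway code:

package main

import (
    "fmt"
    "net/http"
)

type aclStatus int

const (
    statusNo            aclStatus = iota // no ACL headers at all
    statusAPECompatible                  // only X-Amz-Acl: private
    statusYes                            // any other ACL header combination, rejected by the handler
)

// classifyACLHeaders mirrors the decision in aclHeadersStatus above: any grant header,
// or a canned ACL other than "private", is not compatible with APE-only buckets.
func classifyACLHeaders(h http.Header) aclStatus {
    if h.Get("X-Amz-Grant-Read") != "" || h.Get("X-Amz-Grant-Full-Control") != "" || h.Get("X-Amz-Grant-Write") != "" {
        return statusYes
    }
    if acl := h.Get("X-Amz-Acl"); acl != "" {
        if acl == "private" {
            return statusAPECompatible
        }
        return statusYes
    }
    return statusNo
}

func main() {
    h := http.Header{}
    h.Set("X-Amz-Acl", "private")
    fmt.Println(classifyACLHeaders(h) == statusAPECompatible) // true
}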
func parseTaggingHeader(header http.Header) (map[string]string, error) {
@ -740,7 +657,7 @@ func parseTaggingHeader(header http.Header) (map[string]string, error) {
}
tagSet = make(map[string]string, len(queries))
for k, v := range queries {
tag := Tag{Key: k, Value: v[0]}
tag := data.Tag{Key: k, Value: v[0]}
if err = checkTag(tag); err != nil {
return nil, err
}
@ -767,8 +684,7 @@ func parseCannedACL(header http.Header) (string, error) {
return basicACLPrivate, nil
}
if acl == basicACLPrivate || acl == basicACLPublic ||
acl == cannedACLAuthRead || acl == basicACLReadOnly {
if acl == basicACLPrivate || acl == basicACLPublic || acl == basicACLReadOnly {
return acl, nil
}
@ -776,11 +692,6 @@ func parseCannedACL(header http.Header) (string, error) {
}
func (h *handler) CreateBucketHandler(w http.ResponseWriter, r *http.Request) {
if h.cfg.ACLEnabled() {
h.createBucketHandlerACL(w, r)
return
}
h.createBucketHandlerPolicy(w, r)
}
@ -840,7 +751,6 @@ func (h *handler) createBucketHandlerPolicy(w http.ResponseWriter, r *http.Reque
return
}
p.APEEnabled = true
bktInfo, err := h.obj.CreateBucket(ctx, p)
if err != nil {
h.logAndSendError(w, "could not create bucket", reqInfo, err)
@ -848,8 +758,8 @@ func (h *handler) createBucketHandlerPolicy(w http.ResponseWriter, r *http.Reque
}
h.reqLogger(ctx).Info(logs.BucketIsCreated, zap.Stringer("container_id", bktInfo.CID))
chains := bucketCannedACLToAPERules(cannedACL, reqInfo, key, bktInfo.CID)
if err = h.ape.SaveACLChains(reqInfo.Namespace, chains); err != nil {
chains := bucketCannedACLToAPERules(cannedACL, reqInfo, bktInfo.CID)
if err = h.ape.SaveACLChains(bktInfo.CID.EncodeToString(), chains); err != nil {
h.logAndSendError(w, "failed to add morph rule chain", reqInfo, err)
return
}
@ -867,82 +777,40 @@ func (h *handler) createBucketHandlerPolicy(w http.ResponseWriter, r *http.Reque
sp.Settings.Versioning = data.VersioningEnabled
}
if err = h.obj.PutBucketSettings(ctx, sp); err != nil {
err = retryer.MakeWithRetry(ctx, func() error {
return h.obj.PutBucketSettings(ctx, sp)
}, h.putBucketSettingsRetryer())
if err != nil {
h.logAndSendError(w, "couldn't save bucket settings", reqInfo, err,
zap.String("container_id", bktInfo.CID.EncodeToString()))
return
}
middleware.WriteSuccessResponseHeadersOnly(w)
if err = middleware.WriteSuccessResponseHeadersOnly(w); err != nil {
h.logAndSendError(w, "write response", reqInfo, err)
return
}
}
func (h *handler) createBucketHandlerACL(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
reqInfo := middleware.GetReqInfo(ctx)
func (h *handler) putBucketSettingsRetryer() aws.RetryerV2 {
return retry.NewStandard(func(options *retry.StandardOptions) {
options.MaxAttempts = h.cfg.RetryMaxAttempts()
options.MaxBackoff = h.cfg.RetryMaxBackoff()
if h.cfg.RetryStrategy() == RetryStrategyExponential {
options.Backoff = retry.NewExponentialJitterBackoff(options.MaxBackoff)
} else {
options.Backoff = retry.BackoffDelayerFunc(func(int, error) (time.Duration, error) {
return options.MaxBackoff, nil
})
}
boxData, err := middleware.GetBoxData(ctx)
if err != nil {
h.logAndSendError(w, "get access box from request", reqInfo, err)
return
}
key, p, err := h.parseCommonCreateBucketParams(reqInfo, boxData, r)
if err != nil {
h.logAndSendError(w, "parse create bucket params", reqInfo, err)
return
}
aclPrm := &layer.PutBucketACLParams{SessionToken: boxData.Gate.SessionTokenForSetEACL()}
if aclPrm.SessionToken == nil {
h.logAndSendError(w, "couldn't find session token for setEACL", reqInfo, errors.GetAPIError(errors.ErrAccessDenied))
return
}
bktACL, err := parseACLHeaders(r.Header, key)
if err != nil {
h.logAndSendError(w, "could not parse bucket acl", reqInfo, err)
return
}
resInfo := &resourceInfo{Bucket: reqInfo.BucketName}
aclPrm.EACL, err = bucketACLToTable(bktACL, resInfo)
if err != nil {
h.logAndSendError(w, "could translate bucket acl to eacl", reqInfo, err)
return
}
bktInfo, err := h.obj.CreateBucket(ctx, p)
if err != nil {
h.logAndSendError(w, "could not create bucket", reqInfo, err)
return
}
h.reqLogger(ctx).Info(logs.BucketIsCreated, zap.Stringer("container_id", bktInfo.CID))
aclPrm.BktInfo = bktInfo
if err = h.obj.PutBucketACL(r.Context(), aclPrm); err != nil {
h.logAndSendError(w, "could not put bucket e/ACL", reqInfo, err)
return
}
sp := &layer.PutSettingsParams{
BktInfo: bktInfo,
Settings: &data.BucketSettings{
OwnerKey: key,
Versioning: data.VersioningUnversioned,
},
}
if p.ObjectLockEnabled {
sp.Settings.Versioning = data.VersioningEnabled
}
if err = h.obj.PutBucketSettings(ctx, sp); err != nil {
h.logAndSendError(w, "couldn't save bucket settings", reqInfo, err,
zap.String("container_id", bktInfo.CID.EncodeToString()))
return
}
middleware.WriteSuccessResponseHeadersOnly(w)
options.Retryables = []retry.IsErrorRetryable{retry.IsErrorRetryableFunc(func(err error) aws.Ternary {
if stderrors.Is(err, tree.ErrNodeAccessDenied) {
return aws.TrueTernary
}
return aws.FalseTernary
})}
})
}
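For reference, the standard aws-sdk-go-v2 retryer configured above can be exercised directly. This sketch only reproduces the backoff and retryable-error configuration with made-up values; it is not the gateway's retryer.MakeWithRetry helper, and the tree.ErrNodeAccessDenied check is approximated with a local sentinel error:

package main

import (
    "errors"
    "fmt"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/aws/retry"
)

// errTransient stands in for the retryable tree access error used above.
var errTransient = errors.New("node access denied")

func main() {
    r := retry.NewStandard(func(o *retry.StandardOptions) {
        o.MaxAttempts = 4
        o.MaxBackoff = 30 * time.Second
        // Exponential strategy with jitter, as in the RetryStrategyExponential branch above.
        o.Backoff = retry.NewExponentialJitterBackoff(o.MaxBackoff)
        // Only the sentinel error is considered retryable, mirroring the ErrNodeAccessDenied check.
        o.Retryables = []retry.IsErrorRetryable{retry.IsErrorRetryableFunc(func(err error) aws.Ternary {
            if errors.Is(err, errTransient) {
                return aws.TrueTernary
            }
            return aws.FalseTernary
        })}
    })

    fmt.Println("retryable:", r.IsErrorRetryable(errTransient), "max attempts:", r.MaxAttempts())
    if d, err := r.RetryDelay(1, errTransient); err == nil {
        fmt.Println("delay for attempt 1:", d)
    }
}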
const s3ActionPrefix = "s3:"
@ -983,49 +851,22 @@ var (
}
)
func bucketCannedACLToAPERules(cannedACL string, reqInfo *middleware.ReqInfo, key *keys.PublicKey, cnrID cid.ID) []*chain.Chain {
func bucketCannedACLToAPERules(cannedACL string, reqInfo *middleware.ReqInfo, cnrID cid.ID) []*chain.Chain {
cnrIDStr := cnrID.EncodeToString()
chains := []*chain.Chain{
{
ID: getBucketCannedChainID(chain.S3, cnrID),
Rules: []chain.Rule{{
Status: chain.Allow,
Actions: chain.Actions{Names: []string{"s3:*"}},
Resources: chain.Resources{Names: []string{
fmt.Sprintf(s3.ResourceFormatS3Bucket, reqInfo.BucketName),
fmt.Sprintf(s3.ResourceFormatS3BucketObjects, reqInfo.BucketName),
}},
Condition: []chain.Condition{{
Op: chain.CondStringEquals,
Object: chain.ObjectRequest,
Key: s3.PropertyKeyOwner,
Value: key.Address(),
}},
}}},
ID: getBucketCannedChainID(chain.S3, cnrID),
Rules: []chain.Rule{},
},
{
ID: getBucketCannedChainID(chain.Ingress, cnrID),
Rules: []chain.Rule{{
Status: chain.Allow,
Actions: chain.Actions{Names: []string{"*"}},
Resources: chain.Resources{Names: []string{
fmt.Sprintf(native.ResourceFormatNamespaceContainer, reqInfo.Namespace, cnrIDStr),
fmt.Sprintf(native.ResourceFormatNamespaceContainerObjects, reqInfo.Namespace, cnrIDStr),
}},
Condition: []chain.Condition{{
Op: chain.CondStringEquals,
Object: chain.ObjectRequest,
Key: native.PropertyKeyActorPublicKey,
Value: hex.EncodeToString(key.Bytes()),
}},
}},
ID: getBucketCannedChainID(chain.Ingress, cnrID),
Rules: []chain.Rule{},
},
}
switch cannedACL {
case basicACLPrivate:
case cannedACLAuthRead:
fallthrough
case basicACLReadOnly:
chains[0].Rules = append(chains[0].Rules, chain.Rule{
Status: chain.Allow,
@ -1151,7 +992,7 @@ func (h *handler) parseLocationConstraint(r *http.Request) (*createBucketParams,
params := new(createBucketParams)
if err := h.cfg.NewXMLDecoder(r.Body).Decode(params); err != nil {
return nil, errors.GetAPIError(errors.ErrMalformedXML)
return nil, fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrMalformedXML), err.Error())
}
return params, nil
}
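For context on the canned-ACL translation earlier in this file: a public-read style rule could be assembled from the chain and s3 schema types used there. This is a hedged sketch with an assumed action list, not the exact rules the gateway appends for each canned ACL:

package main

import (
    "fmt"

    "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
    "git.frostfs.info/TrueCloudLab/policy-engine/schema/s3"
)

// publicReadRule is an illustrative allow rule for anonymous object reads on a bucket.
// The action name is an assumption; only identifiers that appear in the diff above are reused.
func publicReadRule(bktName string) chain.Rule {
    return chain.Rule{
        Status:  chain.Allow,
        Actions: chain.Actions{Names: []string{"s3:GetObject"}},
        Resources: chain.Resources{Names: []string{
            fmt.Sprintf(s3.ResourceFormatS3Bucket, bktName),
            fmt.Sprintf(s3.ResourceFormatS3BucketObjects, bktName),
        }},
    }
}

func main() {
    fmt.Printf("%+v\n", publicReadRule("example-bucket"))
}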


@ -351,17 +351,19 @@ func getChunkedRequest(ctx context.Context, t *testing.T, bktName, objName strin
req.Body = io.NopCloser(reqBody)
w := httptest.NewRecorder()
reqInfo := middleware.NewReqInfo(w, req, middleware.ObjectRequest{Bucket: bktName, Object: objName})
reqInfo := middleware.NewReqInfo(w, req, middleware.ObjectRequest{Bucket: bktName, Object: objName}, "")
req = req.WithContext(middleware.SetReqInfo(ctx, reqInfo))
req = req.WithContext(middleware.SetClientTime(req.Context(), signTime))
req = req.WithContext(middleware.SetAuthHeaders(req.Context(), &middleware.AuthHeader{
AccessKeyID: AWSAccessKeyID,
SignatureV4: "4f232c4386841ef735655705268965c44a0e4690baa4adea153f7db9fa80a0a9",
Region: "us-east-1",
}))
req = req.WithContext(middleware.SetBoxData(req.Context(), &accessbox.Box{
Gate: &accessbox.GateData{
SecretKey: AWSSecretAccessKey,
req = req.WithContext(middleware.SetBox(req.Context(), &middleware.Box{
ClientTime: signTime,
AuthHeaders: &middleware.AuthHeader{
AccessKeyID: AWSAccessKeyID,
SignatureV4: "4f232c4386841ef735655705268965c44a0e4690baa4adea153f7db9fa80a0a9",
Region: "us-east-1",
},
AccessBox: &accessbox.Box{
Gate: &accessbox.GateData{
SecretKey: AWSSecretAccessKey,
},
},
}))
@ -379,21 +381,6 @@ func TestCreateBucket(t *testing.T) {
createBucketAssertS3Error(hc, bktName, box2, s3errors.ErrBucketAlreadyExists)
}
func TestCreateOldBucketPutVersioning(t *testing.T) {
hc := prepareHandlerContext(t)
hc.config.aclEnabled = true
bktName := "bkt-name"
info := createBucket(hc, bktName)
settings, err := hc.tree.GetSettingsNode(hc.Context(), info.BktInfo)
require.NoError(t, err)
settings.OwnerKey = nil
err = hc.tree.PutSettingsNode(hc.Context(), info.BktInfo, settings)
require.NoError(t, err)
putBucketVersioning(t, hc, bktName, true)
}
func TestCreateNamespacedBucket(t *testing.T) {
hc := prepareHandlerContext(t)
bktName := "bkt-name"
@ -401,7 +388,7 @@ func TestCreateNamespacedBucket(t *testing.T) {
box, _ := createAccessBox(t)
w, r := prepareTestRequest(hc, bktName, "", nil)
ctx := middleware.SetBoxData(r.Context(), box)
ctx := middleware.SetBox(r.Context(), &middleware.Box{AccessBox: box})
reqInfo := middleware.GetReqInfo(ctx)
reqInfo.Namespace = namespace
r = r.WithContext(middleware.SetReqInfo(ctx, reqInfo))
@ -451,14 +438,14 @@ func getObjectAttribute(obj *object.Object, attrName string) string {
func TestPutObjectWithContentLanguage(t *testing.T) {
tc := prepareHandlerContext(t)
exceptedContentLanguage := "en"
expectedContentLanguage := "en"
bktName, objName := "bucket-1", "object-1"
createTestBucket(tc, bktName)
w, r := prepareTestRequest(tc, bktName, objName, nil)
r.Header.Set(api.ContentLanguage, exceptedContentLanguage)
r.Header.Set(api.ContentLanguage, expectedContentLanguage)
tc.Handler().PutObjectHandler(w, r)
tc.Handler().HeadObjectHandler(w, r)
require.Equal(t, exceptedContentLanguage, w.Header().Get(api.ContentLanguage))
require.Equal(t, expectedContentLanguage, w.Header().Get(api.ContentLanguage))
}


@ -55,6 +55,19 @@ type Bucket struct {
CreationDate string // time string of format "2006-01-02T15:04:05.000Z"
}
// PolicyStatus contains status of bucket policy.
type PolicyStatus struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ PolicyStatus" json:"-"`
IsPublic PolicyStatusIsPublic `xml:"IsPublic"`
}
type PolicyStatusIsPublic string
const (
PolicyStatusIsPublicFalse = "FALSE"
PolicyStatusIsPublicTrue = "TRUE"
)
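The new PolicyStatus type serializes into the standard S3 namespace; a small sketch of the resulting wire format, with the struct copied locally (and the string alias flattened) purely for illustration:

package main

import (
    "encoding/xml"
    "fmt"
)

// PolicyStatus is a local copy of the shape added above, used only to show the XML output.
type PolicyStatus struct {
    XMLName  xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ PolicyStatus" json:"-"`
    IsPublic string   `xml:"IsPublic"`
}

func main() {
    out, err := xml.MarshalIndent(PolicyStatus{IsPublic: "TRUE"}, "", "  ")
    if err != nil {
        panic(err)
    }
    // Prints roughly:
    // <PolicyStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    //   <IsPublic>TRUE</IsPublic>
    // </PolicyStatus>
    fmt.Println(string(out))
}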
// AccessControlPolicy contains ACL.
type AccessControlPolicy struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ AccessControlPolicy" json:"-"`
@ -172,12 +185,6 @@ type VersioningConfiguration struct {
MfaDelete string `xml:"MfaDelete,omitempty"`
}
// Tagging contains tag set.
type Tagging struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Tagging"`
TagSet []Tag `xml:"TagSet>Tag"`
}
// PostResponse contains result of posting object.
type PostResponse struct {
Bucket string `xml:"Bucket"`
@ -185,12 +192,6 @@ type PostResponse struct {
ETag string `xml:"Etag"`
}
// Tag is an AWS key-value tag.
type Tag struct {
Key string
Value string
}
// MarshalXML -- StringMap marshals into XML.
func (s StringMap) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
tokens := []xml.Token{start}


@ -1,7 +1,6 @@
package handler
import (
"io"
"net/http"
"sort"
"strings"
@ -10,10 +9,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"go.uber.org/zap"
)
const (
@ -28,7 +24,7 @@ func (h *handler) PutObjectTaggingHandler(w http.ResponseWriter, r *http.Request
ctx := r.Context()
reqInfo := middleware.GetReqInfo(ctx)
tagSet, err := h.readTagSet(r.Body)
tagSet, err := h.readTagSet(reqInfo.Tagging)
if err != nil {
h.logAndSendError(w, "could not read tag set", reqInfo, err)
return
@ -40,35 +36,20 @@ func (h *handler) PutObjectTaggingHandler(w http.ResponseWriter, r *http.Request
return
}
tagPrm := &layer.PutObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{
tagPrm := &data.PutObjectTaggingParams{
ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
},
TagSet: tagSet,
}
nodeVersion, err := h.obj.PutObjectTagging(ctx, tagPrm)
if err != nil {
if err = h.obj.PutObjectTagging(ctx, tagPrm); err != nil {
h.logAndSendError(w, "could not put object tagging", reqInfo, err)
return
}
s := &SendNotificationParams{
Event: EventObjectTaggingPut,
NotificationInfo: &data.NotificationInfo{
Name: nodeVersion.FilePath,
Size: nodeVersion.Size,
Version: nodeVersion.OID.EncodeToString(),
HashSum: nodeVersion.ETag,
},
BktInfo: bktInfo,
ReqInfo: reqInfo,
}
if err = h.sendNotifications(ctx, s); err != nil {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
w.WriteHeader(http.StatusOK)
}
@ -87,8 +68,8 @@ func (h *handler) GetObjectTaggingHandler(w http.ResponseWriter, r *http.Request
return
}
tagPrm := &layer.GetObjectTaggingParams{
ObjectVersion: &layer.ObjectVersion{
tagPrm := &data.GetObjectTaggingParams{
ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
@ -119,40 +100,24 @@ func (h *handler) DeleteObjectTaggingHandler(w http.ResponseWriter, r *http.Requ
return
}
p := &layer.ObjectVersion{
p := &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: reqInfo.ObjectName,
VersionID: reqInfo.URL.Query().Get(api.QueryVersionID),
}
nodeVersion, err := h.obj.DeleteObjectTagging(ctx, p)
if err != nil {
if err = h.obj.DeleteObjectTagging(ctx, p); err != nil {
h.logAndSendError(w, "could not delete object tagging", reqInfo, err)
return
}
s := &SendNotificationParams{
Event: EventObjectTaggingDelete,
NotificationInfo: &data.NotificationInfo{
Name: nodeVersion.FilePath,
Size: nodeVersion.Size,
Version: nodeVersion.OID.EncodeToString(),
HashSum: nodeVersion.ETag,
},
BktInfo: bktInfo,
ReqInfo: reqInfo,
}
if err = h.sendNotifications(ctx, s); err != nil {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
w.WriteHeader(http.StatusNoContent)
}
func (h *handler) PutBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context())
tagSet, err := h.readTagSet(r.Body)
tagSet, err := h.readTagSet(reqInfo.Tagging)
if err != nil {
h.logAndSendError(w, "could not read tag set", reqInfo, err)
return
@ -207,12 +172,7 @@ func (h *handler) DeleteBucketTaggingHandler(w http.ResponseWriter, r *http.Requ
w.WriteHeader(http.StatusNoContent)
}
func (h *handler) readTagSet(reader io.Reader) (map[string]string, error) {
tagging := new(Tagging)
if err := h.cfg.NewXMLDecoder(reader).Decode(tagging); err != nil {
return nil, errors.GetAPIError(errors.ErrMalformedXML)
}
func (h *handler) readTagSet(tagging *data.Tagging) (map[string]string, error) {
if err := checkTagSet(tagging.TagSet); err != nil {
return nil, err
}
@ -228,10 +188,10 @@ func (h *handler) readTagSet(reader io.Reader) (map[string]string, error) {
return tagSet, nil
}
func encodeTagging(tagSet map[string]string) *Tagging {
tagging := &Tagging{}
func encodeTagging(tagSet map[string]string) *data.Tagging {
tagging := &data.Tagging{}
for k, v := range tagSet {
tagging.TagSet = append(tagging.TagSet, Tag{Key: k, Value: v})
tagging.TagSet = append(tagging.TagSet, data.Tag{Key: k, Value: v})
}
sort.Slice(tagging.TagSet, func(i, j int) bool {
return tagging.TagSet[i].Key < tagging.TagSet[j].Key
@ -240,7 +200,7 @@ func encodeTagging(tagSet map[string]string) *Tagging {
return tagging
}
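The encode path above produces a deterministic, key-sorted tag list before it is marshalled. A self-contained sketch of that round trip, with the Tagging/Tag shapes copied locally rather than imported from the data package:

package main

import (
    "encoding/xml"
    "fmt"
    "sort"
)

// Tag and Tagging are local copies of the shapes that now live in the data package.
type Tag struct {
    Key   string
    Value string
}

type Tagging struct {
    XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Tagging"`
    TagSet  []Tag    `xml:"TagSet>Tag"`
}

// encodeTagSet mirrors encodeTagging above: map -> key-sorted tag list.
func encodeTagSet(tagSet map[string]string) *Tagging {
    t := &Tagging{}
    for k, v := range tagSet {
        t.TagSet = append(t.TagSet, Tag{Key: k, Value: v})
    }
    sort.Slice(t.TagSet, func(i, j int) bool { return t.TagSet[i].Key < t.TagSet[j].Key })
    return t
}

func main() {
    out, _ := xml.MarshalIndent(encodeTagSet(map[string]string{"env": "dev", "app": "gw"}), "", "  ")
    fmt.Println(string(out))
}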
func checkTagSet(tagSet []Tag) error {
func checkTagSet(tagSet []data.Tag) error {
if len(tagSet) > maxTags {
return errors.GetAPIError(errors.ErrInvalidTagsSizeExceed)
}
@ -254,7 +214,7 @@ func checkTagSet(tagSet []Tag) error {
return nil
}
func checkTag(tag Tag) error {
func checkTag(tag data.Tag) error {
if len(tag.Key) < 1 || len(tag.Key) > keyTagMaxLength {
return errors.GetAPIError(errors.ErrInvalidTagKey)
}


@ -5,7 +5,9 @@ import (
"strings"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"github.com/stretchr/testify/require"
)
@ -20,23 +22,23 @@ func TestTagsValidity(t *testing.T) {
}
for _, tc := range []struct {
tag Tag
tag data.Tag
valid bool
}{
{tag: Tag{}, valid: false},
{tag: Tag{Key: "", Value: "1"}, valid: false},
{tag: Tag{Key: "aws:key", Value: "val"}, valid: false},
{tag: Tag{Key: "key~", Value: "val"}, valid: false},
{tag: Tag{Key: "key\\", Value: "val"}, valid: false},
{tag: Tag{Key: "key?", Value: "val"}, valid: false},
{tag: Tag{Key: sbKey.String() + "b", Value: "val"}, valid: false},
{tag: Tag{Key: "key", Value: sbValue.String() + "b"}, valid: false},
{tag: data.Tag{}, valid: false},
{tag: data.Tag{Key: "", Value: "1"}, valid: false},
{tag: data.Tag{Key: "aws:key", Value: "val"}, valid: false},
{tag: data.Tag{Key: "key~", Value: "val"}, valid: false},
{tag: data.Tag{Key: "key\\", Value: "val"}, valid: false},
{tag: data.Tag{Key: "key?", Value: "val"}, valid: false},
{tag: data.Tag{Key: sbKey.String() + "b", Value: "val"}, valid: false},
{tag: data.Tag{Key: "key", Value: sbValue.String() + "b"}, valid: false},
{tag: Tag{Key: sbKey.String(), Value: "val"}, valid: true},
{tag: Tag{Key: "key", Value: sbValue.String()}, valid: true},
{tag: Tag{Key: "k e y", Value: "v a l"}, valid: true},
{tag: Tag{Key: "12345", Value: "1234"}, valid: true},
{tag: Tag{Key: allowedTagChars, Value: allowedTagChars}, valid: true},
{tag: data.Tag{Key: sbKey.String(), Value: "val"}, valid: true},
{tag: data.Tag{Key: "key", Value: sbValue.String()}, valid: true},
{tag: data.Tag{Key: "k e y", Value: "v a l"}, valid: true},
{tag: data.Tag{Key: "12345", Value: "1234"}, valid: true},
{tag: data.Tag{Key: allowedTagChars, Value: allowedTagChars}, valid: true},
} {
err := checkTag(tc.tag)
if tc.valid {
@ -55,13 +57,13 @@ func TestPutObjectTaggingCheckUniqueness(t *testing.T) {
for _, tc := range []struct {
name string
body *Tagging
body *data.Tagging
error bool
}{
{
name: "Two tags with unique keys",
body: &Tagging{
TagSet: []Tag{
body: &data.Tagging{
TagSet: []data.Tag{
{
Key: "key-1",
Value: "val-1",
@ -76,8 +78,8 @@ func TestPutObjectTaggingCheckUniqueness(t *testing.T) {
},
{
name: "Two tags with the same keys",
body: &Tagging{
TagSet: []Tag{
body: &data.Tagging{
TagSet: []data.Tag{
{
Key: "key-1",
Value: "val-1",
@ -93,6 +95,7 @@ func TestPutObjectTaggingCheckUniqueness(t *testing.T) {
} {
t.Run(tc.name, func(t *testing.T) {
w, r := prepareTestRequest(hc, bktName, objName, tc.body)
middleware.GetReqInfo(r.Context()).Tagging = tc.body
hc.Handler().PutObjectTaggingHandler(w, r)
if tc.error {
assertS3Error(t, w, apiErrors.GetAPIError(apiErrors.ErrInvalidTagKeyUniqueness))


@ -58,3 +58,11 @@ func (h *handler) PutBucketLifecycleHandler(w http.ResponseWriter, r *http.Reque
func (h *handler) PutBucketEncryptionHandler(w http.ResponseWriter, r *http.Request) {
h.logAndSendError(w, "not implemented", middleware.GetReqInfo(r.Context()), errors.GetAPIError(errors.ErrNotImplemented))
}
func (h *handler) PutBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
h.logAndSendError(w, "not implemented", middleware.GetReqInfo(r.Context()), errors.GetAPIError(errors.ErrNotImplemented))
}
func (h *handler) GetBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
h.logAndSendError(w, "not implemented", middleware.GetReqInfo(r.Context()), errors.GetAPIError(errors.ErrNotImplemented))
}


@ -16,7 +16,6 @@ import (
frosterrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"go.opentelemetry.io/otel/trace"
"go.uber.org/zap"
)
@ -31,14 +30,18 @@ func (h *handler) reqLogger(ctx context.Context) *zap.Logger {
func (h *handler) logAndSendError(w http.ResponseWriter, logText string, reqInfo *middleware.ReqInfo, err error, additional ...zap.Field) {
err = handleDeleteMarker(w, err)
code := middleware.WriteErrorResponse(w, reqInfo, transformToS3Error(err))
if code, wrErr := middleware.WriteErrorResponse(w, reqInfo, transformToS3Error(err)); wrErr != nil {
additional = append(additional, zap.NamedError("write_response_error", wrErr))
} else {
additional = append(additional, zap.Int("status", code))
}
fields := []zap.Field{
zap.Int("status", code),
zap.String("request_id", reqInfo.RequestID),
zap.String("method", reqInfo.API),
zap.String("bucket", reqInfo.BucketName),
zap.String("object", reqInfo.ObjectName),
zap.String("description", logText),
zap.String("user", reqInfo.User),
zap.Error(err)}
fields = append(fields, additional...)
if traceID, err := trace.TraceIDFromHex(reqInfo.TraceID); err == nil && traceID.IsValid() {
@ -138,16 +141,3 @@ func parseRange(s string) (*layer.RangeParams, error) {
End: values[1],
}, nil
}
func getSessionTokenSetEACL(ctx context.Context) (*session.Container, error) {
boxData, err := middleware.GetBoxData(ctx)
if err != nil {
return nil, err
}
sessionToken := boxData.Gate.SessionTokenForSetEACL()
if sessionToken == nil {
return nil, s3errors.GetAPIError(s3errors.ErrAccessDenied)
}
return sessionToken, nil
}


@ -257,24 +257,3 @@ func (c *Cache) PutCORS(owner user.ID, bkt *data.BucketInfo, cors *data.CORSConf
func (c *Cache) DeleteCORS(bktInfo *data.BucketInfo) {
c.systemCache.Delete(bktInfo.Name + bktInfo.CORSObjectName())
}
func (c *Cache) GetNotificationConfiguration(owner user.ID, bktInfo *data.BucketInfo) *data.NotificationConfiguration {
key := bktInfo.Name + bktInfo.NotificationConfigurationObjectName()
if !c.accessCache.Get(owner, key) {
return nil
}
return c.systemCache.GetNotificationConfiguration(key)
}
func (c *Cache) PutNotificationConfiguration(owner user.ID, bktInfo *data.BucketInfo, configuration *data.NotificationConfiguration) {
key := bktInfo.Name + bktInfo.NotificationConfigurationObjectName()
if err := c.systemCache.PutNotificationConfiguration(key, configuration); err != nil {
c.logger.Warn(logs.CouldntCacheNotificationConfiguration, zap.String("bucket", bktInfo.Name), zap.Error(err))
}
if err := c.accessCache.Put(owner, key); err != nil {
c.logger.Warn(logs.CouldntCacheAccessControlOperation, zap.Error(err))
}
}


@ -9,7 +9,7 @@ import (
s3errors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
)
func (n *layer) GetObjectTaggingAndLock(ctx context.Context, objVersion *ObjectVersion, nodeVersion *data.NodeVersion) (map[string]string, data.LockInfo, error) {
func (n *Layer) GetObjectTaggingAndLock(ctx context.Context, objVersion *data.ObjectVersion, nodeVersion *data.NodeVersion) (map[string]string, data.LockInfo, error) {
var err error
owner := n.BearerOwner(ctx)


@ -12,27 +12,15 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"go.uber.org/zap"
)
type (
// BucketACL extends BucketInfo by eacl.Table.
BucketACL struct {
Info *data.BucketInfo
EACL *eacl.Table
}
)
const (
attributeLocationConstraint = ".s3-location-constraint"
AttributeLockEnabled = "LockEnabled"
)
func (n *layer) containerInfo(ctx context.Context, prm PrmContainer) (*data.BucketInfo, error) {
func (n *Layer) containerInfo(ctx context.Context, prm PrmContainer) (*data.BucketInfo, error) {
var (
err error
res *container.Container
@ -64,7 +52,6 @@ func (n *layer) containerInfo(ctx context.Context, prm PrmContainer) (*data.Buck
info.Created = container.CreatedAt(cnr)
info.LocationConstraint = cnr.Attribute(attributeLocationConstraint)
info.HomomorphicHashDisabled = container.IsHomomorphicHashingDisabled(cnr)
info.APEEnabled = cnr.BasicACL().Bits() == 0
attrLockEnabled := cnr.Attribute(AttributeLockEnabled)
if len(attrLockEnabled) > 0 {
@ -87,7 +74,7 @@ func (n *layer) containerInfo(ctx context.Context, prm PrmContainer) (*data.Buck
return info, nil
}
func (n *layer) containerList(ctx context.Context) ([]*data.BucketInfo, error) {
func (n *Layer) containerList(ctx context.Context) ([]*data.BucketInfo, error) {
stoken := n.SessionTokenForRead(ctx)
prm := PrmUserContainers{
@ -119,7 +106,7 @@ func (n *layer) containerList(ctx context.Context) ([]*data.BucketInfo, error) {
return list, nil
}
func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*data.BucketInfo, error) {
func (n *Layer) createContainer(ctx context.Context, p *CreateBucketParams) (*data.BucketInfo, error) {
if p.LocationConstraint == "" {
p.LocationConstraint = api.DefaultLocationConstraint // s3tests_boto3.functional.test_s3:test_bucket_get_location
}
@ -133,7 +120,6 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
Created: TimeNow(ctx),
LocationConstraint: p.LocationConstraint,
ObjectLockEnabled: p.ObjectLockEnabled,
APEEnabled: p.APEEnabled,
}
attributes := [][2]string{
@ -146,11 +132,6 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
})
}
basicACL := acl.PublicRWExtended
if p.APEEnabled {
basicACL = 0
}
res, err := n.frostFS.CreateContainer(ctx, PrmContainerCreate{
Creator: bktInfo.Owner,
Policy: p.Policy,
@ -159,7 +140,7 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
SessionToken: p.SessionContainerCreation,
CreationTime: bktInfo.Created,
AdditionalAttributes: attributes,
BasicACL: basicACL,
BasicACL: 0, // means APE
})
if err != nil {
return nil, fmt.Errorf("create container: %w", err)
@ -172,17 +153,3 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
return bktInfo, nil
}
func (n *layer) setContainerEACLTable(ctx context.Context, idCnr cid.ID, table *eacl.Table, sessionToken *session.Container) error {
table.SetCID(idCnr)
return n.frostFS.SetContainerEACL(ctx, *table, sessionToken)
}
func (n *layer) GetContainerEACL(ctx context.Context, cnrID cid.ID) (*eacl.Table, error) {
prm := PrmContainerEACL{
ContainerID: cnrID,
SessionToken: n.SessionTokenForRead(ctx),
}
return n.frostFS.ContainerEACL(ctx, prm)
}


@ -17,7 +17,7 @@ const wildcard = "*"
var supportedMethods = map[string]struct{}{"GET": {}, "HEAD": {}, "POST": {}, "PUT": {}, "DELETE": {}}
func (n *layer) PutBucketCORS(ctx context.Context, p *PutCORSParams) error {
func (n *Layer) PutBucketCORS(ctx context.Context, p *PutCORSParams) error {
var (
buf bytes.Buffer
tee = io.TeeReader(p.Reader, &buf)
@ -68,7 +68,7 @@ func (n *layer) PutBucketCORS(ctx context.Context, p *PutCORSParams) error {
return nil
}
func (n *layer) GetBucketCORS(ctx context.Context, bktInfo *data.BucketInfo) (*data.CORSConfiguration, error) {
func (n *Layer) GetBucketCORS(ctx context.Context, bktInfo *data.BucketInfo) (*data.CORSConfiguration, error) {
cors, err := n.getCORS(ctx, bktInfo)
if err != nil {
if errorsStd.Is(err, ErrNodeNotFound) {
@ -80,7 +80,7 @@ func (n *layer) GetBucketCORS(ctx context.Context, bktInfo *data.BucketInfo) (*d
return cors, nil
}
func (n *layer) DeleteBucketCORS(ctx context.Context, bktInfo *data.BucketInfo) error {
func (n *Layer) DeleteBucketCORS(ctx context.Context, bktInfo *data.BucketInfo) error {
objID, err := n.treeService.DeleteBucketCORS(ctx, bktInfo)
objIDNotFound := errorsStd.Is(err, ErrNoNodeToRemove)
if err != nil && !objIDNotFound {


@ -11,7 +11,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -64,15 +63,6 @@ type PrmUserContainers struct {
SessionToken *session.Container
}
// PrmContainerEACL groups parameters of FrostFS.ContainerEACL operation.
type PrmContainerEACL struct {
// Container identifier.
ContainerID cid.ID
// Token of the container's creation session. Nil means session absence.
SessionToken *session.Container
}
// ContainerCreateResult is a result parameter of FrostFS.CreateContainer operation.
type ContainerCreateResult struct {
ContainerID cid.ID
@ -216,18 +206,6 @@ type FrostFS interface {
// prevented the containers from being listed.
UserContainers(context.Context, PrmUserContainers) ([]cid.ID, error)
// SetContainerEACL saves the eACL table of the container in FrostFS. The
// extended ACL is modified within session if session token is not nil.
//
// It returns any error encountered which prevented the eACL from being saved.
SetContainerEACL(context.Context, eacl.Table, *session.Container) error
// ContainerEACL reads the container eACL from FrostFS by the container ID.
//
// It returns exactly one non-nil value. It returns any error encountered which
// prevented the eACL from being read.
ContainerEACL(context.Context, PrmContainerEACL) (*eacl.Table, error)
// DeleteContainer marks the container to be removed from FrostFS by ID.
// Request is sent within session if the session token is specified.
// Successful return does not guarantee actual removal.


@ -5,9 +5,9 @@ import (
"context"
"crypto/rand"
"crypto/sha256"
"errors"
"fmt"
"io"
"strings"
"time"
v2container "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container"
@ -18,7 +18,6 @@ import (
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
@ -66,7 +65,6 @@ type TestFrostFS struct {
objectErrors map[string]error
objectPutErrors map[string]error
containers map[string]*container.Container
eaclTables map[string]*eacl.Table
currentEpoch uint64
key *keys.PrivateKey
}
@ -77,7 +75,6 @@ func NewTestFrostFS(key *keys.PrivateKey) *TestFrostFS {
objectErrors: make(map[string]error),
objectPutErrors: make(map[string]error),
containers: make(map[string]*container.Container),
eaclTables: make(map[string]*eacl.Table),
key: key,
}
}
@ -220,7 +217,7 @@ func (t *TestFrostFS) ReadObject(ctx context.Context, prm PrmObjectRead) (*Objec
if obj, ok := t.objects[sAddr]; ok {
owner := getBearerOwner(ctx)
if !t.checkAccess(prm.Container, owner, eacl.OperationGet) {
if !t.checkAccess(prm.Container, owner) {
return nil, ErrAccessDenied
}
@ -324,7 +321,7 @@ func (t *TestFrostFS) DeleteObject(ctx context.Context, prm PrmObjectDelete) err
if _, ok := t.objects[addr.EncodeToString()]; ok {
owner := getBearerOwner(ctx)
if !t.checkAccess(prm.Container, owner, eacl.OperationDelete) {
if !t.checkAccess(prm.Container, owner) {
return ErrAccessDenied
}
@ -352,31 +349,44 @@ func (t *TestFrostFS) AllObjects(cnrID cid.ID) []oid.ID {
return result
}
func (t *TestFrostFS) SetContainerEACL(_ context.Context, table eacl.Table, _ *session.Container) error {
cnrID, ok := table.CID()
if !ok {
return errors.New("invalid cid")
func (t *TestFrostFS) SearchObjects(_ context.Context, prm PrmObjectSearch) ([]oid.ID, error) {
filters := object.NewSearchFilters()
filters.AddRootFilter()
if prm.ExactAttribute[0] != "" {
filters.AddFilter(prm.ExactAttribute[0], prm.ExactAttribute[1], object.MatchStringEqual)
}
if _, ok = t.containers[cnrID.EncodeToString()]; !ok {
return errors.New("not found")
cidStr := prm.Container.EncodeToString()
var res []oid.ID
if len(filters) == 1 {
for k, v := range t.objects {
if strings.Contains(k, cidStr) {
id, _ := v.ID()
res = append(res, id)
}
}
return res, nil
}
t.eaclTables[cnrID.EncodeToString()] = &table
filter := filters[1]
if len(filters) != 2 || filter.Operation() != object.MatchStringEqual {
return nil, fmt.Errorf("usupported filters")
}
return nil
for k, v := range t.objects {
if strings.Contains(k, cidStr) && isMatched(v.Attributes(), filter) {
id, _ := v.ID()
res = append(res, id)
}
}
return res, nil
}
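The mock above accepts a root filter plus at most one exact attribute match. A short sketch of building such filters with the frostfs-sdk-go object package (the FilePath attribute name is only an example, not something this mock requires):

package main

import (
    "fmt"

    "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
)

func main() {
    // Root filter plus a single exact attribute match, the shape SearchObjects above handles.
    filters := object.NewSearchFilters()
    filters.AddRootFilter()
    filters.AddFilter("FilePath", "photos/cat.png", object.MatchStringEqual)

    for _, f := range filters {
        fmt.Println(f.Header(), f.Operation(), f.Value())
    }
}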
func (t *TestFrostFS) ContainerEACL(_ context.Context, prm PrmContainerEACL) (*eacl.Table, error) {
table, ok := t.eaclTables[prm.ContainerID.EncodeToString()]
if !ok {
return nil, errors.New("not found")
}
return table, nil
}
func (t *TestFrostFS) checkAccess(cnrID cid.ID, owner user.ID, op eacl.Operation) bool {
func (t *TestFrostFS) checkAccess(cnrID cid.ID, owner user.ID) bool {
cnr, ok := t.containers[cnrID.EncodeToString()]
if !ok {
return false
@ -386,28 +396,6 @@ func (t *TestFrostFS) checkAccess(cnrID cid.ID, owner user.ID, op eacl.Operation
return cnr.Owner().Equals(owner)
}
table, ok := t.eaclTables[cnrID.EncodeToString()]
if !ok {
return true
}
for _, rec := range table.Records() {
if rec.Operation() == op && len(rec.Filters()) == 0 {
for _, trgt := range rec.Targets() {
if trgt.Role() == eacl.RoleOthers {
return rec.Action() == eacl.ActionAllow
}
var targetOwner user.ID
for _, pk := range eacl.TargetECDSAKeys(&trgt) {
user.IDFromKey(&targetOwner, *pk)
if targetOwner.Equals(owner) {
return rec.Action() == eacl.ActionAllow
}
}
}
}
}
return true
}
@ -418,3 +406,12 @@ func getBearerOwner(ctx context.Context) user.ID {
return user.ID{}
}
func isMatched(attributes []object.Attribute, filter object.SearchFilter) bool {
for _, attr := range attributes {
if attr.Key() == filter.Header() && attr.Value() == filter.Value() {
return true
}
}
return false
}


@ -4,10 +4,13 @@ import (
"context"
"crypto/ecdsa"
"crypto/rand"
"encoding/json"
"encoding/xml"
stderrors "errors"
"fmt"
"io"
"net/url"
"sort"
"strconv"
"strings"
"time"
@ -15,34 +18,22 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
s3errors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/nats-io/nats.go"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"go.uber.org/zap"
)
type (
EventListener interface {
Subscribe(context.Context, string, MsgHandler) error
Listen(context.Context)
}
MsgHandler interface {
HandleMessage(context.Context, *nats.Msg) error
}
MsgHandlerFunc func(context.Context, *nats.Msg) error
BucketResolver interface {
Resolve(ctx context.Context, name string) (cid.ID, error)
}
@ -54,13 +45,12 @@ type (
FormContainerZone(ns string) (zone string, isDefault bool)
}
layer struct {
Layer struct {
frostFS FrostFS
gateOwner user.ID
log *zap.Logger
anonKey AnonymousKey
resolver BucketResolver
ncontroller EventListener
cache *Cache
treeService TreeService
features FeatureSettings
@ -97,14 +87,6 @@ type (
VersionID string
}
// ObjectVersion stores object version info.
ObjectVersion struct {
BktInfo *data.BucketInfo
ObjectName string
VersionID string
NoErrorOnDeleteMarker bool
}
// RangeParams stores range header request parameters.
RangeParams struct {
Start uint64
@ -179,13 +161,6 @@ type (
SessionContainerCreation *session.Container
LocationConstraint string
ObjectLockEnabled bool
APEEnabled bool
}
// PutBucketACLParams stores put bucket acl request parameters.
PutBucketACLParams struct {
BktInfo *data.BucketInfo
EACL *eacl.Table
SessionToken *session.Container
}
// DeleteBucketParams stores delete bucket request parameters.
DeleteBucketParams struct {
@ -219,68 +194,6 @@ type (
encrypted bool
decryptedLen uint64
}
// Client provides S3 API client interface.
Client interface {
Initialize(ctx context.Context, c EventListener) error
EphemeralKey() *keys.PublicKey
GetBucketSettings(ctx context.Context, bktInfo *data.BucketInfo) (*data.BucketSettings, error)
PutBucketSettings(ctx context.Context, p *PutSettingsParams) error
PutBucketCORS(ctx context.Context, p *PutCORSParams) error
GetBucketCORS(ctx context.Context, bktInfo *data.BucketInfo) (*data.CORSConfiguration, error)
DeleteBucketCORS(ctx context.Context, bktInfo *data.BucketInfo) error
ListBuckets(ctx context.Context) ([]*data.BucketInfo, error)
GetBucketInfo(ctx context.Context, name string) (*data.BucketInfo, error)
ResolveCID(ctx context.Context, name string) (cid.ID, error)
GetBucketACL(ctx context.Context, bktInfo *data.BucketInfo) (*BucketACL, error)
PutBucketACL(ctx context.Context, p *PutBucketACLParams) error
CreateBucket(ctx context.Context, p *CreateBucketParams) (*data.BucketInfo, error)
DeleteBucket(ctx context.Context, p *DeleteBucketParams) error
GetObject(ctx context.Context, p *GetObjectParams) (*ObjectPayload, error)
GetObjectInfo(ctx context.Context, p *HeadObjectParams) (*data.ObjectInfo, error)
GetExtendedObjectInfo(ctx context.Context, p *HeadObjectParams) (*data.ExtendedObjectInfo, error)
GetLockInfo(ctx context.Context, obj *ObjectVersion) (*data.LockInfo, error)
PutLockInfo(ctx context.Context, p *PutLockInfoParams) error
GetBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) (map[string]string, error)
PutBucketTagging(ctx context.Context, bktInfo *data.BucketInfo, tagSet map[string]string) error
DeleteBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) error
GetObjectTagging(ctx context.Context, p *GetObjectTaggingParams) (string, map[string]string, error)
PutObjectTagging(ctx context.Context, p *PutObjectTaggingParams) (*data.NodeVersion, error)
DeleteObjectTagging(ctx context.Context, p *ObjectVersion) (*data.NodeVersion, error)
PutObject(ctx context.Context, p *PutObjectParams) (*data.ExtendedObjectInfo, error)
CopyObject(ctx context.Context, p *CopyObjectParams) (*data.ExtendedObjectInfo, error)
ListObjectsV1(ctx context.Context, p *ListObjectsParamsV1) (*ListObjectsInfoV1, error)
ListObjectsV2(ctx context.Context, p *ListObjectsParamsV2) (*ListObjectsInfoV2, error)
ListObjectVersions(ctx context.Context, p *ListObjectVersionsParams) (*ListObjectVersionsInfo, error)
DeleteObjects(ctx context.Context, p *DeleteObjectParams) []*VersionedObject
CreateMultipartUpload(ctx context.Context, p *CreateMultipartParams) error
CompleteMultipartUpload(ctx context.Context, p *CompleteMultipartParams) (*UploadData, *data.ExtendedObjectInfo, error)
UploadPart(ctx context.Context, p *UploadPartParams) (string, error)
UploadPartCopy(ctx context.Context, p *UploadCopyParams) (*data.ObjectInfo, error)
ListMultipartUploads(ctx context.Context, p *ListMultipartUploadsParams) (*ListMultipartUploadsInfo, error)
AbortMultipartUpload(ctx context.Context, p *UploadInfoParams) error
ListParts(ctx context.Context, p *ListPartsParams) (*ListPartsInfo, error)
PutBucketNotificationConfiguration(ctx context.Context, p *PutBucketNotificationConfigurationParams) error
GetBucketNotificationConfiguration(ctx context.Context, bktInfo *data.BucketInfo) (*data.NotificationConfiguration, error)
// Compound methods for optimizations
// GetObjectTaggingAndLock unifies GetObjectTagging and GetLock methods in single tree service invocation.
GetObjectTaggingAndLock(ctx context.Context, p *ObjectVersion, nodeVersion *data.NodeVersion) (map[string]string, data.LockInfo, error)
}
)
const (
@ -307,18 +220,14 @@ func (t *VersionedObject) String() string {
return t.Name + ":" + t.VersionID
}
func (f MsgHandlerFunc) HandleMessage(ctx context.Context, msg *nats.Msg) error {
return f(ctx, msg)
}
func (p HeadObjectParams) Versioned() bool {
return len(p.VersionID) > 0
}
// NewLayer creates an instance of a layer. It checks credentials
// NewLayer creates an instance of a Layer. It checks credentials
// and establishes a gRPC connection with the node.
func NewLayer(log *zap.Logger, frostFS FrostFS, config *Config) Client {
return &layer{
func NewLayer(log *zap.Logger, frostFS FrostFS, config *Config) *Layer {
return &Layer{
frostFS: frostFS,
log: log,
gateOwner: config.GateOwner,
@ -330,27 +239,10 @@ func NewLayer(log *zap.Logger, frostFS FrostFS, config *Config) Client {
}
}
func (n *layer) EphemeralKey() *keys.PublicKey {
func (n *Layer) EphemeralKey() *keys.PublicKey {
return n.anonKey.Key.PublicKey()
}
func (n *layer) Initialize(ctx context.Context, c EventListener) error {
if n.IsNotificationEnabled() {
return fmt.Errorf("already initialized")
}
// todo add notification handlers (e.g. for lifecycles)
c.Listen(ctx)
n.ncontroller = c
return nil
}
func (n *layer) IsNotificationEnabled() bool {
return n.ncontroller != nil
}
// IsAuthenticatedRequest checks if access box exists in the current request.
func IsAuthenticatedRequest(ctx context.Context) bool {
_, err := middleware.GetBoxData(ctx)
@ -367,7 +259,7 @@ func TimeNow(ctx context.Context) time.Time {
}
// BearerOwner returns owner id from BearerToken (context) or from client owner.
func (n *layer) BearerOwner(ctx context.Context) user.ID {
func (n *Layer) BearerOwner(ctx context.Context) user.ID {
if bd, err := middleware.GetBoxData(ctx); err == nil && bd.Gate.BearerToken != nil {
return bearer.ResolveIssuer(*bd.Gate.BearerToken)
}
@ -379,7 +271,7 @@ func (n *layer) BearerOwner(ctx context.Context) user.ID {
}
// SessionTokenForRead returns session container token.
func (n *layer) SessionTokenForRead(ctx context.Context) *session.Container {
func (n *Layer) SessionTokenForRead(ctx context.Context) *session.Container {
if bd, err := middleware.GetBoxData(ctx); err == nil && bd.Gate != nil {
return bd.Gate.SessionToken()
}
@ -387,7 +279,7 @@ func (n *layer) SessionTokenForRead(ctx context.Context) *session.Container {
return nil
}
func (n *layer) reqLogger(ctx context.Context) *zap.Logger {
func (n *Layer) reqLogger(ctx context.Context) *zap.Logger {
reqLogger := middleware.GetReqLog(ctx)
if reqLogger != nil {
return reqLogger
@ -395,7 +287,7 @@ func (n *layer) reqLogger(ctx context.Context) *zap.Logger {
return n.log
}
func (n *layer) prepareAuthParameters(ctx context.Context, prm *PrmAuth, bktOwner user.ID) {
func (n *Layer) prepareAuthParameters(ctx context.Context, prm *PrmAuth, bktOwner user.ID) {
if bd, err := middleware.GetBoxData(ctx); err == nil && bd.Gate.BearerToken != nil {
if bd.Gate.BearerToken.Impersonate() || bktOwner.Equals(bearer.ResolveIssuer(*bd.Gate.BearerToken)) {
prm.BearerToken = bd.Gate.BearerToken
@ -407,7 +299,7 @@ func (n *layer) prepareAuthParameters(ctx context.Context, prm *PrmAuth, bktOwne
}
// GetBucketInfo returns bucket info by name.
func (n *layer) GetBucketInfo(ctx context.Context, name string) (*data.BucketInfo, error) {
func (n *Layer) GetBucketInfo(ctx context.Context, name string) (*data.BucketInfo, error) {
name, err := url.QueryUnescape(name)
if err != nil {
return nil, fmt.Errorf("unescape bucket name: %w", err)
@ -437,7 +329,7 @@ func (n *layer) GetBucketInfo(ctx context.Context, name string) (*data.BucketInf
}
// ResolveCID returns container id by name.
func (n *layer) ResolveCID(ctx context.Context, name string) (cid.ID, error) {
func (n *Layer) ResolveCID(ctx context.Context, name string) (cid.ID, error) {
name, err := url.QueryUnescape(name)
if err != nil {
return cid.ID{}, fmt.Errorf("unescape bucket name: %w", err)
@ -453,32 +345,14 @@ func (n *layer) ResolveCID(ctx context.Context, name string) (cid.ID, error) {
return n.ResolveBucket(ctx, name)
}
// GetBucketACL returns bucket acl info by name.
func (n *layer) GetBucketACL(ctx context.Context, bktInfo *data.BucketInfo) (*BucketACL, error) {
eACL, err := n.GetContainerEACL(ctx, bktInfo.CID)
if err != nil {
return nil, fmt.Errorf("get container eacl: %w", err)
}
return &BucketACL{
Info: bktInfo,
EACL: eACL,
}, nil
}
// PutBucketACL puts bucket acl by name.
func (n *layer) PutBucketACL(ctx context.Context, param *PutBucketACLParams) error {
return n.setContainerEACLTable(ctx, param.BktInfo.CID, param.EACL, param.SessionToken)
}
// ListBuckets returns all user containers. The name of the bucket is a container
// id. Timestamp is omitted since it is not saved in frostfs container.
func (n *layer) ListBuckets(ctx context.Context) ([]*data.BucketInfo, error) {
func (n *Layer) ListBuckets(ctx context.Context) ([]*data.BucketInfo, error) {
return n.containerList(ctx)
}
// GetObject from storage.
func (n *layer) GetObject(ctx context.Context, p *GetObjectParams) (*ObjectPayload, error) {
func (n *Layer) GetObject(ctx context.Context, p *GetObjectParams) (*ObjectPayload, error) {
var params getParams
params.objInfo = p.ObjectInfo
@ -592,7 +466,7 @@ func getDecrypter(p *GetObjectParams) (*encryption.Decrypter, error) {
}
// GetObjectInfo returns meta information about the object.
func (n *layer) GetObjectInfo(ctx context.Context, p *HeadObjectParams) (*data.ObjectInfo, error) {
func (n *Layer) GetObjectInfo(ctx context.Context, p *HeadObjectParams) (*data.ObjectInfo, error) {
extendedObjectInfo, err := n.GetExtendedObjectInfo(ctx, p)
if err != nil {
return nil, err
@ -602,7 +476,7 @@ func (n *layer) GetObjectInfo(ctx context.Context, p *HeadObjectParams) (*data.O
}
// GetExtendedObjectInfo returns meta information and corresponding info from the tree service about the object.
func (n *layer) GetExtendedObjectInfo(ctx context.Context, p *HeadObjectParams) (*data.ExtendedObjectInfo, error) {
func (n *Layer) GetExtendedObjectInfo(ctx context.Context, p *HeadObjectParams) (*data.ExtendedObjectInfo, error) {
var objInfo *data.ExtendedObjectInfo
var err error
@ -623,7 +497,7 @@ func (n *layer) GetExtendedObjectInfo(ctx context.Context, p *HeadObjectParams)
}
// CopyObject from one bucket into another bucket.
func (n *layer) CopyObject(ctx context.Context, p *CopyObjectParams) (*data.ExtendedObjectInfo, error) {
func (n *Layer) CopyObject(ctx context.Context, p *CopyObjectParams) (*data.ExtendedObjectInfo, error) {
objPayload, err := n.GetObject(ctx, &GetObjectParams{
ObjectInfo: p.SrcObject,
Versioned: p.SrcVersioned,
@ -657,19 +531,28 @@ func getRandomOID() (oid.ID, error) {
return objID, nil
}
func (n *layer) deleteObject(ctx context.Context, bkt *data.BucketInfo, settings *data.BucketSettings, obj *VersionedObject) *VersionedObject {
func (n *Layer) deleteObject(ctx context.Context, bkt *data.BucketInfo, settings *data.BucketSettings, obj *VersionedObject) *VersionedObject {
if len(obj.VersionID) != 0 || settings.Unversioned() {
var nodeVersion *data.NodeVersion
if nodeVersion, obj.Error = n.getNodeVersionToDelete(ctx, bkt, obj); obj.Error != nil {
var nodeVersions []*data.NodeVersion
if nodeVersions, obj.Error = n.getNodeVersionsToDelete(ctx, bkt, obj); obj.Error != nil {
return n.handleNotFoundError(bkt, obj)
}
if obj.DeleteMarkVersion, obj.Error = n.removeOldVersion(ctx, bkt, nodeVersion, obj); obj.Error != nil {
return n.handleObjectDeleteErrors(ctx, bkt, obj, nodeVersion.ID)
for _, nodeVersion := range nodeVersions {
if obj.DeleteMarkVersion, obj.Error = n.removeOldVersion(ctx, bkt, nodeVersion, obj); obj.Error != nil {
if !client.IsErrObjectAlreadyRemoved(obj.Error) && !client.IsErrObjectNotFound(obj.Error) {
return obj
}
n.reqLogger(ctx).Debug(logs.CouldntDeleteObjectFromStorageContinueDeleting,
zap.Stringer("cid", bkt.CID), zap.String("oid", obj.VersionID), zap.Error(obj.Error))
}
if obj.Error = n.treeService.RemoveVersion(ctx, bkt, nodeVersion.ID); obj.Error != nil {
return obj
}
}
obj.Error = n.treeService.RemoveVersion(ctx, bkt, nodeVersion.ID)
n.cache.CleanListCacheEntriesContainingObject(obj.Name, bkt.CID)
n.cache.DeleteObjectName(bkt.CID, bkt.Name, obj.Name)
return obj
}
@ -682,20 +565,30 @@ func (n *layer) deleteObject(ctx context.Context, bkt *data.BucketInfo, settings
if settings.VersioningSuspended() {
obj.VersionID = data.UnversionedObjectVersionID
var nullVersionToDelete *data.NodeVersion
if lastVersion.IsUnversioned {
if !lastVersion.IsDeleteMarker {
nullVersionToDelete = lastVersion
}
} else if nullVersionToDelete, obj.Error = n.getNodeVersionToDelete(ctx, bkt, obj); obj.Error != nil {
var nodeVersions []*data.NodeVersion
if nodeVersions, obj.Error = n.getNodeVersionsToDelete(ctx, bkt, obj); obj.Error != nil {
if !isNotFoundError(obj.Error) {
return obj
}
}
if nullVersionToDelete != nil {
if obj.DeleteMarkVersion, obj.Error = n.removeOldVersion(ctx, bkt, nullVersionToDelete, obj); obj.Error != nil {
return n.handleObjectDeleteErrors(ctx, bkt, obj, nullVersionToDelete.ID)
for _, nodeVersion := range nodeVersions {
if nodeVersion.ID == lastVersion.ID && nodeVersion.IsDeleteMarker {
continue
}
if !nodeVersion.IsDeleteMarker {
if obj.DeleteMarkVersion, obj.Error = n.removeOldVersion(ctx, bkt, nodeVersion, obj); obj.Error != nil {
if !client.IsErrObjectAlreadyRemoved(obj.Error) && !client.IsErrObjectNotFound(obj.Error) {
return obj
}
n.reqLogger(ctx).Debug(logs.CouldntDeleteObjectFromStorageContinueDeleting,
zap.Stringer("cid", bkt.CID), zap.String("oid", obj.VersionID), zap.Error(obj.Error))
}
}
if obj.Error = n.treeService.RemoveVersion(ctx, bkt, nodeVersion.ID); obj.Error != nil {
return obj
}
}
}
@ -733,7 +626,7 @@ func (n *layer) deleteObject(ctx context.Context, bkt *data.BucketInfo, settings
return obj
}
func (n *layer) handleNotFoundError(bkt *data.BucketInfo, obj *VersionedObject) *VersionedObject {
func (n *Layer) handleNotFoundError(bkt *data.BucketInfo, obj *VersionedObject) *VersionedObject {
if isNotFoundError(obj.Error) {
obj.Error = nil
n.cache.CleanListCacheEntriesContainingObject(obj.Name, bkt.CID)
@ -743,40 +636,74 @@ func (n *layer) handleNotFoundError(bkt *data.BucketInfo, obj *VersionedObject)
return obj
}
func (n *layer) handleObjectDeleteErrors(ctx context.Context, bkt *data.BucketInfo, obj *VersionedObject, nodeID uint64) *VersionedObject {
if !client.IsErrObjectAlreadyRemoved(obj.Error) && !client.IsErrObjectNotFound(obj.Error) {
return obj
}
n.reqLogger(ctx).Debug(logs.CouldntDeleteObjectFromStorageContinueDeleting,
zap.Stringer("cid", bkt.CID), zap.String("oid", obj.VersionID), zap.Error(obj.Error))
obj.Error = n.treeService.RemoveVersion(ctx, bkt, nodeID)
if obj.Error == nil {
n.cache.DeleteObjectName(bkt.CID, bkt.Name, obj.Name)
}
return obj
}
func isNotFoundError(err error) bool {
return errors.IsS3Error(err, errors.ErrNoSuchKey) ||
errors.IsS3Error(err, errors.ErrNoSuchVersion)
}
func (n *layer) getNodeVersionToDelete(ctx context.Context, bkt *data.BucketInfo, obj *VersionedObject) (*data.NodeVersion, error) {
objVersion := &ObjectVersion{
BktInfo: bkt,
ObjectName: obj.Name,
VersionID: obj.VersionID,
NoErrorOnDeleteMarker: true,
func (n *Layer) getNodeVersionsToDelete(ctx context.Context, bkt *data.BucketInfo, obj *VersionedObject) ([]*data.NodeVersion, error) {
var versionsToDelete []*data.NodeVersion
versions, err := n.treeService.GetVersions(ctx, bkt, obj.Name)
if err != nil {
if stderrors.Is(err, ErrNodeNotFound) {
return nil, fmt.Errorf("%w: %s", s3errors.GetAPIError(s3errors.ErrNoSuchKey), err.Error())
}
return nil, err
}
return n.getNodeVersion(ctx, objVersion)
if len(versions) == 0 {
return nil, fmt.Errorf("%w: there isn't tree node with requested version id", s3errors.GetAPIError(s3errors.ErrNoSuchVersion))
}
sort.Slice(versions, func(i, j int) bool {
return versions[i].Timestamp < versions[j].Timestamp
})
var matchFn func(nv *data.NodeVersion) bool
switch {
case obj.VersionID == data.UnversionedObjectVersionID:
matchFn = func(nv *data.NodeVersion) bool {
return nv.IsUnversioned
}
case len(obj.VersionID) == 0:
latest := versions[len(versions)-1]
if latest.IsUnversioned {
matchFn = func(nv *data.NodeVersion) bool {
return nv.IsUnversioned
}
} else {
matchFn = func(nv *data.NodeVersion) bool {
return nv.ID == latest.ID
}
}
default:
matchFn = func(nv *data.NodeVersion) bool {
return nv.OID.EncodeToString() == obj.VersionID
}
}
var oids []string
for _, v := range versions {
if matchFn(v) {
versionsToDelete = append(versionsToDelete, v)
if !v.IsDeleteMarker {
oids = append(oids, v.OID.EncodeToString())
}
}
}
if len(versionsToDelete) == 0 {
return nil, fmt.Errorf("%w: there isn't tree node with requested version id", s3errors.GetAPIError(s3errors.ErrNoSuchVersion))
}
n.reqLogger(ctx).Debug(logs.GetTreeNodeToDelete, zap.Stringer("cid", bkt.CID), zap.Strings("oids", oids))
return versionsToDelete, nil
}
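For context on the hunk above, here is a minimal, self-contained sketch of the same version-selection rule. nodeVersion, unversionedID and selectVersions are simplified stand-ins introduced only for illustration; they are not the gateway's actual data.NodeVersion, data.UnversionedObjectVersionID or getNodeVersionsToDelete.

package main

import (
	"fmt"
	"sort"
)

// nodeVersion is a trimmed-down stand-in for a tree node version.
type nodeVersion struct {
	ID             uint64
	OID            string
	Timestamp      uint64
	IsUnversioned  bool
	IsDeleteMarker bool
}

const unversionedID = "null"

// selectVersions mirrors the selection rule above: all "null" versions,
// or the node(s) behind the latest version, or the single node whose OID
// matches an explicit version id. Assumes versions is non-empty, as the
// caller returns early otherwise.
func selectVersions(versions []nodeVersion, requestedVersionID string) []nodeVersion {
	sort.Slice(versions, func(i, j int) bool { return versions[i].Timestamp < versions[j].Timestamp })

	var match func(nv nodeVersion) bool
	switch {
	case requestedVersionID == unversionedID:
		match = func(nv nodeVersion) bool { return nv.IsUnversioned }
	case requestedVersionID == "":
		latest := versions[len(versions)-1]
		if latest.IsUnversioned {
			match = func(nv nodeVersion) bool { return nv.IsUnversioned }
		} else {
			match = func(nv nodeVersion) bool { return nv.ID == latest.ID }
		}
	default:
		match = func(nv nodeVersion) bool { return nv.OID == requestedVersionID }
	}

	var out []nodeVersion
	for _, v := range versions {
		if match(v) {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	versions := []nodeVersion{
		{ID: 1, OID: "oid-1", Timestamp: 10, IsUnversioned: true},
		{ID: 2, OID: "oid-2", Timestamp: 20, IsUnversioned: true}, // duplicate "null" version after split
		{ID: 3, OID: "oid-3", Timestamp: 30},
	}
	fmt.Println(selectVersions(versions, unversionedID)) // both unversioned nodes
	fmt.Println(selectVersions(versions, ""))            // only the latest node
	fmt.Println(selectVersions(versions, "oid-3"))       // exact version id
}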
func (n *layer) getLastNodeVersion(ctx context.Context, bkt *data.BucketInfo, obj *VersionedObject) (*data.NodeVersion, error) {
objVersion := &ObjectVersion{
func (n *Layer) getLastNodeVersion(ctx context.Context, bkt *data.BucketInfo, obj *VersionedObject) (*data.NodeVersion, error) {
objVersion := &data.ObjectVersion{
BktInfo: bkt,
ObjectName: obj.Name,
VersionID: "",
@ -786,16 +713,47 @@ func (n *layer) getLastNodeVersion(ctx context.Context, bkt *data.BucketInfo, ob
return n.getNodeVersion(ctx, objVersion)
}
func (n *layer) removeOldVersion(ctx context.Context, bkt *data.BucketInfo, nodeVersion *data.NodeVersion, obj *VersionedObject) (string, error) {
func (n *Layer) removeOldVersion(ctx context.Context, bkt *data.BucketInfo, nodeVersion *data.NodeVersion, obj *VersionedObject) (string, error) {
if nodeVersion.IsDeleteMarker {
return obj.VersionID, nil
}
if nodeVersion.IsCombined {
return "", n.removeCombinedObject(ctx, bkt, nodeVersion)
}
return "", n.objectDelete(ctx, bkt, nodeVersion.OID)
}
func (n *Layer) removeCombinedObject(ctx context.Context, bkt *data.BucketInfo, nodeVersion *data.NodeVersion) error {
combinedObj, err := n.objectGet(ctx, bkt, nodeVersion.OID)
if err != nil {
return fmt.Errorf("get combined object '%s': %w", nodeVersion.OID.EncodeToString(), err)
}
var parts []*data.PartInfo
if err = json.Unmarshal(combinedObj.Payload(), &parts); err != nil {
return fmt.Errorf("unmarshal combined object parts: %w", err)
}
for _, part := range parts {
if err = n.objectDelete(ctx, bkt, part.OID); err == nil {
continue
}
if !client.IsErrObjectAlreadyRemoved(err) && !client.IsErrObjectNotFound(err) {
return fmt.Errorf("couldn't delete part '%s': %w", part.OID.EncodeToString(), err)
}
n.reqLogger(ctx).Warn(logs.CouldntDeletePart, zap.String("cid", bkt.CID.EncodeToString()),
zap.String("oid", part.OID.EncodeToString()), zap.Int("part number", part.Number), zap.Error(err))
}
return n.objectDelete(ctx, bkt, nodeVersion.OID)
}
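A rough sketch of the combined-object layout that removeCombinedObject relies on: the payload of a combined (large multipart) object is a JSON-encoded part list, and every part is deleted before the combined object itself. partInfo and the JSON field names below are illustrative stand-ins, not the gateway's actual data.PartInfo serialization.

package main

import (
	"encoding/json"
	"fmt"
)

// partInfo keeps only the fields needed for this illustration.
type partInfo struct {
	OID    string `json:"OID"`
	Number int    `json:"Number"`
}

func main() {
	// Hypothetical combined-object payload: a JSON array of parts.
	payload := []byte(`[{"OID":"part-oid-1","Number":1},{"OID":"part-oid-2","Number":2}]`)

	var parts []partInfo
	if err := json.Unmarshal(payload, &parts); err != nil {
		panic(fmt.Errorf("unmarshal combined object parts: %w", err))
	}

	for _, p := range parts {
		// In the gateway this is objectDelete; "already removed" and
		// "not found" errors are only logged so the loop keeps going.
		fmt.Printf("delete part #%d (%s)\n", p.Number, p.OID)
	}
	// Finally the combined object that held the part list is deleted too.
}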
// DeleteObjects from the storage.
func (n *layer) DeleteObjects(ctx context.Context, p *DeleteObjectParams) []*VersionedObject {
func (n *Layer) DeleteObjects(ctx context.Context, p *DeleteObjectParams) []*VersionedObject {
for i, obj := range p.Objects {
p.Objects[i] = n.deleteObject(ctx, p.BktInfo, p.Settings, obj)
if p.IsMultiple && p.Objects[i].Error != nil {
@ -806,7 +764,7 @@ func (n *layer) DeleteObjects(ctx context.Context, p *DeleteObjectParams) []*Ver
return p.Objects
}
func (n *layer) CreateBucket(ctx context.Context, p *CreateBucketParams) (*data.BucketInfo, error) {
func (n *Layer) CreateBucket(ctx context.Context, p *CreateBucketParams) (*data.BucketInfo, error) {
bktInfo, err := n.GetBucketInfo(ctx, p.Name)
if err != nil {
if errors.IsS3Error(err, errors.ErrNoSuchBucket) {
@ -822,7 +780,7 @@ func (n *layer) CreateBucket(ctx context.Context, p *CreateBucketParams) (*data.
return nil, errors.GetAPIError(errors.ErrBucketAlreadyExists)
}
func (n *layer) ResolveBucket(ctx context.Context, name string) (cid.ID, error) {
func (n *Layer) ResolveBucket(ctx context.Context, name string) (cid.ID, error) {
var cnrID cid.ID
if err := cnrID.DecodeString(name); err != nil {
if cnrID, err = n.resolver.Resolve(ctx, name); err != nil {
@ -835,7 +793,7 @@ func (n *layer) ResolveBucket(ctx context.Context, name string) (cid.ID, error)
return cnrID, nil
}
func (n *layer) DeleteBucket(ctx context.Context, p *DeleteBucketParams) error {
func (n *Layer) DeleteBucket(ctx context.Context, p *DeleteBucketParams) error {
res, _, err := n.getAllObjectsVersions(ctx, commonVersionsListingParams{
BktInfo: p.BktInfo,
MaxKeys: 1,


@ -96,7 +96,7 @@ const (
)
// ListObjectsV1 returns objects in a bucket for requests of Version 1.
func (n *layer) ListObjectsV1(ctx context.Context, p *ListObjectsParamsV1) (*ListObjectsInfoV1, error) {
func (n *Layer) ListObjectsV1(ctx context.Context, p *ListObjectsParamsV1) (*ListObjectsInfoV1, error) {
var result ListObjectsInfoV1
prm := commonLatestVersionsListingParams{
@ -127,7 +127,7 @@ func (n *layer) ListObjectsV1(ctx context.Context, p *ListObjectsParamsV1) (*Lis
}
// ListObjectsV2 returns objects in a bucket for requests of Version 2.
func (n *layer) ListObjectsV2(ctx context.Context, p *ListObjectsParamsV2) (*ListObjectsInfoV2, error) {
func (n *Layer) ListObjectsV2(ctx context.Context, p *ListObjectsParamsV2) (*ListObjectsInfoV2, error) {
var result ListObjectsInfoV2
prm := commonLatestVersionsListingParams{
@ -157,7 +157,7 @@ func (n *layer) ListObjectsV2(ctx context.Context, p *ListObjectsParamsV2) (*Lis
return &result, nil
}
func (n *layer) ListObjectVersions(ctx context.Context, p *ListObjectVersionsParams) (*ListObjectVersionsInfo, error) {
func (n *Layer) ListObjectVersions(ctx context.Context, p *ListObjectVersionsParams) (*ListObjectVersionsInfo, error) {
prm := commonVersionsListingParams{
BktInfo: p.BktInfo,
Delimiter: p.Delimiter,
@ -188,7 +188,7 @@ func (n *layer) ListObjectVersions(ctx context.Context, p *ListObjectVersionsPar
return res, nil
}
func (n *layer) getLatestObjectsVersions(ctx context.Context, p commonLatestVersionsListingParams) (objects []*data.ExtendedNodeVersion, next *data.ExtendedNodeVersion, err error) {
func (n *Layer) getLatestObjectsVersions(ctx context.Context, p commonLatestVersionsListingParams) (objects []*data.ExtendedNodeVersion, next *data.ExtendedNodeVersion, err error) {
if p.MaxKeys == 0 {
return nil, nil, nil
}
@ -225,7 +225,7 @@ func (n *layer) getLatestObjectsVersions(ctx context.Context, p commonLatestVers
return
}
func (n *layer) getAllObjectsVersions(ctx context.Context, p commonVersionsListingParams) ([]*data.ExtendedNodeVersion, bool, error) {
func (n *Layer) getAllObjectsVersions(ctx context.Context, p commonVersionsListingParams) ([]*data.ExtendedNodeVersion, bool, error) {
if p.MaxKeys == 0 {
return nil, false, nil
}
@ -301,15 +301,15 @@ func formVersionsListRow(objects []*data.ExtendedNodeVersion, rowStartIndex int,
}
}
func (n *layer) getListLatestVersionsSession(ctx context.Context, p commonLatestVersionsListingParams) (*data.ListSession, error) {
func (n *Layer) getListLatestVersionsSession(ctx context.Context, p commonLatestVersionsListingParams) (*data.ListSession, error) {
return n.getListVersionsSession(ctx, p.commonVersionsListingParams, true)
}
func (n *layer) getListAllVersionsSession(ctx context.Context, p commonVersionsListingParams) (*data.ListSession, error) {
func (n *Layer) getListAllVersionsSession(ctx context.Context, p commonVersionsListingParams) (*data.ListSession, error) {
return n.getListVersionsSession(ctx, p, false)
}
func (n *layer) getListVersionsSession(ctx context.Context, p commonVersionsListingParams, latestOnly bool) (*data.ListSession, error) {
func (n *Layer) getListVersionsSession(ctx context.Context, p commonVersionsListingParams, latestOnly bool) (*data.ListSession, error) {
owner := n.BearerOwner(ctx)
cacheKey := cache.CreateListSessionCacheKey(p.BktInfo.CID, p.Prefix, p.Bookmark)
@ -329,12 +329,12 @@ func (n *layer) getListVersionsSession(ctx context.Context, p commonVersionsList
return session, nil
}
func (n *layer) initNewVersionsByPrefixSession(ctx context.Context, p commonVersionsListingParams, latestOnly bool) (session *data.ListSession, err error) {
func (n *Layer) initNewVersionsByPrefixSession(ctx context.Context, p commonVersionsListingParams, latestOnly bool) (session *data.ListSession, err error) {
session = &data.ListSession{NamesMap: make(map[string]struct{})}
session.Context, session.Cancel = context.WithCancel(context.Background())
if bd, err := middleware.GetBoxData(ctx); err == nil {
session.Context = middleware.SetBoxData(session.Context, bd)
session.Context = middleware.SetBox(session.Context, &middleware.Box{AccessBox: bd})
}
session.Stream, err = n.treeService.InitVersionsByPrefixStream(session.Context, p.BktInfo, p.Prefix, latestOnly)
@ -345,7 +345,7 @@ func (n *layer) initNewVersionsByPrefixSession(ctx context.Context, p commonVers
return session, nil
}
func (n *layer) putListLatestVersionsSession(ctx context.Context, p commonLatestVersionsListingParams, session *data.ListSession, allObjects []*data.ExtendedNodeVersion) {
func (n *Layer) putListLatestVersionsSession(ctx context.Context, p commonLatestVersionsListingParams, session *data.ListSession, allObjects []*data.ExtendedNodeVersion) {
if len(allObjects) <= p.MaxKeys {
return
}
@ -366,7 +366,7 @@ func (n *layer) putListLatestVersionsSession(ctx context.Context, p commonLatest
n.cache.PutListSession(n.BearerOwner(ctx), cacheKey, session)
}
func (n *layer) putListAllVersionsSession(ctx context.Context, p commonVersionsListingParams, session *data.ListSession, allObjects []*data.ExtendedNodeVersion) {
func (n *Layer) putListAllVersionsSession(ctx context.Context, p commonVersionsListingParams, session *data.ListSession, allObjects []*data.ExtendedNodeVersion) {
if len(allObjects) <= p.MaxKeys {
return
}
@ -498,7 +498,7 @@ func nodesGeneratorVersions(ctx context.Context, p commonVersionsListingParams,
return nodeCh, errCh
}
func (n *layer) initWorkerPool(ctx context.Context, size int, p commonVersionsListingParams, input <-chan *data.ExtendedNodeVersion) (<-chan *data.ExtendedNodeVersion, error) {
func (n *Layer) initWorkerPool(ctx context.Context, size int, p commonVersionsListingParams, input <-chan *data.ExtendedNodeVersion) (<-chan *data.ExtendedNodeVersion, error) {
reqLog := n.reqLogger(ctx)
pool, err := ants.NewPool(size, ants.WithLogger(&logWrapper{reqLog}))
if err != nil {
@ -637,7 +637,7 @@ func triageExtendedObjects(allObjects []*data.ExtendedNodeVersion) (prefixes []s
return
}
func (n *layer) objectInfoFromObjectsCacheOrFrostFS(ctx context.Context, bktInfo *data.BucketInfo, node *data.NodeVersion) (oi *data.ObjectInfo) {
func (n *Layer) objectInfoFromObjectsCacheOrFrostFS(ctx context.Context, bktInfo *data.BucketInfo, node *data.NodeVersion) (oi *data.ObjectInfo) {
owner := n.BearerOwner(ctx)
if extInfo := n.cache.GetObject(owner, newAddress(bktInfo.CID, node.OID)); extInfo != nil {
return extInfo.ObjectInfo


@ -20,7 +20,7 @@ func TestObjectLockAttributes(t *testing.T) {
obj := tc.putObject([]byte("content obj1 v1"))
p := &PutLockInfoParams{
ObjVersion: &ObjectVersion{
ObjVersion: &data.ObjectVersion{
BktInfo: tc.bktInfo,
ObjectName: obj.Name,
VersionID: obj.VersionID(),


@ -36,7 +36,6 @@ const (
MultipartObjectSize = "S3-Multipart-Object-Size"
metaPrefix = "meta-"
aclPrefix = "acl-"
MaxSizeUploadsList = 1000
MaxSizePartsList = 1000
@ -62,8 +61,7 @@ type (
}
UploadData struct {
TagSet map[string]string
ACLHeaders map[string]string
TagSet map[string]string
}
UploadPartParams struct {
@ -146,10 +144,9 @@ type (
}
)
func (n *layer) CreateMultipartUpload(ctx context.Context, p *CreateMultipartParams) error {
func (n *Layer) CreateMultipartUpload(ctx context.Context, p *CreateMultipartParams) error {
metaSize := len(p.Header)
if p.Data != nil {
metaSize += len(p.Data.ACLHeaders)
metaSize += len(p.Data.TagSet)
}
@ -167,10 +164,6 @@ func (n *layer) CreateMultipartUpload(ctx context.Context, p *CreateMultipartPar
}
if p.Data != nil {
for key, val := range p.Data.ACLHeaders {
info.Meta[aclPrefix+key] = val
}
for key, val := range p.Data.TagSet {
info.Meta[tagPrefix+key] = val
}
@ -185,7 +178,7 @@ func (n *layer) CreateMultipartUpload(ctx context.Context, p *CreateMultipartPar
return n.treeService.CreateMultipartUpload(ctx, p.Info.Bkt, info)
}
func (n *layer) UploadPart(ctx context.Context, p *UploadPartParams) (string, error) {
func (n *Layer) UploadPart(ctx context.Context, p *UploadPartParams) (string, error) {
multipartInfo, err := n.treeService.GetMultipartUpload(ctx, p.Info.Bkt, p.Info.Key, p.Info.UploadID)
if err != nil {
if errors.Is(err, ErrNodeNotFound) {
@ -206,7 +199,7 @@ func (n *layer) UploadPart(ctx context.Context, p *UploadPartParams) (string, er
return objInfo.ETag(n.features.MD5Enabled()), nil
}
func (n *layer) uploadPart(ctx context.Context, multipartInfo *data.MultipartInfo, p *UploadPartParams) (*data.ObjectInfo, error) {
func (n *Layer) uploadPart(ctx context.Context, multipartInfo *data.MultipartInfo, p *UploadPartParams) (*data.ObjectInfo, error) {
encInfo := FormEncryptionInfo(multipartInfo.Meta)
if err := p.Info.Encryption.MatchObjectEncryption(encInfo); err != nil {
n.reqLogger(ctx).Warn(logs.MismatchedObjEncryptionInfo, zap.Error(err))
@ -319,7 +312,7 @@ func (n *layer) uploadPart(ctx context.Context, multipartInfo *data.MultipartInf
return objInfo, nil
}
func (n *layer) UploadPartCopy(ctx context.Context, p *UploadCopyParams) (*data.ObjectInfo, error) {
func (n *Layer) UploadPartCopy(ctx context.Context, p *UploadCopyParams) (*data.ObjectInfo, error) {
multipartInfo, err := n.treeService.GetMultipartUpload(ctx, p.Info.Bkt, p.Info.Key, p.Info.UploadID)
if err != nil {
if errors.Is(err, ErrNodeNotFound) {
@ -367,7 +360,7 @@ func (n *layer) UploadPartCopy(ctx context.Context, p *UploadCopyParams) (*data.
return n.uploadPart(ctx, multipartInfo, params)
}
func (n *layer) CompleteMultipartUpload(ctx context.Context, p *CompleteMultipartParams) (*UploadData, *data.ExtendedObjectInfo, error) {
func (n *Layer) CompleteMultipartUpload(ctx context.Context, p *CompleteMultipartParams) (*UploadData, *data.ExtendedObjectInfo, error) {
for i := 1; i < len(p.Parts); i++ {
if p.Parts[i].PartNumber <= p.Parts[i-1].PartNumber {
return nil, nil, s3errors.GetAPIError(s3errors.ErrInvalidPartOrder)
@ -432,16 +425,13 @@ func (n *layer) CompleteMultipartUpload(ctx context.Context, p *CompleteMultipar
initMetadata[MultipartObjectSize] = strconv.FormatUint(multipartObjetSize, 10)
uploadData := &UploadData{
TagSet: make(map[string]string),
ACLHeaders: make(map[string]string),
TagSet: make(map[string]string),
}
for key, val := range multipartInfo.Meta {
if strings.HasPrefix(key, metaPrefix) {
initMetadata[strings.TrimPrefix(key, metaPrefix)] = val
} else if strings.HasPrefix(key, tagPrefix) {
uploadData.TagSet[strings.TrimPrefix(key, tagPrefix)] = val
} else if strings.HasPrefix(key, aclPrefix) {
uploadData.ACLHeaders[strings.TrimPrefix(key, aclPrefix)] = val
}
}
@ -492,7 +482,7 @@ func (n *layer) CompleteMultipartUpload(ctx context.Context, p *CompleteMultipar
return uploadData, extObjInfo, n.treeService.DeleteMultipartUpload(ctx, p.Info.Bkt, multipartInfo)
}
func (n *layer) ListMultipartUploads(ctx context.Context, p *ListMultipartUploadsParams) (*ListMultipartUploadsInfo, error) {
func (n *Layer) ListMultipartUploads(ctx context.Context, p *ListMultipartUploadsParams) (*ListMultipartUploadsInfo, error) {
var result ListMultipartUploadsInfo
if p.MaxUploads == 0 {
return &result, nil
@ -552,7 +542,7 @@ func (n *layer) ListMultipartUploads(ctx context.Context, p *ListMultipartUpload
return &result, nil
}
func (n *layer) AbortMultipartUpload(ctx context.Context, p *UploadInfoParams) error {
func (n *Layer) AbortMultipartUpload(ctx context.Context, p *UploadInfoParams) error {
multipartInfo, parts, err := n.getUploadParts(ctx, p)
if err != nil {
return err
@ -568,7 +558,7 @@ func (n *layer) AbortMultipartUpload(ctx context.Context, p *UploadInfoParams) e
return n.treeService.DeleteMultipartUpload(ctx, p.Bkt, multipartInfo)
}
func (n *layer) ListParts(ctx context.Context, p *ListPartsParams) (*ListPartsInfo, error) {
func (n *Layer) ListParts(ctx context.Context, p *ListPartsParams) (*ListPartsInfo, error) {
var res ListPartsInfo
multipartInfo, partsInfo, err := n.getUploadParts(ctx, p.Info)
if err != nil {
@ -622,7 +612,7 @@ func (n *layer) ListParts(ctx context.Context, p *ListPartsParams) (*ListPartsIn
return &res, nil
}
func (n *layer) getUploadParts(ctx context.Context, p *UploadInfoParams) (*data.MultipartInfo, map[int]*data.PartInfo, error) {
func (n *Layer) getUploadParts(ctx context.Context, p *UploadInfoParams) (*data.MultipartInfo, map[int]*data.PartInfo, error) {
multipartInfo, err := n.treeService.GetMultipartUpload(ctx, p.Bkt, p.Key, p.UploadID)
if err != nil {
if errors.Is(err, ErrNodeNotFound) {


@ -1,89 +0,0 @@
package layer
import (
"bytes"
"context"
"encoding/xml"
errorsStd "errors"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"go.uber.org/zap"
)
type PutBucketNotificationConfigurationParams struct {
RequestInfo *middleware.ReqInfo
BktInfo *data.BucketInfo
Configuration *data.NotificationConfiguration
CopiesNumbers []uint32
}
func (n *layer) PutBucketNotificationConfiguration(ctx context.Context, p *PutBucketNotificationConfigurationParams) error {
confXML, err := xml.Marshal(p.Configuration)
if err != nil {
return fmt.Errorf("marshal notify configuration: %w", err)
}
prm := PrmObjectCreate{
Container: p.BktInfo.CID,
Payload: bytes.NewReader(confXML),
Filepath: p.BktInfo.NotificationConfigurationObjectName(),
CreationTime: TimeNow(ctx),
CopiesNumber: p.CopiesNumbers,
}
_, objID, _, _, err := n.objectPutAndHash(ctx, prm, p.BktInfo)
if err != nil {
return err
}
objIDToDelete, err := n.treeService.PutNotificationConfigurationNode(ctx, p.BktInfo, objID)
objIDToDeleteNotFound := errorsStd.Is(err, ErrNoNodeToRemove)
if err != nil && !objIDToDeleteNotFound {
return err
}
if !objIDToDeleteNotFound {
if err = n.objectDelete(ctx, p.BktInfo, objIDToDelete); err != nil {
n.reqLogger(ctx).Error(logs.CouldntDeleteNotificationConfigurationObject, zap.Error(err),
zap.String("cid", p.BktInfo.CID.EncodeToString()),
zap.String("oid", objIDToDelete.EncodeToString()))
}
}
n.cache.PutNotificationConfiguration(n.BearerOwner(ctx), p.BktInfo, p.Configuration)
return nil
}
func (n *layer) GetBucketNotificationConfiguration(ctx context.Context, bktInfo *data.BucketInfo) (*data.NotificationConfiguration, error) {
owner := n.BearerOwner(ctx)
if conf := n.cache.GetNotificationConfiguration(owner, bktInfo); conf != nil {
return conf, nil
}
objID, err := n.treeService.GetNotificationConfigurationNode(ctx, bktInfo)
objIDNotFound := errorsStd.Is(err, ErrNodeNotFound)
if err != nil && !objIDNotFound {
return nil, err
}
conf := &data.NotificationConfiguration{}
if !objIDNotFound {
obj, err := n.objectGet(ctx, bktInfo, objID)
if err != nil {
return nil, err
}
if err = xml.Unmarshal(obj.Payload(), &conf); err != nil {
return nil, fmt.Errorf("unmarshal notify configuration: %w", err)
}
}
n.cache.PutNotificationConfiguration(owner, bktInfo, conf)
return conf, nil
}


@ -67,7 +67,7 @@ func newAddress(cnr cid.ID, obj oid.ID) oid.Address {
}
// objectHead returns all object's headers.
func (n *layer) objectHead(ctx context.Context, bktInfo *data.BucketInfo, idObj oid.ID) (*object.Object, error) {
func (n *Layer) objectHead(ctx context.Context, bktInfo *data.BucketInfo, idObj oid.ID) (*object.Object, error) {
prm := PrmObjectRead{
Container: bktInfo.CID,
Object: idObj,
@ -84,7 +84,7 @@ func (n *layer) objectHead(ctx context.Context, bktInfo *data.BucketInfo, idObj
return res.Head, nil
}
func (n *layer) initObjectPayloadReader(ctx context.Context, p getParams) (io.Reader, error) {
func (n *Layer) initObjectPayloadReader(ctx context.Context, p getParams) (io.Reader, error) {
if _, isCombined := p.objInfo.Headers[MultipartObjectSize]; !isCombined {
return n.initFrostFSObjectPayloadReader(ctx, getFrostFSParams{
off: p.off,
@ -131,7 +131,7 @@ func (n *layer) initObjectPayloadReader(ctx context.Context, p getParams) (io.Re
// initializes payload reader of the FrostFS object.
// Zero range corresponds to full payload (panics if only offset is set).
func (n *layer) initFrostFSObjectPayloadReader(ctx context.Context, p getFrostFSParams) (io.Reader, error) {
func (n *Layer) initFrostFSObjectPayloadReader(ctx context.Context, p getFrostFSParams) (io.Reader, error) {
prm := PrmObjectRead{
Container: p.bktInfo.CID,
Object: p.oid,
@ -150,7 +150,7 @@ func (n *layer) initFrostFSObjectPayloadReader(ctx context.Context, p getFrostFS
}
// objectGet returns an object together with its payload.
func (n *layer) objectGet(ctx context.Context, bktInfo *data.BucketInfo, objID oid.ID) (*object.Object, error) {
func (n *Layer) objectGet(ctx context.Context, bktInfo *data.BucketInfo, objID oid.ID) (*object.Object, error) {
prm := PrmObjectRead{
Container: bktInfo.CID,
Object: objID,
@ -214,7 +214,7 @@ func ParseCompletedPartHeader(hdr string) (*Part, error) {
}
// PutObject stores the object into FrostFS, taking the payload from io.Reader.
func (n *layer) PutObject(ctx context.Context, p *PutObjectParams) (*data.ExtendedObjectInfo, error) {
func (n *Layer) PutObject(ctx context.Context, p *PutObjectParams) (*data.ExtendedObjectInfo, error) {
bktSettings, err := n.GetBucketSettings(ctx, p.BktInfo)
if err != nil {
return nil, fmt.Errorf("couldn't get versioning settings object: %w", err)
@ -321,7 +321,7 @@ func (n *layer) PutObject(ctx context.Context, p *PutObjectParams) (*data.Extend
if p.Lock != nil && (p.Lock.Retention != nil || p.Lock.LegalHold != nil) {
putLockInfoPrms := &PutLockInfoParams{
ObjVersion: &ObjectVersion{
ObjVersion: &data.ObjectVersion{
BktInfo: p.BktInfo,
ObjectName: p.Object,
VersionID: id.EncodeToString(),
@ -363,7 +363,7 @@ func (n *layer) PutObject(ctx context.Context, p *PutObjectParams) (*data.Extend
return extendedObjInfo, nil
}
func (n *layer) headLastVersionIfNotDeleted(ctx context.Context, bkt *data.BucketInfo, objectName string) (*data.ExtendedObjectInfo, error) {
func (n *Layer) headLastVersionIfNotDeleted(ctx context.Context, bkt *data.BucketInfo, objectName string) (*data.ExtendedObjectInfo, error) {
owner := n.BearerOwner(ctx)
if extObjInfo := n.cache.GetLastObject(owner, bkt.Name, objectName); extObjInfo != nil {
return extObjInfo, nil
@ -384,7 +384,7 @@ func (n *layer) headLastVersionIfNotDeleted(ctx context.Context, bkt *data.Bucke
meta, err := n.objectHead(ctx, bkt, node.OID)
if err != nil {
if client.IsErrObjectNotFound(err) {
return nil, fmt.Errorf("%w: %s", apiErrors.GetAPIError(apiErrors.ErrNoSuchKey), err.Error())
return nil, fmt.Errorf("%w: %s; %s", apiErrors.GetAPIError(apiErrors.ErrNoSuchKey), err.Error(), node.OID.EncodeToString())
}
return nil, err
}
@ -401,7 +401,7 @@ func (n *layer) headLastVersionIfNotDeleted(ctx context.Context, bkt *data.Bucke
return extObjInfo, nil
}
func (n *layer) headVersion(ctx context.Context, bkt *data.BucketInfo, p *HeadObjectParams) (*data.ExtendedObjectInfo, error) {
func (n *Layer) headVersion(ctx context.Context, bkt *data.BucketInfo, p *HeadObjectParams) (*data.ExtendedObjectInfo, error) {
var err error
var foundVersion *data.NodeVersion
if p.VersionID == data.UnversionedObjectVersionID {
@ -459,7 +459,7 @@ func (n *layer) headVersion(ctx context.Context, bkt *data.BucketInfo, p *HeadOb
}
// objectDelete puts tombstone object into frostfs.
func (n *layer) objectDelete(ctx context.Context, bktInfo *data.BucketInfo, idObj oid.ID) error {
func (n *Layer) objectDelete(ctx context.Context, bktInfo *data.BucketInfo, idObj oid.ID) error {
prm := PrmObjectDelete{
Container: bktInfo.CID,
Object: idObj,
@ -474,7 +474,7 @@ func (n *layer) objectDelete(ctx context.Context, bktInfo *data.BucketInfo, idOb
// objectPutAndHash prepares auth parameters and invokes frostfs.CreateObject.
// Returns object ID and payload sha256 hash.
func (n *layer) objectPutAndHash(ctx context.Context, prm PrmObjectCreate, bktInfo *data.BucketInfo) (uint64, oid.ID, []byte, []byte, error) {
func (n *Layer) objectPutAndHash(ctx context.Context, prm PrmObjectCreate, bktInfo *data.BucketInfo) (uint64, oid.ID, []byte, []byte, error) {
n.prepareAuthParameters(ctx, &prm.PrmAuth, bktInfo.Owner)
prm.ClientCut = n.features.ClientCut()
prm.BufferMaxSize = n.features.BufferMaxSizeForPut()


@ -31,8 +31,6 @@ func TestWrapReader(t *testing.T) {
func TestGoroutinesDontLeakInPutAndHash(t *testing.T) {
tc := prepareContext(t)
l, ok := tc.layer.(*layer)
require.True(t, ok)
content := make([]byte, 128*1024)
_, err := rand.Read(content)
@ -46,7 +44,7 @@ func TestGoroutinesDontLeakInPutAndHash(t *testing.T) {
expErr := errors.New("some error")
tc.testFrostFS.SetObjectPutError(tc.obj, expErr)
_, _, _, _, err = l.objectPutAndHash(tc.ctx, prm, tc.bktInfo)
_, _, _, _, err = tc.layer.objectPutAndHash(tc.ctx, prm, tc.bktInfo)
require.ErrorIs(t, err, expErr)
require.Empty(t, payload.Len(), "body must be read out otherwise goroutines can leak in wrapReader")
}


@ -20,13 +20,13 @@ const (
)
type PutLockInfoParams struct {
ObjVersion *ObjectVersion
ObjVersion *data.ObjectVersion
NewLock *data.ObjectLock
CopiesNumbers []uint32
NodeVersion *data.NodeVersion // optional
}
func (n *layer) PutLockInfo(ctx context.Context, p *PutLockInfoParams) (err error) {
func (n *Layer) PutLockInfo(ctx context.Context, p *PutLockInfoParams) (err error) {
newLock := p.NewLock
versionNode := p.NodeVersion
// sometimes node version can be provided from executing context
@ -100,7 +100,7 @@ func (n *layer) PutLockInfo(ctx context.Context, p *PutLockInfoParams) (err erro
return nil
}
func (n *layer) getNodeVersionFromCacheOrFrostfs(ctx context.Context, objVersion *ObjectVersion) (nodeVersion *data.NodeVersion, err error) {
func (n *Layer) getNodeVersionFromCacheOrFrostfs(ctx context.Context, objVersion *data.ObjectVersion) (nodeVersion *data.NodeVersion, err error) {
// check cache if node version is stored inside extendedObjectVersion
nodeVersion = n.getNodeVersionFromCache(n.BearerOwner(ctx), objVersion)
if nodeVersion == nil {
@ -111,7 +111,7 @@ func (n *layer) getNodeVersionFromCacheOrFrostfs(ctx context.Context, objVersion
return nodeVersion, nil
}
func (n *layer) putLockObject(ctx context.Context, bktInfo *data.BucketInfo, objID oid.ID, lock *data.ObjectLock, copiesNumber []uint32) (oid.ID, error) {
func (n *Layer) putLockObject(ctx context.Context, bktInfo *data.BucketInfo, objID oid.ID, lock *data.ObjectLock, copiesNumber []uint32) (oid.ID, error) {
prm := PrmObjectCreate{
Container: bktInfo.CID,
Locks: []oid.ID{objID},
@ -129,7 +129,7 @@ func (n *layer) putLockObject(ctx context.Context, bktInfo *data.BucketInfo, obj
return id, err
}
func (n *layer) GetLockInfo(ctx context.Context, objVersion *ObjectVersion) (*data.LockInfo, error) {
func (n *Layer) GetLockInfo(ctx context.Context, objVersion *data.ObjectVersion) (*data.LockInfo, error) {
owner := n.BearerOwner(ctx)
if lockInfo := n.cache.GetLockInfo(owner, lockObjectKey(objVersion)); lockInfo != nil {
return lockInfo, nil
@ -153,7 +153,7 @@ func (n *layer) GetLockInfo(ctx context.Context, objVersion *ObjectVersion) (*da
return lockInfo, nil
}
func (n *layer) getCORS(ctx context.Context, bkt *data.BucketInfo) (*data.CORSConfiguration, error) {
func (n *Layer) getCORS(ctx context.Context, bkt *data.BucketInfo) (*data.CORSConfiguration, error) {
owner := n.BearerOwner(ctx)
if cors := n.cache.GetCORS(owner, bkt); cors != nil {
return cors, nil
@ -185,12 +185,12 @@ func (n *layer) getCORS(ctx context.Context, bkt *data.BucketInfo) (*data.CORSCo
return cors, nil
}
func lockObjectKey(objVersion *ObjectVersion) string {
func lockObjectKey(objVersion *data.ObjectVersion) string {
// todo reconsider forming name since versionID can be "null" or ""
return ".lock." + objVersion.BktInfo.CID.EncodeToString() + "." + objVersion.ObjectName + "." + objVersion.VersionID
}
func (n *layer) GetBucketSettings(ctx context.Context, bktInfo *data.BucketInfo) (*data.BucketSettings, error) {
func (n *Layer) GetBucketSettings(ctx context.Context, bktInfo *data.BucketInfo) (*data.BucketSettings, error) {
owner := n.BearerOwner(ctx)
if settings := n.cache.GetSettings(owner, bktInfo); settings != nil {
return settings, nil
@ -209,7 +209,7 @@ func (n *layer) GetBucketSettings(ctx context.Context, bktInfo *data.BucketInfo)
return settings, nil
}
func (n *layer) PutBucketSettings(ctx context.Context, p *PutSettingsParams) error {
func (n *Layer) PutBucketSettings(ctx context.Context, p *PutSettingsParams) error {
if err := n.treeService.PutSettingsNode(ctx, p.BktInfo, p.Settings); err != nil {
return fmt.Errorf("failed to get settings node: %w", err)
}
@ -219,7 +219,7 @@ func (n *layer) PutBucketSettings(ctx context.Context, p *PutSettingsParams) err
return nil
}
func (n *layer) attributesFromLock(ctx context.Context, lock *data.ObjectLock) ([][2]string, error) {
func (n *Layer) attributesFromLock(ctx context.Context, lock *data.ObjectLock) ([][2]string, error) {
var (
err error
expEpoch uint64


@ -14,22 +14,7 @@ import (
"go.uber.org/zap"
)
type GetObjectTaggingParams struct {
ObjectVersion *ObjectVersion
// NodeVersion can be nil. If not nil we save one request to tree service.
NodeVersion *data.NodeVersion // optional
}
type PutObjectTaggingParams struct {
ObjectVersion *ObjectVersion
TagSet map[string]string
// NodeVersion can be nil. If not nil we save one request to tree service.
NodeVersion *data.NodeVersion // optional
}
func (n *layer) GetObjectTagging(ctx context.Context, p *GetObjectTaggingParams) (string, map[string]string, error) {
func (n *Layer) GetObjectTagging(ctx context.Context, p *data.GetObjectTaggingParams) (string, map[string]string, error) {
var err error
owner := n.BearerOwner(ctx)
@ -65,12 +50,12 @@ func (n *layer) GetObjectTagging(ctx context.Context, p *GetObjectTaggingParams)
return p.ObjectVersion.VersionID, tags, nil
}
func (n *layer) PutObjectTagging(ctx context.Context, p *PutObjectTaggingParams) (nodeVersion *data.NodeVersion, err error) {
nodeVersion = p.NodeVersion
func (n *Layer) PutObjectTagging(ctx context.Context, p *data.PutObjectTaggingParams) (err error) {
nodeVersion := p.NodeVersion
if nodeVersion == nil {
nodeVersion, err = n.getNodeVersionFromCacheOrFrostfs(ctx, p.ObjectVersion)
if err != nil {
return nil, err
return err
}
}
p.ObjectVersion.VersionID = nodeVersion.OID.EncodeToString()
@ -78,38 +63,38 @@ func (n *layer) PutObjectTagging(ctx context.Context, p *PutObjectTaggingParams)
err = n.treeService.PutObjectTagging(ctx, p.ObjectVersion.BktInfo, nodeVersion, p.TagSet)
if err != nil {
if errors.Is(err, ErrNodeNotFound) {
return nil, fmt.Errorf("%w: %s", s3errors.GetAPIError(s3errors.ErrNoSuchKey), err.Error())
return fmt.Errorf("%w: %s", s3errors.GetAPIError(s3errors.ErrNoSuchKey), err.Error())
}
return nil, err
return err
}
n.cache.PutTagging(n.BearerOwner(ctx), objectTaggingCacheKey(p.ObjectVersion), p.TagSet)
return nodeVersion, nil
return nil
}
func (n *layer) DeleteObjectTagging(ctx context.Context, p *ObjectVersion) (*data.NodeVersion, error) {
func (n *Layer) DeleteObjectTagging(ctx context.Context, p *data.ObjectVersion) error {
version, err := n.getNodeVersion(ctx, p)
if err != nil {
return nil, err
return err
}
err = n.treeService.DeleteObjectTagging(ctx, p.BktInfo, version)
if err != nil {
if errors.Is(err, ErrNodeNotFound) {
return nil, fmt.Errorf("%w: %s", s3errors.GetAPIError(s3errors.ErrNoSuchKey), err.Error())
return fmt.Errorf("%w: %s", s3errors.GetAPIError(s3errors.ErrNoSuchKey), err.Error())
}
return nil, err
return err
}
p.VersionID = version.OID.EncodeToString()
n.cache.DeleteTagging(objectTaggingCacheKey(p))
return version, nil
return nil
}
func (n *layer) GetBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) (map[string]string, error) {
func (n *Layer) GetBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) (map[string]string, error) {
owner := n.BearerOwner(ctx)
if tags := n.cache.GetTagging(owner, bucketTaggingCacheKey(bktInfo.CID)); tags != nil {
@ -126,7 +111,7 @@ func (n *layer) GetBucketTagging(ctx context.Context, bktInfo *data.BucketInfo)
return tags, nil
}
func (n *layer) PutBucketTagging(ctx context.Context, bktInfo *data.BucketInfo, tagSet map[string]string) error {
func (n *Layer) PutBucketTagging(ctx context.Context, bktInfo *data.BucketInfo, tagSet map[string]string) error {
if err := n.treeService.PutBucketTagging(ctx, bktInfo, tagSet); err != nil {
return err
}
@ -136,13 +121,13 @@ func (n *layer) PutBucketTagging(ctx context.Context, bktInfo *data.BucketInfo,
return nil
}
func (n *layer) DeleteBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) error {
func (n *Layer) DeleteBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) error {
n.cache.DeleteTagging(bucketTaggingCacheKey(bktInfo.CID))
return n.treeService.DeleteBucketTagging(ctx, bktInfo)
}
func objectTaggingCacheKey(p *ObjectVersion) string {
func objectTaggingCacheKey(p *data.ObjectVersion) string {
return ".tagset." + p.BktInfo.CID.EncodeToString() + "." + p.ObjectName + "." + p.VersionID
}
@ -150,7 +135,7 @@ func bucketTaggingCacheKey(cnrID cid.ID) string {
return ".tagset." + cnrID.EncodeToString()
}
func (n *layer) getNodeVersion(ctx context.Context, objVersion *ObjectVersion) (*data.NodeVersion, error) {
func (n *Layer) getNodeVersion(ctx context.Context, objVersion *data.ObjectVersion) (*data.NodeVersion, error) {
var err error
var version *data.NodeVersion
@ -188,7 +173,7 @@ func (n *layer) getNodeVersion(ctx context.Context, objVersion *ObjectVersion) (
return version, err
}
func (n *layer) getNodeVersionFromCache(owner user.ID, o *ObjectVersion) *data.NodeVersion {
func (n *Layer) getNodeVersionFromCache(owner user.ID, o *data.ObjectVersion) *data.NodeVersion {
if len(o.VersionID) == 0 || o.VersionID == data.UnversionedObjectVersionID {
return nil
}


@ -110,14 +110,6 @@ func (t *TreeServiceMock) GetSettingsNode(_ context.Context, bktInfo *data.Bucke
return settings, nil
}
func (t *TreeServiceMock) GetNotificationConfigurationNode(context.Context, *data.BucketInfo) (oid.ID, error) {
panic("implement me")
}
func (t *TreeServiceMock) PutNotificationConfigurationNode(context.Context, *data.BucketInfo, oid.ID) (oid.ID, error) {
panic("implement me")
}
func (t *TreeServiceMock) GetBucketCORS(_ context.Context, bktInfo *data.BucketInfo) (oid.ID, error) {
systemMap, ok := t.system[bktInfo.CID.EncodeToString()]
if !ok {


@ -18,17 +18,6 @@ type TreeService interface {
// If tree node is not found returns ErrNodeNotFound error.
GetSettingsNode(ctx context.Context, bktInfo *data.BucketInfo) (*data.BucketSettings, error)
// GetNotificationConfigurationNode gets an object id that corresponds to object with bucket CORS.
//
// If tree node is not found returns ErrNodeNotFound error.
GetNotificationConfigurationNode(ctx context.Context, bktInfo *data.BucketInfo) (oid.ID, error)
// PutNotificationConfigurationNode puts a node to a system tree
// and returns objectID of a previous notif config which must be deleted in FrostFS.
//
// If object id to remove is not found returns ErrNoNodeToRemove error.
PutNotificationConfigurationNode(ctx context.Context, bktInfo *data.BucketInfo, objID oid.ID) (oid.ID, error)
// GetBucketCORS gets an object id that corresponds to object with bucket CORS.
//
// If object id is not found returns ErrNodeNotFound error.


@ -130,7 +130,7 @@ func (tc *testContext) getObjectByID(objID oid.ID) *object.Object {
type testContext struct {
t *testing.T
ctx context.Context
layer Client
layer *Layer
bktInfo *data.BucketInfo
obj string
testFrostFS *TestFrostFS
@ -145,12 +145,12 @@ func prepareContext(t *testing.T, cachesConfig ...*CachesConfig) *testContext {
bearerToken := bearertest.Token()
require.NoError(t, bearerToken.Sign(key.PrivateKey))
ctx := middleware.SetBoxData(context.Background(), &accessbox.Box{
ctx := middleware.SetBox(context.Background(), &middleware.Box{AccessBox: &accessbox.Box{
Gate: &accessbox.GateData{
BearerToken: &bearerToken,
GateKey: key.PublicKey(),
},
})
}})
tp := NewTestFrostFS(key)
bktName := "testbucket1"


@ -13,6 +13,7 @@ import (
frostfsErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"go.uber.org/zap"
)
@ -23,6 +24,7 @@ type (
AccessBox *accessbox.Box
ClientTime time.Time
AuthHeaders *AuthHeader
Attributes []object.Attribute
}
// Center is a user authentication interface.
@ -59,15 +61,13 @@ func Auth(center Center, log *zap.Logger) Func {
if _, ok := err.(apiErrors.Error); !ok {
err = apiErrors.GetAPIError(apiErrors.ErrAccessDenied)
}
WriteErrorResponse(w, GetReqInfo(r.Context()), err)
if _, wrErr := WriteErrorResponse(w, GetReqInfo(r.Context()), err); wrErr != nil {
reqLogOrDefault(ctx, log).Error(logs.FailedToWriteResponse, zap.Error(wrErr))
}
return
}
} else {
ctx = SetBoxData(ctx, box.AccessBox)
if !box.ClientTime.IsZero() {
ctx = SetClientTime(ctx, box.ClientTime)
}
ctx = SetAuthHeaders(ctx, box.AuthHeaders)
ctx = SetBox(ctx, box)
if box.AccessBox.Gate.BearerToken != nil {
reqInfo.User = bearer.ResolveIssuer(*box.AccessBox.Gate.BearerToken).String()
@ -97,7 +97,9 @@ func FrostfsIDValidation(frostfsID FrostFSIDValidator, log *zap.Logger) Func {
if err = validateBearerToken(frostfsID, bd.Gate.BearerToken); err != nil {
reqLogOrDefault(ctx, log).Error(logs.FrostfsIDValidationFailed, zap.Error(err))
WriteErrorResponse(w, GetReqInfo(r.Context()), err)
if _, wrErr := WriteErrorResponse(w, GetReqInfo(r.Context()), err); wrErr != nil {
reqLogOrDefault(ctx, log).Error(logs.FailedToWriteResponse, zap.Error(wrErr))
}
return
}


@ -5,10 +5,11 @@ const (
// bucket operations.
OptionsOperation = "Options"
OptionsBucketOperation = "OptionsBucket"
HeadBucketOperation = "HeadBucket"
ListMultipartUploadsOperation = "ListMultipartUploads"
GetBucketLocationOperation = "GetBucketLocation"
GetBucketPolicyStatusOperation = "GetBucketPolicyStatus"
GetBucketPolicyOperation = "GetBucketPolicy"
GetBucketLifecycleOperation = "GetBucketLifecycle"
GetBucketEncryptionOperation = "GetBucketEncryption"
@ -50,6 +51,7 @@ const (
// object operations.
OptionsObjectOperation = "OptionsObject"
HeadObjectOperation = "HeadObject"
ListPartsOperation = "ListParts"
GetObjectACLOperation = "GetObjectACL"
@ -77,6 +79,7 @@ const (
const (
UploadsQuery = "uploads"
LocationQuery = "location"
PolicyStatusQuery = "policyStatus"
PolicyQuery = "policy"
LifecycleQuery = "lifecycle"
EncryptionQuery = "encryption"


@ -103,7 +103,7 @@ func stats(f http.HandlerFunc, resolveCID cidResolveFunc, appMetrics *metrics.Ap
func requestTypeFromAPI(api string) metrics.RequestType {
switch api {
case OptionsOperation, HeadObjectOperation, HeadBucketOperation:
case OptionsBucketOperation, OptionsObjectOperation, HeadObjectOperation, HeadBucketOperation:
return metrics.HEADRequest
case CreateMultipartUploadOperation, UploadPartCopyOperation, UploadPartOperation, CompleteMultipartUploadOperation,
PutObjectACLOperation, PutObjectTaggingOperation, CopyObjectOperation, PutObjectRetentionOperation, PutObjectLegalHoldOperation,


@ -3,12 +3,16 @@ package middleware
import (
"context"
"crypto/elliptic"
"encoding/xml"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
apiErr "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
frostfsErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
@ -20,13 +24,46 @@ import (
"go.uber.org/zap"
)
const (
QueryVersionID = "versionId"
QueryPrefix = "prefix"
QueryDelimiter = "delimiter"
QueryMaxKeys = "max-keys"
amzTagging = "x-amz-tagging"
)
// In these operations we don't check resource tags because
// * they haven't been created yet
// * resource tags shouldn't be checked according to the AWS spec.
var withoutResourceOps = []string{
CreateBucketOperation,
CreateMultipartUploadOperation,
AbortMultipartUploadOperation,
CompleteMultipartUploadOperation,
UploadPartOperation,
UploadPartCopyOperation,
ListPartsOperation,
PutObjectOperation,
CopyObjectOperation,
DeleteObjectOperation,
DeleteMultipleObjectsOperation,
}
type PolicySettings interface {
PolicyDenyByDefault() bool
ACLEnabled() bool
}
type FrostFSIDInformer interface {
GetUserGroupIDs(userHash util.Uint160) ([]string, error)
GetUserGroupIDsAndClaims(userHash util.Uint160) ([]string, map[string]string, error)
}
type XMLDecoder interface {
NewXMLDecoder(io.Reader) *xml.Decoder
}
type ResourceTagging interface {
GetBucketTagging(ctx context.Context, bktInfo *data.BucketInfo) (map[string]string, error)
GetObjectTagging(ctx context.Context, p *data.GetObjectTaggingParams) (string, map[string]string, error)
}
// BucketResolveFunc is a func to resolve bucket info by name.
@ -39,6 +76,8 @@ type PolicyConfig struct {
Domains []string
Log *zap.Logger
BucketResolver BucketResolveFunc
Decoder XMLDecoder
Tagging ResourceTagging
}
func PolicyCheck(cfg PolicyConfig) Func {
@ -47,7 +86,10 @@ func PolicyCheck(cfg PolicyConfig) Func {
ctx := r.Context()
if err := policyCheck(r, cfg); err != nil {
reqLogOrDefault(ctx, cfg.Log).Error(logs.PolicyValidationFailed, zap.Error(err))
WriteErrorResponse(w, GetReqInfo(ctx), err)
err = frostfsErrors.UnwrapErr(err)
if _, wrErr := WriteErrorResponse(w, GetReqInfo(ctx), err); wrErr != nil {
reqLogOrDefault(ctx, cfg.Log).Error(logs.FailedToWriteResponse, zap.Error(wrErr))
}
return
}
@ -58,13 +100,39 @@ func PolicyCheck(cfg PolicyConfig) Func {
func policyCheck(r *http.Request, cfg PolicyConfig) error {
reqType, bktName, objName := getBucketObject(r, cfg.Domains)
req, err := getPolicyRequest(r, cfg.FrostfsID, reqType, bktName, objName, cfg.Log)
req, userKey, userGroups, err := getPolicyRequest(r, cfg, reqType, bktName, objName)
if err != nil {
return err
}
var bktInfo *data.BucketInfo
if reqType != noneType && !strings.HasSuffix(req.Operation(), CreateBucketOperation) {
bktInfo, err = cfg.BucketResolver(r.Context(), bktName)
if err != nil {
return err
}
}
reqInfo := GetReqInfo(r.Context())
target := engine.NewRequestTargetWithNamespace(reqInfo.Namespace)
if bktInfo != nil {
cnrTarget := engine.ContainerTarget(bktInfo.CID.EncodeToString())
target.Container = &cnrTarget
}
if userKey != nil {
entityName := fmt.Sprintf("%s:%s", reqInfo.Namespace, userKey.Address())
uTarget := engine.UserTarget(entityName)
target.User = &uTarget
}
gts := make([]engine.Target, len(userGroups))
for i, group := range userGroups {
entityName := fmt.Sprintf("%s:%s", reqInfo.Namespace, group)
gts[i] = engine.GroupTarget(entityName)
}
target.Groups = gts
st, found, err := cfg.Storage.IsAllowed(chain.S3, target, req)
if err != nil {
return err
@ -81,50 +149,33 @@ func policyCheck(r *http.Request, cfg PolicyConfig) error {
return apiErr.GetAPIErrorWithError(apiErr.ErrAccessDenied, fmt.Errorf("policy check: %s", st.String()))
}
isAPE, err := isAPEBehavior(r.Context(), req, cfg, reqType, bktName)
if err != nil {
return err
}
if isAPE && cfg.Settings.PolicyDenyByDefault() {
if cfg.Settings.PolicyDenyByDefault() {
return apiErr.GetAPIErrorWithError(apiErr.ErrAccessDenied, fmt.Errorf("policy check: %s", st.String()))
}
return nil
}
func isAPEBehavior(ctx context.Context, req *testutil.Request, cfg PolicyConfig, reqType ReqType, bktName string) (bool, error) {
if reqType == noneType ||
strings.HasSuffix(req.Operation(), CreateBucketOperation) {
return !cfg.Settings.ACLEnabled(), nil
}
bktInfo, err := cfg.BucketResolver(ctx, bktName) // we cannot use reqInfo.BucketName because it hasn't been set yet
if err != nil {
return false, err
}
return bktInfo.APEEnabled, nil
}
func getPolicyRequest(r *http.Request, frostfsid FrostFSIDInformer, reqType ReqType, bktName string, objName string, log *zap.Logger) (*testutil.Request, error) {
func getPolicyRequest(r *http.Request, cfg PolicyConfig, reqType ReqType, bktName string, objName string) (*testutil.Request, *keys.PublicKey, []string, error) {
var (
owner string
groups []string
tags map[string]string
pk *keys.PublicKey
)
ctx := r.Context()
bd, err := GetBoxData(ctx)
if err == nil && bd.Gate.BearerToken != nil {
pk, err := keys.NewPublicKeyFromBytes(bd.Gate.BearerToken.SigningKeyBytes(), elliptic.P256())
pk, err = keys.NewPublicKeyFromBytes(bd.Gate.BearerToken.SigningKeyBytes(), elliptic.P256())
if err != nil {
return nil, fmt.Errorf("parse pubclic key from btoken: %w", err)
return nil, nil, nil, fmt.Errorf("parse pubclic key from btoken: %w", err)
}
owner = pk.Address()
groups, err = frostfsid.GetUserGroupIDs(pk.GetScriptHash())
groups, tags, err = cfg.FrostfsID.GetUserGroupIDsAndClaims(pk.GetScriptHash())
if err != nil {
return nil, fmt.Errorf("get group ids: %w", err)
return nil, nil, nil, fmt.Errorf("get group ids: %w", err)
}
}
@ -137,15 +188,16 @@ func getPolicyRequest(r *http.Request, frostfsid FrostFSIDInformer, reqType ReqT
res = fmt.Sprintf(s3.ResourceFormatS3Bucket, bktName)
}
reqLogOrDefault(r.Context(), log).Debug(logs.PolicyRequest, zap.String("action", op),
zap.String("resource", res), zap.String("owner", owner))
requestProps, resourceProps, err := determineProperties(r, cfg.Decoder, cfg.BucketResolver, cfg.Tagging, reqType, op, bktName, objName, owner, groups, tags)
if err != nil {
return nil, nil, nil, fmt.Errorf("determine properties: %w", err)
}
return testutil.NewRequest(op, testutil.NewResource(res, nil),
map[string]string{
s3.PropertyKeyOwner: owner,
common.PropertyKeyFrostFSIDGroupID: chain.FormCondSliceContainsValue(groups),
},
), nil
reqLogOrDefault(r.Context(), cfg.Log).Debug(logs.PolicyRequest, zap.String("action", op),
zap.String("resource", res), zap.Any("request properties", requestProps),
zap.Any("resource properties", resourceProps))
return testutil.NewRequest(op, testutil.NewResource(res, resourceProps), requestProps), pk, groups, nil
}
type ReqType int
@ -176,11 +228,11 @@ func getBucketObject(r *http.Request, domains []string) (reqType ReqType, bktNam
return noneType, "", ""
}
if ind := strings.IndexByte(bktObj, '/'); ind != -1 {
if ind := strings.IndexByte(bktObj, '/'); ind != -1 && bktObj[ind+1:] != "" {
return objectType, bktObj[:ind], bktObj[ind+1:]
}
return bucketType, bktObj, ""
return bucketType, strings.TrimSuffix(bktObj, "/"), ""
}
func determineOperation(r *http.Request, reqType ReqType) (operation string) {
@ -200,7 +252,7 @@ func determineBucketOperation(r *http.Request) string {
query := r.URL.Query()
switch r.Method {
case http.MethodOptions:
return OptionsOperation
return OptionsBucketOperation
case http.MethodHead:
return HeadBucketOperation
case http.MethodGet:
@ -303,6 +355,8 @@ func determineBucketOperation(r *http.Request) string {
func determineObjectOperation(r *http.Request) string {
query := r.URL.Query()
switch r.Method {
case http.MethodOptions:
return OptionsObjectOperation
case http.MethodHead:
return HeadObjectOperation
case http.MethodGet:
@ -370,3 +424,135 @@ func determineGeneralOperation(r *http.Request) string {
}
return "UnmatchedOperation"
}
func determineProperties(r *http.Request, decoder XMLDecoder, resolver BucketResolveFunc, tagging ResourceTagging, reqType ReqType,
op, bktName, objName, owner string, groups []string, userClaims map[string]string) (requestProperties map[string]string, resourceProperties map[string]string, err error) {
requestProperties = map[string]string{
s3.PropertyKeyOwner: owner,
common.PropertyKeyFrostFSIDGroupID: chain.FormCondSliceContainsValue(groups),
common.PropertyKeyFrostFSSourceIP: GetReqInfo(r.Context()).RemoteHost,
}
queries := GetReqInfo(r.Context()).URL.Query()
for k, v := range userClaims {
requestProperties[fmt.Sprintf(common.PropertyKeyFormatFrostFSIDUserClaim, k)] = v
}
if reqType == objectType {
if versionID := queries.Get(QueryVersionID); len(versionID) > 0 {
requestProperties[s3.PropertyKeyVersionID] = versionID
}
}
if reqType == bucketType && (strings.HasSuffix(op, ListObjectsV1Operation) || strings.HasSuffix(op, ListObjectsV2Operation) ||
strings.HasSuffix(op, ListBucketObjectVersionsOperation) || strings.HasSuffix(op, ListMultipartUploadsOperation)) {
if prefix := queries.Get(QueryPrefix); len(prefix) > 0 {
requestProperties[s3.PropertyKeyPrefix] = prefix
}
if delimiter := queries.Get(QueryDelimiter); len(delimiter) > 0 {
requestProperties[s3.PropertyKeyDelimiter] = delimiter
}
if maxKeys := queries.Get(QueryMaxKeys); len(maxKeys) > 0 {
requestProperties[s3.PropertyKeyMaxKeys] = maxKeys
}
}
requestProperties[s3.PropertyKeyAccessBoxAttrMFA] = "false"
attrs, err := GetAccessBoxAttrs(r.Context())
if err == nil {
for _, attr := range attrs {
requestProperties[fmt.Sprintf(s3.PropertyKeyFormatAccessBoxAttr, attr.Key())] = attr.Value()
}
}
reqTags, err := determineRequestTags(r, decoder, op)
if err != nil {
return nil, nil, fmt.Errorf("determine request tags: %w", err)
}
for k, v := range reqTags {
requestProperties[k] = v
}
resourceProperties, err = determineResourceTags(r.Context(), reqType, op, bktName, objName, queries.Get(QueryVersionID), resolver, tagging)
if err != nil {
return nil, nil, fmt.Errorf("determine resource tags: %w", err)
}
return requestProperties, resourceProperties, nil
}
func determineRequestTags(r *http.Request, decoder XMLDecoder, op string) (map[string]string, error) {
tags := make(map[string]string)
if strings.HasSuffix(op, PutObjectTaggingOperation) || strings.HasSuffix(op, PutBucketTaggingOperation) {
tagging := new(data.Tagging)
if err := decoder.NewXMLDecoder(r.Body).Decode(tagging); err != nil {
return nil, fmt.Errorf("%w: %s", apiErr.GetAPIError(apiErr.ErrMalformedXML), err.Error())
}
GetReqInfo(r.Context()).Tagging = tagging
for _, tag := range tagging.TagSet {
tags[fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tag.Key)] = tag.Value
}
}
if tagging := r.Header.Get(amzTagging); len(tagging) > 0 {
queries, err := url.ParseQuery(tagging)
if err != nil {
return nil, apiErr.GetAPIError(apiErr.ErrInvalidArgument)
}
for key := range queries {
tags[fmt.Sprintf(s3.PropertyKeyFormatRequestTag, key)] = queries.Get(key)
}
}
return tags, nil
}
func determineResourceTags(ctx context.Context, reqType ReqType, op, bktName, objName, versionID string, resolver BucketResolveFunc,
tagging ResourceTagging) (map[string]string, error) {
tags := make(map[string]string)
if reqType != bucketType && reqType != objectType {
return tags, nil
}
for _, withoutResOp := range withoutResourceOps {
if strings.HasSuffix(op, withoutResOp) {
return tags, nil
}
}
bktInfo, err := resolver(ctx, bktName)
if err != nil {
return nil, fmt.Errorf("get bucket info: %w", err)
}
if reqType == bucketType {
tags, err = tagging.GetBucketTagging(ctx, bktInfo)
if err != nil {
return nil, fmt.Errorf("get bucket tagging: %w", err)
}
}
if reqType == objectType {
tagPrm := &data.GetObjectTaggingParams{
ObjectVersion: &data.ObjectVersion{
BktInfo: bktInfo,
ObjectName: objName,
VersionID: versionID,
},
}
_, tags, err = tagging.GetObjectTagging(ctx, tagPrm)
if err != nil {
return nil, fmt.Errorf("get object tagging: %w", err)
}
}
res := make(map[string]string, len(tags))
for k, v := range tags {
res[fmt.Sprintf(s3.PropertyKeyFormatResourceTag, k)] = v
}
return res, nil
}
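Illustrative sketch only (not part of the change set): a hypothetical test, assuming the same middleware package and the fmt, net/http, net/http/httptest, testing, require and policy-engine s3 schema imports already used above, exercising the X-Amz-Tagging branch of determineRequestTags. The header value is invented; the decoder may be nil because GetObjectOperation never reaches the XML-decoding branch.
func TestDetermineRequestTagsFromHeader(t *testing.T) {
	// Hypothetical input: "env=prod&team=storage" is a made-up header value.
	r := httptest.NewRequest(http.MethodGet, "/bucket/object", nil)
	r.Header.Set(amzTagging, "env=prod&team=storage")
	// GetObjectOperation is not a tagging operation, so the XML decoder is never used.
	tags, err := determineRequestTags(r, nil, GetObjectOperation)
	require.NoError(t, err)
	// Each key=value pair from the header becomes a request property
	// keyed by s3.PropertyKeyFormatRequestTag.
	require.Equal(t, "prod", tags[fmt.Sprintf(s3.PropertyKeyFormatRequestTag, "env")])
	require.Equal(t, "storage", tags[fmt.Sprintf(s3.PropertyKeyFormatRequestTag, "team")])
}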

View file

@ -0,0 +1,546 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/require"
)
func TestReqTypeDetermination(t *testing.T) {
bkt, obj, domain := "test-bucket", "test-object", "domain"
for _, tc := range []struct {
name string
target string
host string
domains []string
expectedType ReqType
expectedBktName string
expectedObjName string
}{
{
name: "bucket request, path-style",
target: "/" + bkt,
expectedType: bucketType,
expectedBktName: bkt,
},
{
name: "bucket request with slash, path-style",
target: "/" + bkt + "/",
expectedType: bucketType,
expectedBktName: bkt,
},
{
name: "object request, path-style",
target: "/" + bkt + "/" + obj,
expectedType: objectType,
expectedBktName: bkt,
expectedObjName: obj,
},
{
name: "object request with slash, path-style",
target: "/" + bkt + "/" + obj + "/",
expectedType: objectType,
expectedBktName: bkt,
expectedObjName: obj + "/",
},
{
name: "none type request",
target: "/",
expectedType: noneType,
},
{
name: "bucket request, virtual-hosted style",
target: "/",
host: bkt + "." + domain,
domains: []string{"some-domain", domain},
expectedType: bucketType,
expectedBktName: bkt,
},
{
name: "object request, virtual-hosted style",
target: "/" + obj,
host: bkt + "." + domain,
domains: []string{"some-domain", domain},
expectedType: objectType,
expectedBktName: bkt,
expectedObjName: obj,
},
} {
t.Run(tc.name, func(t *testing.T) {
r := httptest.NewRequest(http.MethodPut, tc.target, nil)
r.Host = tc.host
reqType, bktName, objName := getBucketObject(r, tc.domains)
require.Equal(t, tc.expectedType, reqType)
require.Equal(t, tc.expectedBktName, bktName)
require.Equal(t, tc.expectedObjName, objName)
})
}
}
func TestDetermineBucketOperation(t *testing.T) {
const defaultValue = "value"
for _, tc := range []struct {
name string
method string
queryParam map[string]string
expected string
}{
{
name: "OptionsBucketOperation",
method: http.MethodOptions,
expected: OptionsBucketOperation,
},
{
name: "HeadBucketOperation",
method: http.MethodHead,
expected: HeadBucketOperation,
},
{
name: "ListMultipartUploadsOperation",
method: http.MethodGet,
queryParam: map[string]string{UploadsQuery: defaultValue},
expected: ListMultipartUploadsOperation,
},
{
name: "GetBucketLocationOperation",
method: http.MethodGet,
queryParam: map[string]string{LocationQuery: defaultValue},
expected: GetBucketLocationOperation,
},
{
name: "GetBucketPolicyOperation",
method: http.MethodGet,
queryParam: map[string]string{PolicyQuery: defaultValue},
expected: GetBucketPolicyOperation,
},
{
name: "GetBucketLifecycleOperation",
method: http.MethodGet,
queryParam: map[string]string{LifecycleQuery: defaultValue},
expected: GetBucketLifecycleOperation,
},
{
name: "GetBucketEncryptionOperation",
method: http.MethodGet,
queryParam: map[string]string{EncryptionQuery: defaultValue},
expected: GetBucketEncryptionOperation,
},
{
name: "GetBucketCorsOperation",
method: http.MethodGet,
queryParam: map[string]string{CorsQuery: defaultValue},
expected: GetBucketCorsOperation,
},
{
name: "GetBucketACLOperation",
method: http.MethodGet,
queryParam: map[string]string{ACLQuery: defaultValue},
expected: GetBucketACLOperation,
},
{
name: "GetBucketWebsiteOperation",
method: http.MethodGet,
queryParam: map[string]string{WebsiteQuery: defaultValue},
expected: GetBucketWebsiteOperation,
},
{
name: "GetBucketAccelerateOperation",
method: http.MethodGet,
queryParam: map[string]string{AccelerateQuery: defaultValue},
expected: GetBucketAccelerateOperation,
},
{
name: "GetBucketRequestPaymentOperation",
method: http.MethodGet,
queryParam: map[string]string{RequestPaymentQuery: defaultValue},
expected: GetBucketRequestPaymentOperation,
},
{
name: "GetBucketLoggingOperation",
method: http.MethodGet,
queryParam: map[string]string{LoggingQuery: defaultValue},
expected: GetBucketLoggingOperation,
},
{
name: "GetBucketReplicationOperation",
method: http.MethodGet,
queryParam: map[string]string{ReplicationQuery: defaultValue},
expected: GetBucketReplicationOperation,
},
{
name: "GetBucketTaggingOperation",
method: http.MethodGet,
queryParam: map[string]string{TaggingQuery: defaultValue},
expected: GetBucketTaggingOperation,
},
{
name: "GetBucketObjectLockConfigOperation",
method: http.MethodGet,
queryParam: map[string]string{ObjectLockQuery: defaultValue},
expected: GetBucketObjectLockConfigOperation,
},
{
name: "GetBucketVersioningOperation",
method: http.MethodGet,
queryParam: map[string]string{VersioningQuery: defaultValue},
expected: GetBucketVersioningOperation,
},
{
name: "GetBucketNotificationOperation",
method: http.MethodGet,
queryParam: map[string]string{NotificationQuery: defaultValue},
expected: GetBucketNotificationOperation,
},
{
name: "ListenBucketNotificationOperation",
method: http.MethodGet,
queryParam: map[string]string{EventsQuery: defaultValue},
expected: ListenBucketNotificationOperation,
},
{
name: "ListBucketObjectVersionsOperation",
method: http.MethodGet,
queryParam: map[string]string{VersionsQuery: defaultValue},
expected: ListBucketObjectVersionsOperation,
},
{
name: "ListObjectsV2MOperation",
method: http.MethodGet,
queryParam: map[string]string{ListTypeQuery: "2", MetadataQuery: "true"},
expected: ListObjectsV2MOperation,
},
{
name: "ListObjectsV2Operation",
method: http.MethodGet,
queryParam: map[string]string{ListTypeQuery: "2"},
expected: ListObjectsV2Operation,
},
{
name: "ListObjectsV1Operation",
method: http.MethodGet,
expected: ListObjectsV1Operation,
},
{
name: "PutBucketCorsOperation",
method: http.MethodPut,
queryParam: map[string]string{CorsQuery: defaultValue},
expected: PutBucketCorsOperation,
},
{
name: "PutBucketACLOperation",
method: http.MethodPut,
queryParam: map[string]string{ACLQuery: defaultValue},
expected: PutBucketACLOperation,
},
{
name: "PutBucketLifecycleOperation",
method: http.MethodPut,
queryParam: map[string]string{LifecycleQuery: defaultValue},
expected: PutBucketLifecycleOperation,
},
{
name: "PutBucketEncryptionOperation",
method: http.MethodPut,
queryParam: map[string]string{EncryptionQuery: defaultValue},
expected: PutBucketEncryptionOperation,
},
{
name: "PutBucketPolicyOperation",
method: http.MethodPut,
queryParam: map[string]string{PolicyQuery: defaultValue},
expected: PutBucketPolicyOperation,
},
{
name: "PutBucketObjectLockConfigOperation",
method: http.MethodPut,
queryParam: map[string]string{ObjectLockQuery: defaultValue},
expected: PutBucketObjectLockConfigOperation,
},
{
name: "PutBucketTaggingOperation",
method: http.MethodPut,
queryParam: map[string]string{TaggingQuery: defaultValue},
expected: PutBucketTaggingOperation,
},
{
name: "PutBucketVersioningOperation",
method: http.MethodPut,
queryParam: map[string]string{VersioningQuery: defaultValue},
expected: PutBucketVersioningOperation,
},
{
name: "PutBucketNotificationOperation",
method: http.MethodPut,
queryParam: map[string]string{NotificationQuery: defaultValue},
expected: PutBucketNotificationOperation,
},
{
name: "CreateBucketOperation",
method: http.MethodPut,
expected: CreateBucketOperation,
},
{
name: "DeleteMultipleObjectsOperation",
method: http.MethodPost,
queryParam: map[string]string{DeleteQuery: defaultValue},
expected: DeleteMultipleObjectsOperation,
},
{
name: "PostObjectOperation",
method: http.MethodPost,
expected: PostObjectOperation,
},
{
name: "DeleteBucketCorsOperation",
method: http.MethodDelete,
queryParam: map[string]string{CorsQuery: defaultValue},
expected: DeleteBucketCorsOperation,
},
{
name: "DeleteBucketWebsiteOperation",
method: http.MethodDelete,
queryParam: map[string]string{WebsiteQuery: defaultValue},
expected: DeleteBucketWebsiteOperation,
},
{
name: "DeleteBucketTaggingOperation",
method: http.MethodDelete,
queryParam: map[string]string{TaggingQuery: defaultValue},
expected: DeleteBucketTaggingOperation,
},
{
name: "DeleteBucketPolicyOperation",
method: http.MethodDelete,
queryParam: map[string]string{PolicyQuery: defaultValue},
expected: DeleteBucketPolicyOperation,
},
{
name: "DeleteBucketLifecycleOperation",
method: http.MethodDelete,
queryParam: map[string]string{LifecycleQuery: defaultValue},
expected: DeleteBucketLifecycleOperation,
},
{
name: "DeleteBucketEncryptionOperation",
method: http.MethodDelete,
queryParam: map[string]string{EncryptionQuery: defaultValue},
expected: DeleteBucketEncryptionOperation,
},
{
name: "DeleteBucketOperation",
method: http.MethodDelete,
expected: DeleteBucketOperation,
},
{
name: "UnmatchedBucketOperation",
method: "invalid-method",
expected: "UnmatchedBucketOperation",
},
} {
t.Run(tc.name, func(t *testing.T) {
req := httptest.NewRequest(tc.method, "/test", nil)
if tc.queryParam != nil {
addQueryParams(req, tc.queryParam)
}
actual := determineBucketOperation(req)
require.Equal(t, tc.expected, actual)
})
}
}
func TestDetermineObjectOperation(t *testing.T) {
const (
amzCopySource = "X-Amz-Copy-Source"
defaultValue = "value"
)
for _, tc := range []struct {
name string
method string
queryParam map[string]string
headerKeys []string
expected string
}{
{
name: "OptionsObjectOperation",
method: http.MethodOptions,
expected: OptionsObjectOperation,
},
{
name: "HeadObjectOperation",
method: http.MethodHead,
expected: HeadObjectOperation,
},
{
name: "ListPartsOperation",
method: http.MethodGet,
queryParam: map[string]string{UploadIDQuery: defaultValue},
expected: ListPartsOperation,
},
{
name: "GetObjectACLOperation",
method: http.MethodGet,
queryParam: map[string]string{ACLQuery: defaultValue},
expected: GetObjectACLOperation,
},
{
name: "GetObjectTaggingOperation",
method: http.MethodGet,
queryParam: map[string]string{TaggingQuery: defaultValue},
expected: GetObjectTaggingOperation,
},
{
name: "GetObjectRetentionOperation",
method: http.MethodGet,
queryParam: map[string]string{RetentionQuery: defaultValue},
expected: GetObjectRetentionOperation,
},
{
name: "GetObjectLegalHoldOperation",
method: http.MethodGet,
queryParam: map[string]string{LegalQuery: defaultValue},
expected: GetObjectLegalHoldOperation,
},
{
name: "GetObjectAttributesOperation",
method: http.MethodGet,
queryParam: map[string]string{AttributesQuery: defaultValue},
expected: GetObjectAttributesOperation,
},
{
name: "GetObjectOperation",
method: http.MethodGet,
expected: GetObjectOperation,
},
{
name: "UploadPartCopyOperation",
method: http.MethodPut,
queryParam: map[string]string{PartNumberQuery: defaultValue, UploadIDQuery: defaultValue},
headerKeys: []string{amzCopySource},
expected: UploadPartCopyOperation,
},
{
name: "UploadPartOperation",
method: http.MethodPut,
queryParam: map[string]string{PartNumberQuery: defaultValue, UploadIDQuery: defaultValue},
expected: UploadPartOperation,
},
{
name: "PutObjectACLOperation",
method: http.MethodPut,
queryParam: map[string]string{ACLQuery: defaultValue},
expected: PutObjectACLOperation,
},
{
name: "PutObjectTaggingOperation",
method: http.MethodPut,
queryParam: map[string]string{TaggingQuery: defaultValue},
expected: PutObjectTaggingOperation,
},
{
name: "CopyObjectOperation",
method: http.MethodPut,
headerKeys: []string{amzCopySource},
expected: CopyObjectOperation,
},
{
name: "PutObjectRetentionOperation",
method: http.MethodPut,
queryParam: map[string]string{RetentionQuery: defaultValue},
expected: PutObjectRetentionOperation,
},
{
name: "PutObjectLegalHoldOperation",
method: http.MethodPut,
queryParam: map[string]string{LegalHoldQuery: defaultValue},
expected: PutObjectLegalHoldOperation,
},
{
name: "PutObjectOperation",
method: http.MethodPut,
expected: PutObjectOperation,
},
{
name: "CompleteMultipartUploadOperation",
method: http.MethodPost,
queryParam: map[string]string{UploadIDQuery: defaultValue},
expected: CompleteMultipartUploadOperation,
},
{
name: "CreateMultipartUploadOperation",
method: http.MethodPost,
queryParam: map[string]string{UploadsQuery: defaultValue},
expected: CreateMultipartUploadOperation,
},
{
name: "SelectObjectContentOperation",
method: http.MethodPost,
expected: SelectObjectContentOperation,
},
{
name: "AbortMultipartUploadOperation",
method: http.MethodDelete,
queryParam: map[string]string{UploadIDQuery: defaultValue},
expected: AbortMultipartUploadOperation,
},
{
name: "DeleteObjectTaggingOperation",
method: http.MethodDelete,
queryParam: map[string]string{TaggingQuery: defaultValue},
expected: DeleteObjectTaggingOperation,
},
{
name: "DeleteObjectOperation",
method: http.MethodDelete,
expected: DeleteObjectOperation,
},
{
name: "UnmatchedObjectOperation",
method: "invalid-method",
expected: "UnmatchedObjectOperation",
},
} {
t.Run(tc.name, func(t *testing.T) {
req := httptest.NewRequest(tc.method, "/test", nil)
if tc.queryParam != nil {
addQueryParams(req, tc.queryParam)
}
if tc.headerKeys != nil {
addHeaderParams(req, tc.headerKeys)
}
actual := determineObjectOperation(req)
require.Equal(t, tc.expected, actual)
})
}
}
func addQueryParams(req *http.Request, pairs map[string]string) {
values := req.URL.Query()
for key, val := range pairs {
values.Add(key, val)
}
req.URL.RawQuery = values.Encode()
}
func addHeaderParams(req *http.Request, keys []string) {
for _, key := range keys {
req.Header.Set(key, "val")
}
}
func TestDetermineGeneralOperation(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, "/test", nil)
actual := determineGeneralOperation(req)
require.Equal(t, ListBucketsOperation, actual)
req = httptest.NewRequest(http.MethodPost, "/test", nil)
actual = determineGeneralOperation(req)
require.Equal(t, "UnmatchedOperation", actual)
}

View file

@ -39,8 +39,8 @@ type (
TraceID string // Trace ID
URL *url.URL // Request url
Namespace string
User string // User owner id
tags []KeyVal // Any additional info not accommodated by above fields
User string // User owner id
Tagging *data.Tagging
}
// ObjectRequest represents object request data.
@ -82,61 +82,24 @@ var (
)
// NewReqInfo returns new ReqInfo based on parameters.
func NewReqInfo(w http.ResponseWriter, r *http.Request, req ObjectRequest) *ReqInfo {
return &ReqInfo{
func NewReqInfo(w http.ResponseWriter, r *http.Request, req ObjectRequest, sourceIPHeader string) *ReqInfo {
reqInfo := &ReqInfo{
API: req.Method,
BucketName: req.Bucket,
ObjectName: req.Object,
UserAgent: r.UserAgent(),
RemoteHost: getSourceIP(r),
RequestID: GetRequestID(w),
DeploymentID: deploymentID.String(),
URL: r.URL,
}
}
// AppendTags -- appends key/val to ReqInfo.tags.
func (r *ReqInfo) AppendTags(key string, val string) *ReqInfo {
if r == nil {
return nil
if sourceIPHeader != "" {
reqInfo.RemoteHost = r.Header.Get(sourceIPHeader)
} else {
reqInfo.RemoteHost = getSourceIP(r)
}
r.Lock()
defer r.Unlock()
r.tags = append(r.tags, KeyVal{key, val})
return r
}
// SetTags -- sets key/val to ReqInfo.tags.
func (r *ReqInfo) SetTags(key string, val string) *ReqInfo {
if r == nil {
return nil
}
r.Lock()
defer r.Unlock()
// Search for a tag key already existing in tags
var updated bool
for _, tag := range r.tags {
if tag.Key == key {
tag.Val = val
updated = true
break
}
}
if !updated {
// Append to the end of tags list
r.tags = append(r.tags, KeyVal{key, val})
}
return r
}
// GetTags -- returns the user defined tags.
func (r *ReqInfo) GetTags() []KeyVal {
if r == nil {
return nil
}
r.RLock()
defer r.RUnlock()
return append([]KeyVal(nil), r.tags...)
return reqInfo
}
// GetRequestID returns the request ID from the response writer or the context.
@ -192,6 +155,7 @@ func GetReqLog(ctx context.Context) *zap.Logger {
type RequestSettings interface {
NamespaceHeader() string
ResolveNamespaceAlias(string) string
SourceIPHeader() string
}
func Request(log *zap.Logger, settings RequestSettings) Func {
@ -210,7 +174,7 @@ func Request(log *zap.Logger, settings RequestSettings) Func {
// set request info into context
// bucket name and object will be set in reqInfo later (limitation of go-chi)
reqInfo := NewReqInfo(w, r, ObjectRequest{})
reqInfo := NewReqInfo(w, r, ObjectRequest{}, settings.SourceIPHeader())
reqInfo.Namespace = settings.ResolveNamespaceAlias(r.Header.Get(settings.NamespaceHeader()))
r = r.WithContext(SetReqInfo(r.Context(), reqInfo))
@ -316,11 +280,14 @@ func getSourceIP(r *http.Request) string {
}
}
if addr != "" {
return addr
if addr == "" {
addr = r.RemoteAddr
}
// Default to remote address if headers not set.
addr, _, _ = net.SplitHostPort(r.RemoteAddr)
return addr
raddr, _, _ := net.SplitHostPort(addr)
if raddr == "" {
return addr
}
return raddr
}
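Illustrative sketch only (not part of the change set): a hypothetical test, assuming the same middleware package and the net/http/httptest and require imports, showing how the new sourceIPHeader parameter of NewReqInfo behaves. The "X-Real-Source" header name is invented; the fallback value relies on httptest's documented default RemoteAddr of "192.0.2.1:1234".
func TestReqInfoSourceIPHeader(t *testing.T) {
	// "X-Real-Source" is an invented header name used only for this sketch.
	w := httptest.NewRecorder()
	r := httptest.NewRequest(http.MethodGet, "/bucket", nil)
	r.Header.Set("X-Real-Source", "203.0.113.7")
	// With a configured header the remote host is taken verbatim from that header.
	reqInfo := NewReqInfo(w, r, ObjectRequest{Bucket: "bucket"}, "X-Real-Source")
	require.Equal(t, "203.0.113.7", reqInfo.RemoteHost)
	// With an empty header name it falls back to getSourceIP, which now also
	// strips the port from r.RemoteAddr when one is present.
	reqInfo = NewReqInfo(w, r, ObjectRequest{Bucket: "bucket"}, "")
	require.Equal(t, "192.0.2.1", reqInfo.RemoteHost)
}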

View file

@ -0,0 +1,70 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/require"
)
func TestGetSourceIP(t *testing.T) {
for _, tc := range []struct {
name string
req *http.Request
}{
{
name: "headers not set",
req: func() *http.Request {
request := httptest.NewRequest(http.MethodGet, "/test", nil)
request.RemoteAddr = "192.0.2.1:1234"
return request
}(),
},
{
name: "headers not set, and the port is not set",
req: func() *http.Request {
request := httptest.NewRequest(http.MethodGet, "/test", nil)
request.RemoteAddr = "192.0.2.1"
return request
}(),
},
{
name: "x-forwarded-for single-host header",
req: func() *http.Request {
request := httptest.NewRequest(http.MethodGet, "/test", nil)
request.Header.Set(xForwardedFor, "192.0.2.1")
return request
}(),
},
{
name: "x-forwarded-for header by multiple hosts",
req: func() *http.Request {
request := httptest.NewRequest(http.MethodGet, "/test", nil)
request.Header.Set(xForwardedFor, "192.0.2.1, 10.1.1.1")
return request
}(),
},
{
name: "x-real-ip header",
req: func() *http.Request {
request := httptest.NewRequest(http.MethodGet, "/test", nil)
request.Header.Set(xRealIP, "192.0.2.1")
return request
}(),
},
{
name: "forwarded header",
req: func() *http.Request {
request := httptest.NewRequest(http.MethodGet, "/test", nil)
request.Header.Set(forwarded, "for=192.0.2.1, 10.1.1.1; proto=https; by=192.0.2.4")
return request
}(),
},
} {
t.Run(tc.name, func(t *testing.T) {
actual := getSourceIP(tc.req)
require.Equal(t, actual, "192.0.2.1")
})
}
}

View file

@ -118,7 +118,8 @@ var s3ErrorResponseMap = map[string]string{
}
// WriteErrorResponse writes error headers.
func WriteErrorResponse(w http.ResponseWriter, reqInfo *ReqInfo, err error) int {
// returns http error code and error in case of failure of response writing.
func WriteErrorResponse(w http.ResponseWriter, reqInfo *ReqInfo, err error) (int, error) {
code := http.StatusInternalServerError
if e, ok := err.(errors.Error); ok {
@ -134,9 +135,14 @@ func WriteErrorResponse(w http.ResponseWriter, reqInfo *ReqInfo, err error) int
// Generates error response.
errorResponse := getAPIErrorResponse(reqInfo, err)
encodedErrorResponse := EncodeResponse(errorResponse)
WriteResponse(w, code, encodedErrorResponse, MimeXML)
return code
encodedErrorResponse, err := EncodeResponse(errorResponse)
if err != nil {
return 0, fmt.Errorf("encode response: %w", err)
}
if err = WriteResponse(w, code, encodedErrorResponse, MimeXML); err != nil {
return 0, fmt.Errorf("write response: %w", err)
}
return code, nil
}
// Write http common headers.
@ -157,7 +163,7 @@ func removeSensitiveHeaders(h http.Header) {
}
// WriteResponse writes given statusCode and response into w (with mType header if set).
func WriteResponse(w http.ResponseWriter, statusCode int, response []byte, mType mimeType) {
func WriteResponse(w http.ResponseWriter, statusCode int, response []byte, mType mimeType) error {
setCommonHeaders(w)
if mType != MimeNone {
w.Header().Set(hdrContentType, string(mType))
@ -165,37 +171,34 @@ func WriteResponse(w http.ResponseWriter, statusCode int, response []byte, mType
w.Header().Set(hdrContentLength, strconv.Itoa(len(response)))
w.WriteHeader(statusCode)
if response == nil {
return
return nil
}
WriteResponseBody(w, response)
return WriteResponseBody(w, response)
}
// WriteResponseBody writes response into w.
func WriteResponseBody(w http.ResponseWriter, response []byte) {
_, _ = w.Write(response)
func WriteResponseBody(w http.ResponseWriter, response []byte) error {
if _, err := w.Write(response); err != nil {
return err
}
if flusher, ok := w.(http.Flusher); ok {
flusher.Flush()
}
return nil
}
// EncodeResponse encodes the response headers into XML format.
func EncodeResponse(response interface{}) []byte {
func EncodeResponse(response interface{}) ([]byte, error) {
var bytesBuffer bytes.Buffer
bytesBuffer.WriteString(xml.Header)
_ = xml.
NewEncoder(&bytesBuffer).
Encode(response)
return bytesBuffer.Bytes()
}
if err := xml.NewEncoder(&bytesBuffer).Encode(response); err != nil {
return nil, err
}
// EncodeResponseNoHeader encodes response without setting xml.Header.
// Should be used with periodicXMLWriter which sends xml.Header to the client
// with whitespaces to keep connection alive.
func EncodeResponseNoHeader(response interface{}) []byte {
var bytesBuffer bytes.Buffer
_ = xml.NewEncoder(&bytesBuffer).Encode(response)
return bytesBuffer.Bytes()
return bytesBuffer.Bytes(), nil
}
// EncodeToResponse encodes the response into ResponseWriter.
@ -227,8 +230,8 @@ func EncodeToResponseNoHeader(w http.ResponseWriter, response interface{}) error
// WriteSuccessResponseHeadersOnly writes HTTP (200) OK response with no data
// to the client.
func WriteSuccessResponseHeadersOnly(w http.ResponseWriter) {
WriteResponse(w, http.StatusOK, nil, MimeNone)
func WriteSuccessResponseHeadersOnly(w http.ResponseWriter) error {
return WriteResponse(w, http.StatusOK, nil, MimeNone)
}
// Error -- Returns S3 error string.
@ -325,6 +328,9 @@ func LogSuccessResponse(l *zap.Logger) Func {
if reqInfo.ObjectName != "" {
fields = append(fields, zap.String("object", reqInfo.ObjectName))
}
if reqInfo.User != "" {
fields = append(fields, zap.String("user", reqInfo.User))
}
if traceID, err := trace.TraceIDFromHex(reqInfo.TraceID); err == nil && traceID.IsValid() {
fields = append(fields, zap.String("trace_id", reqInfo.TraceID))

View file

@ -0,0 +1,43 @@
package middleware
import (
"encoding/xml"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/require"
)
type testXMLData struct {
XMLName xml.Name `xml:"data"`
Text string `xml:"text"`
}
func TestEncodeResponse(t *testing.T) {
w := httptest.NewRecorder()
err := EncodeToResponse(w, []byte{})
require.Error(t, err)
require.Contains(t, err.Error(), "encode xml response")
err = EncodeToResponse(w, testXMLData{Text: "test"})
require.NoError(t, err)
expectedXML := "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<data><text>test</text></data>"
require.Equal(t, expectedXML, w.Body.String())
}
func TestErrorResponse(t *testing.T) {
errResp := ErrorResponse{Code: "invalid-code"}
actual := errResp.Error()
require.Contains(t, actual, "Error response code")
errResp.Code = "AccessDenied"
actual = errResp.Error()
require.Equal(t, "Access Denied.", actual)
errResp.Message = "Request body is empty."
actual = errResp.Error()
require.Equal(t, "Request body is empty.", actual)
}

View file

@ -0,0 +1,115 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/require"
)
func TestHTTPResponseCarrierSetGet(t *testing.T) {
const (
testKey1 = "Key"
testValue1 = "Value"
)
respCarrier := httpResponseCarrier{}
respCarrier.resp = httptest.NewRecorder()
actual := respCarrier.Get(testKey1)
require.Equal(t, "", actual)
respCarrier.Set(testKey1, testValue1)
actual = respCarrier.Get(testKey1)
require.Equal(t, testValue1, actual)
}
func TestHTTPResponseCarrierKeys(t *testing.T) {
const (
testKey1 = "Key1"
testKey2 = "Key2"
testKey3 = "Key3"
testValue1 = "Value1"
testValue2 = "Value2"
testValue3 = "Value3"
)
respCarrier := httpResponseCarrier{}
respCarrier.resp = httptest.NewRecorder()
actual := respCarrier.Keys()
require.Equal(t, 0, len(actual))
respCarrier.Set(testKey1, testValue1)
respCarrier.Set(testKey2, testValue2)
respCarrier.Set(testKey3, testValue3)
actual = respCarrier.Keys()
require.Equal(t, 3, len(actual))
require.Contains(t, actual, testKey1)
require.Contains(t, actual, testKey2)
require.Contains(t, actual, testKey3)
}
func TestHTTPRequestCarrierSet(t *testing.T) {
const (
testKey = "Key"
testValue = "Value"
)
reqCarrier := httpRequestCarrier{}
reqCarrier.req = httptest.NewRequest(http.MethodGet, "/test", nil)
reqCarrier.req.Response = httptest.NewRecorder().Result()
actual := reqCarrier.req.Response.Header.Get(testKey)
require.Equal(t, "", actual)
reqCarrier.Set(testKey, testValue)
actual = reqCarrier.req.Response.Header.Get(testKey)
require.Contains(t, testValue, actual)
}
func TestHTTPRequestCarrierGet(t *testing.T) {
const (
testKey = "Key"
testValue = "Value"
)
reqCarrier := httpRequestCarrier{}
reqCarrier.req = httptest.NewRequest(http.MethodGet, "/test", nil)
actual := reqCarrier.Get(testKey)
require.Equal(t, "", actual)
reqCarrier.req.Header.Set(testKey, testValue)
actual = reqCarrier.Get(testKey)
require.Equal(t, testValue, actual)
}
func TestHTTPRequestCarrierKeys(t *testing.T) {
const (
testKey1 = "Key1"
testKey2 = "Key2"
testKey3 = "Key3"
testValue1 = "Value1"
testValue2 = "Value2"
testValue3 = "Value3"
)
reqCarrier := httpRequestCarrier{}
reqCarrier.req = httptest.NewRequest(http.MethodGet, "/test", nil)
actual := reqCarrier.Keys()
require.Equal(t, 0, len(actual))
reqCarrier.req.Header.Set(testKey1, testValue1)
reqCarrier.req.Header.Set(testKey2, testValue2)
reqCarrier.req.Header.Set(testKey3, testValue3)
actual = reqCarrier.Keys()
require.Equal(t, 3, len(actual))
require.Contains(t, actual, testKey1)
require.Contains(t, actual, testKey2)
require.Contains(t, actual, testKey3)
}

View file

@ -6,29 +6,27 @@ import (
"time"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
)
// keyWrapper is wrapper for context keys.
type keyWrapper string
// authHeaders is a wrapper for authentication headers of a request.
var authHeadersKey = keyWrapper("__context_auth_headers_key")
// boxData is an ID used to store accessbox.Box in a context.
var boxDataKey = keyWrapper("__context_box_key")
// clientTime is an ID used to store client time.Time in a context.
var clientTimeKey = keyWrapper("__context_client_time")
// boxKey is an ID used to store Box in a context.
var boxKey = keyWrapper("__context_box_key")
// GetBoxData extracts accessbox.Box from context.
func GetBoxData(ctx context.Context) (*accessbox.Box, error) {
var box *accessbox.Box
data, ok := ctx.Value(boxDataKey).(*accessbox.Box)
data, ok := ctx.Value(boxKey).(*Box)
if !ok || data == nil {
return nil, fmt.Errorf("couldn't get box from context")
}
if data.AccessBox == nil {
return nil, fmt.Errorf("couldn't get box data from context")
}
box = data
box := data.AccessBox
if box.Gate == nil {
box.Gate = &accessbox.GateData{}
}
@ -37,35 +35,39 @@ func GetBoxData(ctx context.Context) (*accessbox.Box, error) {
// GetAuthHeaders extracts auth.AuthHeader from context.
func GetAuthHeaders(ctx context.Context) (*AuthHeader, error) {
authHeaders, ok := ctx.Value(authHeadersKey).(*AuthHeader)
if !ok {
return nil, fmt.Errorf("couldn't get auth headers from context")
data, ok := ctx.Value(boxKey).(*Box)
if !ok || data == nil {
return nil, fmt.Errorf("couldn't get box from context")
}
return authHeaders, nil
return data.AuthHeaders, nil
}
// GetClientTime extracts time.Time from context.
func GetClientTime(ctx context.Context) (time.Time, error) {
clientTime, ok := ctx.Value(clientTimeKey).(time.Time)
if !ok {
data, ok := ctx.Value(boxKey).(*Box)
if !ok || data == nil {
return time.Time{}, fmt.Errorf("couldn't get box from context")
}
if data.ClientTime.IsZero() {
return time.Time{}, fmt.Errorf("couldn't get client time from context")
}
return clientTime, nil
return data.ClientTime, nil
}
// SetBoxData sets accessbox.Box in the context.
func SetBoxData(ctx context.Context, box *accessbox.Box) context.Context {
return context.WithValue(ctx, boxDataKey, box)
// GetAccessBoxAttrs extracts []object.Attribute from context.
func GetAccessBoxAttrs(ctx context.Context) ([]object.Attribute, error) {
data, ok := ctx.Value(boxKey).(*Box)
if !ok || data == nil {
return nil, fmt.Errorf("couldn't get box from context")
}
return data.Attributes, nil
}
// SetAuthHeaders sets auth.AuthHeader in the context.
func SetAuthHeaders(ctx context.Context, header *AuthHeader) context.Context {
return context.WithValue(ctx, authHeadersKey, header)
}
// SetClientTime sets time.Time in the context.
func SetClientTime(ctx context.Context, newTime time.Time) context.Context {
return context.WithValue(ctx, clientTimeKey, newTime)
// SetBox sets Box in the context.
func SetBox(ctx context.Context, box *Box) context.Context {
return context.WithValue(ctx, boxKey, box)
}
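Illustrative sketch only (not part of the change set): a hypothetical round-trip test, assuming the same middleware package and the context, time, accessbox, object and require imports used by the surrounding code, showing that a single Box stored via SetBox now serves all four getters. The attribute key/value and access key ID are placeholders.
func TestBoxRoundTrip(t *testing.T) {
	// Placeholder attribute carried inside the access box.
	var attr object.Attribute
	attr.SetKey("key")
	attr.SetValue("value")
	ctx := SetBox(context.Background(), &Box{
		AccessBox:   &accessbox.Box{},
		AuthHeaders: &AuthHeader{AccessKeyID: "access-key-id"},
		ClientTime:  time.Now(),
		Attributes:  []object.Attribute{attr},
	})
	// Every getter now reads from the same boxKey context value.
	boxData, err := GetBoxData(ctx)
	require.NoError(t, err)
	require.NotNil(t, boxData.Gate) // Gate is defaulted when nil.
	headers, err := GetAuthHeaders(ctx)
	require.NoError(t, err)
	require.Equal(t, "access-key-id", headers.AccessKeyID)
	_, err = GetClientTime(ctx) // non-zero time, so no error
	require.NoError(t, err)
	attrs, err := GetAccessBoxAttrs(ctx)
	require.NoError(t, err)
	require.Len(t, attrs, 1)
}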

api/middleware/util_test.go (179 lines added) Normal file
View file

@ -0,0 +1,179 @@
package middleware
import (
"context"
"testing"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"github.com/stretchr/testify/require"
)
func TestGetBoxData(t *testing.T) {
for _, tc := range []struct {
name string
value any
error string
}{
{
name: "valid",
value: &Box{
AccessBox: &accessbox.Box{},
},
},
{
name: "invalid data",
value: "invalid-data",
error: "couldn't get box from context",
},
{
name: "box does not exist",
error: "couldn't get box from context",
},
{
name: "access box is nil",
value: &Box{},
error: "couldn't get box data from context",
},
} {
t.Run(tc.name, func(t *testing.T) {
ctx := context.WithValue(context.Background(), boxKey, tc.value)
actual, err := GetBoxData(ctx)
if tc.error != "" {
require.Contains(t, err.Error(), tc.error)
return
}
require.NoError(t, err)
require.NotNil(t, actual)
require.NotNil(t, actual.Gate)
})
}
}
func TestGetAuthHeaders(t *testing.T) {
for _, tc := range []struct {
name string
value any
error bool
}{
{
name: "valid",
value: &Box{
AuthHeaders: &AuthHeader{
AccessKeyID: "valid-key",
Region: "valid-region",
SignatureV4: "valid-sign",
},
},
},
{
name: "invalid data",
value: "invalid-data",
error: true,
},
{
name: "box does not exist",
error: true,
},
} {
t.Run(tc.name, func(t *testing.T) {
ctx := context.WithValue(context.Background(), boxKey, tc.value)
actual, err := GetAuthHeaders(ctx)
if tc.error {
require.Contains(t, err.Error(), "couldn't get box from context")
return
}
require.NoError(t, err)
require.Equal(t, tc.value.(*Box).AuthHeaders.AccessKeyID, actual.AccessKeyID)
require.Equal(t, tc.value.(*Box).AuthHeaders.Region, actual.Region)
require.Equal(t, tc.value.(*Box).AuthHeaders.SignatureV4, actual.SignatureV4)
})
}
}
func TestGetClientTime(t *testing.T) {
for _, tc := range []struct {
name string
value any
error string
}{
{
name: "valid",
value: &Box{
ClientTime: time.Now(),
},
},
{
name: "invalid data",
value: "invalid-data",
error: "couldn't get box from context",
},
{
name: "box does not exist",
error: "couldn't get box from context",
},
{
name: "zero time",
value: &Box{
ClientTime: time.Time{},
},
error: "couldn't get client time from context",
},
} {
t.Run(tc.name, func(t *testing.T) {
ctx := context.WithValue(context.Background(), boxKey, tc.value)
actual, err := GetClientTime(ctx)
if tc.error != "" {
require.Contains(t, err.Error(), tc.error)
return
}
require.NoError(t, err)
require.Equal(t, tc.value.(*Box).ClientTime, actual)
})
}
}
func TestGetAccessBoxAttrs(t *testing.T) {
for _, tc := range []struct {
name string
value any
error bool
}{
{
name: "valid",
value: func() *Box {
var attr object.Attribute
attr.SetKey("key")
attr.SetValue("value")
return &Box{Attributes: []object.Attribute{attr}}
}(),
},
{
name: "invalid data",
value: "invalid-data",
error: true,
},
{
name: "box does not exist",
error: true,
},
} {
t.Run(tc.name, func(t *testing.T) {
ctx := context.WithValue(context.Background(), boxKey, tc.value)
actual, err := GetAccessBoxAttrs(ctx)
if tc.error {
require.Contains(t, err.Error(), "couldn't get box from context")
return
}
require.NoError(t, err)
require.Equal(t, len(tc.value.(*Box).Attributes), len(actual))
require.Equal(t, tc.value.(*Box).Attributes[0].Key(), actual[0].Key())
require.Equal(t, tc.value.(*Box).Attributes[0].Value(), actual[0].Value())
})
}
}

View file

@ -1,263 +0,0 @@
package notifications
import (
"context"
"encoding/json"
"fmt"
"sync"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/handler"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"github.com/nats-io/nats.go"
"go.uber.org/zap"
)
const (
DefaultTimeout = 30 * time.Second
// EventVersion23 is used for lifecycle, tiering, objectACL, objectTagging, object restoration notifications.
EventVersion23 = "2.3"
// EventVersion22 is used for replication notifications.
EventVersion22 = "2.2"
// EventVersion21 is used for all other notification types.
EventVersion21 = "2.1"
)
type (
Options struct {
URL string
TLSCertFilepath string
TLSAuthPrivateKeyFilePath string
Timeout time.Duration
RootCAFiles []string
}
Controller struct {
logger *zap.Logger
taskQueueConnection *nats.Conn
jsClient nats.JetStreamContext
handlers map[string]Stream
mu sync.RWMutex
}
Stream struct {
h layer.MsgHandler
ch chan *nats.Msg
}
TestEvent struct {
Service string
Event string
Time time.Time
Bucket string
RequestID string
HostID string
}
Event struct {
Records []EventRecord `json:"Records"`
}
EventRecord struct {
EventVersion string `json:"eventVersion"`
EventSource string `json:"eventSource"` // frostfs:s3
AWSRegion string `json:"awsRegion,omitempty"` // empty
EventTime time.Time `json:"eventTime"`
EventName string `json:"eventName"`
UserIdentity UserIdentity `json:"userIdentity"`
RequestParameters RequestParameters `json:"requestParameters"`
ResponseElements map[string]string `json:"responseElements"`
S3 S3Entity `json:"s3"`
}
UserIdentity struct {
PrincipalID string `json:"principalId"`
}
RequestParameters struct {
SourceIPAddress string `json:"sourceIPAddress"`
}
S3Entity struct {
SchemaVersion string `json:"s3SchemaVersion"`
ConfigurationID string `json:"configurationId,omitempty"`
Bucket Bucket `json:"bucket"`
Object Object `json:"object"`
}
Bucket struct {
Name string `json:"name"`
OwnerIdentity UserIdentity `json:"ownerIdentity,omitempty"`
Arn string `json:"arn,omitempty"`
}
Object struct {
Key string `json:"key"`
Size uint64 `json:"size,omitempty"`
VersionID string `json:"versionId,omitempty"`
ETag string `json:"eTag,omitempty"`
Sequencer string `json:"sequencer,omitempty"`
}
)
func NewController(p *Options, l *zap.Logger) (*Controller, error) {
ncopts := []nats.Option{
nats.Timeout(p.Timeout),
}
if len(p.TLSCertFilepath) != 0 && len(p.TLSAuthPrivateKeyFilePath) != 0 {
ncopts = append(ncopts, nats.ClientCert(p.TLSCertFilepath, p.TLSAuthPrivateKeyFilePath))
}
if len(p.RootCAFiles) != 0 {
ncopts = append(ncopts, nats.RootCAs(p.RootCAFiles...))
}
nc, err := nats.Connect(p.URL, ncopts...)
if err != nil {
return nil, fmt.Errorf("connect to nats: %w", err)
}
js, err := nc.JetStream()
if err != nil {
return nil, fmt.Errorf("get jet stream: %w", err)
}
return &Controller{
logger: l,
taskQueueConnection: nc,
jsClient: js,
handlers: make(map[string]Stream),
}, nil
}
func (c *Controller) Subscribe(_ context.Context, topic string, handler layer.MsgHandler) error {
ch := make(chan *nats.Msg, 1)
c.mu.RLock()
_, ok := c.handlers[topic]
c.mu.RUnlock()
if ok {
return fmt.Errorf("already subscribed to topic '%s'", topic)
}
if _, err := c.jsClient.AddStream(&nats.StreamConfig{Name: topic}); err != nil {
return fmt.Errorf("add stream: %w", err)
}
if _, err := c.jsClient.ChanSubscribe(topic, ch); err != nil {
return fmt.Errorf("could not subscribe: %w", err)
}
c.mu.Lock()
c.handlers[topic] = Stream{
h: handler,
ch: ch,
}
c.mu.Unlock()
return nil
}
func (c *Controller) Listen(ctx context.Context) {
c.mu.RLock()
defer c.mu.RUnlock()
for _, stream := range c.handlers {
go func(stream Stream) {
for {
select {
case msg := <-stream.ch:
if err := stream.h.HandleMessage(ctx, msg); err != nil {
c.logger.Error(logs.CouldNotHandleMessage, zap.Error(err))
} else if err = msg.Ack(); err != nil {
c.logger.Error(logs.CouldNotACKMessage, zap.Error(err))
}
case <-ctx.Done():
return
}
}
}(stream)
}
}
func (c *Controller) SendNotifications(topics map[string]string, p *handler.SendNotificationParams) error {
event := prepareEvent(p)
for id, topic := range topics {
event.Records[0].S3.ConfigurationID = id
msg, err := json.Marshal(event)
if err != nil {
c.logger.Error(logs.CouldntMarshalAnEvent, zap.String("subject", topic), zap.Error(err))
}
if err = c.publish(topic, msg); err != nil {
c.logger.Error(logs.CouldntSendAnEventToTopic, zap.String("subject", topic), zap.Error(err))
}
}
return nil
}
func (c *Controller) SendTestNotification(topic, bucketName, requestID, HostID string, now time.Time) error {
event := &TestEvent{
Service: "FrostFS S3",
Event: "s3:TestEvent",
Time: now,
Bucket: bucketName,
RequestID: requestID,
HostID: HostID,
}
msg, err := json.Marshal(event)
if err != nil {
return fmt.Errorf("couldn't marshal test event: %w", err)
}
return c.publish(topic, msg)
}
func prepareEvent(p *handler.SendNotificationParams) *Event {
return &Event{
Records: []EventRecord{
{
EventVersion: EventVersion21,
EventSource: "frostfs:s3",
AWSRegion: "",
EventTime: p.Time,
EventName: p.Event,
UserIdentity: UserIdentity{
PrincipalID: p.User,
},
RequestParameters: RequestParameters{
SourceIPAddress: p.ReqInfo.RemoteHost,
},
ResponseElements: nil,
S3: S3Entity{
SchemaVersion: "1.0",
// ConfigurationID is skipped and will be placed later
Bucket: Bucket{
Name: p.BktInfo.Name,
OwnerIdentity: UserIdentity{PrincipalID: p.BktInfo.Owner.String()},
Arn: p.BktInfo.Name,
},
Object: Object{
Key: p.NotificationInfo.Name,
Size: p.NotificationInfo.Size,
VersionID: p.NotificationInfo.Version,
ETag: p.NotificationInfo.HashSum,
Sequencer: "",
},
},
},
},
}
}
func (c *Controller) publish(topic string, msg []byte) error {
if _, err := c.jsClient.Publish(topic, msg); err != nil {
return fmt.Errorf("couldn't send event: %w", err)
}
return nil
}

View file

@ -37,6 +37,7 @@ type (
PutObjectHandler(http.ResponseWriter, *http.Request)
DeleteObjectHandler(http.ResponseWriter, *http.Request)
GetBucketLocationHandler(http.ResponseWriter, *http.Request)
GetBucketPolicyStatusHandler(http.ResponseWriter, *http.Request)
GetBucketPolicyHandler(http.ResponseWriter, *http.Request)
GetBucketLifecycleHandler(http.ResponseWriter, *http.Request)
GetBucketEncryptionHandler(http.ResponseWriter, *http.Request)
@ -120,6 +121,9 @@ type Config struct {
FrostFSIDValidation bool
PolicyChecker engine.ChainRouter
XMLDecoder s3middleware.XMLDecoder
Tagging s3middleware.ResourceTagging
}
func NewRouter(cfg Config) *chi.Mux {
@ -145,6 +149,8 @@ func NewRouter(cfg Config) *chi.Mux {
Domains: cfg.Domains,
Log: cfg.Log,
BucketResolver: cfg.Handler.ResolveBucket,
Decoder: cfg.XMLDecoder,
Tagging: cfg.Tagging,
}))
defaultRouter := chi.NewRouter()
@ -178,14 +184,24 @@ func errorResponseHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := s3middleware.GetReqInfo(ctx)
desc := fmt.Sprintf("Unknown API request at %s", r.URL.Path)
s3middleware.WriteErrorResponse(w, reqInfo, errors.Error{
_, wrErr := s3middleware.WriteErrorResponse(w, reqInfo, errors.Error{
Code: "UnknownAPIRequest",
Description: desc,
HTTPStatusCode: http.StatusBadRequest,
})
if log := s3middleware.GetReqLog(ctx); log != nil {
log.Error(logs.RequestUnmatched, zap.String("method", reqInfo.API), zap.String("http method", r.Method), zap.String("url", r.RequestURI))
fields := []zap.Field{
zap.String("method", reqInfo.API),
zap.String("http method", r.Method),
zap.String("url", r.RequestURI),
}
if wrErr != nil {
fields = append(fields, zap.NamedError("write_response_error", wrErr))
}
log.Error(logs.RequestUnmatched, fields...)
}
}
@ -207,7 +223,7 @@ func bucketRouter(h Handler, log *zap.Logger) chi.Router {
bktRouter.Mount("/", objectRouter(h, log))
bktRouter.Options("/", h.Preflight)
bktRouter.Options("/", named(s3middleware.OptionsBucketOperation, h.Preflight))
bktRouter.Head("/", named(s3middleware.HeadBucketOperation, h.HeadBucketHandler))
@ -220,6 +236,9 @@ func bucketRouter(h Handler, log *zap.Logger) chi.Router {
Add(NewFilter().
Queries(s3middleware.LocationQuery).
Handler(named(s3middleware.GetBucketLocationOperation, h.GetBucketLocationHandler))).
Add(NewFilter().
Queries(s3middleware.PolicyStatusQuery).
Handler(named(s3middleware.GetBucketPolicyStatusOperation, h.GetBucketPolicyStatusHandler))).
Add(NewFilter().
Queries(s3middleware.PolicyQuery).
Handler(named(s3middleware.GetBucketPolicyOperation, h.GetBucketPolicyHandler))).
@ -353,6 +372,8 @@ func objectRouter(h Handler, l *zap.Logger) chi.Router {
objRouter := chi.NewRouter()
objRouter.Use(s3middleware.AddObjectName(l))
objRouter.Options("/*", named(s3middleware.OptionsObjectOperation, h.Preflight))
objRouter.Head("/*", named(s3middleware.HeadObjectOperation, h.HeadObjectHandler))
// GET method handlers

View file

@ -3,16 +3,20 @@ package api
import (
"context"
"encoding/json"
"errors"
"encoding/xml"
"fmt"
"io"
"net/http"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
bearertest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer/test"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/util"
@ -29,11 +33,22 @@ func (p *poolStatisticMock) Statistic() pool.Statistic {
}
type centerMock struct {
t *testing.T
anon bool
t *testing.T
anon bool
noAuthHeader bool
isError bool
attrs []object.Attribute
}
func (c *centerMock) Authenticate(*http.Request) (*middleware.Box, error) {
if c.noAuthHeader {
return nil, middleware.ErrNoAuthorizationHeader
}
if c.isError {
return nil, fmt.Errorf("some error")
}
var token *bearer.Token
if !c.anon {
@ -51,12 +66,17 @@ func (c *centerMock) Authenticate(*http.Request) (*middleware.Box, error) {
BearerToken: token,
},
},
Attributes: c.attrs,
}, nil
}
type middlewareSettingsMock struct {
denyByDefault bool
aclEnabled bool
denyByDefault bool
sourceIPHeader string
}
func (r *middlewareSettingsMock) SourceIPHeader() string {
return r.sourceIPHeader
}
func (r *middlewareSettingsMock) NamespaceHeader() string {
@ -71,19 +91,53 @@ func (r *middlewareSettingsMock) PolicyDenyByDefault() bool {
return r.denyByDefault
}
func (r *middlewareSettingsMock) ACLEnabled() bool {
return r.aclEnabled
}
type frostFSIDMock struct {
tags map[string]string
validateError bool
userGroupsError bool
}
func (f *frostFSIDMock) ValidatePublicKey(*keys.PublicKey) error {
if f.validateError {
return fmt.Errorf("some error")
}
return nil
}
func (f *frostFSIDMock) GetUserGroupIDs(util.Uint160) ([]string, error) {
return []string{}, nil
func (f *frostFSIDMock) GetUserGroupIDsAndClaims(util.Uint160) ([]string, map[string]string, error) {
if f.userGroupsError {
return nil, nil, fmt.Errorf("some error")
}
return []string{}, f.tags, nil
}
type xmlMock struct {
}
func (m *xmlMock) NewXMLDecoder(r io.Reader) *xml.Decoder {
return xml.NewDecoder(r)
}
type resourceTaggingMock struct {
bucketTags map[string]string
objectTags map[string]string
noSuchObjectKey bool
noSuchBucketKey bool
}
func (m *resourceTaggingMock) GetBucketTagging(context.Context, *data.BucketInfo) (map[string]string, error) {
if m.noSuchBucketKey {
return nil, apiErrors.GetAPIError(apiErrors.ErrNoSuchKey)
}
return m.bucketTags, nil
}
func (m *resourceTaggingMock) GetObjectTagging(context.Context, *data.GetObjectTaggingParams) (string, map[string]string, error) {
if m.noSuchObjectKey {
return "", nil, apiErrors.GetAPIError(apiErrors.ErrNoSuchKey)
}
return "", m.objectTags, nil
}
type handlerMock struct {
@ -142,9 +196,13 @@ func (h *handlerMock) GetObjectLegalHoldHandler(http.ResponseWriter, *http.Reque
panic("implement me")
}
func (h *handlerMock) GetObjectHandler(http.ResponseWriter, *http.Request) {
//TODO implement me
panic("implement me")
func (h *handlerMock) GetObjectHandler(w http.ResponseWriter, r *http.Request) {
res := &handlerResult{
Method: middleware.GetObjectOperation,
ReqInfo: middleware.GetReqInfo(r.Context()),
}
h.writeResponse(w, res)
}
func (h *handlerMock) GetObjectAttributesHandler(http.ResponseWriter, *http.Request) {
@ -176,12 +234,21 @@ func (h *handlerMock) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
h.writeResponse(w, res)
}
func (h *handlerMock) DeleteObjectHandler(http.ResponseWriter, *http.Request) {
func (h *handlerMock) DeleteObjectHandler(w http.ResponseWriter, r *http.Request) {
res := &handlerResult{
Method: middleware.DeleteObjectOperation,
ReqInfo: middleware.GetReqInfo(r.Context()),
}
h.writeResponse(w, res)
}
func (h *handlerMock) GetBucketLocationHandler(http.ResponseWriter, *http.Request) {
//TODO implement me
panic("implement me")
}
func (h *handlerMock) GetBucketLocationHandler(http.ResponseWriter, *http.Request) {
func (h *handlerMock) GetBucketPolicyStatusHandler(http.ResponseWriter, *http.Request) {
//TODO implement me
panic("implement me")
}
@ -334,9 +401,13 @@ func (h *handlerMock) PutBucketObjectLockConfigHandler(http.ResponseWriter, *htt
panic("implement me")
}
func (h *handlerMock) PutBucketTaggingHandler(http.ResponseWriter, *http.Request) {
//TODO implement me
panic("implement me")
func (h *handlerMock) PutBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
res := &handlerResult{
Method: middleware.PutBucketTaggingOperation,
ReqInfo: middleware.GetReqInfo(r.Context()),
}
h.writeResponse(w, res)
}
func (h *handlerMock) PutBucketVersioningHandler(http.ResponseWriter, *http.Request) {
@ -353,8 +424,7 @@ func (h *handlerMock) CreateBucketHandler(w http.ResponseWriter, r *http.Request
reqInfo := middleware.GetReqInfo(r.Context())
h.buckets[reqInfo.Namespace+reqInfo.BucketName] = &data.BucketInfo{
Name: reqInfo.BucketName,
APEEnabled: !h.cfg.ACLEnabled(),
Name: reqInfo.BucketName,
}
res := &handlerResult{
@ -468,7 +538,7 @@ func (h *handlerMock) ResolveBucket(ctx context.Context, name string) (*data.Buc
reqInfo := middleware.GetReqInfo(ctx)
bktInfo, ok := h.buckets[reqInfo.Namespace+name]
if !ok {
return nil, errors.New("not found")
return nil, apiErrors.GetAPIError(apiErrors.ErrNoSuchBucket)
}
return bktInfo, nil
}

View file

@ -1,6 +1,7 @@
package api
import (
"bytes"
"encoding/json"
"encoding/xml"
"fmt"
@ -8,6 +9,7 @@ import (
"net/http"
"net/http/httptest"
"net/url"
"strconv"
"testing"
"time"
@ -15,10 +17,12 @@ import (
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
s3middleware "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
engineiam "git.frostfs.info/TrueCloudLab/policy-engine/iam"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine/inmemory"
"git.frostfs.info/TrueCloudLab/policy-engine/schema/common"
"git.frostfs.info/TrueCloudLab/policy-engine/schema/s3"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
@ -33,13 +37,22 @@ type routerMock struct {
cfg Config
middlewareSettings *middlewareSettingsMock
policyChecker engine.LocalOverrideEngine
handler *handlerMock
}
func (m *routerMock) ServeHTTP(w http.ResponseWriter, r *http.Request) {
m.router.ServeHTTP(w, r)
}
func prepareRouter(t *testing.T) *routerMock {
type option func(*Config)
func frostFSIDValidation(flag bool) option {
return func(cfg *Config) {
cfg.FrostFSIDValidation = flag
}
}
func prepareRouter(t *testing.T, opts ...option) *routerMock {
middlewareSettings := &middlewareSettingsMock{}
policyChecker := inmemory.NewInMemoryLocalOverrides()
@ -52,12 +65,14 @@ func prepareRouter(t *testing.T) *routerMock {
Enabled: true,
}
handlerTestMock := &handlerMock{t: t, cfg: middlewareSettings, buckets: map[string]*data.BucketInfo{}}
cfg := Config{
Throttle: middleware.ThrottleOpts{
Limit: 10,
BacklogTimeout: 30 * time.Second,
},
Handler: &handlerMock{t: t, cfg: middlewareSettings, buckets: map[string]*data.BucketInfo{}},
Handler: handlerTestMock,
Center: &centerMock{t: t},
Log: logger,
Metrics: metrics.NewAppMetrics(metricsConfig),
@ -65,13 +80,21 @@ func prepareRouter(t *testing.T) *routerMock {
PolicyChecker: policyChecker,
Domains: []string{"domain1", "domain2"},
FrostfsID: &frostFSIDMock{},
XMLDecoder: &xmlMock{},
Tagging: &resourceTaggingMock{},
}
for _, o := range opts {
o(&cfg)
}
return &routerMock{
t: t,
router: NewRouter(cfg),
cfg: cfg,
middlewareSettings: middlewareSettings,
policyChecker: policyChecker,
handler: handlerTestMock,
}
}
@ -114,7 +137,7 @@ func TestRouterObjectWithSlashes(t *testing.T) {
ns, bktName, objName := "", "dkirillov", "/fix/object"
createBucket(chiRouter, ns, bktName)
resp := putObject(chiRouter, ns, bktName, objName)
resp := putObject(chiRouter, ns, bktName, objName, nil)
require.Equal(t, objName, resp.ReqInfo.ObjectName)
}
@ -156,7 +179,7 @@ func TestRouterObjectEscaping(t *testing.T) {
},
} {
t.Run(tc.name, func(t *testing.T) {
resp := putObject(chiRouter, ns, bktName, tc.objName)
resp := putObject(chiRouter, ns, bktName, tc.objName, nil)
require.Equal(t, tc.expectedObjName, resp.ReqInfo.ObjectName)
})
}
@ -184,20 +207,35 @@ func TestPolicyChecker(t *testing.T) {
require.NoError(t, err)
// check we can access 'bucket' in default namespace
putObject(chiRouter, ns1, bktName1, objName1)
putObject(chiRouter, ns1, bktName1, objName1, nil)
deleteObject(chiRouter, ns1, bktName1, objName1, nil)
// check we can access 'other-bucket' in custom namespace
putObject(chiRouter, ns2, bktName2, objName2)
putObject(chiRouter, ns2, bktName2, objName2, nil)
deleteObject(chiRouter, ns2, bktName2, objName2, nil)
// check we cannot access 'bucket' in custom namespace
putObjectErr(chiRouter, ns2, bktName1, objName2, apiErrors.ErrAccessDenied)
putObjectErr(chiRouter, ns2, bktName1, objName2, nil, apiErrors.ErrAccessDenied)
deleteObjectErr(chiRouter, ns2, bktName1, objName2, nil, apiErrors.ErrAccessDenied)
}
func TestPolicyCheckerError(t *testing.T) {
chiRouter := prepareRouter(t)
ns1, bktName1, objName1 := "", "bucket", "object"
putObjectErr(chiRouter, ns1, bktName1, objName1, nil, apiErrors.ErrNoSuchBucket)
chiRouter = prepareRouter(t)
chiRouter.cfg.FrostfsID.(*frostFSIDMock).userGroupsError = true
putObjectErr(chiRouter, ns1, bktName1, objName1, nil, apiErrors.ErrInternalError)
}
func TestPolicyCheckerReqTypeDetermination(t *testing.T) {
chiRouter := prepareRouter(t)
bktName, objName := "bucket", "object"
createBucket(chiRouter, "", bktName)
policy := engineiam.Policy{
Version: "2012-10-17",
Statement: []engineiam.Statement{{
Principal: map[engineiam.PrincipalType][]string{engineiam.Wildcard: {}},
Effect: engineiam.AllowEffect,
@ -212,6 +250,8 @@ func TestPolicyCheckerReqTypeDetermination(t *testing.T) {
_, _, err = chiRouter.policyChecker.MorphRuleChainStorage().AddMorphRuleChain(chain.S3, engine.NamespaceTarget(""), ruleChain)
require.NoError(t, err)
createBucket(chiRouter, "", bktName)
chiRouter.middlewareSettings.denyByDefault = true
t.Run("can list buckets", func(t *testing.T) {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/", nil)
@ -244,120 +284,336 @@ func TestDefaultBehaviorPolicyChecker(t *testing.T) {
// check we cannot access if rules not found when settings is enabled
chiRouter.middlewareSettings.denyByDefault = true
createBucketErr(chiRouter, ns, bktName, apiErrors.ErrAccessDenied)
createBucketErr(chiRouter, ns, bktName, nil, apiErrors.ErrAccessDenied)
}
func TestACLAPE(t *testing.T) {
t.Run("acl disabled, ape deny by default", func(t *testing.T) {
func TestDefaultPolicyCheckerWithUserTags(t *testing.T) {
router := prepareRouter(t)
ns, bktName := "", "bucket"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(common.PropertyKeyFormatFrostFSIDUserClaim, "tag-test"): []string{"test"}},
})
createBucketErr(router, ns, bktName, nil, apiErrors.ErrAccessDenied)
tags := make(map[string]string)
tags["tag-test"] = "test"
router.cfg.FrostfsID.(*frostFSIDMock).tags = tags
createBucket(router, ns, bktName)
}
func TestRequestParametersCheck(t *testing.T) {
t.Run("prefix parameter, allow specific value", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName := "", "bucket", "object"
bktNameOld, bktNameNew := "old-bucket", "new-bucket"
createOldBucket(router, bktNameOld)
createNewBucket(router, bktNameNew)
router.middlewareSettings.aclEnabled = false
ns, bktName, prefix := "", "bucket", "prefix"
router.middlewareSettings.denyByDefault = true
// Allow because of using old bucket
putObject(router, ns, bktNameOld, objName)
// Deny because of deny by default
putObjectErr(router, ns, bktNameNew, objName, apiErrors.ErrAccessDenied)
// Deny because of deny by default
createBucketErr(router, ns, bktName, apiErrors.ErrAccessDenied)
listBucketsErr(router, ns, apiErrors.ErrAccessDenied)
// Allow operations and check
allowOperations(router, ns, []string{"s3:CreateBucket", "s3:ListBuckets"})
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
listBuckets(router, ns)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{s3.PropertyKeyPrefix: []string{prefix}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{s3.PropertyKeyPrefix: []string{prefix}},
})
listObjectsV1(router, ns, bktName, prefix, "", "")
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrAccessDenied)
listObjectsV1Err(router, ns, bktName, "invalid", "", "", apiErrors.ErrAccessDenied)
})
t.Run("acl disabled, ape allow by default", func(t *testing.T) {
t.Run("delimiter parameter, prohibit specific value", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, delimiter := "", "bucket", "delimiter"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{s3.PropertyKeyDelimiter: []string{delimiter}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{s3.PropertyKeyDelimiter: []string{delimiter}},
})
listObjectsV1(router, ns, bktName, "", "", "")
listObjectsV1(router, ns, bktName, "", "some-delimiter", "")
listObjectsV1Err(router, ns, bktName, "", delimiter, "", apiErrors.ErrAccessDenied)
})
t.Run("acl enabled, ape allow by default", func(t *testing.T) {
t.Run("max-keys parameter, allow specific value", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, maxKeys := "", "bucket", 10
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericNotEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
listObjectsV1(router, ns, bktName, "", "", strconv.Itoa(maxKeys))
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrAccessDenied)
listObjectsV1Err(router, ns, bktName, "", "", strconv.Itoa(maxKeys-1), apiErrors.ErrAccessDenied)
listObjectsV1Err(router, ns, bktName, "", "", "invalid", apiErrors.ErrAccessDenied)
})
t.Run("max-keys parameter, allow range of values", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, maxKeys := "", "bucket", 10
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericGreaterThan: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericLessThanEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
listObjectsV1(router, ns, bktName, "", "", strconv.Itoa(maxKeys))
listObjectsV1(router, ns, bktName, "", "", strconv.Itoa(maxKeys-1))
listObjectsV1Err(router, ns, bktName, "", "", strconv.Itoa(maxKeys+1), apiErrors.ErrAccessDenied)
})
t.Run("max-keys parameter, prohibit specific value", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, maxKeys := "", "bucket", 10
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondNumericNotEquals: engineiam.Condition{s3.PropertyKeyMaxKeys: []string{strconv.Itoa(maxKeys)}},
})
listObjectsV1(router, ns, bktName, "", "", "")
listObjectsV1(router, ns, bktName, "", "", strconv.Itoa(maxKeys-1))
listObjectsV1Err(router, ns, bktName, "", "", strconv.Itoa(maxKeys), apiErrors.ErrAccessDenied)
})
}
func TestRequestTagsCheck(t *testing.T) {
t.Run("put bucket tagging", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, tagKey, tagValue := "", "bucket", "tag", "value"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
allowOperations(router, ns, []string{"s3:PutBucketTagging"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tagKey): []string{tagValue}},
})
denyOperations(router, ns, []string{"s3:PutBucketTagging"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tagKey): []string{tagValue}},
})
tagging, err := xml.Marshal(data.Tagging{TagSet: []data.Tag{{Key: tagKey, Value: tagValue}}})
require.NoError(t, err)
putBucketTagging(router, ns, bktName, tagging)
tagging, err = xml.Marshal(data.Tagging{TagSet: []data.Tag{{Key: "key", Value: tagValue}}})
require.NoError(t, err)
putBucketTaggingErr(router, ns, bktName, tagging, apiErrors.ErrAccessDenied)
tagging = nil
putBucketTaggingErr(router, ns, bktName, tagging, apiErrors.ErrMalformedXML)
})
t.Run("put object with tag", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName, tagKey, tagValue := "", "bucket", "object", "tag", "value"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
allowOperations(router, ns, []string{"s3:PutObject"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tagKey): []string{tagValue}},
})
denyOperations(router, ns, []string{"s3:PutObject"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatRequestTag, tagKey): []string{tagValue}},
})
putObject(router, ns, bktName, objName, &data.Tag{Key: tagKey, Value: tagValue})
putObjectErr(router, ns, bktName, objName, &data.Tag{Key: "key", Value: tagValue}, apiErrors.ErrAccessDenied)
})
}
func TestResourceTagsCheck(t *testing.T) {
t.Run("bucket tagging", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, tagKey, tagValue := "", "bucket", "tag", "value"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policies and check
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatResourceTag, tagKey): []string{tagValue}},
})
denyOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatResourceTag, tagKey): []string{tagValue}},
})
router.cfg.Tagging.(*resourceTaggingMock).bucketTags = map[string]string{tagKey: tagValue}
listObjectsV1(router, ns, bktName, "", "", "")
router.cfg.Tagging.(*resourceTaggingMock).bucketTags = map[string]string{}
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrAccessDenied)
})
t.Run("object tagging", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName, tagKey, tagValue := "", "bucket", "object", "tag", "value"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket", "s3:PutObject"}, nil)
createBucket(router, ns, bktName)
putObject(router, ns, bktName, objName, nil)
// Add policies and check
allowOperations(router, ns, []string{"s3:GetObject"}, engineiam.Conditions{
engineiam.CondStringEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatResourceTag, tagKey): []string{tagValue}},
})
denyOperations(router, ns, []string{"s3:GetObject"}, engineiam.Conditions{
engineiam.CondStringNotEquals: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatResourceTag, tagKey): []string{tagValue}},
})
router.cfg.Tagging.(*resourceTaggingMock).objectTags = map[string]string{tagKey: tagValue}
getObject(router, ns, bktName, objName)
router.cfg.Tagging.(*resourceTaggingMock).objectTags = map[string]string{}
getObjectErr(router, ns, bktName, objName, apiErrors.ErrAccessDenied)
})
t.Run("non-existent resources", func(t *testing.T) {
router := prepareRouter(t)
ns, bktName, objName := "", "bucket", "object"
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrNoSuchBucket)
router.cfg.Tagging.(*resourceTaggingMock).noSuchBucketKey = true
createBucket(router, ns, bktName)
getBucketErr(router, ns, bktName, apiErrors.ErrNoSuchKey)
router.cfg.Tagging.(*resourceTaggingMock).noSuchObjectKey = true
createBucket(router, ns, bktName)
getObjectErr(router, ns, bktName, objName, apiErrors.ErrNoSuchKey)
})
}
func TestAccessBoxAttributesCheck(t *testing.T) {
router := prepareRouter(t)
ns, bktName, attrKey, attrValue := "", "bucket", "key", "true"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
createBucket(router, ns, bktName)
// Add policy and check
allowOperations(router, ns, []string{"s3:ListBucket"}, engineiam.Conditions{
engineiam.CondBool: engineiam.Condition{fmt.Sprintf(s3.PropertyKeyFormatAccessBoxAttr, attrKey): []string{attrValue}},
})
listObjectsV1Err(router, ns, bktName, "", "", "", apiErrors.ErrAccessDenied)
var attr object.Attribute
attr.SetKey(attrKey)
attr.SetValue(attrValue)
router.cfg.Center.(*centerMock).attrs = []object.Attribute{attr}
listObjectsV1(router, ns, bktName, "", "", "")
}
func TestSourceIPCheck(t *testing.T) {
router := prepareRouter(t)
ns, bktName, hdr := "", "bucket", "Source-Ip"
router.middlewareSettings.denyByDefault = true
// Add policy and check
allowOperations(router, ns, []string{"s3:CreateBucket"}, engineiam.Conditions{
engineiam.CondIPAddress: engineiam.Condition{"aws:SourceIp": []string{"192.0.2.0/24"}},
})
router.middlewareSettings.sourceIPHeader = hdr
header := map[string][]string{hdr: {"192.0.3.0"}}
createBucketErr(router, ns, bktName, header, apiErrors.ErrAccessDenied)
router.middlewareSettings.sourceIPHeader = ""
createBucket(router, ns, bktName)
}
func TestMFAPolicy(t *testing.T) {
router := prepareRouter(t)
ns, bktName := "", "bucket"
router.middlewareSettings.denyByDefault = true
allowOperations(router, ns, []string{"s3:CreateBucket"}, nil)
denyOperations(router, ns, []string{"s3:CreateBucket"}, engineiam.Conditions{
engineiam.CondBool: engineiam.Condition{s3.PropertyKeyAccessBoxAttrMFA: []string{"false"}},
})
createBucketErr(router, ns, bktName, nil, apiErrors.ErrAccessDenied)
var attr object.Attribute
attr.SetKey("IAM-MFA")
attr.SetValue("true")
router.cfg.Center.(*centerMock).attrs = []object.Attribute{attr}
createBucket(router, ns, bktName)
}
func allowOperations(router *routerMock, ns string, operations []string, conditions engineiam.Conditions) {
addPolicy(router, ns, "allow", engineiam.AllowEffect, operations, conditions)
}
func denyOperations(router *routerMock, ns string, operations []string, conditions engineiam.Conditions) {
addPolicy(router, ns, "deny", engineiam.DenyEffect, operations, conditions)
}
func addPolicy(router *routerMock, ns string, id string, effect engineiam.Effect, operations []string, conditions engineiam.Conditions) {
policy := engineiam.Policy{
Version: "2012-10-17",
Statement: []engineiam.Statement{{
Principal: map[engineiam.PrincipalType][]string{engineiam.Wildcard: {}},
Effect: effect,
Action: engineiam.Action(operations),
Resource: engineiam.Resource{fmt.Sprintf(s3.ResourceFormatS3All)},
Conditions: conditions,
}},
}
@ -369,71 +625,159 @@ func addPolicy(router *routerMock, ns string, id string, effect engineiam.Effect
require.NoError(router.t, err)
}
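For orientation, the policy that addPolicy assembles above follows the usual AWS IAM grammar parsed by the policy-engine's engineiam package. A deny-by-prefix rule like the one used in TestRequestParametersCheck would serialize to roughly the following; this is a sketch, and the wildcard principal, resource ARN, and condition key spellings are assumptions inferred from the constants used above rather than copied from the source:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Principal": "*",
    "Effect": "Deny",
    "Action": ["s3:ListBucket"],
    "Resource": ["arn:aws:s3:::*"],
    "Condition": {"StringNotEquals": {"s3:prefix": "prefix"}}
  }]
}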
func createOldBucket(router *routerMock, bktName string) {
createSpecificBucket(router, bktName, true)
}
func createNewBucket(router *routerMock, bktName string) {
createSpecificBucket(router, bktName, false)
}
func createSpecificBucket(router *routerMock, bktName string, old bool) {
aclEnabled := router.middlewareSettings.ACLEnabled()
router.middlewareSettings.aclEnabled = old
createBucket(router, "", bktName)
router.middlewareSettings.aclEnabled = aclEnabled
}
func createBucket(router *routerMock, namespace, bktName string) {
w := createBucketBase(router, namespace, bktName, nil)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.CreateBucketOperation, resp.Method)
}
func createBucketErr(router *routerMock, namespace, bktName string, header http.Header, errCode apiErrors.ErrorCode) {
w := createBucketBase(router, namespace, bktName, header)
assertAPIError(router.t, w, errCode)
}
func createBucketBase(router *routerMock, namespace, bktName string, header http.Header) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, "/"+bktName, nil)
r.Header.Set(FrostfsNamespaceHeader, namespace)
for key := range header {
r.Header.Set(key, header.Get(key))
}
router.ServeHTTP(w, r)
return w
}
func listBuckets(router *routerMock, namespace string) {
w := listBucketsBase(router, namespace)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.ListBucketsOperation, resp.Method)
}
func listBucketsErr(router *routerMock, namespace string, errCode apiErrors.ErrorCode) {
w := listBucketsBase(router, namespace)
assertAPIError(router.t, w, errCode)
}
func listBucketsBase(router *routerMock, namespace string) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/", nil)
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func getBucketErr(router *routerMock, namespace, bktName string, errCode apiErrors.ErrorCode) {
w := getBucketBase(router, namespace, bktName)
assertAPIError(router.t, w, errCode)
}
func getBucketBase(router *routerMock, namespace, bktName string) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/"+bktName, nil)
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func putObject(router *routerMock, namespace, bktName, objName string, tag *data.Tag) handlerResult {
w := putObjectBase(router, namespace, bktName, objName, tag)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.PutObjectOperation, resp.Method)
return resp
}
func putObjectErr(router *routerMock, namespace, bktName, objName string, tag *data.Tag, errCode apiErrors.ErrorCode) {
w := putObjectBase(router, namespace, bktName, objName, tag)
assertAPIError(router.t, w, errCode)
}
func putObjectBase(router *routerMock, namespace, bktName, objName string, tag *data.Tag) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, "/"+bktName+"/"+objName, nil)
if tag != nil {
queries := url.Values{
tag.Key: []string{tag.Value},
}
r.Header.Set(AmzTagging, queries.Encode())
}
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func deleteObject(router *routerMock, namespace, bktName, objName string, tag *data.Tag) handlerResult {
w := deleteObjectBase(router, namespace, bktName, objName, tag)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.DeleteObjectOperation, resp.Method)
return resp
}
func deleteObjectErr(router *routerMock, namespace, bktName, objName string, tag *data.Tag, errCode apiErrors.ErrorCode) {
w := deleteObjectBase(router, namespace, bktName, objName, tag)
assertAPIError(router.t, w, errCode)
}
func deleteObjectBase(router *routerMock, namespace, bktName, objName string, tag *data.Tag) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodDelete, "/"+bktName+"/"+objName, nil)
if tag != nil {
queries := url.Values{
tag.Key: []string{tag.Value},
}
r.Header.Set(AmzTagging, queries.Encode())
}
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func putBucketTagging(router *routerMock, namespace, bktName string, tagging []byte) handlerResult {
w := putBucketTaggingBase(router, namespace, bktName, tagging)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.PutBucketTaggingOperation, resp.Method)
return resp
}
func putBucketTaggingErr(router *routerMock, namespace, bktName string, tagging []byte, errCode apiErrors.ErrorCode) {
w := putBucketTaggingBase(router, namespace, bktName, tagging)
assertAPIError(router.t, w, errCode)
}
func putBucketTaggingBase(router *routerMock, namespace, bktName string, tagging []byte) *httptest.ResponseRecorder {
queries := url.Values{}
queries.Add(s3middleware.TaggingQuery, "")
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodPut, "/"+bktName, bytes.NewBuffer(tagging))
r.URL.RawQuery = queries.Encode()
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func getObject(router *routerMock, namespace, bktName, objName string) handlerResult {
w := getObjectBase(router, namespace, bktName, objName)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.GetObjectOperation, resp.Method)
return resp
}
func getObjectErr(router *routerMock, namespace, bktName, objName string, errCode apiErrors.ErrorCode) {
w := getObjectBase(router, namespace, bktName, objName)
assertAPIError(router.t, w, errCode)
}
func getObjectBase(router *routerMock, namespace, bktName, objName string) *httptest.ResponseRecorder {
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/"+bktName+"/"+objName, nil)
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
}
func listObjectsV1(router *routerMock, namespace, bktName, prefix, delimiter, maxKeys string) handlerResult {
w := listObjectsV1Base(router, namespace, bktName, prefix, delimiter, maxKeys)
resp := readResponse(router.t, w)
require.Equal(router.t, s3middleware.ListObjectsV1Operation, resp.Method)
return resp
}
func listObjectsV1Err(router *routerMock, namespace, bktName, prefix, delimiter, maxKeys string, errCode apiErrors.ErrorCode) {
w := listObjectsV1Base(router, namespace, bktName, prefix, delimiter, maxKeys)
assertAPIError(router.t, w, errCode)
}
func listObjectsV1Base(router *routerMock, namespace, bktName, prefix, delimiter, maxKeys string) *httptest.ResponseRecorder {
queries := url.Values{}
if len(prefix) > 0 {
queries.Add(s3middleware.QueryPrefix, prefix)
}
if len(delimiter) > 0 {
queries.Add(s3middleware.QueryDelimiter, delimiter)
}
if len(maxKeys) > 0 {
queries.Add(s3middleware.QueryMaxKeys, maxKeys)
}
encoded := queries.Encode()
w, r := httptest.NewRecorder(), httptest.NewRequest(http.MethodGet, "/"+bktName, nil)
r.URL.RawQuery = encoded
r.Header.Set(FrostfsNamespaceHeader, namespace)
router.ServeHTTP(w, r)
return w
@ -446,11 +790,11 @@ func TestOwnerIDRetrieving(t *testing.T) {
createBucket(chiRouter, ns, bktName)
resp := putObject(chiRouter, ns, bktName, objName, nil)
require.NotEqual(t, "anon", resp.ReqInfo.User)
chiRouter.cfg.Center.(*centerMock).anon = true
resp = putObject(chiRouter, ns, bktName, objName, nil)
require.Equal(t, "anon", resp.ReqInfo.User)
}
@ -468,12 +812,41 @@ func TestBillingMetrics(t *testing.T) {
require.Equal(t, 1, dump.Requests[0].Requests)
chiRouter.cfg.Center.(*centerMock).anon = true
putObject(chiRouter, ns, bktName, objName, nil)
dump = chiRouter.cfg.Metrics.UsersAPIStats().DumpMetrics()
require.Len(t, dump.Requests, 1)
require.Equal(t, "anon", dump.Requests[0].User)
}
func TestAuthenticate(t *testing.T) {
chiRouter := prepareRouter(t)
createBucket(chiRouter, "", "bkt-1")
chiRouter = prepareRouter(t)
chiRouter.cfg.Center.(*centerMock).noAuthHeader = true
createBucket(chiRouter, "", "bkt-2")
chiRouter = prepareRouter(t)
chiRouter.cfg.Center.(*centerMock).isError = true
createBucketErr(chiRouter, "", "bkt-3", nil, apiErrors.ErrAccessDenied)
}
func TestFrostFSIDValidation(t *testing.T) {
// successful frostFSID validation
chiRouter := prepareRouter(t, frostFSIDValidation(true))
createBucket(chiRouter, "", "bkt-1")
// anon request, skip frostFSID validation
chiRouter = prepareRouter(t, frostFSIDValidation(true))
chiRouter.cfg.Center.(*centerMock).anon = true
createBucket(chiRouter, "", "bkt-2")
// frostFSID validation failed
chiRouter = prepareRouter(t, frostFSIDValidation(true))
chiRouter.cfg.FrostfsID.(*frostFSIDMock).validateError = true
createBucketErr(chiRouter, "", "bkt-3", nil, apiErrors.ErrInternalError)
}
func readResponse(t *testing.T, w *httptest.ResponseRecorder) handlerResult {
var res handlerResult


@ -20,7 +20,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
frostfsecdsa "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/crypto/ecdsa"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -82,11 +81,6 @@ type FrostFS interface {
TimeToEpoch(context.Context, time.Time) (uint64, uint64, error)
}
// FrostFSID represents interface to interact with frostfsid contract.
type FrostFSID interface {
RegisterPublicKey(ns string, key *keys.PublicKey) error
}
// Agent contains client communicating with FrostFS and logger.
type Agent struct {
frostFS FrostFS
@ -107,7 +101,6 @@ type (
Container ContainerOptions
FrostFSKey *keys.PrivateKey
GatesPublicKeys []*keys.PublicKey
EACLRules []byte
Impersonate bool
SessionTokenRules []byte
SkipSessionRules bool
@ -344,7 +337,7 @@ func (a *Agent) UpdateSecret(ctx context.Context, w io.Writer, options *UpdateSe
creds := tokens.New(cfg)
box, _, err := creds.GetBox(ctx, options.Address)
if err != nil {
return fmt.Errorf("get accessbox: %w", err)
}
@ -431,7 +424,7 @@ func (a *Agent) ObtainSecret(ctx context.Context, w io.Writer, options *ObtainSe
return fmt.Errorf("failed to parse secret address: %w", err)
}
box, _, err := bearerCreds.GetBox(ctx, addr)
if err != nil {
return fmt.Errorf("failed to get tokens: %w", err)
}
@ -446,47 +439,11 @@ func (a *Agent) ObtainSecret(ctx context.Context, w io.Writer, options *ObtainSe
return enc.Encode(or)
}
func buildEACLTable(eaclTable []byte) (*eacl.Table, error) {
table := eacl.NewTable()
if len(eaclTable) != 0 {
return table, table.UnmarshalJSON(eaclTable)
}
record := eacl.NewRecord()
record.SetOperation(eacl.OperationGet)
record.SetAction(eacl.ActionAllow)
eacl.AddFormedTarget(record, eacl.RoleOthers)
table.AddRecord(record)
for _, rec := range restrictedRecords() {
table.AddRecord(rec)
}
return table, nil
}
func restrictedRecords() (records []*eacl.Record) {
for op := eacl.OperationGet; op <= eacl.OperationRangeHash; op++ {
record := eacl.NewRecord()
record.SetOperation(op)
record.SetAction(eacl.ActionDeny)
eacl.AddFormedTarget(record, eacl.RoleOthers)
records = append(records, record)
}
return
}
func buildBearerToken(key *keys.PrivateKey, impersonate bool, lifetime lifetimeOptions, gateKey *keys.PublicKey) (*bearer.Token, error) {
var ownerID user.ID
user.IDFromKey(&ownerID, (ecdsa.PublicKey)(*gateKey))
var bearerToken bearer.Token
bearerToken.ForUser(ownerID)
bearerToken.SetExp(lifetime.Exp)
bearerToken.SetIat(lifetime.Iat)
@ -501,10 +458,10 @@ func buildBearerToken(key *keys.PrivateKey, impersonate bool, table *eacl.Table,
return &bearerToken, nil
}
func buildBearerTokens(key *keys.PrivateKey, impersonate bool, lifetime lifetimeOptions, gatesKeys []*keys.PublicKey) ([]*bearer.Token, error) {
bearerTokens := make([]*bearer.Token, 0, len(gatesKeys))
for _, gateKey := range gatesKeys {
tkn, err := buildBearerToken(key, impersonate, lifetime, gateKey)
if err != nil {
return nil, fmt.Errorf("build bearer token: %w", err)
}
@ -549,12 +506,7 @@ func buildSessionTokens(key *keys.PrivateKey, lifetime lifetimeOptions, ctxs []s
func createTokens(options *IssueSecretOptions, lifetime lifetimeOptions) ([]*accessbox.GateData, error) {
gates := make([]*accessbox.GateData, len(options.GatesPublicKeys))
bearerTokens, err := buildBearerTokens(options.FrostFSKey, options.Impersonate, lifetime, options.GatesPublicKeys)
if err != nil {
return nil, fmt.Errorf("failed to build bearer tokens: %w", err)
}
@ -582,9 +534,14 @@ func createTokens(options *IssueSecretOptions, lifetime lifetimeOptions) ([]*acc
func formTokensToUpdate(options tokenUpdateOptions) ([]*accessbox.GateData, error) {
btoken := options.box.Gate.BearerToken
btokenv2 := new(acl.BearerToken)
btoken.WriteToV2(btokenv2)
if btokenv2.GetBody().GetEACL() != nil {
return nil, errors.New("EACL table in bearer token isn't supported")
}
bearerTokens, err := buildBearerTokens(options.frostFSKey, btoken.Impersonate(), options.lifetime, options.gatesPublicKeys)
if err != nil {
return nil, fmt.Errorf("failed to build bearer tokens: %w", err)
}


@ -2,6 +2,7 @@ package authmate
import (
"encoding/json"
"errors"
"fmt"
apisession "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/session"
@ -55,22 +56,11 @@ func buildContext(rules []byte) ([]sessionTokenContext, error) {
return nil, fmt.Errorf("failed to unmarshal rules for session token: %w", err)
}
for _, d := range sessionCtxs {
if d.verb == session.VerbContainerSetEACL {
return nil, errors.New("verb container SetEACL isn't supported")
}
}
return sessionCtxs, nil
}
@ -78,6 +68,5 @@ func buildContext(rules []byte) ([]sessionTokenContext, error) {
return []sessionTokenContext{
{verb: session.VerbContainerPut},
{verb: session.VerbContainerDelete},
}, nil
}


@ -17,20 +17,15 @@ func TestContainerSessionRules(t *testing.T) {
{
"verb": "DELETE",
"containerID": "6CcWg8LkcbfMUC8pt7wiy5zM1fyS3psNoxgfppcCgig1"
}
]`)
sessionContext, err := buildContext(jsonRules)
require.NoError(t, err)
require.Len(t, sessionContext, 2)
require.Equal(t, sessionContext[0].verb, session.VerbContainerPut)
require.Zero(t, sessionContext[0].containerID)
require.Equal(t, sessionContext[1].verb, session.VerbContainerDelete)
require.NotNil(t, sessionContext[1].containerID)
}


@ -16,6 +16,10 @@ type (
frostFSIDInitError struct {
err error
}
policyInitError struct {
err error
}
)
func wrapPreparationError(e error) error {
@ -50,6 +54,14 @@ func (e frostFSIDInitError) Error() string {
return e.err.Error()
}
func wrapPolicyInitError(e error) error {
return policyInitError{e}
}
func (e policyInitError) Error() string {
return e.err.Error()
}
// ExitCode picks corresponding error code depending on the type of error provided.
// Returns 1 if error type is unknown.
func ExitCode(e error) int {
@ -62,6 +74,8 @@ func ExitCode(e error) int {
return 4
case frostFSIDInitError:
return 4
case policyInitError:
return 5
}
return 1
}


@ -2,13 +2,11 @@ package modules
import (
"context"
"errors"
"fmt"
"os"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/authmate"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/wallet"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
@ -29,8 +27,6 @@ const (
walletFlag = "wallet"
addressFlag = "address"
peerFlag = "peer"
bearerRulesFlag = "bearer-rules"
disableImpersonateFlag = "disable-impersonate"
gatePublicKeyFlag = "gate-public-key"
containerIDFlag = "container-id"
containerFriendlyNameFlag = "container-friendly-name"
@ -39,16 +35,10 @@ const (
lifetimeFlag = "lifetime"
containerPolicyFlag = "container-policy"
awsCLICredentialFlag = "aws-cli-credentials"
frostfsIDFlag = "frostfsid"
frostfsIDProxyFlag = "frostfsid-proxy"
frostfsIDNamespaceFlag = "frostfsid-namespace"
rpcEndpointFlag = "rpc-endpoint"
attributesFlag = "attributes"
)
const walletPassphraseCfg = "wallet.passphrase"
const (
defaultAccessBoxLifetime = 30 * 24 * time.Hour
@ -70,8 +60,6 @@ func initIssueSecretCmd() {
issueSecretCmd.Flags().String(walletFlag, "", "Path to the wallet that will be owner of the credentials")
issueSecretCmd.Flags().String(addressFlag, "", "Address of the wallet account")
issueSecretCmd.Flags().String(peerFlag, "", "Address of a frostfs peer to connect to")
issueSecretCmd.Flags().String(bearerRulesFlag, "", "Rules for bearer token (filepath or a plain json string are allowed, can be used only with --disable-impersonate)")
issueSecretCmd.Flags().Bool(disableImpersonateFlag, false, "Mark token as not impersonate to don't consider token signer as request owner (must be provided to use --bearer-rules flag)")
issueSecretCmd.Flags().StringSlice(gatePublicKeyFlag, nil, "Public 256r1 key of a gate (use flags repeatedly for multiple gates or separate them by comma)")
issueSecretCmd.Flags().String(containerIDFlag, "", "Auth container id to put the secret into (if not provided new container will be created)")
issueSecretCmd.Flags().String(containerFriendlyNameFlag, "", "Friendly name of auth container to put the secret into (flag value will be used only if --container-id is missed)")
@ -84,10 +72,6 @@ func initIssueSecretCmd() {
issueSecretCmd.Flags().Duration(poolHealthcheckTimeoutFlag, defaultPoolHealthcheckTimeout, "Timeout for request to node to decide if it is alive")
issueSecretCmd.Flags().Duration(poolRebalanceIntervalFlag, defaultPoolRebalanceInterval, "Interval for updating nodes health status")
issueSecretCmd.Flags().Duration(poolStreamTimeoutFlag, defaultPoolStreamTimeout, "Timeout for individual operation in streaming RPC")
issueSecretCmd.Flags().String(frostfsIDFlag, "", "FrostfsID contract hash (LE) or name in NNS to register public key in contract (rpc-endpoint flag also must be provided)")
issueSecretCmd.Flags().String(frostfsIDProxyFlag, "", "Proxy contract hash (LE) or name in NNS to use when interact with frostfsid contract")
issueSecretCmd.Flags().String(frostfsIDNamespaceFlag, "", "Namespace to register public key in frostfsid contract")
issueSecretCmd.Flags().String(rpcEndpointFlag, "", "NEO node RPC address")
issueSecretCmd.Flags().String(attributesFlag, "", "User attributes in form of Key1=Value1,Key2=Value2 (note: you cannot override system attributes)")
_ = issueSecretCmd.MarkFlagRequired(walletFlag)
@ -134,17 +118,6 @@ func runIssueSecretCmd(cmd *cobra.Command, _ []string) error {
return wrapPreparationError(fmt.Errorf("couldn't parse container policy: %s", err.Error()))
}
disableImpersonate := viper.GetBool(disableImpersonateFlag)
eaclRules := viper.GetString(bearerRulesFlag)
if !disableImpersonate && eaclRules != "" {
return wrapPreparationError(errors.New("--bearer-rules flag can be used only with --disable-impersonate"))
}
bearerRules, err := getJSONRules(eaclRules)
if err != nil {
return wrapPreparationError(fmt.Errorf("couldn't parse 'bearer-rules' flag: %s", err.Error()))
}
sessionRules, skipSessionRules, err := getSessionRules(viper.GetString(sessionTokensFlag))
if err != nil {
return wrapPreparationError(fmt.Errorf("couldn't parse 'session-tokens' flag: %s", err.Error()))
@ -164,29 +137,6 @@ func runIssueSecretCmd(cmd *cobra.Command, _ []string) error {
return wrapFrostFSInitError(fmt.Errorf("failed to create FrostFS component: %s", err))
}
frostFSID := viper.GetString(frostfsIDFlag)
if frostFSID != "" {
rpcAddress := viper.GetString(rpcEndpointFlag)
if rpcAddress == "" {
return wrapPreparationError(fmt.Errorf("you can use '%s' flag only along with '%s'", frostfsIDFlag, rpcEndpointFlag))
}
cfg := frostfsid.Config{
RPCAddress: rpcAddress,
Contract: frostFSID,
ProxyContract: viper.GetString(frostfsIDProxyFlag),
Key: key,
}
frostfsIDClient, err := createFrostFSID(ctx, log, cfg)
if err != nil {
return wrapFrostFSIDInitError(err)
}
if err = frostfsIDClient.RegisterPublicKey(viper.GetString(frostfsIDNamespaceFlag), key.PublicKey()); err != nil {
return wrapBusinessLogicError(fmt.Errorf("failed to register key in frostfsid: %w", err))
}
}
customAttrs, err := parseObjectAttrs(viper.GetString(attributesFlag))
if err != nil {
return wrapPreparationError(fmt.Errorf("failed to parse attributes: %s", err))
@ -200,8 +150,7 @@ func runIssueSecretCmd(cmd *cobra.Command, _ []string) error {
},
FrostFSKey: key,
GatesPublicKeys: gatesPublicKeys,
EACLRules: bearerRules,
Impersonate: true,
SessionTokenRules: sessionRules,
SkipSessionRules: skipSessionRules,
ContainerPolicies: policies,


@ -0,0 +1,202 @@
package modules
import (
"context"
"encoding/json"
"fmt"
"os"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-contract/policy"
ffsidContract "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid/contract"
policyContact "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/policy/contract"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/wallet"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"go.uber.org/zap"
)
var registerUserCmd = &cobra.Command{
Use: "register-user",
Short: "Register user and add allowed policy to him",
Long: "Register user in FrostFSID contract and add allowed policies. This is need to get access to s3-gw operations.",
Example: `frostfs-s3-authmate register-user --wallet wallet.json --rpc-endpoint http://morph-chain.frostfs.devenv:30333
frostfs-s3-authmate register-user --wallet wallet.json --contract-wallet contract-wallet.json --rpc-endpoint http://morph-chain.frostfs.devenv:30333 --namespace namespace --frostfsid-name devenv --frostfsid-contract frostfsid.frostfs --proxy-contract proxy.frostfs --policy-contract policy.frostfs`,
RunE: runRegisterUserCmd,
}
const (
frostfsIDContractFlag = "frostfsid-contract"
proxyContractFlag = "proxy-contract"
usernameFlag = "username"
namespaceFlag = "namespace"
policyContractFlag = "policy-contract"
contractWalletFlag = "contract-wallet"
contractWalletAddressFlag = "contract-wallet-address"
rpcEndpointFlag = "rpc-endpoint"
)
const walletContractPassphraseCfg = "wallet.contract.passphrase"
func initRegisterUserCmd() {
registerUserCmd.Flags().String(walletFlag, "", "Path to the wallet with account of the user that will be registered in FrostFS ID contract")
registerUserCmd.Flags().String(addressFlag, "", "Address of the user wallet that will be registered in FrostFS ID contract")
registerUserCmd.Flags().String(frostfsIDContractFlag, "frostfsid.frostfs", "FrostfsID contract hash (LE) or name in NNS to register public key in contract")
registerUserCmd.Flags().String(proxyContractFlag, "proxy.frostfs", "Proxy contract hash (LE) or name in NNS to use when interacting with the frostfsid contract")
registerUserCmd.Flags().String(namespaceFlag, "", "Namespace to register public key in frostfsid contract and add policy chains")
registerUserCmd.Flags().String(usernameFlag, "", "Username to set for public key in frostfsid contract")
registerUserCmd.Flags().String(contractWalletFlag, "", "Path to wallet that will be used to interact with contracts (if missing, the key from the wallet flag will be used)")
registerUserCmd.Flags().String(contractWalletAddressFlag, "", "Address of the contract wallet account")
registerUserCmd.Flags().String(policyContractFlag, "policy.frostfs", "Policy contract hash (LE) or name in NNS to save allowed chains for key")
registerUserCmd.Flags().String(rpcEndpointFlag, "", "NEO node RPC address")
_ = registerUserCmd.MarkFlagRequired(walletFlag)
_ = registerUserCmd.MarkFlagRequired(rpcEndpointFlag)
}
func runRegisterUserCmd(cmd *cobra.Command, _ []string) error {
ctx, cancel := context.WithTimeout(cmd.Context(), viper.GetDuration(timeoutFlag))
defer cancel()
log := getLogger()
key, contractKey, err := parseKeys()
if err != nil {
return wrapPreparationError(err)
}
frostfsIDClient, err := initFrostFSIDContract(ctx, log, contractKey)
if err != nil {
return wrapFrostFSIDInitError(err)
}
if err = registerPublicKey(log, frostfsIDClient, key.PublicKey()); err != nil {
return wrapBusinessLogicError(err)
}
policyClient, err := initPolicyContract(ctx, log, contractKey)
if err != nil {
return wrapPolicyInitError(err)
}
if err = addAllowedPolicyChains(cmd, log, policyClient, key.PublicKey()); err != nil {
return wrapBusinessLogicError(err)
}
return nil
}
func parseKeys() (userKey *keys.PrivateKey, contractKey *keys.PrivateKey, err error) {
password := wallet.GetPassword(viper.GetViper(), walletPassphraseCfg)
key, err := wallet.GetKeyFromPath(viper.GetString(walletFlag), viper.GetString(addressFlag), password)
if err != nil {
return nil, nil, fmt.Errorf("failed to load frostfs private key: %s", err)
}
contractKey = key
if contractWallet := viper.GetString(contractWalletFlag); contractWallet != "" {
password = wallet.GetPassword(viper.GetViper(), walletContractPassphraseCfg)
contractKey, err = wallet.GetKeyFromPath(contractWallet, viper.GetString(contractWalletAddressFlag), password)
if err != nil {
return nil, nil, fmt.Errorf("failed to load contract private key: %s", err)
}
}
return key, contractKey, nil
}
func initFrostFSIDContract(ctx context.Context, log *zap.Logger, key *keys.PrivateKey) (*ffsidContract.FrostFSID, error) {
log.Debug(logs.PrepareFrostfsIDClient)
cfg := ffsidContract.Config{
RPCAddress: viper.GetString(rpcEndpointFlag),
Contract: viper.GetString(frostfsIDContractFlag),
ProxyContract: viper.GetString(proxyContractFlag),
Key: key,
}
cli, err := ffsidContract.New(ctx, cfg)
if err != nil {
return nil, fmt.Errorf("create frostfsid client: %w", err)
}
return cli, nil
}
func registerPublicKey(log *zap.Logger, cli *ffsidContract.FrostFSID, key *keys.PublicKey) error {
namespace := viper.GetString(namespaceFlag)
log.Debug(logs.CreateSubjectInFrostFSID)
err := cli.Wait(cli.CreateSubject(namespace, key))
if err != nil {
if strings.Contains(err.Error(), "subject already exists") {
log.Debug(logs.SubjectAlreadyExistsInFrostFSID, zap.String("address", key.Address()))
} else {
return fmt.Errorf("create subject in frostfsid: %w", err)
}
}
name := viper.GetString(usernameFlag)
if name == "" {
return nil
}
log.Debug(logs.SetSubjectNameInFrostFSID)
if err = cli.Wait(cli.SetSubjectName(key, name)); err != nil {
return fmt.Errorf("set subject name in frostfsid: %w", err)
}
return nil
}
func initPolicyContract(ctx context.Context, log *zap.Logger, key *keys.PrivateKey) (*policyContact.Client, error) {
log.Debug(logs.PreparePolicyClient)
cfg := policyContact.Config{
RPCAddress: viper.GetString(rpcEndpointFlag),
Contract: viper.GetString(policyContractFlag),
ProxyContract: viper.GetString(proxyContractFlag),
Key: key,
}
cli, err := policyContact.New(ctx, cfg)
if err != nil {
return nil, fmt.Errorf("create policy client: %w", err)
}
return cli, nil
}
func addAllowedPolicyChains(cmd *cobra.Command, log *zap.Logger, cli *policyContact.Client, key *keys.PublicKey) error {
log.Debug(logs.AddPolicyChainRules)
namespace := viper.GetString(namespaceFlag)
allowAllRule := chain.Rule{
Status: chain.Allow,
Actions: chain.Actions{Names: []string{"*"}},
Resources: chain.Resources{Names: []string{"*"}},
}
chains := []*chain.Chain{
{ID: chain.ID(chain.S3 + ":authmate"), Rules: []chain.Rule{allowAllRule}},
{ID: chain.ID(chain.Ingress + ":authmate"), Rules: []chain.Rule{allowAllRule}},
}
kind := policy.Kind(policy.User)
entity := namespace + ":" + key.Address()
tx := cli.StartTx()
for _, ch := range chains {
tx.AddChain(kind, entity, ch.ID, ch.Bytes())
}
if err := cli.SendTx(tx); err != nil {
return fmt.Errorf("add policy chain: %w", err)
}
cmd.Printf("Added policy rules:\nkind: '%c'\nentity: '%s'\nchains:\n", kind, entity)
enc := json.NewEncoder(os.Stdout)
return enc.Encode(chains)
}


@ -65,4 +65,7 @@ GoVersion: {{ runtimeVersion }}
rootCmd.AddCommand(updateSecretCmd)
initUpdateSecretCmd()
rootCmd.AddCommand(registerUserCmd)
initRegisterUserCmd()
}


@ -7,7 +7,6 @@ import (
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/authmate"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/wallet"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
@ -40,10 +39,6 @@ func initUpdateSecretCmd() {
updateSecretCmd.Flags().Duration(poolHealthcheckTimeoutFlag, defaultPoolHealthcheckTimeout, "Timeout for request to node to decide if it is alive")
updateSecretCmd.Flags().Duration(poolRebalanceIntervalFlag, defaultPoolRebalanceInterval, "Interval for updating nodes health status")
updateSecretCmd.Flags().Duration(poolStreamTimeoutFlag, defaultPoolStreamTimeout, "Timeout for individual operation in streaming RPC")
updateSecretCmd.Flags().String(frostfsIDFlag, "", "FrostfsID contract hash (LE) or name in NNS to register public key in contract (rpc-endpoint flag also must be provided)")
updateSecretCmd.Flags().String(frostfsIDProxyFlag, "", "Proxy contract hash (LE) or name in NNS to use when interact with frostfsid contract")
updateSecretCmd.Flags().String(frostfsIDNamespaceFlag, "", "Namespace to register public key in frostfsid contract")
updateSecretCmd.Flags().String(rpcEndpointFlag, "", "NEO node RPC address")
updateSecretCmd.Flags().String(attributesFlag, "", "User attributes in form of Key1=Value1,Key2=Value2 (note: you cannot override system attributes)")
_ = updateSecretCmd.MarkFlagRequired(walletFlag)
@ -100,29 +95,6 @@ func runUpdateSecretCmd(cmd *cobra.Command, _ []string) error {
return wrapFrostFSInitError(fmt.Errorf("failed to create FrostFS component: %s", err))
}
frostFSID := viper.GetString(frostfsIDFlag)
if frostFSID != "" {
rpcAddress := viper.GetString(rpcEndpointFlag)
if rpcAddress == "" {
return wrapPreparationError(fmt.Errorf("you can use '%s' flag only along with '%s'", frostfsIDFlag, rpcEndpointFlag))
}
cfg := frostfsid.Config{
RPCAddress: rpcAddress,
Contract: frostFSID,
ProxyContract: viper.GetString(frostfsIDProxyFlag),
Key: key,
}
frostfsIDClient, err := createFrostFSID(ctx, log, cfg)
if err != nil {
return wrapFrostFSIDInitError(err)
}
if err = frostfsIDClient.RegisterPublicKey(viper.GetString(frostfsIDNamespaceFlag), key.PublicKey()); err != nil {
return wrapBusinessLogicError(fmt.Errorf("failed to register key in frostfsid: %w", err))
}
}
customAttrs, err := parseObjectAttrs(viper.GetString(attributesFlag))
if err != nil {
return wrapPreparationError(fmt.Errorf("failed to parse attributes: %s", err))


@ -11,7 +11,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/authmate"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
@ -30,7 +29,7 @@ type PoolConfig struct {
RebalanceInterval time.Duration
}
func createFrostFS(ctx context.Context, log *zap.Logger, cfg PoolConfig) (*frostfs.AuthmateFrostFS, error) {
log.Debug(logs.PrepareConnectionPool)
var prm pool.InitParameters
@ -51,7 +50,7 @@ func createFrostFS(ctx context.Context, log *zap.Logger, cfg PoolConfig) (authma
return nil, fmt.Errorf("dial pool: %w", err)
}
return frostfs.NewAuthmateFrostFS(frostfs.NewFrostFS(p, cfg.Key)), nil
}
func parsePolicies(val string) (authmate.ContainerPolicies, error) {
@ -145,17 +144,6 @@ func getLogger() *zap.Logger {
return log
}
func createFrostFSID(ctx context.Context, log *zap.Logger, cfg frostfsid.Config) (authmate.FrostFSID, error) {
log.Debug(logs.PrepareFrostfsIDClient)
cli, err := frostfsid.New(ctx, cfg)
if err != nil {
return nil, fmt.Errorf("create frostfsid client: %w", err)
}
return cli, nil
}
func parseObjectAttrs(attributes string) ([]object.Attribute, error) {
if len(attributes) == 0 {
return nil, nil


@ -7,7 +7,6 @@ import (
"errors"
"fmt"
"io"
"net"
"net/http"
"os"
"os/signal"
@ -25,11 +24,11 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/handler"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
s3middleware "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/notifications"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/tokens"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid"
ffidcontract "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/frostfsid/contract"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/policy"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/policy/contract"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/services"
@ -37,8 +36,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/version"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/wallet"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/pkg/service/control"
controlSvc "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/pkg/service/control/server"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/pkg/service/tree"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
@ -49,6 +46,7 @@ import (
"github.com/spf13/viper"
"go.uber.org/zap"
"golang.org/x/exp/slices"
"golang.org/x/text/encoding/ianaindex"
"google.golang.org/grpc"
)
@ -63,17 +61,16 @@ type (
pool *pool.Pool
treePool *treepool.Pool
key *keys.PrivateKey
nc *notifications.Controller
obj *layer.Layer
api api.Handler
frostfsid *frostfsid.FrostFSID
policyStorage *policy.Storage
servers []Server
unbindServers []ServerInfo
mu sync.RWMutex
metrics *metrics.AppMetrics
bucketResolver *resolver.BucketResolver
@ -88,7 +85,7 @@ type (
logLevel zap.AtomicLevel
maxClient maxClientsConfig
defaultMaxAge int
notificatorEnabled bool
reconnectInterval time.Duration
resolveZoneList []string
isResolveListAllow bool // True if ResolveZoneList contains allowed zones
frostfsidValidation bool
@ -100,11 +97,13 @@ type (
clientCut bool
maxBufferSizeForPut uint64
md5Enabled bool
aclEnabled bool
namespaceHeader string
defaultNamespaces []string
authorizedControlAPIKeys [][]byte
policyDenyByDefault bool
sourceIPHeader string
retryMaxAttempts int
retryMaxBackoff time.Duration
retryStrategy handler.RetryStrategy
}
maxClientsConfig struct {
@ -122,7 +121,7 @@ func newApp(ctx context.Context, log *Logger, v *viper.Viper) *App {
objPool, treePool, key := getPools(ctx, log.logger, v)
cfg := tokens.Config{
FrostFS: frostfs.NewAuthmateFrostFS(frostfs.NewFrostFS(objPool, key)),
Key: key,
CacheConfig: getAccessBoxCacheConfig(v, log.logger),
RemovingCheckAfterDurations: fetchRemovingCheckInterval(v, log.logger),
@ -142,7 +141,7 @@ func newApp(ctx context.Context, log *Logger, v *viper.Viper) *App {
webDone: make(chan struct{}, 1),
wrkDone: make(chan struct{}, 1),
settings: newAppSettings(log, v),
}
app.init(ctx)
@ -154,14 +153,13 @@ func (a *App) init(ctx context.Context) {
a.setRuntimeParameters()
a.initFrostfsID(ctx)
a.initPolicyStorage(ctx)
a.initAPI()
a.initMetrics()
a.initServers(ctx)
a.initTracing(ctx)
}
func (a *App) initLayer() {
a.initResolver()
// prepare random key for anonymous requests
@ -186,26 +184,14 @@ func (a *App) initLayer(ctx context.Context) {
// prepare object layer
a.obj = layer.NewLayer(a.log, frostfs.NewFrostFS(a.pool, a.key), layerCfg)
}
func newAppSettings(log *Logger, v *viper.Viper) *appSettings {
settings := &appSettings{
logLevel: log.lvl,
maxClient: newMaxClients(v),
defaultMaxAge: fetchDefaultMaxAge(v, log.logger),
notificatorEnabled: v.GetBool(cfgEnableNATS),
reconnectInterval: fetchReconnectInterval(v),
frostfsidValidation: v.GetBool(cfgFrostfsIDValidationEnabled),
}
@ -215,21 +201,23 @@ func newAppSettings(log *Logger, v *viper.Viper, key *keys.PrivateKey) *appSetti
settings.resolveZoneList = v.GetStringSlice(cfgResolveBucketDeny)
}
settings.update(v, log.logger)
return settings
}
func (s *appSettings) update(v *viper.Viper, log *zap.Logger) {
s.updateNamespacesSettings(v, log)
s.useDefaultXMLNamespace(v.GetBool(cfgKludgeUseDefaultXMLNS))
s.setACLEnabled(v.GetBool(cfgKludgeACLEnabled))
s.setBypassContentEncodingInChunks(v.GetBool(cfgKludgeBypassContentEncodingCheckInChunks))
s.setClientCut(v.GetBool(cfgClientCut))
s.setBufferMaxSizeForPut(v.GetUint64(cfgBufferMaxSizeForPut))
s.setMD5Enabled(v.GetBool(cfgMD5Enabled))
s.setPolicyDenyByDefault(v.GetBool(cfgPolicyDenyByDefault))
s.setSourceIPHeader(v.GetString(cfgSourceIPHeader))
s.setRetryMaxAttempts(fetchRetryMaxAttempts(v))
s.setRetryMaxBackoff(fetchRetryMaxBackoff(v))
s.setRetryStrategy(fetchRetryStrategy(v))
}
func (s *appSettings) updateNamespacesSettings(v *viper.Viper, log *zap.Logger) {
@ -310,6 +298,13 @@ func (s *appSettings) DefaultCopiesNumbers(namespace string) []uint32 {
func (s *appSettings) NewXMLDecoder(r io.Reader) *xml.Decoder {
dec := xml.NewDecoder(r)
dec.CharsetReader = func(charset string, reader io.Reader) (io.Reader, error) {
enc, err := ianaindex.IANA.Encoding(charset)
if err != nil {
return nil, fmt.Errorf("charset %s: %w", charset, err)
}
return enc.NewDecoder().Reader(reader), nil
}
s.mu.RLock()
if s.defaultXMLNS {
@ -330,10 +325,6 @@ func (s *appSettings) DefaultMaxAge() int {
return s.defaultMaxAge
}
func (s *appSettings) NotificatorEnabled() bool {
return s.notificatorEnabled
}
func (s *appSettings) ResolveZoneList() []string {
return s.resolveZoneList
}
@ -354,18 +345,6 @@ func (s *appSettings) setMD5Enabled(md5Enabled bool) {
s.mu.Unlock()
}
func (s *appSettings) setACLEnabled(enableACL bool) {
s.mu.Lock()
s.aclEnabled = enableACL
s.mu.Unlock()
}
func (s *appSettings) ACLEnabled() bool {
s.mu.RLock()
defer s.mu.RUnlock()
return s.aclEnabled
}
func (s *appSettings) NamespaceHeader() string {
s.mu.RLock()
defer s.mu.RUnlock()
@ -387,23 +366,6 @@ func (s *appSettings) isDefaultNamespace(ns string) bool {
return slices.Contains(namespaces, ns)
}
func (s *appSettings) FetchRawKeys() [][]byte {
s.mu.RLock()
defer s.mu.RUnlock()
return s.authorizedControlAPIKeys
}
func (s *appSettings) setAuthorizedControlAPIKeys(keys keys.PublicKeys) {
rawPubs := make([][]byte, len(keys))
for i := range keys {
rawPubs[i] = keys[i].Bytes()
}
s.mu.Lock()
s.authorizedControlAPIKeys = rawPubs
s.mu.Unlock()
}
func (s *appSettings) ResolveNamespaceAlias(namespace string) string {
if s.isDefaultNamespace(namespace) {
return defaultNamespace
@ -424,21 +386,57 @@ func (s *appSettings) setPolicyDenyByDefault(policyDenyByDefault bool) {
s.mu.Unlock()
}
func (s *appSettings) setSourceIPHeader(header string) {
s.mu.Lock()
s.sourceIPHeader = header
s.mu.Unlock()
}
func (s *appSettings) SourceIPHeader() string {
s.mu.RLock()
defer s.mu.RUnlock()
return s.sourceIPHeader
}
func (s *appSettings) setRetryMaxAttempts(maxAttempts int) {
s.mu.Lock()
s.retryMaxAttempts = maxAttempts
s.mu.Unlock()
}
func (s *appSettings) RetryMaxAttempts() int {
s.mu.RLock()
defer s.mu.RUnlock()
return s.retryMaxAttempts
}
func (s *appSettings) setRetryMaxBackoff(maxBackoff time.Duration) {
s.mu.Lock()
s.retryMaxBackoff = maxBackoff
s.mu.Unlock()
}
func (s *appSettings) RetryMaxBackoff() time.Duration {
s.mu.RLock()
defer s.mu.RUnlock()
return s.retryMaxBackoff
}
func (s *appSettings) setRetryStrategy(strategy handler.RetryStrategy) {
s.mu.Lock()
s.retryStrategy = strategy
s.mu.Unlock()
}
func (s *appSettings) RetryStrategy() handler.RetryStrategy {
s.mu.RLock()
defer s.mu.RUnlock()
return s.retryStrategy
}
func (a *App) initAPI() {
a.initLayer()
a.initHandler()
}
func (a *App) initMetrics() {
@ -453,8 +451,7 @@ func (a *App) initMetrics() {
}
func (a *App) initFrostfsID(ctx context.Context) {
cli, err := ffidcontract.New(ctx, ffidcontract.Config{
RPCAddress: a.cfg.GetString(cfgRPCEndpoint),
Contract: a.cfg.GetString(cfgFrostfsIDContract),
ProxyContract: a.cfg.GetString(cfgProxyContract),
@ -463,6 +460,15 @@ func (a *App) initFrostfsID(ctx context.Context) {
if err != nil {
a.log.Fatal(logs.InitFrostfsIDContractFailed, zap.Error(err))
}
a.frostfsid, err = frostfsid.NewFrostFSID(frostfsid.Config{
Cache: cache.NewFrostfsIDCache(getFrostfsIDCacheConfig(a.cfg, a.log)),
FrostFSID: cli,
Logger: a.log,
})
if err != nil {
a.log.Fatal(logs.InitFrostfsIDContractFailed, zap.Error(err))
}
}
func (a *App) initPolicyStorage(ctx context.Context) {
@ -684,6 +690,9 @@ func (a *App) Serve(ctx context.Context) {
FrostfsID: a.frostfsid,
FrostFSIDValidation: a.settings.frostfsidValidation,
XMLDecoder: a.settings,
Tagging: a.obj,
}
chiRouter := api.NewRouter(cfg)
@ -699,26 +708,22 @@ func (a *App) Serve(ctx context.Context) {
a.startServices()
for i := range a.servers {
go func(i int) {
a.log.Info(logs.StartingServer, zap.String("address", a.servers[i].Address()))
servs := a.getServers()
if err := srv.Serve(a.servers[i].Listener()); err != nil && err != http.ErrServerClosed {
a.metrics.MarkUnhealthy(a.servers[i].Address())
for i := range servs {
go func(i int) {
a.log.Info(logs.StartingServer, zap.String("address", servs[i].Address()))
if err := srv.Serve(servs[i].Listener()); err != nil && err != http.ErrServerClosed {
a.metrics.MarkUnhealthy(servs[i].Address())
a.log.Fatal(logs.ListenAndServe, zap.Error(err))
}
}(i)
}
go func() {
address := a.cfg.GetString(cfgControlGRPCEndpoint)
a.log.Info(logs.StartingControlAPI, zap.String("address", address))
if listener, err := net.Listen("tcp", address); err != nil {
a.log.Fatal(logs.ListenAndServe, zap.Error(err))
} else if err = a.controlAPI.Serve(listener); err != nil {
a.log.Fatal(logs.ListenAndServe, zap.Error(err))
}
}()
if len(a.unbindServers) != 0 {
a.scheduleReconnect(ctx, srv)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGHUP)
@ -738,7 +743,6 @@ LOOP:
a.log.Info(logs.StoppingServer, zap.Error(srv.Shutdown(ctx)))
a.stopControlAPI()
a.metrics.Shutdown()
a.stopServices()
a.shutdownTracing()
@ -750,25 +754,6 @@ func shutdownContext() (context.Context, context.CancelFunc) {
return context.WithTimeout(context.Background(), defaultShutdownTimeout)
}
func (a *App) stopControlAPI() {
ctx, cancel := shutdownContext()
defer cancel()
go func() {
a.controlAPI.GracefulStop()
cancel()
}()
<-ctx.Done()
if errors.Is(ctx.Err(), context.DeadlineExceeded) {
a.log.Info(logs.ControlAPICannotShutdownGracefully)
a.controlAPI.Stop()
}
a.log.Info(logs.ControlAPIServiceStopped)
}
func (a *App) configReload(ctx context.Context) {
a.log.Info(logs.SIGHUPConfigReloadStarted)
@ -810,7 +795,7 @@ func (a *App) updateSettings() {
a.settings.logLevel.SetLevel(lvl)
}
a.settings.update(a.cfg, a.log, a.key)
a.settings.update(a.cfg, a.log)
}
func (a *App) startServices() {
@ -826,7 +811,7 @@ func (a *App) startServices() {
}
func (a *App) initServers(ctx context.Context) {
serversInfo := fetchServers(a.cfg)
serversInfo := fetchServers(a.cfg, a.log)
a.servers = make([]Server, 0, len(serversInfo))
for _, serverInfo := range serversInfo {
@ -836,6 +821,7 @@ func (a *App) initServers(ctx context.Context) {
}
srv, err := newServer(ctx, serverInfo)
if err != nil {
a.unbindServers = append(a.unbindServers, serverInfo)
a.metrics.MarkUnhealthy(serverInfo.Address)
a.log.Warn(logs.FailedToAddServer, append(fields, zap.Error(err))...)
continue
@ -852,21 +838,24 @@ func (a *App) initServers(ctx context.Context) {
}
func (a *App) updateServers() error {
serversInfo := fetchServers(a.cfg)
serversInfo := fetchServers(a.cfg, a.log)
a.mu.Lock()
defer a.mu.Unlock()
var found bool
for _, serverInfo := range serversInfo {
index := a.serverIndex(serverInfo.Address)
if index == -1 {
continue
}
if serverInfo.TLS.Enabled {
if err := a.servers[index].UpdateCert(serverInfo.TLS.CertFile, serverInfo.TLS.KeyFile); err != nil {
return fmt.Errorf("failed to update tls certs: %w", err)
ser := a.getServer(serverInfo.Address)
if ser != nil {
if serverInfo.TLS.Enabled {
if err := ser.UpdateCert(serverInfo.TLS.CertFile, serverInfo.TLS.KeyFile); err != nil {
return fmt.Errorf("failed to update tls certs: %w", err)
}
found = true
}
} else if unbind := a.updateUnbindServerInfo(serverInfo); unbind {
found = true
}
found = true
}
if !found {
@ -876,15 +865,6 @@ func (a *App) updateServers() error {
return nil
}
func (a *App) serverIndex(address string) int {
for i := range a.servers {
if a.servers[i].Address() == address {
return i
}
}
return -1
}
func (a *App) stopServices() {
ctx, cancel := shutdownContext()
defer cancel()
@ -894,17 +874,6 @@ func (a *App) stopServices() {
}
}
func getNotificationsOptions(v *viper.Viper, l *zap.Logger) *notifications.Options {
cfg := notifications.Options{}
cfg.URL = v.GetString(cfgNATSEndpoint)
cfg.Timeout = fetchNATSTimeout(v, l)
cfg.TLSCertFilepath = v.GetString(cfgNATSTLSCertFile)
cfg.TLSAuthPrivateKeyFilePath = v.GetString(cfgNATSAuthPrivateKeyFile)
cfg.RootCAFiles = v.GetStringSlice(cfgNATSRootCAFiles)
return &cfg
}
func getCacheOptions(v *viper.Viper, l *zap.Logger) *layer.CachesConfig {
cacheCfg := layer.DefaultCachesConfigs(l)
@ -950,15 +919,49 @@ func getMorphPolicyCacheConfig(v *viper.Viper, l *zap.Logger) *cache.Config {
return cacheCfg
}
func getFrostfsIDCacheConfig(v *viper.Viper, l *zap.Logger) *cache.Config {
cacheCfg := cache.DefaultFrostfsIDConfig(l)
cacheCfg.Lifetime = fetchCacheLifetime(v, l, cfgFrostfsIDCacheLifetime, cacheCfg.Lifetime)
cacheCfg.Size = fetchCacheSize(v, l, cfgFrostfsIDCacheSize, cacheCfg.Size)
return cacheCfg
}
func (a *App) initHandler() {
var err error
a.api, err = handler.New(a.log, a.obj, a.nc, a.settings, a.policyStorage, a.frostfsid)
a.api, err = handler.New(a.log, a.obj, a.settings, a.policyStorage, a.frostfsid)
if err != nil {
a.log.Fatal(logs.CouldNotInitializeAPIHandler, zap.Error(err))
}
}
func (a *App) getServer(address string) Server {
for i := range a.servers {
if a.servers[i].Address() == address {
return a.servers[i]
}
}
return nil
}
func (a *App) updateUnbindServerInfo(info ServerInfo) bool {
for i := range a.unbindServers {
if a.unbindServers[i].Address == info.Address {
a.unbindServers[i] = info
return true
}
}
return false
}
func (a *App) getServers() []Server {
a.mu.RLock()
defer a.mu.RUnlock()
return a.servers
}
func (a *App) setRuntimeParameters() {
if len(os.Getenv("GOMEMLIMIT")) != 0 {
// default limit < yaml limit < app env limit < GOMEMLIMIT
@ -974,3 +977,60 @@ func (a *App) setRuntimeParameters() {
zap.Int64("old_value", previous))
}
}
func (a *App) scheduleReconnect(ctx context.Context, srv *http.Server) {
go func() {
t := time.NewTicker(a.settings.reconnectInterval)
defer t.Stop()
for {
select {
case <-t.C:
if a.tryReconnect(ctx, srv) {
return
}
t.Reset(a.settings.reconnectInterval)
case <-ctx.Done():
return
}
}
}()
}
func (a *App) tryReconnect(ctx context.Context, sr *http.Server) bool {
a.mu.Lock()
defer a.mu.Unlock()
a.log.Info(logs.ServerReconnecting)
var failedServers []ServerInfo
for _, serverInfo := range a.unbindServers {
fields := []zap.Field{
zap.String("address", serverInfo.Address), zap.Bool("tls enabled", serverInfo.TLS.Enabled),
zap.String("tls cert", serverInfo.TLS.CertFile), zap.String("tls key", serverInfo.TLS.KeyFile),
}
srv, err := newServer(ctx, serverInfo)
if err != nil {
a.log.Warn(logs.ServerReconnectFailed, zap.Error(err))
failedServers = append(failedServers, serverInfo)
a.metrics.MarkUnhealthy(serverInfo.Address)
continue
}
go func() {
a.log.Info(logs.StartingServer, zap.String("address", srv.Address()))
a.metrics.MarkHealthy(serverInfo.Address)
if err = sr.Serve(srv.Listener()); err != nil && !errors.Is(err, http.ErrServerClosed) {
a.log.Warn(logs.ListenAndServe, zap.Error(err))
a.metrics.MarkUnhealthy(serverInfo.Address)
}
}()
a.servers = append(a.servers, srv)
a.log.Info(logs.ServerReconnectedSuccessfully, fields...)
}
a.unbindServers = failedServers
return len(a.unbindServers) == 0
}


@ -14,14 +14,12 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/handler"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/notifications"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/version"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"git.frostfs.info/TrueCloudLab/zapjournald"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/spf13/pflag"
"github.com/spf13/viper"
"github.com/ssgreg/journald"
@ -59,6 +57,12 @@ const (
defaultConstraintName = "default"
defaultNamespace = ""
defaultReconnectInterval = time.Minute
defaultRetryMaxAttempts = 4
defaultRetryMaxBackoff = 30 * time.Second
defaultRetryStrategy = handler.RetryStrategyExponential
)
var (
@ -84,10 +88,6 @@ const ( // Settings.
cfgTLSKeyFile = "tls.key_file"
cfgTLSCertFile = "tls.cert_file"
// Control API.
cfgControlAuthorizedKeys = "control.authorized_keys"
cfgControlGRPCEndpoint = "control.grpc.endpoint"
// Pool config.
cfgConnectTimeout = "connect_timeout"
cfgStreamTimeout = "stream_timeout"
@ -114,17 +114,11 @@ const ( // Settings.
cfgAccessControlCacheSize = "cache.accesscontrol.size"
cfgMorphPolicyCacheLifetime = "cache.morph_policy.lifetime"
cfgMorphPolicyCacheSize = "cache.morph_policy.size"
cfgFrostfsIDCacheLifetime = "cache.frostfsid.lifetime"
cfgFrostfsIDCacheSize = "cache.frostfsid.size"
cfgAccessBoxCacheRemovingCheckInterval = "cache.accessbox.removing_check_interval"
// NATS.
cfgEnableNATS = "nats.enabled"
cfgNATSEndpoint = "nats.endpoint"
cfgNATSTimeout = "nats.timeout"
cfgNATSTLSCertFile = "nats.cert_file"
cfgNATSAuthPrivateKeyFile = "nats.key_file"
cfgNATSRootCAFiles = "nats.root_ca"
// Policy.
cfgPolicyDefault = "placement_policy.default"
cfgPolicyRegionMapFile = "placement_policy.region_mapping"
@ -166,17 +160,22 @@ const ( // Settings.
cfgKludgeUseDefaultXMLNS = "kludge.use_default_xmlns"
cfgKludgeBypassContentEncodingCheckInChunks = "kludge.bypass_content_encoding_check_in_chunks"
cfgKludgeDefaultNamespaces = "kludge.default_namespaces"
cfgKludgeACLEnabled = "kludge.acl_enabled"
// Web.
cfgWebReadTimeout = "web.read_timeout"
cfgWebReadHeaderTimeout = "web.read_header_timeout"
cfgWebWriteTimeout = "web.write_timeout"
cfgWebIdleTimeout = "web.idle_timeout"
// Retry.
cfgRetryMaxAttempts = "retry.max_attempts"
cfgRetryMaxBackoff = "retry.max_backoff"
cfgRetryStrategy = "retry.strategy"
// Namespaces.
cfgNamespacesConfig = "namespaces.config"
cfgSourceIPHeader = "source_ip_header"
// Command line args.
cmdHelp = "help"
cmdVersion = "version"
@ -222,6 +221,9 @@ const ( // Settings.
// Proxy.
cfgProxyContract = "proxy.contract"
// Server.
cfgReconnectInterval = "reconnect_interval"
// envPrefix is an environment variables prefix used for configuration.
envPrefix = "S3_GW"
)
@ -244,6 +246,15 @@ func fetchConnectTimeout(cfg *viper.Viper) time.Duration {
return connTimeout
}
func fetchReconnectInterval(cfg *viper.Viper) time.Duration {
reconnect := cfg.GetDuration(cfgReconnectInterval)
if reconnect <= 0 {
reconnect = defaultReconnectInterval
}
return reconnect
}
func fetchStreamTimeout(cfg *viper.Viper) time.Duration {
streamTimeout := cfg.GetDuration(cfgStreamTimeout)
if streamTimeout <= 0 {
@ -307,6 +318,33 @@ func fetchSoftMemoryLimit(cfg *viper.Viper) int64 {
return int64(softMemoryLimit)
}
func fetchRetryMaxAttempts(cfg *viper.Viper) int {
val := cfg.GetInt(cfgRetryMaxAttempts)
if val <= 0 {
val = defaultRetryMaxAttempts
}
return val
}
func fetchRetryMaxBackoff(cfg *viper.Viper) time.Duration {
val := cfg.GetDuration(cfgRetryMaxBackoff)
if val <= 0 {
val = defaultRetryMaxBackoff
}
return val
}
func fetchRetryStrategy(cfg *viper.Viper) handler.RetryStrategy {
val := handler.RetryStrategy(cfg.GetString(cfgRetryStrategy))
if val != handler.RetryStrategyExponential && val != handler.RetryStrategyConstant {
val = defaultRetryStrategy
}
return val
}
func fetchDefaultPolicy(l *zap.Logger, cfg *viper.Viper) netmap.PlacementPolicy {
var policy netmap.PlacementPolicy
@ -327,19 +365,6 @@ func fetchDefaultPolicy(l *zap.Logger, cfg *viper.Viper) netmap.PlacementPolicy
return policy
}
func fetchNATSTimeout(cfg *viper.Viper, l *zap.Logger) time.Duration {
timeout := cfg.GetDuration(cfgNATSTimeout)
if timeout <= 0 {
l.Error(logs.InvalidLifetimeUsingDefaultValue,
zap.String("parameter", cfgNATSTimeout),
zap.Duration("value in config", timeout),
zap.Duration("default", notifications.DefaultTimeout))
timeout = notifications.DefaultTimeout
}
return timeout
}
func fetchCacheLifetime(v *viper.Viper, l *zap.Logger, cfgEntry string, defaultValue time.Duration) time.Duration {
if v.IsSet(cfgEntry) {
lifetime := v.GetDuration(cfgEntry)
@ -611,8 +636,9 @@ func fetchPeers(l *zap.Logger, v *viper.Viper) []pool.NodeParam {
return nodes
}
func fetchServers(v *viper.Viper) []ServerInfo {
func fetchServers(v *viper.Viper, log *zap.Logger) []ServerInfo {
var servers []ServerInfo
seen := make(map[string]struct{})
for i := 0; ; i++ {
key := cfgServer + "." + strconv.Itoa(i) + "."
@ -627,29 +653,17 @@ func fetchServers(v *viper.Viper) []ServerInfo {
break
}
if _, ok := seen[serverInfo.Address]; ok {
log.Warn(logs.WarnDuplicateAddress, zap.String("address", serverInfo.Address))
continue
}
seen[serverInfo.Address] = struct{}{}
servers = append(servers, serverInfo)
}
return servers
}
func fetchAuthorizedKeys(l *zap.Logger, v *viper.Viper) keys.PublicKeys {
strKeys := v.GetStringSlice(cfgControlAuthorizedKeys)
pubs := make(keys.PublicKeys, 0, len(strKeys))
for i := range strKeys {
pub, err := keys.NewPublicKeyFromString(strKeys[i])
if err != nil {
l.Warn(logs.FailedToParsePublicKey, zap.String("key", strKeys[i]))
continue
}
pubs = append(pubs, pub)
}
return pubs
}
func newSettings() *viper.Viper {
v := viper.New()
@ -708,8 +722,6 @@ func newSettings() *viper.Viper {
v.SetDefault(cfgPProfAddress, "localhost:8085")
v.SetDefault(cfgPrometheusAddress, "localhost:8086")
v.SetDefault(cfgControlGRPCEndpoint, "localhost:8083")
// frostfs
v.SetDefault(cfgBufferMaxSizeForPut, 1024*1024) // 1mb
@ -717,7 +729,6 @@ func newSettings() *viper.Viper {
v.SetDefault(cfgKludgeUseDefaultXMLNS, false)
v.SetDefault(cfgKludgeBypassContentEncodingCheckInChunks, false)
v.SetDefault(cfgKludgeDefaultNamespaces, defaultDefaultNamespaces)
v.SetDefault(cfgKludgeACLEnabled, false)
// web
v.SetDefault(cfgWebReadHeaderTimeout, defaultReadHeaderTimeout)
@ -735,6 +746,10 @@ func newSettings() *viper.Viper {
// resolve
v.SetDefault(cfgResolveNamespaceHeader, defaultNamespaceHeader)
// retry
v.SetDefault(cfgRetryMaxAttempts, defaultRetryMaxAttempts)
v.SetDefault(cfgRetryMaxBackoff, defaultRetryMaxBackoff)
// Bind flags
if err := bindFlags(v, flags); err != nil {
panic(fmt.Errorf("bind flags: %w", err))


@ -34,6 +34,15 @@ func TestDefaultNamespace(t *testing.T) {
</Part>
</CompleteMultipartUpload>
`
xmlASCII := `<?xml version="1.0" encoding="US-ASCII"?>
<CompleteMultipartUpload>
<Part>
<PartNumber>1</PartNumber>
<ETag>
8b73814bee405ec32b5d1fc88cd5d97a
</ETag>
</Part>
</CompleteMultipartUpload>`
for _, tc := range []struct {
settings *appSettings
@ -82,6 +91,13 @@ func TestDefaultNamespace(t *testing.T) {
input: xmlBodyWithInvalidNamespace,
err: true,
},
{
settings: &appSettings{
defaultXMLNS: true,
},
input: xmlASCII,
err: false,
},
} {
t.Run("", func(t *testing.T) {
model := new(handler.CompleteMultipartUpload)


@ -74,6 +74,7 @@ func newServer(ctx context.Context, serverInfo ServerInfo) (*server, error) {
ln = tls.NewListener(ln, &tls.Config{
GetCertificate: tlsProvider.GetCertificate,
NextProtos: []string{"h2"}, // required to enable HTTP/2 requests in `http.Serve`
})
}

cmd/s3-gw/server_test.go Normal file

@ -0,0 +1,119 @@
package main
import (
"context"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"fmt"
"math/big"
"net"
"net/http"
"os"
"path"
"testing"
"time"
"github.com/stretchr/testify/require"
"golang.org/x/net/http2"
)
const (
expHeaderKey = "Foo"
expHeaderValue = "Bar"
)
func TestHTTP2TLS(t *testing.T) {
ctx := context.Background()
certPath, keyPath := prepareTestCerts(t)
srv := &http.Server{
Handler: http.HandlerFunc(testHandler),
}
tlsListener, err := newServer(ctx, ServerInfo{
Address: ":0",
TLS: ServerTLSInfo{
Enabled: true,
CertFile: certPath,
KeyFile: keyPath,
},
})
require.NoError(t, err)
port := tlsListener.Listener().Addr().(*net.TCPAddr).Port
addr := fmt.Sprintf("https://localhost:%d", port)
go func() {
_ = srv.Serve(tlsListener.Listener())
}()
// Server is running, now send HTTP/2 request
tlsClientConfig := &tls.Config{
InsecureSkipVerify: true,
}
cliHTTP1 := http.Client{Transport: &http.Transport{TLSClientConfig: tlsClientConfig}}
cliHTTP2 := http.Client{Transport: &http2.Transport{TLSClientConfig: tlsClientConfig}}
req, err := http.NewRequest("GET", addr, nil)
require.NoError(t, err)
req.Header[expHeaderKey] = []string{expHeaderValue}
resp, err := cliHTTP1.Do(req)
require.NoError(t, err)
require.Equal(t, http.StatusOK, resp.StatusCode)
resp, err = cliHTTP2.Do(req)
require.NoError(t, err)
require.Equal(t, http.StatusOK, resp.StatusCode)
}
func testHandler(resp http.ResponseWriter, req *http.Request) {
hdr, ok := req.Header[expHeaderKey]
if !ok || len(hdr) != 1 || hdr[0] != expHeaderValue {
resp.WriteHeader(http.StatusBadRequest)
} else {
resp.WriteHeader(http.StatusOK)
}
}
func prepareTestCerts(t *testing.T) (certPath, keyPath string) {
privateKey, err := rsa.GenerateKey(rand.Reader, 2048)
require.NoError(t, err)
template := x509.Certificate{
SerialNumber: big.NewInt(1),
Subject: pkix.Name{CommonName: "localhost"},
NotBefore: time.Now(),
NotAfter: time.Now().Add(time.Hour * 24 * 365),
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
BasicConstraintsValid: true,
}
derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &privateKey.PublicKey, privateKey)
require.NoError(t, err)
dir := t.TempDir()
certPath = path.Join(dir, "cert.pem")
keyPath = path.Join(dir, "key.pem")
certFile, err := os.Create(certPath)
require.NoError(t, err)
defer certFile.Close()
keyFile, err := os.Create(keyPath)
require.NoError(t, err)
defer keyFile.Close()
err = pem.Encode(certFile, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes})
require.NoError(t, err)
err = pem.Encode(keyFile, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(privateKey)})
require.NoError(t, err)
return certPath, keyPath
}


@ -33,11 +33,8 @@ S3_GW_SERVER_1_TLS_ENABLED=true
S3_GW_SERVER_1_TLS_CERT_FILE=/path/to/tls/cert
S3_GW_SERVER_1_TLS_KEY_FILE=/path/to/tls/key
# Control API
# List of hex-encoded public keys that have rights to use the Control Service
S3_GW_CONTROL_AUTHORIZED_KEYS=035839e45d472a3b7769a2a1bd7d54c4ccd4943c3b40f547870e83a8fcbfb3ce11 028f42cfcb74499d7b15b35d9bff260a1c8d27de4f446a627406a382d8961486d6
# Endpoint that is listened by the Control Service
S3_GW_CONTROL_GRPC_ENDPOINT=localhost:8083
# How often to reconnect to the servers
S3_GW_RECONNECT_INTERVAL=1m
# Domains that can be used for virtual-hosted-style access to buckets.
S3_GW_LISTEN_DOMAINS=s3dev.frostfs.devenv
@ -91,7 +88,7 @@ S3_GW_CACHE_BUCKETS_SIZE=1000
# Cache which contains mapping of nice name to object addresses
S3_GW_CACHE_NAMES_LIFETIME=1m
S3_GW_CACHE_NAMES_SIZE=10000
# Cache for system objects in a bucket: bucket settings, notification configuration etc
# Cache for system objects in a bucket: bucket settings etc
S3_GW_CACHE_SYSTEM_LIFETIME=5m
S3_GW_CACHE_SYSTEM_SIZE=100000
# Cache which stores access box with tokens by its address
@ -104,14 +101,9 @@ S3_GW_CACHE_ACCESSCONTROL_SIZE=100000
# Cache which stores list of policy chains
S3_GW_CACHE_MORPH_POLICY_LIFETIME=1m
S3_GW_CACHE_MORPH_POLICY_SIZE=10000
# NATS
S3_GW_NATS_ENABLED=true
S3_GW_NATS_ENDPOINT=nats://nats.frostfs.devenv:4222
S3_GW_NATS_TIMEOUT=30s
S3_GW_NATS_CERT_FILE=/path/to/cert
S3_GW_NATS_KEY_FILE=/path/to/key
S3_GW_NATS_ROOT_CA=/path/to/ca
# Cache which stores frostfsid subject info
S3_GW_CACHE_FROSTFSID_LIFETIME=1m
S3_GW_CACHE_FROSTFSID_SIZE=10000
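To make the lifetime/size pair above concrete, here is a minimal, self-contained sketch of a size-bounded cache with per-entry expiry. It is an illustration only; the ttlCache type and its methods are hypothetical and do not reflect the gateway's internal cache package.
package main
import (
	"fmt"
	"time"
)
type entry struct {
	value   string
	expires time.Time
}
// ttlCache is an illustrative size-bounded cache with a per-entry lifetime.
type ttlCache struct {
	lifetime time.Duration
	size     int
	items    map[string]entry
}
func newTTLCache(lifetime time.Duration, size int) *ttlCache {
	return &ttlCache{lifetime: lifetime, size: size, items: make(map[string]entry)}
}
func (c *ttlCache) Put(key, value string) {
	if len(c.items) >= c.size {
		// naive eviction: drop an arbitrary entry once the size limit is reached
		for k := range c.items {
			delete(c.items, k)
			break
		}
	}
	c.items[key] = entry{value: value, expires: time.Now().Add(c.lifetime)}
}
func (c *ttlCache) Get(key string) (string, bool) {
	e, ok := c.items[key]
	if !ok || time.Now().After(e.expires) {
		delete(c.items, key)
		return "", false
	}
	return e.value, true
}
func main() {
	c := newTTLCache(time.Minute, 10000) // mirrors the lifetime/size values documented above
	c.Put("subject", "info")
	fmt.Println(c.Get("subject"))
}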
# Default policy of placing containers in FrostFS
# If a user sends a request `CreateBucket` and doesn't define policy for placing of a container in FrostFS, the S3 Gateway
@ -162,8 +154,6 @@ S3_GW_KLUDGE_USE_DEFAULT_XMLNS=false
S3_GW_KLUDGE_BYPASS_CONTENT_ENCODING_CHECK_IN_CHUNKS=false
# Namespaces that should be handled as default
S3_GW_KLUDGE_DEFAULT_NAMESPACES="" "root"
# Enable bucket/object ACL support for newly created buckets.
S3_GW_KLUDGE_ACL_ENABLED=false
S3_GW_TRACING_ENABLED=false
S3_GW_TRACING_ENDPOINT="localhost:4318"
@ -214,3 +204,15 @@ S3_GW_PROXY_CONTRACT=proxy.frostfs
# Namespaces configuration
S3_GW_NAMESPACES_CONFIG=namespaces.json
# Custom header to retrieve Source IP
S3_GW_SOURCE_IP_HEADER=Source-Ip
# Retry strategy configuration.
# Maximum number of request attempts. Currently applies only to the bucket settings update request.
S3_GW_RETRY_MAX_ATTEMPTS=4
# Maximum delay before the next attempt.
S3_GW_RETRY_MAX_BACKOFF=30s
# Backoff strategy. `exponential` and `constant` are allowed.
S3_GW_RETRY_STRATEGY=exponential
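As a rough illustration of how these settings interact, the sketch below computes the delay before each attempt for the two allowed strategies. It is a simplified, assumed model (the backoffDelay helper is hypothetical), not the gateway's actual retry implementation.
package main
import (
	"fmt"
	"time"
)
// backoffDelay returns the delay before the given attempt (starting from 1).
// For the constant strategy every delay equals maxBackoff; for the exponential
// strategy delays grow as 1s, 2s, 4s, ... and are capped at maxBackoff.
func backoffDelay(strategy string, attempt int, maxBackoff time.Duration) time.Duration {
	if strategy == "constant" {
		return maxBackoff
	}
	d := time.Second << uint(attempt-1)
	if d > maxBackoff {
		d = maxBackoff
	}
	return d
}
func main() {
	// With the documented defaults: 4 attempts, 30s max backoff, exponential strategy.
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: wait %s\n", attempt, backoffDelay("exponential", attempt, 30*time.Second))
	}
}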


@ -25,6 +25,8 @@ peers:
priority: 2
weight: 0.9
reconnect_interval: 1m
server:
- address: 0.0.0.0:8080
tls:
@ -37,15 +39,6 @@ server:
cert_file: /path/to/cert
key_file: /path/to/key
control:
# List of hex-encoded public keys that have rights to use the Control Service
authorized_keys:
- 035839e45d472a3b7769a2a1bd7d54c4ccd4943c3b40f547870e83a8fcbfb3ce11
- 028f42cfcb74499d7b15b35d9bff260a1c8d27de4f446a627406a382d8961486d6
grpc:
# Endpoint that is listened by the Control Service
endpoint: localhost:8083
# Domains that can be used for virtual-hosted-style access to buckets.
listen_domains:
- s3dev.frostfs.devenv
@ -112,7 +105,7 @@ cache:
buckets:
lifetime: 1m
size: 500
# Cache for system objects in a bucket: bucket settings, notification configuration etc
# Cache for system objects in a bucket: bucket settings etc
system:
lifetime: 2m
size: 1000
@ -129,14 +122,10 @@ cache:
morph_policy:
lifetime: 1m
size: 10000
nats:
enabled: true
endpoint: nats://localhost:4222
timeout: 30s
cert_file: /path/to/cert
key_file: /path/to/key
root_ca: /path/to/ca
# Cache which stores frostfsid subject info
frostfsid:
lifetime: 1m
size: 10000
# Parameters of FrostFS container placement policy
placement_policy:
@ -193,8 +182,6 @@ kludge:
bypass_content_encoding_check_in_chunks: false
# Namespaces that should be handled as default
default_namespaces: [ "", "root" ]
# Enable bucket/object ACL support for newly created buckets.
acl_enabled: false
runtime:
soft_memory_limit: 1gb
@ -253,3 +240,15 @@ proxy:
namespaces:
config: namespaces.json
# Custom header to retrieve Source IP
source_ip_header: "Source-Ip"
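For context, the sketch below shows one way a handler could prefer the configured source IP header over the TCP peer address. It is only an assumed example (sourceIP is a hypothetical helper), not the gateway's actual middleware.
package main
import (
	"log"
	"net"
	"net/http"
)
// sourceIP prefers the configured header (e.g. "Source-Ip") and falls back
// to the host part of the connection's remote address.
func sourceIP(r *http.Request, headerName string) string {
	if headerName != "" {
		if v := r.Header.Get(headerName); v != "" {
			return v
		}
	}
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return r.RemoteAddr
	}
	return host
}
func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Echo the detected client address; a real gateway would log or authorize with it.
		_, _ = w.Write([]byte(sourceIP(r, "Source-Ip")))
	})
	log.Fatal(http.ListenAndServe("localhost:8080", handler))
}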
# Retry strategy configuration.
retry:
# Maximum number of request attempts. Currently applies only to the bucket settings update request.
max_attempts: 4
# Maximum delay before the next attempt.
max_backoff: 30s
# Backoff strategy. `exponential` and `constant` are allowed.
strategy: exponential


@ -54,11 +54,6 @@ func (g *GateData) SessionTokenForDelete() *session.Container {
return g.containerSessionToken(session.VerbContainerDelete)
}
// SessionTokenForSetEACL returns the first suitable container session context for SetEACL operation.
func (g *GateData) SessionTokenForSetEACL() *session.Container {
return g.containerSessionToken(session.VerbContainerSetEACL)
}
// SessionToken returns the first container session context.
func (g *GateData) SessionToken() *session.Container {
if len(g.SessionTokens) != 0 {
@ -78,7 +73,7 @@ func (g *GateData) containerSessionToken(verb session.ContainerVerb) *session.Co
func isAppropriateContainerContext(tok *session.Container, verb session.ContainerVerb) bool {
switch verb {
case session.VerbContainerSetEACL, session.VerbContainerDelete, session.VerbContainerPut:
case session.VerbContainerDelete, session.VerbContainerPut:
return tok.AssertVerb(verb)
default:
return false


@ -1,6 +1,7 @@
package accessbox
import (
"encoding/hex"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
@ -170,3 +171,136 @@ func TestUnknownKey(t *testing.T) {
_, err = box.GetTokens(wrongCred)
require.Error(t, err)
}
func TestGateDataSessionToken(t *testing.T) {
cred, err := keys.NewPrivateKey()
require.NoError(t, err)
var tkn bearer.Token
gate := NewGateData(cred.PublicKey(), &tkn)
require.Equal(t, cred.PublicKey(), gate.GateKey)
assertBearerToken(t, tkn, *gate.BearerToken)
t.Run("session token for put", func(t *testing.T) {
gate.SessionTokens = []*session.Container{}
sessionTkn := gate.SessionTokenForPut()
require.Nil(t, sessionTkn)
sessionTknPut := new(session.Container)
sessionTknPut.ForVerb(session.VerbContainerPut)
gate.SessionTokens = []*session.Container{sessionTknPut}
sessionTkn = gate.SessionTokenForPut()
require.Equal(t, sessionTknPut, sessionTkn)
})
t.Run("session token for delete", func(t *testing.T) {
gate.SessionTokens = []*session.Container{}
sessionTkn := gate.SessionTokenForDelete()
require.Nil(t, sessionTkn)
sessionTknDelete := new(session.Container)
sessionTknDelete.ForVerb(session.VerbContainerDelete)
gate.SessionTokens = []*session.Container{sessionTknDelete}
sessionTkn = gate.SessionTokenForDelete()
require.Equal(t, sessionTknDelete, sessionTkn)
})
t.Run("session token", func(t *testing.T) {
gate.SessionTokens = []*session.Container{}
sessionTkn := gate.SessionToken()
require.Nil(t, sessionTkn)
sessionTknPut := new(session.Container)
sessionTknPut.ForVerb(session.VerbContainerPut)
gate.SessionTokens = []*session.Container{sessionTknPut}
sessionTkn = gate.SessionToken()
require.Equal(t, sessionTkn, sessionTknPut)
})
}
func TestGetBox(t *testing.T) {
cred, err := keys.NewPrivateKey()
require.NoError(t, err)
var tkn bearer.Token
gate := NewGateData(cred.PublicKey(), &tkn)
secret := []byte("secret")
accessBox, _, err := PackTokens([]*GateData{gate}, secret)
require.NoError(t, err)
box, err := accessBox.GetBox(cred)
require.NoError(t, err)
require.Equal(t, hex.EncodeToString(secret), box.Gate.SecretKey)
}
func TestAccessBox(t *testing.T) {
cred, err := keys.NewPrivateKey()
require.NoError(t, err)
var tkn bearer.Token
gate := NewGateData(cred.PublicKey(), &tkn)
accessBox, _, err := PackTokens([]*GateData{gate}, nil)
require.NoError(t, err)
t.Run("invalid owner", func(t *testing.T) {
randomKey, err := keys.NewPrivateKey()
require.NoError(t, err)
_, err = accessBox.GetTokens(randomKey)
require.Error(t, err)
_, err = accessBox.GetBox(randomKey)
require.Error(t, err)
})
t.Run("empty placement policy", func(t *testing.T) {
policy, err := accessBox.GetPlacementPolicy()
require.NoError(t, err)
require.Nil(t, policy)
})
t.Run("get correct placement policy", func(t *testing.T) {
policy := &AccessBox_ContainerPolicy{LocationConstraint: "locationConstraint"}
accessBox.ContainerPolicy = []*AccessBox_ContainerPolicy{policy}
policies, err := accessBox.GetPlacementPolicy()
require.NoError(t, err)
require.Len(t, policies, 1)
require.Equal(t, policy.LocationConstraint, policies[0].LocationConstraint)
})
t.Run("get incorrect placement policy", func(t *testing.T) {
policy := &AccessBox_ContainerPolicy{
LocationConstraint: "locationConstraint",
Policy: []byte("policy"),
}
accessBox.ContainerPolicy = []*AccessBox_ContainerPolicy{policy}
_, err = accessBox.GetPlacementPolicy()
require.Error(t, err)
_, err = accessBox.GetBox(cred)
require.Error(t, err)
})
t.Run("empty seed key", func(t *testing.T) {
accessBox.SeedKey = nil
_, err = accessBox.GetTokens(cred)
require.Error(t, err)
_, err = accessBox.GetBox(cred)
require.Error(t, err)
})
t.Run("invalid gate key", func(t *testing.T) {
gate = &GateData{
BearerToken: &tkn,
GateKey: &keys.PublicKey{},
}
_, _, err = PackTokens([]*GateData{gate}, nil)
require.Error(t, err)
})
}


@ -22,7 +22,7 @@ import (
type (
// Credentials is a bearer token get/put interface.
Credentials interface {
GetBox(context.Context, oid.Address) (*accessbox.Box, error)
GetBox(context.Context, oid.Address) (*accessbox.Box, []object.Attribute, error)
Put(context.Context, cid.ID, CredentialsParam) (oid.Address, error)
Update(context.Context, oid.Address, CredentialsParam) (oid.Address, error)
}
@ -86,13 +86,13 @@ type FrostFS interface {
// prevented the object from being created.
CreateObject(context.Context, PrmObjectCreate) (oid.ID, error)
// GetCredsPayload gets payload of the credential object from FrostFS network.
// GetCredsObject gets the credential object from FrostFS network.
// It searches by the system name and selects the result using a CRDT 2PSet. If the CRDT header
// is absent, it heads the object by address.
//
// It returns exactly one non-nil value. It returns any error encountered which
// prevented the object payload from being read.
GetCredsPayload(context.Context, oid.Address) ([]byte, error)
GetCredsObject(context.Context, oid.Address) (*object.Object, error)
}
var (
@ -115,72 +115,72 @@ func New(cfg Config) Credentials {
}
}
func (c *cred) GetBox(ctx context.Context, addr oid.Address) (*accessbox.Box, error) {
func (c *cred) GetBox(ctx context.Context, addr oid.Address) (*accessbox.Box, []object.Attribute, error) {
cachedBoxValue := c.cache.Get(addr)
if cachedBoxValue != nil {
return c.checkIfCredentialsAreRemoved(ctx, addr, cachedBoxValue)
}
box, err := c.getAccessBox(ctx, addr)
box, attrs, err := c.getAccessBox(ctx, addr)
if err != nil {
return nil, fmt.Errorf("get access box: %w", err)
return nil, nil, fmt.Errorf("get access box: %w", err)
}
cachedBox, err := box.GetBox(c.key)
if err != nil {
return nil, fmt.Errorf("get gate box: %w", err)
return nil, nil, fmt.Errorf("get gate box: %w", err)
}
c.putBoxToCache(addr, cachedBox)
c.putBoxToCache(addr, cachedBox, attrs)
return cachedBox, nil
return cachedBox, attrs, nil
}
func (c *cred) checkIfCredentialsAreRemoved(ctx context.Context, addr oid.Address, cachedBoxValue *cache.AccessBoxCacheValue) (*accessbox.Box, error) {
func (c *cred) checkIfCredentialsAreRemoved(ctx context.Context, addr oid.Address, cachedBoxValue *cache.AccessBoxCacheValue) (*accessbox.Box, []object.Attribute, error) {
if time.Since(cachedBoxValue.PutTime) < c.removingCheckDuration {
return cachedBoxValue.Box, nil
return cachedBoxValue.Box, cachedBoxValue.Attributes, nil
}
box, err := c.getAccessBox(ctx, addr)
box, attrs, err := c.getAccessBox(ctx, addr)
if err != nil {
if client.IsErrObjectAlreadyRemoved(err) {
c.cache.Delete(addr)
return nil, fmt.Errorf("get access box: %w", err)
return nil, nil, fmt.Errorf("get access box: %w", err)
}
return cachedBoxValue.Box, nil
return cachedBoxValue.Box, cachedBoxValue.Attributes, nil
}
cachedBox, err := box.GetBox(c.key)
if err != nil {
c.cache.Delete(addr)
return nil, fmt.Errorf("get gate box: %w", err)
return nil, nil, fmt.Errorf("get gate box: %w", err)
}
// we need this to reset PutTime
// so that we don't check for removal again until removingCheckDuration passes
c.putBoxToCache(addr, cachedBox)
c.putBoxToCache(addr, cachedBox, attrs)
return cachedBoxValue.Box, nil
return cachedBoxValue.Box, attrs, nil
}
func (c *cred) putBoxToCache(addr oid.Address, box *accessbox.Box) {
if err := c.cache.Put(addr, box); err != nil {
func (c *cred) putBoxToCache(addr oid.Address, box *accessbox.Box, attrs []object.Attribute) {
if err := c.cache.Put(addr, box, attrs); err != nil {
c.log.Warn(logs.CouldntPutAccessBoxIntoCache, zap.String("address", addr.EncodeToString()))
}
}
func (c *cred) getAccessBox(ctx context.Context, addr oid.Address) (*accessbox.AccessBox, error) {
data, err := c.frostFS.GetCredsPayload(ctx, addr)
func (c *cred) getAccessBox(ctx context.Context, addr oid.Address) (*accessbox.AccessBox, []object.Attribute, error) {
obj, err := c.frostFS.GetCredsObject(ctx, addr)
if err != nil {
return nil, fmt.Errorf("read payload: %w", err)
return nil, nil, fmt.Errorf("read payload and attributes: %w", err)
}
// decode access box
var box accessbox.AccessBox
if err = box.Unmarshal(data); err != nil {
return nil, fmt.Errorf("unmarhal access box: %w", err)
if err = box.Unmarshal(obj.Payload()); err != nil {
return nil, nil, fmt.Errorf("unmarhal access box: %w", err)
}
return &box, nil
return &box, obj.Attributes(), nil
}
func (c *cred) Put(ctx context.Context, idCnr cid.ID, prm CredentialsParam) (oid.Address, error) {


@ -11,6 +11,8 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/creds/accessbox"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
@ -19,24 +21,68 @@ import (
)
type frostfsMock struct {
objects map[oid.Address][]byte
objects map[oid.Address][]*object.Object
errors map[oid.Address]error
}
func (f *frostfsMock) CreateObject(context.Context, PrmObjectCreate) (oid.ID, error) {
panic("implement me for test")
func newFrostfsMock() *frostfsMock {
return &frostfsMock{
objects: map[oid.Address][]*object.Object{},
errors: map[oid.Address]error{},
}
}
func (f *frostfsMock) GetCredsPayload(_ context.Context, address oid.Address) ([]byte, error) {
func (f *frostfsMock) CreateObject(_ context.Context, prm PrmObjectCreate) (oid.ID, error) {
var obj object.Object
obj.SetPayload(prm.Payload)
obj.SetOwnerID(prm.Creator)
obj.SetContainerID(prm.Container)
a := object.NewAttribute()
a.SetKey(object.AttributeFilePath)
a.SetValue(prm.Filepath)
prm.CustomAttributes = append(prm.CustomAttributes, *a)
obj.SetAttributes(prm.CustomAttributes...)
if prm.NewVersionFor != nil {
var addr oid.Address
addr.SetObject(*prm.NewVersionFor)
addr.SetContainer(prm.Container)
_, ok := f.objects[addr]
if !ok {
return oid.ID{}, errors.New("not found")
}
objID := oidtest.ID()
obj.SetID(objID)
f.objects[addr] = append(f.objects[addr], &obj)
return objID, nil
}
objID := oidtest.ID()
obj.SetID(objID)
var addr oid.Address
addr.SetObject(objID)
addr.SetContainer(prm.Container)
f.objects[addr] = []*object.Object{&obj}
return objID, nil
}
func (f *frostfsMock) GetCredsObject(_ context.Context, address oid.Address) (*object.Object, error) {
if err := f.errors[address]; err != nil {
return nil, err
}
data, ok := f.objects[address]
objects, ok := f.objects[address]
if !ok {
return nil, errors.New("not found")
}
return data, nil
return objects[len(objects)-1], nil
}
func TestRemovingAccessBox(t *testing.T) {
@ -59,9 +105,14 @@ func TestRemovingAccessBox(t *testing.T) {
data, err := accessBox.Marshal()
require.NoError(t, err)
var obj object.Object
obj.SetPayload(data)
addr := oidtest.Address()
obj.SetID(addr.Object())
obj.SetContainerID(addr.Container())
frostfs := &frostfsMock{
objects: map[oid.Address][]byte{addr: data},
objects: map[oid.Address][]*object.Object{addr: {&obj}},
errors: map[oid.Address]error{},
}
@ -78,14 +129,201 @@ func TestRemovingAccessBox(t *testing.T) {
creds := New(cfg)
_, err = creds.GetBox(ctx, addr)
_, _, err = creds.GetBox(ctx, addr)
require.NoError(t, err)
frostfs.errors[addr] = errors.New("network error")
_, err = creds.GetBox(ctx, addr)
_, _, err = creds.GetBox(ctx, addr)
require.NoError(t, err)
frostfs.errors[addr] = &apistatus.ObjectAlreadyRemoved{}
_, err = creds.GetBox(ctx, addr)
_, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
}
func TestGetBox(t *testing.T) {
ctx := context.Background()
key, err := keys.NewPrivateKey()
require.NoError(t, err)
gateData := []*accessbox.GateData{{
BearerToken: &bearer.Token{},
GateKey: key.PublicKey(),
}}
secret := []byte("secret")
accessBox, _, err := accessbox.PackTokens(gateData, secret)
require.NoError(t, err)
data, err := accessBox.Marshal()
require.NoError(t, err)
var attr object.Attribute
attr.SetKey("key")
attr.SetValue("value")
attrs := []object.Attribute{attr}
cfg := Config{
CacheConfig: &cache.Config{
Size: 10,
Lifetime: 24 * time.Hour,
Logger: zaptest.NewLogger(t),
},
}
t.Run("no removing check, accessbox from cache", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = time.Hour
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
addr, err := creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox})
require.NoError(t, err)
_, _, err = creds.GetBox(ctx, addr)
require.NoError(t, err)
frostfs.errors[addr] = &apistatus.ObjectAlreadyRemoved{}
_, _, err = creds.GetBox(ctx, addr)
require.NoError(t, err)
})
t.Run("error while getting box from frostfs", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
addr, err := creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox})
require.NoError(t, err)
frostfs.errors[addr] = errors.New("network error")
_, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
})
t.Run("invalid key", func(t *testing.T) {
frostfs := newFrostfsMock()
var obj object.Object
obj.SetPayload(data)
addr := oidtest.Address()
frostfs.objects[addr] = []*object.Object{&obj}
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = &keys.PrivateKey{}
creds := New(cfg)
_, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
})
t.Run("invalid payload", func(t *testing.T) {
frostfs := newFrostfsMock()
var obj object.Object
obj.SetPayload([]byte("invalid"))
addr := oidtest.Address()
frostfs.objects[addr] = []*object.Object{&obj}
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
_, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
})
t.Run("check attributes update", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
addr, err := creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox})
require.NoError(t, err)
_, boxAttrs, err := creds.GetBox(ctx, addr)
require.NoError(t, err)
_, err = creds.Update(ctx, addr, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox, CustomAttributes: attrs})
require.NoError(t, err)
_, newBoxAttrs, err := creds.GetBox(ctx, addr)
require.NoError(t, err)
require.Equal(t, len(boxAttrs)+1, len(newBoxAttrs))
})
t.Run("check accessbox update", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
addr, err := creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}, AccessBox: accessBox})
require.NoError(t, err)
box, _, err := creds.GetBox(ctx, addr)
require.NoError(t, err)
require.Equal(t, hex.EncodeToString(secret), box.Gate.SecretKey)
newKey, err := keys.NewPrivateKey()
require.NoError(t, err)
newGateData := []*accessbox.GateData{{
BearerToken: &bearer.Token{},
GateKey: newKey.PublicKey(),
}}
newSecret := []byte("new-secret")
newAccessBox, _, err := accessbox.PackTokens(newGateData, newSecret)
require.NoError(t, err)
_, err = creds.Update(ctx, addr, CredentialsParam{Keys: keys.PublicKeys{newKey.PublicKey()}, AccessBox: newAccessBox})
require.NoError(t, err)
_, _, err = creds.GetBox(ctx, addr)
require.Error(t, err)
cfg.Key = newKey
newCreds := New(cfg)
box, _, err = newCreds.GetBox(ctx, addr)
require.NoError(t, err)
require.Equal(t, hex.EncodeToString(newSecret), box.Gate.SecretKey)
})
t.Run("empty keys", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
_, err = creds.Put(ctx, cnrID, CredentialsParam{AccessBox: accessBox})
require.ErrorIs(t, err, ErrEmptyPublicKeys)
})
t.Run("empty accessbox", func(t *testing.T) {
frostfs := newFrostfsMock()
cfg.FrostFS = frostfs
cfg.RemovingCheckAfterDurations = 0
cfg.Key = key
creds := New(cfg)
cnrID := cidtest.ID()
_, err = creds.Put(ctx, cnrID, CredentialsParam{Keys: keys.PublicKeys{key.PublicKey()}})
require.ErrorIs(t, err, ErrEmptyBearerToken)
})
}

Some files were not shown because too many files have changed in this diff.