Compare commits

...

32 commits

Author SHA1 Message Date
406075aebb [#236] Add support for zapjournald logger configuration
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-11-13 16:31:11 +03:00
fe796ba538 [#217] Consider Copy-Source-SSE-* headers during copy
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-11-13 13:22:58 +00:00
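
For context, reading the copy-source SSE-C parameters boils down to three standard AWS header names; a minimal sketch (the helper itself is illustrative, not the gateway's actual code):

```go
package sketch

import "net/http"

// copySourceSSEC reads the copy-source SSE-C headers. The header names are
// the standard AWS ones; this helper is illustrative, not the gateway's code.
func copySourceSSEC(h http.Header) (alg, key, keyMD5 string) {
	alg = h.Get("X-Amz-Copy-Source-Server-Side-Encryption-Customer-Algorithm")
	key = h.Get("X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key")
	keyMD5 = h.Get("X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key-MD5")
	return alg, key, keyMD5
}
```
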
5ee73fad6a [#248] Correct NextVersionIDMarker in listing versions
Although the spec https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html#API_ListObjectVersions_ResponseElements
says that
"When the number of responses exceeds the value of MaxKeys,
NextVersionIdMarker specifies the first object version not returned
that satisfies the search criteria. Use this value for the
version-id-marker request parameter in a subsequent request."
the actual behavior of AWS S3 is to return NextVersionIdMarker as the last returned object version.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-31 17:36:24 +03:00
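
That is, when the listing is truncated, AWS points the marker at the last version it returned rather than the first one it omitted. A minimal sketch of that rule (types are hypothetical stand-ins):

```go
package sketch

// objectVersion is a hypothetical stand-in for a listed object version.
type objectVersion struct {
	Key       string
	VersionID string
}

// nextMarkers mimics observed AWS behavior: when the page is truncated, the
// next markers point at the last returned version, not the first omitted one.
func nextMarkers(returned []objectVersion, truncated bool) (nextKeyMarker, nextVersionIDMarker string) {
	if !truncated || len(returned) == 0 {
		return "", ""
	}
	last := returned[len(returned)-1]
	return last.Key, last.VersionID
}
```
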
890a8ed237 [#227] Add versionID header after complete multipart
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-10-31 14:07:08 +00:00
0bed25816c [#224] Add conditional escaping for object name
Chi gives inconsistent results in terms of whether
the returned strings are URL-encoded or not.
See:
* https://github.com/go-chi/chi/issues/641
* https://github.com/go-chi/chi/issues/642

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-31 13:58:51 +00:00
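
A hedged sketch of one way to make the name deterministic: unescape only when the request actually carried an encoded path. Whether the gateway uses exactly this check is an assumption:

```go
package sketch

import (
	"net/http"
	"net/url"
)

// objectName returns a consistently decoded object name. chi may hand back
// either the raw or the decoded path, so we unescape only when the request
// URL actually had an escaped form (URL.RawPath is non-empty in that case).
func objectName(r *http.Request, routed string) (string, error) {
	if r.URL.RawPath == "" {
		return routed, nil // nothing was escaped; the routed value is final
	}
	return url.PathUnescape(routed)
}
```
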
b169c5e6c3 [#239] Update test that checks for goroutine leaks
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-31 13:51:23 +00:00
122af0b5a7 [#220] Support configuring web server timeout params
Set IdleTimeout and ReadHeaderTimeout to `30s`.

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-31 13:48:08 +00:00
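
With net/http this is just two fields on the server; a minimal sketch using the `30s` defaults from this commit (address and handler are illustrative):

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr:              ":8080", // illustrative listen address
		Handler:           http.NewServeMux(),
		IdleTimeout:       30 * time.Second, // default set by this commit
		ReadHeaderTimeout: 30 * time.Second, // bounds slow header writes
	}
	log.Fatal(srv.ListenAndServe())
}
```
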
cf13aae342 [#225] Add default storage class to responses
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-10-31 13:37:07 +00:00
0938d7ee82 [#226] Fix status code in GET/HEAD delete marker
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-27 10:58:57 +03:00
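
Per the TestGetHeadDeleteMarker test added further down, S3 answers a GET on a delete marker with 404 and a HEAD with 405, both carrying x-amz-delete-marker: true. A minimal sketch:

```go
package sketch

import "net/http"

// writeDeleteMarkerStatus answers a request that resolved to a delete marker:
// 405 for HEAD, 404 for GET, and x-amz-delete-marker: true in both cases.
func writeDeleteMarkerStatus(w http.ResponseWriter, isHead bool) {
	w.Header().Set("X-Amz-Delete-Marker", "true")
	if isHead {
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}
	w.WriteHeader(http.StatusNotFound)
}
```
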
4f5f5fb5c8 [#222] Fix marshaling errors in DeleteObjects method
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-25 14:54:02 +00:00
25bb581fee [#205] Add md5 checksum in header
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-10-25 11:04:19 +03:00
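
As the api/data diff below shows, the ETag becomes the hex MD5 of the payload when MD5 mode is enabled and falls back to the FrostFS hash sum otherwise. The hashing itself is plain standard library:

```go
package sketch

import (
	"crypto/md5"
	"encoding/hex"
)

// md5ETag computes the MD5-based ETag for a payload, matching the
// ObjectInfo.ETag(md5Enabled) fallback logic shown in the diff below.
func md5ETag(payload []byte) string {
	sum := md5.Sum(payload)
	return hex.EncodeToString(sum[:])
}
```
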
8d6aa0d40a [#243] Fix list object versions marker param
According to https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
we have to use `key-marker`

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-18 10:35:47 +03:00
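
The errors.go hunk below also adds VersionIDMarkerWithoutKeyMarker, since a version-id-marker is only meaningful relative to a key. A hedged sketch of the parsing rule (the error variable is local to this example):

```go
package sketch

import (
	"errors"
	"net/url"
)

// errVersionIDMarkerWithoutKeyMarker mirrors the API error added in this
// changeset; it is a local stand-in here.
var errVersionIDMarkerWithoutKeyMarker = errors.New("a version-id marker cannot be specified without a key marker")

// parseVersionMarkers reads the list-object-versions markers and rejects a
// version-id-marker that arrives without a key-marker.
func parseVersionMarkers(q url.Values) (keyMarker, versionIDMarker string, err error) {
	keyMarker = q.Get("key-marker")
	versionIDMarker = q.Get("version-id-marker")
	if versionIDMarker != "" && keyMarker == "" {
		return "", "", errVersionIDMarkerWithoutKeyMarker
	}
	return keyMarker, versionIDMarker, nil
}
```
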
7e91f62c28 [#223] Add storing content language
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-10-17 14:42:02 +00:00
01323ca8e0 [#216] Add check for tag key uniqueness
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-10-17 14:40:29 +00:00
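
A duplicate-key check is a one-pass map scan; a minimal sketch matching the "Cannot provide multiple Tags with the same key" error added below (the Tag type is a stand-in):

```go
package sketch

import "fmt"

// Tag is a minimal stand-in for an S3 object tag.
type Tag struct{ Key, Value string }

// checkTagKeyUniqueness rejects tag sets that reuse a key, matching the
// "Cannot provide multiple Tags with the same key" error added in this diff.
func checkTagKeyUniqueness(tags []Tag) error {
	seen := make(map[string]struct{}, len(tags))
	for _, t := range tags {
		if _, dup := seen[t.Key]; dup {
			return fmt.Errorf("duplicate tag key %q", t.Key)
		}
		seen[t.Key] = struct{}{}
	}
	return nil
}
```
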
298662df9d [#221] Expand xmlns field ignore
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-10-13 16:21:13 +03:00
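
One standard-library way to tolerate bodies that omit the S3 xmlns is to give the decoder a default namespace, which is what the per-request NewXMLDecoder hook in the diffs below enables; whether the gateway relies on exactly Decoder.DefaultSpace is an assumption:

```go
package sketch

import (
	"encoding/xml"
	"io"
)

// newLenientXMLDecoder builds a decoder that treats unadorned tags as if they
// were in the S3 namespace, so structs annotated with the full xmlns still match.
func newLenientXMLDecoder(r io.Reader) *xml.Decoder {
	dec := xml.NewDecoder(r)
	dec.DefaultSpace = "http://s3.amazonaws.com/doc/2006-03-01/"
	return dec
}
```
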
10a03faeb4 [#197] Update CHANGELOG.md
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-11 12:32:48 +00:00
65412ce1d3 [#197] Configure buffer max size for PUT
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-11 12:32:48 +00:00
7de73f6b73 [#197] Disable homomorphic hash for PUT
Disable TZ hash for PUT if it's disabled for the container itself

Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-11 12:32:48 +00:00
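
A hedged sketch of the rule, keyed off the HomomorphicHashDisabled flag this changeset adds to BucketInfo (the hasher constructor is hypothetical):

```go
package sketch

import (
	"crypto/sha256"
	"hash"
)

// bucketInfo mirrors the HomomorphicHashDisabled field added in this changeset.
type bucketInfo struct{ HomomorphicHashDisabled bool }

// newTZHasher stands in for the real Tillich-Zemor hasher; hypothetical here.
func newTZHasher() hash.Hash { return sha256.New() }

// putHashers picks the hashers for a PUT: the regular checksum always, the
// homomorphic (TZ) hash only when the container has not disabled it.
func putHashers(b bucketInfo) []hash.Hash {
	hs := []hash.Hash{sha256.New()}
	if !b.HomomorphicHashDisabled {
		hs = append(hs, newTZHasher())
	}
	return hs
}
```
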
8fc9d93f37 [#197] Update SDK
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
2023-10-11 12:32:48 +00:00
7301ca52ab [#154] Rename OwnerPublicKey to SeedKey
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-10-06 14:00:37 +03:00
e1ec61ddfc [#215] Fix get latest version node
When an object version is requested,
the node of a secondary object may be returned.
Now we choose the right node ourselves.

Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-10-06 09:21:41 +00:00
e3f2d59565 [#154] Rename access key to secret key
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-10-06 09:20:39 +00:00
c4af1dc4ad [#171] Update error message for malformed auth header
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-10-04 11:13:12 +00:00
b8c93ed391 [#172] Convert handler config to interface
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-10-04 11:01:27 +00:00
51e591877b [#207] Fix list parts with empty list
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-09-21 11:27:20 +03:00
a4c612614a [#210] Fix multipart object reader
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-09-19 16:30:08 +03:00
12cf29aed2 [#207] Fix part-number-marker handling
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-09-19 12:43:07 +03:00
16840f1256 [#177] Add release instructions page
Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2023-09-07 12:32:12 +00:00
066b9a0250 [#142] Add trace ID into log when tracing is enabled
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-09-07 14:19:37 +03:00
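
Assuming OpenTelemetry tracing, pulling the trace ID out of the request context and into a zap field takes a few lines (the helper is illustrative):

```go
package sketch

import (
	"context"

	"go.opentelemetry.io/otel/trace"
	"go.uber.org/zap"
)

// withTraceID returns a logger that carries trace_id when the context holds a
// span with a valid trace ID; otherwise the logger is returned unchanged.
func withTraceID(ctx context.Context, log *zap.Logger) *zap.Logger {
	if sc := trace.SpanContextFromContext(ctx); sc.HasTraceID() {
		return log.With(zap.String("trace_id", sc.TraceID().String()))
	}
	return log
}
```
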
54e1c333a1 [#152] authmate: Add basic error types and exit codes
Signed-off-by: Artem Tataurov <a.tataurov@yadro.com>
2023-09-06 23:56:56 +03:00
69227b4845 [#199] Add metrics for HTTP endpoint status
Signed-off-by: Marina Biryukova <m.biryukova@yadro.com>
2023-09-05 13:30:27 +00:00
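
In the spirit of this commit, a counter keyed by endpoint and status code covers most of the need; the metric name and labels below are illustrative, not the gateway's actual ones:

```go
package sketch

import (
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
)

// httpRequests counts requests by endpoint and HTTP status; the metric name
// and labels are illustrative.
var httpRequests = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "s3_http_requests_total",
		Help: "HTTP requests by endpoint and status code.",
	},
	[]string{"endpoint", "status"},
)

func init() { prometheus.MustRegister(httpRequests) }

// observe records one finished request.
func observe(endpoint string, status int) {
	httpRequests.WithLabelValues(endpoint, strconv.Itoa(status)).Inc()
}
```
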
c66c09765d [#196] Support soft memory limit setting
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2023-09-05 13:13:56 +00:00
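
Go 1.19+ exposes this directly via runtime/debug; a minimal sketch (the config plumbing around it is hypothetical):

```go
package sketch

import "runtime/debug"

// setSoftMemoryLimit applies a soft memory limit to the Go runtime. A value
// of zero or less leaves the runtime default untouched in this sketch.
func setSoftMemoryLimit(limitBytes int64) {
	if limitBytes > 0 {
		debug.SetMemoryLimit(limitBytes)
	}
}
```
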
86 changed files with 2381 additions and 728 deletions


@@ -13,10 +13,16 @@ This document outlines major changes between releases.
- Replace part on re-upload when use multipart upload (#176)
- Fix goroutine leak on put object error (#178)
- Fix parsing signed headers in presigned urls (#182)
- Fix url escaping (#188)
- Fix url escaping (#188, #224)
- Use correct keys in `list-multipart-uploads` response (#185)
- Fix parsing `key-marker` for object list versions (#243)
- Fix marshaling errors in `DeleteObjects` method (#222)
- Fix status code in GET/HEAD delete marker (#226)
- Fix `NextVersionIDMarker` in `list-object-versions` (#248)
### Added
- Add `trace_id` value into log record when tracing is enabled (#142)
- Add basic error types and exit codes to `frostfs-s3-authmate` (#152)
- Add a metric with addresses of nodes of the same and highest priority that are currently healthy (#51)
- Support dump metrics descriptions (#80)
- Add `copies_numbers` section to `placement_policy` in config file and support vectors of copies numbers (#70, #101)
@@ -28,6 +34,10 @@ This document outlines major changes between releases.
- Implement chunk uploading (#106)
- Add new `kludge.bypass_content_encoding_check_in_chunks` config param (#146)
- Add new `frostfs.client_cut` config param (#192)
- Add new `frostfs.buffer_max_size_for_put` config param and sync TZ hash for PUT operations (#197)
- Add `X-Amz-Version-Id` header after complete multipart upload (#227)
- Add handling of `X-Amz-Copy-Source-Server-Side-Encryption-Customer-*` headers during copy (#217)
- Add new `logger.destination` config param (#236)
### Changed
- Update prometheus to v1.15.0 (#94)
@@ -42,9 +52,12 @@ This document outlines major changes between releases.
- Complete multipart upload doesn't make an unnecessary copy now. Thus, the total time of multipart upload was halved (#63)
- Use gate key to form object owner (#175)
- Apply placement policies and copies if there is at least one valid value (#168)
- Generalise config param `use_default_xmlns_for_complete_multipart` to `use_default_xmlns` so that the default xmlns is used for all requests (#221)
- Set server IdleTimeout and ReadHeaderTimeout to `30s` and allow configuring them (#220)
### Removed
- Drop `tree.service` param (now endpoints from `peers` section are used) (#133)
- Drop sending whitespace characters during complete multipart upload and related config param `kludge.complete_multipart_keepalive` (#227)
## [0.27.0] - Karpinsky - 2023-07-12


@@ -261,7 +261,7 @@ func (c *center) checkFormData(r *http.Request) (*Box, error) {
return nil, fmt.Errorf("get box: %w", err)
}
secret := box.Gate.AccessKey
secret := box.Gate.SecretKey
service, region := submatches["service"], submatches["region"]
signature := signStr(secret, service, region, signatureDateTime, policy)
@@ -294,7 +294,7 @@ func cloneRequest(r *http.Request, authHeader *AuthHeader) *http.Request {
}
func (c *center) checkSign(authHeader *AuthHeader, box *accessbox.Box, request *http.Request, signatureDateTime time.Time) error {
awsCreds := credentials.NewStaticCredentials(authHeader.AccessKeyID, box.Gate.AccessKey, "")
awsCreds := credentials.NewStaticCredentials(authHeader.AccessKeyID, box.Gate.SecretKey, "")
signer := v4.NewSigner(awsCreds)
signer.DisableURIPathEscaping = true


@@ -77,7 +77,7 @@ func TestCheckSign(t *testing.T) {
expBox := &accessbox.Box{
Gate: &accessbox.GateData{
AccessKey: secretKey,
SecretKey: secretKey,
},
}


@@ -29,6 +29,7 @@ type (
Created time.Time
LocationConstraint string
ObjectLockEnabled bool
HomomorphicHashDisabled bool
}
// ObjectInfo holds S3 object data.
@@ -45,6 +46,7 @@ type (
Created time.Time
CreationEpoch uint64
HashSum string
MD5Sum string
Owner user.ID
Headers map[string]string
}
@@ -115,6 +117,13 @@ func (o *ObjectInfo) Address() oid.Address {
return addr
}
func (o *ObjectInfo) ETag(md5Enabled bool) string {
if md5Enabled && len(o.MD5Sum) > 0 {
return o.MD5Sum
}
return o.HashSum
}
func (b BucketSettings) Unversioned() bool {
return b.Versioning == VersioningUnversioned
}


@@ -1,7 +1,10 @@
package data
import "encoding/xml"
type (
NotificationConfiguration struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ NotificationConfiguration" json:"-"`
QueueConfigurations []QueueConfiguration `xml:"QueueConfiguration" json:"QueueConfigurations"`
// Topics are not supported
TopicConfigurations []TopicConfiguration `xml:"TopicConfiguration" json:"TopicConfigurations"`


@@ -56,6 +56,7 @@ type BaseNodeVersion struct {
Timestamp uint64
Size uint64
ETag string
MD5 string
FilePath string
}
@@ -86,6 +87,7 @@ type PartInfo struct {
OID oid.ID `json:"oid"`
Size uint64 `json:"size"`
ETag string `json:"etag"`
MD5 string `json:"md5"`
Created time.Time `json:"created"`
}


@@ -73,6 +73,7 @@ const (
ErrInvalidArgument
ErrInvalidTagKey
ErrInvalidTagValue
ErrInvalidTagKeyUniqueness
ErrInvalidTagsSizeExceed
ErrNotImplemented
ErrPreconditionFailed
@@ -148,6 +149,7 @@ const (
ErrInvalidEncryptionAlgorithm
ErrInvalidSSECustomerKey
ErrMissingSSECustomerKey
ErrMissingSSECustomerAlgorithm
ErrMissingSSECustomerKeyMD5
ErrSSECustomerKeyMD5Mismatch
ErrInvalidSSECustomerParameters
@@ -182,6 +184,7 @@ const (
ErrInvalidRequest
ErrInvalidRequestLargeCopy
ErrInvalidStorageClass
VersionIDMarkerWithoutKeyMarker
ErrMalformedJSON
ErrInsecureClientRequest
@@ -313,6 +316,12 @@ var errorCodes = errorCodeMap{
Description: "Invalid storage class.",
HTTPStatusCode: http.StatusBadRequest,
},
VersionIDMarkerWithoutKeyMarker: {
ErrCode: VersionIDMarkerWithoutKeyMarker,
Code: "VersionIDMarkerWithoutKeyMarker",
Description: "A version-id marker cannot be specified without a key marker.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidRequestBody: {
ErrCode: ErrInvalidRequestBody,
Code: "InvalidArgument",
@@ -526,13 +535,19 @@ var errorCodes = errorCodeMap{
ErrInvalidTagKey: {
ErrCode: ErrInvalidTagKey,
Code: "InvalidTag",
Description: "The TagValue you have provided is invalid",
Description: "The TagKey you have provided is invalid",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidTagValue: {
ErrCode: ErrInvalidTagValue,
Code: "InvalidTag",
Description: "The TagKey you have provided is invalid",
Description: "The TagValue you have provided is invalid",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidTagKeyUniqueness: {
ErrCode: ErrInvalidTagKeyUniqueness,
Code: "InvalidTag",
Description: "Cannot provide multiple Tags with the same key",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidTagsSizeExceed: {
@@ -598,7 +613,7 @@ var errorCodes = errorCodeMap{
ErrAuthorizationHeaderMalformed: {
ErrCode: ErrAuthorizationHeaderMalformed,
Code: "AuthorizationHeaderMalformed",
Description: "The authorization header is malformed; the region is wrong; expecting 'us-east-1'.",
Description: "The authorization header that you provided is not valid.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMalformedPOSTRequest: {
@@ -1048,6 +1063,12 @@ var errorCodes = errorCodeMap{
Description: "Requests specifying Server Side Encryption with Customer provided keys must provide an appropriate secret key.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingSSECustomerAlgorithm: {
ErrCode: ErrMissingSSECustomerAlgorithm,
Code: "InvalidArgument",
Description: "Requests specifying Server Side Encryption with Customer provided keys must provide a valid encryption algorithm.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingSSECustomerKeyMD5: {
ErrCode: ErrMissingSSECustomerKeyMD5,
Code: "InvalidArgument",


@@ -6,7 +6,6 @@ import (
"crypto/elliptic"
"encoding/hex"
"encoding/json"
"encoding/xml"
stderrors "errors"
"fmt"
"net/http"
@@ -304,7 +303,7 @@ func (h *handler) PutBucketACLHandler(w http.ResponseWriter, r *http.Request) {
h.logAndSendError(w, "could not parse bucket acl", reqInfo, err)
return
}
} else if err = xml.NewDecoder(r.Body).Decode(list); err != nil {
} else if err = h.cfg.NewXMLDecoder(r.Body).Decode(list); err != nil {
h.logAndSendError(w, "could not parse bucket acl", reqInfo, errors.GetAPIError(errors.ErrMalformedXML))
return
}
@@ -441,7 +440,7 @@ func (h *handler) PutObjectACLHandler(w http.ResponseWriter, r *http.Request) {
h.logAndSendError(w, "could not parse bucket acl", reqInfo, err)
return
}
} else if err = xml.NewDecoder(r.Body).Decode(list); err != nil {
} else if err = h.cfg.NewXMLDecoder(r.Body).Decode(list); err != nil {
h.logAndSendError(w, "could not parse bucket acl", reqInfo, errors.GetAPIError(errors.ErrMalformedXML))
return
}


@@ -21,7 +21,7 @@ type (
log *zap.Logger
obj layer.Client
notificator Notificator
cfg *Config
cfg Config
}
Notificator interface {
@@ -30,37 +30,25 @@
}
// Config contains data which handler needs to keep.
Config struct {
Policy PlacementPolicy
XMLDecoder XMLDecoderProvider
DefaultMaxAge int
NotificatorEnabled bool
ResolveZoneList []string
IsResolveListAllow bool // True if ResolveZoneList contains allowed zones
CompleteMultipartKeepalive time.Duration
Kludge KludgeSettings
}
PlacementPolicy interface {
Config interface {
DefaultPlacementPolicy() netmap.PlacementPolicy
PlacementPolicy(string) (netmap.PlacementPolicy, bool)
CopiesNumbers(string) ([]uint32, bool)
DefaultCopiesNumbers() []uint32
}
XMLDecoderProvider interface {
NewCompleteMultipartDecoder(io.Reader) *xml.Decoder
}
KludgeSettings interface {
NewXMLDecoder(io.Reader) *xml.Decoder
DefaultMaxAge() int
NotificatorEnabled() bool
ResolveZoneList() []string
IsResolveListAllow() bool
BypassContentEncodingInChunks() bool
MD5Enabled() bool
}
)
var _ api.Handler = (*handler)(nil)
// New creates new api.Handler using given logger and client.
func New(log *zap.Logger, obj layer.Client, notificator Notificator, cfg *Config) (api.Handler, error) {
func New(log *zap.Logger, obj layer.Client, notificator Notificator, cfg Config) (api.Handler, error) {
switch {
case obj == nil:
return nil, errors.New("empty FrostFS Object Layer")
@@ -68,7 +56,7 @@ func New(log *zap.Logger, obj layer.Client, notificator Notificator, cfg *Config
return nil, errors.New("empty logger")
}
if !cfg.NotificatorEnabled {
if !cfg.NotificatorEnabled() {
log.Warn(logs.NotificatorIsDisabledS3WontProduceNotificationEvents)
} else if notificator == nil {
return nil, errors.New("empty notificator")
@@ -96,12 +84,12 @@ func (h *handler) pickCopiesNumbers(metadata map[string]string, locationConstrai
return result, nil
}
copiesNumbers, ok := h.cfg.Policy.CopiesNumbers(locationConstraint)
copiesNumbers, ok := h.cfg.CopiesNumbers(locationConstraint)
if ok {
return copiesNumbers, nil
}
return h.cfg.Policy.DefaultCopiesNumbers(), nil
return h.cfg.DefaultCopiesNumbers(), nil
}
func parseCopiesNumbers(copiesNumbersStr string) ([]uint32, error) {


@@ -12,11 +12,9 @@ func TestCopiesNumberPicker(t *testing.T) {
locationConstraint2 := "two"
locationConstraints[locationConstraint1] = []uint32{2, 3, 4}
config := &Config{
Policy: &placementPolicyMock{
config := &configMock{
copiesNumbers: locationConstraints,
defaultCopiesNumbers: []uint32{1},
},
}
h := handler{
cfg: config,


@@ -187,7 +187,7 @@ func encodeToObjectAttributesResponse(info *data.ObjectInfo, p *GetObjectAttribu
case eTag:
resp.ETag = info.HashSum
case storageClass:
resp.StorageClass = "STANDARD"
resp.StorageClass = api.DefaultStorageClass
case objectSize:
resp.ObjectSize = info.Size
case checksum:


@@ -107,23 +107,36 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
}
srcObjInfo := extendedSrcObjInfo.ObjectInfo
encryptionParams, err := formEncryptionParams(r)
srcEncryptionParams, err := formCopySourceEncryptionParams(r)
if err != nil {
h.logAndSendError(w, "invalid sse headers", reqInfo, err)
return
}
dstEncryptionParams, err := formEncryptionParams(r)
if err != nil {
h.logAndSendError(w, "invalid sse headers", reqInfo, err)
return
}
if err = encryptionParams.MatchObjectEncryption(layer.FormEncryptionInfo(srcObjInfo.Headers)); err != nil {
if err = srcEncryptionParams.MatchObjectEncryption(layer.FormEncryptionInfo(srcObjInfo.Headers)); err != nil {
if errors.IsS3Error(err, errors.ErrInvalidEncryptionParameters) || errors.IsS3Error(err, errors.ErrSSEEncryptedObject) ||
errors.IsS3Error(err, errors.ErrInvalidSSECustomerParameters) {
h.logAndSendError(w, "encryption doesn't match object", reqInfo, err, zap.Error(err))
return
}
h.logAndSendError(w, "encryption doesn't match object", reqInfo, errors.GetAPIError(errors.ErrBadRequest), zap.Error(err))
return
}
var dstSize uint64
if srcSize, err := layer.GetObjectSize(srcObjInfo); err != nil {
h.logAndSendError(w, "failed to get source object size", reqInfo, err)
return
} else if srcSize > layer.UploadMaxSize { //https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html
h.logAndSendError(w, "too bid object to copy with single copy operation, use multipart upload copy instead", reqInfo, errors.GetAPIError(errors.ErrInvalidRequestLargeCopy))
return
} else {
dstSize = srcSize
}
args, err := parseCopyObjectArgs(r.Header)
@@ -174,7 +187,7 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
srcObjInfo.Headers[api.ContentType] = srcObjInfo.ContentType
}
metadata = makeCopyMap(srcObjInfo.Headers)
delete(metadata, layer.MultipartObjectSize) // object payload will be real one rather than list of compound parts
filterMetadataMap(metadata)
} else if contentType := r.Header.Get(api.ContentType); len(contentType) > 0 {
metadata[api.ContentType] = contentType
}
@@ -185,9 +198,10 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
ScrBktInfo: srcObjPrm.BktInfo,
DstBktInfo: dstBktInfo,
DstObject: reqInfo.ObjectName,
SrcSize: srcObjInfo.Size,
DstSize: dstSize,
Header: metadata,
Encryption: encryptionParams,
SrcEncryption: srcEncryptionParams,
DstEncryption: dstEncryptionParams,
}
params.CopiesNumbers, err = h.pickCopiesNumbers(metadata, dstBktInfo.LocationConstraint)
@@ -262,7 +276,7 @@ func (h *handler) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {
h.reqLogger(ctx).Error(logs.CouldntSendNotification, zap.Error(err))
}
if encryptionParams.Enabled() {
if dstEncryptionParams.Enabled() {
addSSECHeaders(w.Header(), r.Header)
}
}
@@ -275,6 +289,13 @@ func makeCopyMap(headers map[string]string) map[string]string {
return res
}
func filterMetadataMap(metadata map[string]string) {
delete(metadata, layer.MultipartObjectSize) // object payload will be real one rather than list of compound parts
for key := range layer.EncryptionMetadata {
delete(metadata, key)
}
}
func isCopyingToItselfForbidden(reqInfo *middleware.ReqInfo, srcBucket string, srcObject string, settings *data.BucketSettings, args *copyObjectArgs) bool {
if reqInfo.BucketName != srcBucket || reqInfo.ObjectName != srcObject {
return false


@@ -1,13 +1,19 @@
package handler
import (
"crypto/md5"
"crypto/tls"
"encoding/base64"
"encoding/xml"
"net/http"
"net/url"
"strconv"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption"
"github.com/stretchr/testify/require"
)
@@ -98,6 +104,165 @@ func TestCopyMultipart(t *testing.T) {
equalDataSlices(t, data, copiedData)
}
func TestCopyEncryptedToUnencrypted(t *testing.T) {
tc := prepareHandlerContext(t)
bktName, srcObjName := "bucket-for-copy", "object-for-copy"
key1 := []byte("firstencriptionkeyofsourceobject")
key1Md5 := md5.Sum(key1)
key2 := []byte("anotherencriptionkeysourceobject")
key2Md5 := md5.Sum(key2)
bktInfo := createTestBucket(tc, bktName)
srcEnc, err := encryption.NewParams(key1)
require.NoError(t, err)
srcObjInfo := createTestObject(tc, bktInfo, srcObjName, *srcEnc)
require.True(t, containEncryptionMetadataHeaders(srcObjInfo.Headers))
dstObjName := "copy-object"
// empty copy-source-sse headers
w, r := prepareTestRequest(tc, bktName, dstObjName, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, bktName+"/"+srcObjName)
tc.Handler().CopyObjectHandler(w, r)
assertStatus(t, w, http.StatusBadRequest)
assertS3Error(t, w, errors.GetAPIError(errors.ErrSSEEncryptedObject))
// empty copy-source-sse-custom-key
w, r = prepareTestRequest(tc, bktName, dstObjName, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, bktName+"/"+srcObjName)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerAlgorithm, layer.AESEncryptionAlgorithm)
tc.Handler().CopyObjectHandler(w, r)
assertStatus(t, w, http.StatusBadRequest)
assertS3Error(t, w, errors.GetAPIError(errors.ErrMissingSSECustomerKey))
// empty copy-source-sse-custom-algorithm
w, r = prepareTestRequest(tc, bktName, dstObjName, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, bktName+"/"+srcObjName)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKey, base64.StdEncoding.EncodeToString(key1))
tc.Handler().CopyObjectHandler(w, r)
assertStatus(t, w, http.StatusBadRequest)
assertS3Error(t, w, errors.GetAPIError(errors.ErrMissingSSECustomerAlgorithm))
// invalid copy-source-sse-custom-key
w, r = prepareTestRequest(tc, bktName, dstObjName, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, bktName+"/"+srcObjName)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerAlgorithm, layer.AESEncryptionAlgorithm)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKey, base64.StdEncoding.EncodeToString(key2))
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKeyMD5, base64.StdEncoding.EncodeToString(key2Md5[:]))
tc.Handler().CopyObjectHandler(w, r)
assertStatus(t, w, http.StatusBadRequest)
assertS3Error(t, w, errors.GetAPIError(errors.ErrInvalidSSECustomerParameters))
// success copy
w, r = prepareTestRequest(tc, bktName, dstObjName, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, bktName+"/"+srcObjName)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerAlgorithm, layer.AESEncryptionAlgorithm)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKey, base64.StdEncoding.EncodeToString(key1))
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKeyMD5, base64.StdEncoding.EncodeToString(key1Md5[:]))
tc.Handler().CopyObjectHandler(w, r)
assertStatus(t, w, http.StatusOK)
dstObjInfo, err := tc.Layer().GetObjectInfo(tc.Context(), &layer.HeadObjectParams{BktInfo: bktInfo, Object: dstObjName})
require.NoError(t, err)
require.Equal(t, srcObjInfo.Headers[layer.AttributeDecryptedSize], strconv.Itoa(int(dstObjInfo.Size)))
require.False(t, containEncryptionMetadataHeaders(dstObjInfo.Headers))
}
func TestCopyUnencryptedToEncrypted(t *testing.T) {
tc := prepareHandlerContext(t)
bktName, srcObjName := "bucket-for-copy", "object-for-copy"
key := []byte("firstencriptionkeyofsourceobject")
keyMd5 := md5.Sum(key)
bktInfo := createTestBucket(tc, bktName)
srcObjInfo := createTestObject(tc, bktInfo, srcObjName, encryption.Params{})
require.False(t, containEncryptionMetadataHeaders(srcObjInfo.Headers))
dstObjName := "copy-object"
// invalid copy-source-sse headers
w, r := prepareTestRequest(tc, bktName, dstObjName, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, bktName+"/"+srcObjName)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerAlgorithm, layer.AESEncryptionAlgorithm)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKey, base64.StdEncoding.EncodeToString(key))
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKeyMD5, base64.StdEncoding.EncodeToString(keyMd5[:]))
tc.Handler().CopyObjectHandler(w, r)
assertStatus(t, w, http.StatusBadRequest)
assertS3Error(t, w, errors.GetAPIError(errors.ErrInvalidEncryptionParameters))
// success copy
w, r = prepareTestRequest(tc, bktName, dstObjName, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, bktName+"/"+srcObjName)
r.Header.Set(api.AmzServerSideEncryptionCustomerAlgorithm, layer.AESEncryptionAlgorithm)
r.Header.Set(api.AmzServerSideEncryptionCustomerKey, base64.StdEncoding.EncodeToString(key))
r.Header.Set(api.AmzServerSideEncryptionCustomerKeyMD5, base64.StdEncoding.EncodeToString(keyMd5[:]))
tc.Handler().CopyObjectHandler(w, r)
assertStatus(t, w, http.StatusOK)
dstObjInfo, err := tc.Layer().GetObjectInfo(tc.Context(), &layer.HeadObjectParams{BktInfo: bktInfo, Object: dstObjName})
require.NoError(t, err)
require.True(t, containEncryptionMetadataHeaders(dstObjInfo.Headers))
require.Equal(t, strconv.Itoa(int(srcObjInfo.Size)), dstObjInfo.Headers[layer.AttributeDecryptedSize])
}
func TestCopyEncryptedToEncryptedWithAnotherKey(t *testing.T) {
tc := prepareHandlerContext(t)
bktName, srcObjName := "bucket-for-copy", "object-for-copy"
key1 := []byte("firstencriptionkeyofsourceobject")
key1Md5 := md5.Sum(key1)
key2 := []byte("anotherencriptionkeysourceobject")
key2Md5 := md5.Sum(key2)
bktInfo := createTestBucket(tc, bktName)
srcEnc, err := encryption.NewParams(key1)
require.NoError(t, err)
srcObjInfo := createTestObject(tc, bktInfo, srcObjName, *srcEnc)
require.True(t, containEncryptionMetadataHeaders(srcObjInfo.Headers))
dstObjName := "copy-object"
w, r := prepareTestRequest(tc, bktName, dstObjName, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, bktName+"/"+srcObjName)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerAlgorithm, layer.AESEncryptionAlgorithm)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKey, base64.StdEncoding.EncodeToString(key1))
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKeyMD5, base64.StdEncoding.EncodeToString(key1Md5[:]))
r.Header.Set(api.AmzServerSideEncryptionCustomerAlgorithm, layer.AESEncryptionAlgorithm)
r.Header.Set(api.AmzServerSideEncryptionCustomerKey, base64.StdEncoding.EncodeToString(key2))
r.Header.Set(api.AmzServerSideEncryptionCustomerKeyMD5, base64.StdEncoding.EncodeToString(key2Md5[:]))
tc.Handler().CopyObjectHandler(w, r)
assertStatus(t, w, http.StatusOK)
dstObjInfo, err := tc.Layer().GetObjectInfo(tc.Context(), &layer.HeadObjectParams{BktInfo: bktInfo, Object: dstObjName})
require.NoError(t, err)
require.True(t, containEncryptionMetadataHeaders(dstObjInfo.Headers))
require.Equal(t, srcObjInfo.Headers[layer.AttributeDecryptedSize], dstObjInfo.Headers[layer.AttributeDecryptedSize])
}
func containEncryptionMetadataHeaders(headers map[string]string) bool {
for k := range headers {
if _, ok := layer.EncryptionMetadata[k]; ok {
return true
}
}
return false
}
func copyObject(hc *handlerContext, bktName, fromObject, toObject string, copyMeta CopyMeta, statusCode int) {
w, r := prepareTestRequest(hc, bktName, toObject, nil)
r.Header.Set(api.AmzCopySource, bktName+"/"+fromObject)


@@ -52,6 +52,7 @@ func (h *handler) PutBucketCorsHandler(w http.ResponseWriter, r *http.Request) {
p := &layer.PutCORSParams{
BktInfo: bktInfo,
Reader: r.Body,
NewDecoder: h.cfg.NewXMLDecoder,
}
p.CopiesNumbers, err = h.pickCopiesNumbers(parseMetadata(r), bktInfo.LocationConstraint)
@@ -194,7 +195,7 @@ func (h *handler) Preflight(w http.ResponseWriter, r *http.Request) {
if rule.MaxAgeSeconds > 0 || rule.MaxAgeSeconds == -1 {
w.Header().Set(api.AccessControlMaxAge, strconv.Itoa(rule.MaxAgeSeconds))
} else {
w.Header().Set(api.AccessControlMaxAge, strconv.Itoa(h.cfg.DefaultMaxAge))
w.Header().Set(api.AccessControlMaxAge, strconv.Itoa(h.cfg.DefaultMaxAge()))
}
if o != wildcard {
w.Header().Set(api.AccessControlAllowCredentials, "true")


@@ -24,8 +24,9 @@ const maxObjectsToDelete = 1000
// DeleteObjectsRequest -- xml carrying the object key names which should be deleted.
type DeleteObjectsRequest struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Delete" json:"-"`
// Element to enable quiet mode for the request
Quiet bool
Quiet bool `xml:"Quiet,omitempty"`
// List of objects to be deleted
Objects []ObjectIdentifier `xml:"Object"`
}
@@ -45,10 +46,10 @@ type DeletedObject struct {
// DeleteError structure.
type DeleteError struct {
Code string
Message string
Key string
VersionID string `xml:"versionId,omitempty"`
Code string `xml:"Code,omitempty"`
Message string `xml:"Message,omitempty"`
Key string `xml:"Key,omitempty"`
VersionID string `xml:"VersionId,omitempty"`
}
// DeleteObjectsResponse container for multiple object deletes.
@@ -177,7 +178,7 @@ func (h *handler) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Re
// Unmarshal list of keys to be deleted.
requested := &DeleteObjectsRequest{}
if err := xml.NewDecoder(r.Body).Decode(requested); err != nil {
if err := h.cfg.NewXMLDecoder(r.Body).Decode(requested); err != nil {
h.logAndSendError(w, "couldn't decode body", reqInfo, errors.GetAPIError(errors.ErrMalformedXML))
return
}


@@ -2,6 +2,7 @@ package handler
import (
"bytes"
"encoding/xml"
"net/http"
"net/http/httptest"
"net/url"
@@ -10,8 +11,12 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/stretchr/testify/require"
)
@@ -80,6 +85,38 @@ func TestDeleteBucketOnNotFoundError(t *testing.T) {
deleteBucket(t, hc, bktName, http.StatusNoContent)
}
func TestDeleteObjectsError(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "bucket-for-removal", "object-to-delete"
bktInfo := createTestBucket(hc, bktName)
putBucketVersioning(t, hc, bktName, true)
putObject(hc, bktName, objName)
nodeVersion, err := hc.tree.GetLatestVersion(hc.context, bktInfo, objName)
require.NoError(t, err)
var addr oid.Address
addr.SetContainer(bktInfo.CID)
addr.SetObject(nodeVersion.OID)
expectedError := apiErrors.GetAPIError(apiErrors.ErrAccessDenied)
hc.tp.SetObjectError(addr, expectedError)
w := deleteObjectsBase(hc, bktName, [][2]string{{objName, nodeVersion.OID.EncodeToString()}})
res := &s3.DeleteObjectsOutput{}
err = xmlutil.UnmarshalXML(res, xml.NewDecoder(w.Result().Body), "")
require.NoError(t, err)
require.ElementsMatch(t, []*s3.Error{{
Code: aws.String(expectedError.Code),
Key: aws.String(objName),
Message: aws.String(expectedError.Error()),
VersionId: aws.String(nodeVersion.OID.EncodeToString()),
}}, res.Errors)
}
func TestDeleteObject(t *testing.T) {
tc := prepareHandlerContext(t)
@@ -327,6 +364,27 @@ func TestDeleteMarkers(t *testing.T) {
require.Len(t, listOIDsFromMockedFrostFS(t, tc, bktName), 0, "shouldn't be any object in frostfs")
}
func TestGetHeadDeleteMarker(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "bucket-for-removal", "object-to-delete"
createTestBucket(hc, bktName)
putBucketVersioning(t, hc, bktName, true)
putObject(hc, bktName, objName)
deleteMarkerVersionID, _ := deleteObject(t, hc, bktName, objName, emptyVersion)
w := headObjectBase(hc, bktName, objName, deleteMarkerVersionID)
require.Equal(t, w.Code, http.StatusMethodNotAllowed)
require.Equal(t, w.Result().Header.Get(api.AmzDeleteMarker), "true")
w, r := prepareTestRequest(hc, bktName, objName, nil)
hc.Handler().GetObjectHandler(w, r)
assertStatus(hc.t, w, http.StatusNotFound)
require.Equal(t, w.Result().Header.Get(api.AmzDeleteMarker), "true")
}
func TestDeleteObjectFromListCache(t *testing.T) {
tc := prepareHandlerContext(t)
@@ -370,7 +428,7 @@ func TestDeleteObjectCheckMarkerReturn(t *testing.T) {
func createBucketAndObject(tc *handlerContext, bktName, objName string) (*data.BucketInfo, *data.ObjectInfo) {
bktInfo := createTestBucket(tc, bktName)
objInfo := createTestObject(tc, bktInfo, objName)
objInfo := createTestObject(tc, bktInfo, objName, encryption.Params{})
return bktInfo, objInfo
}
@@ -381,7 +439,7 @@ func createVersionedBucketAndObject(t *testing.T, tc *handlerContext, bktName, o
require.NoError(t, err)
putBucketVersioning(t, tc, bktName, true)
objInfo := createTestObject(tc, bktInfo, objName)
objInfo := createTestObject(tc, bktInfo, objName, encryption.Params{})
return bktInfo, objInfo
}
@@ -408,6 +466,14 @@ func deleteObject(t *testing.T, tc *handlerContext, bktName, objName, version st
}
func deleteObjects(t *testing.T, tc *handlerContext, bktName string, objVersions [][2]string) *DeleteObjectsResponse {
w := deleteObjectsBase(tc, bktName, objVersions)
res := &DeleteObjectsResponse{}
parseTestResponse(t, w, res)
return res
}
func deleteObjectsBase(hc *handlerContext, bktName string, objVersions [][2]string) *httptest.ResponseRecorder {
req := &DeleteObjectsRequest{}
for _, version := range objVersions {
req.Objects = append(req.Objects, ObjectIdentifier{
@@ -416,14 +482,12 @@ func deleteObjects(t *testing.T, tc *handlerContext, bktName string, objVersions
})
}
w, r := prepareTestRequest(tc, bktName, "", req)
w, r := prepareTestRequest(hc, bktName, "", req)
r.Header.Set(api.ContentMD5, "")
tc.Handler().DeleteMultipleObjectsHandler(w, r)
assertStatus(t, w, http.StatusOK)
hc.Handler().DeleteMultipleObjectsHandler(w, r)
assertStatus(hc.t, w, http.StatusOK)
res := &DeleteObjectsResponse{}
parseTestResponse(t, w, res)
return res
return w
}
func deleteBucket(t *testing.T, tc *handlerContext, bktName string, code int) {
@@ -456,13 +520,8 @@ func headObjectBase(hc *handlerContext, bktName, objName, version string) *httpt
return w
}
func listVersions(t *testing.T, tc *handlerContext, bktName string) *ListObjectsVersionsResponse {
w, r := prepareTestRequest(tc, bktName, "", nil)
tc.Handler().ListBucketObjectVersionsHandler(w, r)
assertStatus(t, w, http.StatusOK)
res := &ListObjectsVersionsResponse{}
parseTestResponse(t, w, res)
return res
func listVersions(_ *testing.T, tc *handlerContext, bktName string) *ListObjectsVersionsResponse {
return listObjectsVersions(tc, bktName, "", "", "", "", -1)
}
func getVersion(resp *ListObjectsVersionsResponse, objName string) []*ObjectVersionResponse {


@@ -78,7 +78,8 @@ func addSSECHeaders(responseHeader http.Header, requestHeader http.Header) {
responseHeader.Set(api.AmzServerSideEncryptionCustomerKeyMD5, requestHeader.Get(api.AmzServerSideEncryptionCustomerKeyMD5))
}
func writeHeaders(h http.Header, requestHeader http.Header, extendedInfo *data.ExtendedObjectInfo, tagSetLength int, isBucketUnversioned bool) {
func writeHeaders(h http.Header, requestHeader http.Header, extendedInfo *data.ExtendedObjectInfo, tagSetLength int,
isBucketUnversioned, md5Enabled bool) {
info := extendedInfo.ObjectInfo
if len(info.ContentType) > 0 && h.Get(api.ContentType) == "" {
h.Set(api.ContentType, info.ContentType)
@@ -94,8 +95,10 @@ func writeHeaders(h http.Header, requestHeader http.Header, extendedInfo *data.E
h.Set(api.ContentLength, strconv.FormatUint(info.Size, 10))
}
h.Set(api.ETag, info.HashSum)
h.Set(api.ETag, info.ETag(md5Enabled))
h.Set(api.AmzTaggingCount, strconv.Itoa(tagSetLength))
h.Set(api.AmzStorageClass, api.DefaultStorageClass)
if !isBucketUnversioned {
h.Set(api.AmzVersionID, extendedInfo.Version())
@@ -110,6 +113,9 @@ func writeHeaders(h http.Header, requestHeader http.Header, extendedInfo *data.E
if encodings := info.Headers[api.ContentEncoding]; encodings != "" {
h.Set(api.ContentEncoding, encodings)
}
if contentLanguage := info.Headers[api.ContentLanguage]; contentLanguage != "" {
h.Set(api.ContentLanguage, contentLanguage)
}
for key, val := range info.Headers {
if layer.IsSystemHeader(key) {
@@ -219,7 +225,7 @@ func (h *handler) GetObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
writeHeaders(w.Header(), r.Header, extendedInfo, len(tagSet), bktSettings.Unversioned())
writeHeaders(w.Header(), r.Header, extendedInfo, len(tagSet), bktSettings.Unversioned(), h.cfg.MD5Enabled())
if params != nil {
writeRangeHeaders(w, params, fullSize)
} else {


@@ -14,8 +14,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
s3errors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
"github.com/stretchr/testify/require"
@@ -195,8 +193,21 @@ func TestGetObject(t *testing.T) {
hc.tp.SetObjectError(addr, &apistatus.ObjectNotFound{})
hc.tp.SetObjectError(objInfo.Address(), &apistatus.ObjectNotFound{})
getObjectAssertS3Error(hc, bktName, objName, objInfo.VersionID(), s3errors.ErrNoSuchVersion)
getObjectAssertS3Error(hc, bktName, objName, emptyVersion, s3errors.ErrNoSuchKey)
getObjectAssertS3Error(hc, bktName, objName, objInfo.VersionID(), errors.ErrNoSuchVersion)
getObjectAssertS3Error(hc, bktName, objName, emptyVersion, errors.ErrNoSuchKey)
}
func TestGetObjectEnabledMD5(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "bucket", "obj"
_, objInfo := createBucketAndObject(hc, bktName, objName)
_, headers := getObject(hc, bktName, objName)
require.Equal(t, objInfo.HashSum, headers.Get(api.ETag))
hc.config.md5Enabled = true
_, headers = getObject(hc, bktName, objName)
require.Equal(t, objInfo.MD5Sum, headers.Get(api.ETag))
}
func putObjectContent(hc *handlerContext, bktName, objName, content string) {
@@ -216,9 +227,9 @@ func getObjectRange(t *testing.T, tc *handlerContext, bktName, objName string, s
return content
}
func getObjectAssertS3Error(hc *handlerContext, bktName, objName, version string, code apiErrors.ErrorCode) {
func getObjectAssertS3Error(hc *handlerContext, bktName, objName, version string, code errors.ErrorCode) {
w := getObjectBaseResponse(hc, bktName, objName, version)
assertS3Error(hc.t, w, apiErrors.GetAPIError(code))
assertS3Error(hc.t, w, errors.GetAPIError(code))
}
func getObjectBaseResponse(hc *handlerContext, bktName, objName, version string) *httptest.ResponseRecorder {


@@ -16,6 +16,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/cache"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/resolver"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/pkg/service/tree"
@@ -37,7 +38,7 @@ type handlerContext struct {
tp *layer.TestFrostFS
tree *tree.Tree
context context.Context
kludge *kludgeSettingsMock
config *configMock
layerFeatures *layer.FeatureSettingsMock
}
@@ -58,41 +59,61 @@ func (hc *handlerContext) Context() context.Context {
return hc.context
}
type placementPolicyMock struct {
type configMock struct {
defaultPolicy netmap.PlacementPolicy
copiesNumbers map[string][]uint32
defaultCopiesNumbers []uint32
bypassContentEncodingInChunks bool
md5Enabled bool
}
func (p *placementPolicyMock) DefaultPlacementPolicy() netmap.PlacementPolicy {
return p.defaultPolicy
func (c *configMock) DefaultPlacementPolicy() netmap.PlacementPolicy {
return c.defaultPolicy
}
func (p *placementPolicyMock) PlacementPolicy(string) (netmap.PlacementPolicy, bool) {
func (c *configMock) PlacementPolicy(string) (netmap.PlacementPolicy, bool) {
return netmap.PlacementPolicy{}, false
}
func (p *placementPolicyMock) CopiesNumbers(locationConstraint string) ([]uint32, bool) {
result, ok := p.copiesNumbers[locationConstraint]
func (c *configMock) CopiesNumbers(locationConstraint string) ([]uint32, bool) {
result, ok := c.copiesNumbers[locationConstraint]
return result, ok
}
func (p *placementPolicyMock) DefaultCopiesNumbers() []uint32 {
return p.defaultCopiesNumbers
func (c *configMock) DefaultCopiesNumbers() []uint32 {
return c.defaultCopiesNumbers
}
type xmlDecoderProviderMock struct{}
func (p *xmlDecoderProviderMock) NewCompleteMultipartDecoder(r io.Reader) *xml.Decoder {
func (c *configMock) NewXMLDecoder(r io.Reader) *xml.Decoder {
return xml.NewDecoder(r)
}
type kludgeSettingsMock struct {
bypassContentEncodingInChunks bool
func (c *configMock) BypassContentEncodingInChunks() bool {
return c.bypassContentEncodingInChunks
}
func (k *kludgeSettingsMock) BypassContentEncodingInChunks() bool {
return k.bypassContentEncodingInChunks
func (c *configMock) DefaultMaxAge() int {
return 0
}
func (c *configMock) NotificatorEnabled() bool {
return false
}
func (c *configMock) ResolveZoneList() []string {
return []string{}
}
func (c *configMock) IsResolveListAllow() bool {
return false
}
func (c *configMock) CompleteMultipartKeepalive() time.Duration {
return time.Duration(0)
}
func (c *configMock) MD5Enabled() bool {
return c.md5Enabled
}
func prepareHandlerContext(t *testing.T) *handlerContext {
@@ -139,16 +160,13 @@ func prepareHandlerContextBase(t *testing.T, minCache bool) *handlerContext {
err = pp.DecodeString("REP 1")
require.NoError(t, err)
kludge := &kludgeSettingsMock{}
cfg := &configMock{
defaultPolicy: pp,
}
h := &handler{
log: l,
obj: layer.NewLayer(l, tp, layerCfg),
cfg: &Config{
Policy: &placementPolicyMock{defaultPolicy: pp},
XMLDecoder: &xmlDecoderProviderMock{},
Kludge: kludge,
},
cfg: cfg,
}
return &handlerContext{
@@ -158,7 +176,7 @@ func prepareHandlerContextBase(t *testing.T, minCache bool) *handlerContext {
tp: tp,
tree: treeMock,
context: middleware.SetBoxData(context.Background(), newTestAccessBox(t, key)),
kludge: kludge,
config: cfg,
layerFeatures: features,
}
@@ -201,7 +219,7 @@ func createTestBucket(hc *handlerContext, bktName string) *data.BucketInfo {
}
func createTestBucketWithLock(hc *handlerContext, bktName string, conf *data.ObjectLockConfiguration) *data.BucketInfo {
cnrID, err := hc.MockedPool().CreateContainer(hc.Context(), layer.PrmContainerCreate{
res, err := hc.MockedPool().CreateContainer(hc.Context(), layer.PrmContainerCreate{
Creator: hc.owner,
Name: bktName,
AdditionalAttributes: [][2]string{{layer.AttributeLockEnabled, "true"}},
@@ -211,10 +229,11 @@ func createTestBucketWithLock(hc *handlerContext, bktName string, conf *data.Obj
var ownerID user.ID
bktInfo := &data.BucketInfo{
CID: cnrID,
CID: res.ContainerID,
Name: bktName,
ObjectLockEnabled: true,
Owner: ownerID,
HomomorphicHashDisabled: res.HomomorphicHashDisabled,
}
sp := &layer.PutSettingsParams{
@@ -231,7 +250,7 @@ func createTestBucketWithLock(hc *handlerContext, bktName string, conf *data.Obj
return bktInfo
}
func createTestObject(hc *handlerContext, bktInfo *data.BucketInfo, objName string) *data.ObjectInfo {
func createTestObject(hc *handlerContext, bktInfo *data.BucketInfo, objName string, encryption encryption.Params) *data.ObjectInfo {
content := make([]byte, 1024)
_, err := rand.Read(content)
require.NoError(hc.t, err)
@@ -246,6 +265,7 @@ func createTestObject(hc *handlerContext, bktInfo *data.BucketInfo, objName stri
Size: uint64(len(content)),
Reader: bytes.NewReader(content),
Header: header,
Encryption: encryption,
})
require.NoError(hc.t, err)
@@ -323,6 +343,8 @@ func assertStatus(t *testing.T, w *httptest.ResponseRecorder, status int) {
func readResponse(t *testing.T, w *httptest.ResponseRecorder, status int, model interface{}) {
assertStatus(t, w, status)
if status == http.StatusOK {
err := xml.NewDecoder(w.Result().Body).Decode(model)
require.NoError(t, err)
}
}


@@ -118,7 +118,7 @@ func (h *handler) HeadObjectHandler(w http.ResponseWriter, r *http.Request) {
return
}
writeHeaders(w.Header(), r.Header, extendedInfo, len(tagSet), bktSettings.Unversioned())
writeHeaders(w.Header(), r.Header, extendedInfo, len(tagSet), bktSettings.Unversioned(), h.cfg.MD5Enabled())
w.WriteHeader(http.StatusOK)
}
@@ -135,7 +135,7 @@ func (h *handler) HeadBucketHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContainerID, bktInfo.CID.EncodeToString())
w.Header().Set(api.AmzBucketRegion, bktInfo.LocationConstraint)
if isAvailableToResolve(bktInfo.Zone, h.cfg.ResolveZoneList, h.cfg.IsResolveListAllow) {
if isAvailableToResolve(bktInfo.Zone, h.cfg.ResolveZoneList(), h.cfg.IsResolveListAllow()) {
w.Header().Set(api.ContainerName, bktInfo.Name)
w.Header().Set(api.ContainerZone, bktInfo.Zone)
}


@@ -2,7 +2,6 @@ package handler
import (
"context"
"encoding/xml"
"fmt"
"net/http"
"strconv"
@@ -42,7 +41,7 @@ func (h *handler) PutBucketObjectLockConfigHandler(w http.ResponseWriter, r *htt
}
lockingConf := &data.ObjectLockConfiguration{}
if err = xml.NewDecoder(r.Body).Decode(lockingConf); err != nil {
if err = h.cfg.NewXMLDecoder(r.Body).Decode(lockingConf); err != nil {
h.logAndSendError(w, "couldn't parse locking configuration", reqInfo, err)
return
}
@@ -122,7 +121,7 @@ func (h *handler) PutObjectLegalHoldHandler(w http.ResponseWriter, r *http.Reque
}
legalHold := &data.LegalHold{}
if err = xml.NewDecoder(r.Body).Decode(legalHold); err != nil {
if err = h.cfg.NewXMLDecoder(r.Body).Decode(legalHold); err != nil {
h.logAndSendError(w, "couldn't parse legal hold configuration", reqInfo, err)
return
}
@@ -210,7 +209,7 @@ func (h *handler) PutObjectRetentionHandler(w http.ResponseWriter, r *http.Reque
}
retention := &data.Retention{}
if err = xml.NewDecoder(r.Body).Decode(retention); err != nil {
if err = h.cfg.NewXMLDecoder(r.Body).Decode(retention); err != nil {
h.logAndSendError(w, "couldn't parse object retention", reqInfo, err)
return
}


@@ -13,6 +13,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/middleware"
"github.com/stretchr/testify/require"
)
@@ -426,7 +427,7 @@ func TestObjectLegalHold(t *testing.T) {
bktInfo := createTestBucketWithLock(hc, bktName, nil)
objName := "obj-for-legal-hold"
createTestObject(hc, bktInfo, objName)
createTestObject(hc, bktInfo, objName, encryption.Params{})
getObjectLegalHold(hc, bktName, objName, legalHoldOff)
@@ -470,7 +471,7 @@ func TestObjectRetention(t *testing.T) {
bktInfo := createTestBucketWithLock(hc, bktName, nil)
objName := "obj-for-retention"
createTestObject(hc, bktInfo, objName)
createTestObject(hc, bktInfo, objName, encryption.Params{})
getObjectRetention(hc, bktName, objName, nil, apiErrors.ErrNoSuchKey)


@@ -3,7 +3,6 @@ package handler
import (
"encoding/xml"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
@@ -61,7 +60,7 @@ type (
Owner Owner `xml:"Owner"`
Parts []*layer.Part `xml:"Part"`
PartNumberMarker int `xml:"PartNumberMarker,omitempty"`
StorageClass string `xml:"StorageClass,omitempty"`
StorageClass string `xml:"StorageClass"`
UploadID string `xml:"UploadId"`
}
@@ -70,7 +69,7 @@ type (
Initiator Initiator `xml:"Initiator"`
Key string `xml:"Key"`
Owner Owner `xml:"Owner"`
StorageClass string `xml:"StorageClass,omitempty"`
StorageClass string `xml:"StorageClass"`
UploadID string `xml:"UploadId"`
}
@@ -154,6 +153,9 @@ func (h *handler) CreateMultipartUploadHandler(w http.ResponseWriter, r *http.Re
if contentType := r.Header.Get(api.ContentType); len(contentType) > 0 {
p.Header[api.ContentType] = contentType
}
if contentLanguage := r.Header.Get(api.ContentLanguage); len(contentLanguage) > 0 {
p.Header[api.ContentLanguage] = contentLanguage
}
p.CopiesNumbers, err = h.pickCopiesNumbers(p.Header, bktInfo.LocationConstraint)
if err != nil {
@@ -243,6 +245,7 @@ func (h *handler) UploadPartHandler(w http.ResponseWriter, r *http.Request) {
PartNumber: partNumber,
Size: size,
Reader: body,
ContentMD5: r.Header.Get(api.ContentMD5),
}
p.Info.Encryption, err = formEncryptionParams(r)
@@ -342,6 +345,17 @@ func (h *handler) UploadPartCopy(w http.ResponseWriter, r *http.Request) {
return
}
srcEncryptionParams, err := formCopySourceEncryptionParams(r)
if err != nil {
h.logAndSendError(w, "invalid sse headers", reqInfo, err)
return
}
if err = srcEncryptionParams.MatchObjectEncryption(layer.FormEncryptionInfo(srcInfo.Headers)); err != nil {
h.logAndSendError(w, "encryption doesn't match object", reqInfo, fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrBadRequest), err), additional...)
return
}
p := &layer.UploadCopyParams{
Versioned: headPrm.Versioned(),
Info: &layer.UploadInfoParams{
@@ -351,6 +365,7 @@ func (h *handler) UploadPartCopy(w http.ResponseWriter, r *http.Request) {
},
SrcObjInfo: srcInfo,
SrcBktInfo: srcBktInfo,
SrcEncryption: srcEncryptionParams,
PartNumber: partNumber,
Range: srcRange,
}
@@ -361,11 +376,6 @@ func (h *handler) UploadPartCopy(w http.ResponseWriter, r *http.Request) {
return
}
if err = p.Info.Encryption.MatchObjectEncryption(layer.FormEncryptionInfo(srcInfo.Headers)); err != nil {
h.logAndSendError(w, "encryption doesn't match object", reqInfo, fmt.Errorf("%w: %s", errors.GetAPIError(errors.ErrBadRequest), err), additional...)
return
}
info, err := h.obj.UploadPartCopy(ctx, p)
if err != nil {
h.logAndSendError(w, "could not upload part copy", reqInfo, err, additional...)
@@ -373,8 +383,8 @@ func (h *handler) UploadPartCopy(w http.ResponseWriter, r *http.Request) {
}
response := UploadPartCopyResponse{
ETag: info.HashSum,
LastModified: info.Created.UTC().Format(time.RFC3339),
ETag: info.ETag(h.cfg.MD5Enabled()),
}
if p.Info.Encryption.Enabled() {
@@ -395,6 +405,12 @@ func (h *handler) CompleteMultipartUploadHandler(w http.ResponseWriter, r *http.
return
}
settings, err := h.obj.GetBucketSettings(r.Context(), bktInfo)
if err != nil {
h.logAndSendError(w, "could not get bucket settings", reqInfo, err)
return
}
var (
uploadID = r.URL.Query().Get(uploadIDHeaderName)
uploadInfo = &layer.UploadInfoParams{
@@ -406,7 +422,7 @@ func (h *handler) CompleteMultipartUploadHandler(w http.ResponseWriter, r *http.
)
reqBody := new(CompleteMultipartUpload)
if err = h.cfg.XMLDecoder.NewCompleteMultipartDecoder(r.Body).Decode(reqBody); err != nil {
if err = h.cfg.NewXMLDecoder(r.Body).Decode(reqBody); err != nil {
h.logAndSendError(w, "could not read complete multipart upload xml", reqInfo,
errors.GetAPIError(errors.ErrMalformedXML), additional...)
return
@@ -421,44 +437,27 @@ func (h *handler) CompleteMultipartUploadHandler(w http.ResponseWriter, r *http.
Parts: reqBody.Parts,
}
// Next operations might take some time, so we want to keep client's
// connection alive. To do so, gateway sends periodic white spaces
// back to the client the same way as Amazon S3 service does.
stopPeriodicResponseWriter := periodicXMLWriter(w, h.cfg.CompleteMultipartKeepalive)
// Start complete multipart upload which may take some time to fetch object
// and re-upload it part by part.
objInfo, err := h.completeMultipartUpload(r, c, bktInfo, reqInfo)
// Stop periodic writer as complete multipart upload is finished
// successfully or not.
headerIsWritten := stopPeriodicResponseWriter()
responseWriter := middleware.EncodeToResponse
errLogger := h.logAndSendError
// Do not send XML and HTTP headers if periodic writer was invoked at this point.
if headerIsWritten {
responseWriter = middleware.EncodeToResponseNoHeader
errLogger = h.logAndSendErrorNoHeader
}
if err != nil {
errLogger(w, "complete multipart error", reqInfo, err, additional...)
h.logAndSendError(w, "complete multipart error", reqInfo, err, additional...)
return
}
response := CompleteMultipartUploadResponse{
Bucket: objInfo.Bucket,
ETag: objInfo.HashSum,
Key: objInfo.Name,
ETag: objInfo.ETag(h.cfg.MD5Enabled()),
}
// Here we previously set api.AmzVersionID header for versioned bucket.
// It is not possible after #60, because of periodic white
// space XML writer to keep connection with the client.
if settings.VersioningEnabled() {
w.Header().Set(api.AmzVersionID, objInfo.VersionID())
}
if err = responseWriter(w, response); err != nil {
errLogger(w, "something went wrong", reqInfo, err, additional...)
if err = middleware.EncodeToResponse(w, response); err != nil {
h.logAndSendError(w, "something went wrong", reqInfo, err, additional...)
}
}
@@ -600,7 +599,7 @@ func (h *handler) ListPartsHandler(w http.ResponseWriter, r *http.Request) {
}
if queryValues.Get("part-number-marker") != "" {
if partNumberMarker, err = strconv.Atoi(queryValues.Get("part-number-marker")); err != nil || partNumberMarker <= 0 {
if partNumberMarker, err = strconv.Atoi(queryValues.Get("part-number-marker")); err != nil || partNumberMarker < 0 {
h.logAndSendError(w, "invalid PartNumberMarker", reqInfo, err, additional...)
return
}
@@ -694,6 +693,7 @@ func encodeListMultipartUploadsToResponse(info *layer.ListMultipartUploadsInfo,
DisplayName: u.Owner.String(),
},
UploadID: u.UploadID,
StorageClass: api.DefaultStorageClass,
}
uploads = append(uploads, m)
}
@@ -722,55 +722,6 @@ func encodeListPartsToResponse(info *layer.ListPartsInfo, params *layer.ListPart
PartNumberMarker: params.PartNumberMarker,
UploadID: params.Info.UploadID,
Parts: info.Parts,
StorageClass: api.DefaultStorageClass,
}
}
// periodicXMLWriter creates go routine to write xml header and whitespaces
// over time to avoid connection drop from the client. To work properly,
// pass `http.ResponseWriter` with implemented `http.Flusher` interface.
// Returns stop function which returns boolean if writer has been used
// during goroutine execution. To disable writer, pass 0 duration value.
func periodicXMLWriter(w io.Writer, dur time.Duration) (stop func() bool) {
if dur == 0 { // 0 duration disables periodic writer
return func() bool { return false }
}
whitespaceChar := []byte(" ")
closer := make(chan struct{})
done := make(chan struct{})
headerWritten := false
go func() {
defer close(done)
tick := time.NewTicker(dur)
defer tick.Stop()
for {
select {
case <-tick.C:
if !headerWritten {
_, err := w.Write([]byte(xml.Header))
headerWritten = err == nil
}
_, err := w.Write(whitespaceChar)
if err != nil {
return // is there anything we can do better than ignore error?
}
if buffered, ok := w.(http.Flusher); ok {
buffered.Flush()
}
case <-closer:
return
}
}
}()
stop = func() bool {
close(closer)
<-done // wait for goroutine to stop
return headerWritten
}
return stop
}


@@ -1,58 +1,27 @@
package handler
import (
"bytes"
"crypto/md5"
"crypto/tls"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"fmt"
"net/http"
"net/url"
"strconv"
"testing"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
s3Errors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption"
"github.com/stretchr/testify/require"
)
func TestPeriodicWriter(t *testing.T) {
const dur = 100 * time.Millisecond
const whitespaces = 8
expected := []byte(xml.Header)
for i := 0; i < whitespaces; i++ {
expected = append(expected, []byte(" ")...)
}
t.Run("writes data", func(t *testing.T) {
buf := bytes.NewBuffer(nil)
stop := periodicXMLWriter(buf, dur)
// N number of whitespaces + half durations to guarantee at least N writes in buffer
time.Sleep(whitespaces*dur + dur/2)
require.True(t, stop())
require.Equal(t, expected, buf.Bytes())
t.Run("no additional data after stop", func(t *testing.T) {
time.Sleep(2 * dur)
require.Equal(t, expected, buf.Bytes())
})
})
t.Run("does not write data", func(t *testing.T) {
buf := bytes.NewBuffer(nil)
stop := periodicXMLWriter(buf, dur)
time.Sleep(dur / 2)
require.False(t, stop())
require.Empty(t, buf.Bytes())
t.Run("disabled", func(t *testing.T) {
stop = periodicXMLWriter(buf, 0)
require.False(t, stop())
require.Empty(t, buf.Bytes())
})
})
}
const (
partNumberMarkerQuery = "part-number-marker"
)
func TestMultipartUploadInvalidPart(t *testing.T) {
hc := prepareHandlerContext(t)
@@ -80,7 +49,7 @@ func TestMultipartReUploadPart(t *testing.T) {
etag1, _ := uploadPart(hc, bktName, objName, uploadInfo.UploadID, 1, partSizeLast)
etag2, _ := uploadPart(hc, bktName, objName, uploadInfo.UploadID, 2, partSizeFirst)
list := listParts(hc, bktName, objName, uploadInfo.UploadID)
list := listParts(hc, bktName, objName, uploadInfo.UploadID, "0", http.StatusOK)
require.Len(t, list.Parts, 2)
require.Equal(t, etag1, list.Parts[0].ETag)
require.Equal(t, etag2, list.Parts[1].ETag)
@ -91,7 +60,7 @@ func TestMultipartReUploadPart(t *testing.T) {
etag1, data1 := uploadPart(hc, bktName, objName, uploadInfo.UploadID, 1, partSizeFirst)
etag2, data2 := uploadPart(hc, bktName, objName, uploadInfo.UploadID, 2, partSizeLast)
list = listParts(hc, bktName, objName, uploadInfo.UploadID)
list = listParts(hc, bktName, objName, uploadInfo.UploadID, "0", http.StatusOK)
require.Len(t, list.Parts, 2)
require.Equal(t, etag1, list.Parts[0].ETag)
require.Equal(t, etag2, list.Parts[1].ETag)
@ -217,6 +186,131 @@ func TestMultipartUploadSize(t *testing.T) {
uploadPartCopy(hc, bktName, objName2, uploadInfo.UploadID, 1, sourceCopy, 0, 0)
uploadPartCopy(hc, bktName, objName2, uploadInfo.UploadID, 2, sourceCopy, 0, partSize)
})
t.Run("check correct size when copy part from encrypted source", func(t *testing.T) {
newBucket, newObjName := "new-bucket", "new-object-multipart"
bktInfo := createTestBucket(hc, newBucket)
srcObjName := "source-object"
key := []byte("firstencriptionkeyofsourceobject")
keyMd5 := md5.Sum(key)
srcEnc, err := encryption.NewParams(key)
require.NoError(t, err)
srcObjInfo := createTestObject(hc, bktInfo, srcObjName, *srcEnc)
multipartInfo := createMultipartUpload(hc, newBucket, newObjName, headers)
sourceCopy := newBucket + "/" + srcObjName
query := make(url.Values)
query.Set(uploadIDQuery, multipartInfo.UploadID)
query.Set(partNumberQuery, "1")
// empty copy-source-sse headers
w, r := prepareTestRequestWithQuery(hc, newBucket, newObjName, query, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, sourceCopy)
hc.Handler().UploadPartCopy(w, r)
assertStatus(t, w, http.StatusBadRequest)
// success copy
w, r = prepareTestRequestWithQuery(hc, newBucket, newObjName, query, nil)
r.TLS = &tls.ConnectionState{}
r.Header.Set(api.AmzCopySource, sourceCopy)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerAlgorithm, layer.AESEncryptionAlgorithm)
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKey, base64.StdEncoding.EncodeToString(key))
r.Header.Set(api.AmzCopySourceServerSideEncryptionCustomerKeyMD5, base64.StdEncoding.EncodeToString(keyMd5[:]))
hc.Handler().UploadPartCopy(w, r)
uploadPartCopyResponse := &UploadPartCopyResponse{}
readResponse(hc.t, w, http.StatusOK, uploadPartCopyResponse)
completeMultipartUpload(hc, newBucket, newObjName, multipartInfo.UploadID, []string{uploadPartCopyResponse.ETag})
attr := getObjectAttributes(hc, newBucket, newObjName, objectParts)
require.Equal(t, 1, attr.ObjectParts.PartsCount)
require.Equal(t, srcObjInfo.Headers[layer.AttributeDecryptedSize], strconv.Itoa(attr.ObjectParts.Parts[0].Size))
})
}
func TestListParts(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "bucket-for-test-list-parts", "object-multipart"
_ = createTestBucket(hc, bktName)
partSize := 5 * 1024 * 1024
uploadInfo := createMultipartUpload(hc, bktName, objName, map[string]string{})
etag1, _ := uploadPart(hc, bktName, objName, uploadInfo.UploadID, 1, partSize)
etag2, _ := uploadPart(hc, bktName, objName, uploadInfo.UploadID, 2, partSize)
list := listParts(hc, bktName, objName, uploadInfo.UploadID, "0", http.StatusOK)
require.Len(t, list.Parts, 2)
require.Equal(t, etag1, list.Parts[0].ETag)
require.Equal(t, etag2, list.Parts[1].ETag)
list = listParts(hc, bktName, objName, uploadInfo.UploadID, "1", http.StatusOK)
require.Len(t, list.Parts, 1)
require.Equal(t, etag2, list.Parts[0].ETag)
list = listParts(hc, bktName, objName, uploadInfo.UploadID, "2", http.StatusOK)
require.Len(t, list.Parts, 0)
list = listParts(hc, bktName, objName, uploadInfo.UploadID, "7", http.StatusOK)
require.Len(t, list.Parts, 0)
list = listParts(hc, bktName, objName, uploadInfo.UploadID, "-1", http.StatusInternalServerError)
require.Len(t, list.Parts, 0)
}
func TestMultipartUploadWithContentLanguage(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "bucket-1", "object-1"
createTestBucket(hc, bktName)
partSize := 5 * 1024 * 1024
expectedContentLanguage := "en"
headers := map[string]string{
api.ContentLanguage: expectedContentLanguage,
}
multipartUpload := createMultipartUpload(hc, bktName, objName, headers)
etag1, _ := uploadPart(hc, bktName, objName, multipartUpload.UploadID, 1, partSize)
etag2, _ := uploadPart(hc, bktName, objName, multipartUpload.UploadID, 2, partSize)
w := completeMultipartUploadBase(hc, bktName, objName, multipartUpload.UploadID, []string{etag1, etag2})
assertStatus(t, w, http.StatusOK)
w, r := prepareTestRequest(hc, bktName, objName, nil)
hc.Handler().HeadObjectHandler(w, r)
require.Equal(t, expectedContentLanguage, w.Header().Get(api.ContentLanguage))
}
func TestMultipartUploadEnabledMD5(t *testing.T) {
hc := prepareHandlerContext(t)
hc.config.md5Enabled = true
hc.layerFeatures.SetMD5Enabled(true)
bktName, objName := "bucket-md5", "object-md5"
createTestBucket(hc, bktName)
partSize := 5 * 1024 * 1024
multipartUpload := createMultipartUpload(hc, bktName, objName, map[string]string{})
etag1, partBody1 := uploadPart(hc, bktName, objName, multipartUpload.UploadID, 1, partSize)
md5Sum1 := md5.Sum(partBody1)
require.Equal(t, hex.EncodeToString(md5Sum1[:]), etag1)
etag2, partBody2 := uploadPart(hc, bktName, objName, multipartUpload.UploadID, 2, partSize)
md5Sum2 := md5.Sum(partBody2)
require.Equal(t, hex.EncodeToString(md5Sum2[:]), etag2)
w := completeMultipartUploadBase(hc, bktName, objName, multipartUpload.UploadID, []string{etag1, etag2})
assertStatus(t, w, http.StatusOK)
resp := &CompleteMultipartUploadResponse{}
err := xml.NewDecoder(w.Result().Body).Decode(resp)
require.NoError(t, err)
completeMD5Sum := md5.Sum(append(md5Sum1[:], md5Sum2[:]...))
require.Equal(t, hex.EncodeToString(completeMD5Sum[:])+"-2", resp.ETag)
}
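
The composite ETag asserted above can be reproduced standalone; a short sketch with illustrative part contents:

// Sketch of the S3-style multipart ETag (part contents are illustrative):
p1, p2 := []byte("part-one"), []byte("part-two")
s1, s2 := md5.Sum(p1), md5.Sum(p2)
whole := md5.Sum(append(s1[:], s2[:]...))
etag := hex.EncodeToString(whole[:]) + "-2" // md5 of the concatenated part md5s, then "-<part count>"
_ = etag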
func uploadPartCopy(hc *handlerContext, bktName, objName, uploadID string, num int, srcObj string, start, end int) *UploadPartCopyResponse {
@ -267,13 +361,14 @@ func listMultipartUploadsBase(hc *handlerContext, bktName, prefix, delimiter, up
return listPartsResponse
}
func listParts(hc *handlerContext, bktName, objName string, uploadID string) *ListPartsResponse {
return listPartsBase(hc, bktName, objName, false, uploadID)
func listParts(hc *handlerContext, bktName, objName string, uploadID, partNumberMarker string, status int) *ListPartsResponse {
return listPartsBase(hc, bktName, objName, false, uploadID, partNumberMarker, status)
}
func listPartsBase(hc *handlerContext, bktName, objName string, encrypted bool, uploadID string) *ListPartsResponse {
func listPartsBase(hc *handlerContext, bktName, objName string, encrypted bool, uploadID, partNumberMarker string, status int) *ListPartsResponse {
query := make(url.Values)
query.Set(uploadIDQuery, uploadID)
query.Set(partNumberMarkerQuery, partNumberMarker)
w, r := prepareTestRequestWithQuery(hc, bktName, objName, query, nil)
if encrypted {
@ -282,7 +377,7 @@ func listPartsBase(hc *handlerContext, bktName, objName string, encrypted bool,
hc.Handler().ListPartsHandler(w, r)
listPartsResponse := &ListPartsResponse{}
readResponse(hc.t, w, http.StatusOK, listPartsResponse)
readResponse(hc.t, w, status, listPartsResponse)
return listPartsResponse
}


@ -2,7 +2,6 @@ package handler
import (
"context"
"encoding/xml"
"fmt"
"net/http"
"strings"
@ -26,11 +25,6 @@ type (
User string
Time time.Time
}
NotificationConfiguration struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ NotificationConfiguation"`
NotificationConfiguration data.NotificationConfiguration
}
)
const (
@ -105,7 +99,7 @@ func (h *handler) PutBucketNotificationHandler(w http.ResponseWriter, r *http.Re
}
conf := &data.NotificationConfiguration{}
if err = xml.NewDecoder(r.Body).Decode(conf); err != nil {
if err = h.cfg.NewXMLDecoder(r.Body).Decode(conf); err != nil {
h.logAndSendError(w, "couldn't decode notification configuration", reqInfo, errors.GetAPIError(errors.ErrMalformedXML))
return
}
@ -155,7 +149,7 @@ func (h *handler) GetBucketNotificationHandler(w http.ResponseWriter, r *http.Re
}
func (h *handler) sendNotifications(ctx context.Context, p *SendNotificationParams) error {
if !h.cfg.NotificatorEnabled {
if !h.cfg.NotificatorEnabled() {
return nil
}
@ -198,7 +192,7 @@ func (h *handler) checkBucketConfiguration(ctx context.Context, conf *data.Notif
return
}
if h.cfg.NotificatorEnabled {
if h.cfg.NotificatorEnabled() {
if err = h.notificator.SendTestNotification(q.QueueArn, r.BucketName, r.RequestID, r.Host, layer.TimeNow(ctx)); err != nil {
return
}


@ -6,6 +6,7 @@ import (
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer"
@ -196,6 +197,7 @@ func fillContents(src []*data.ObjectInfo, encode string, fetchOwner bool) []Obje
Size: obj.Size,
LastModified: obj.Created.UTC().Format(time.RFC3339),
ETag: obj.HashSum,
StorageClass: api.DefaultStorageClass,
}
if size, err := layer.GetObjectSize(obj); err == nil {
@ -233,7 +235,7 @@ func (h *handler) ListBucketObjectVersionsHandler(w http.ResponseWriter, r *http
return
}
response := encodeListObjectVersionsToResponse(info, p.BktInfo.Name)
response := encodeListObjectVersionsToResponse(info, p.BktInfo.Name, h.cfg.MD5Enabled())
if err = middleware.EncodeToResponse(w, response); err != nil {
h.logAndSendError(w, "something went wrong", reqInfo, err)
}
@ -253,15 +255,19 @@ func parseListObjectVersionsRequest(reqInfo *middleware.ReqInfo) (*layer.ListObj
}
res.Prefix = queryValues.Get("prefix")
res.KeyMarker = queryValues.Get("marker")
res.KeyMarker = queryValues.Get("key-marker")
res.Delimiter = queryValues.Get("delimiter")
res.Encode = queryValues.Get("encoding-type")
res.VersionIDMarker = queryValues.Get("version-id-marker")
if res.VersionIDMarker != "" && res.KeyMarker == "" {
return nil, errors.GetAPIError(errors.VersionIDMarkerWithoutKeyMarker)
}
return &res, nil
}
func encodeListObjectVersionsToResponse(info *layer.ListObjectVersionsInfo, bucketName string) *ListObjectsVersionsResponse {
func encodeListObjectVersionsToResponse(info *layer.ListObjectVersionsInfo, bucketName string, md5Enabled bool) *ListObjectsVersionsResponse {
res := ListObjectsVersionsResponse{
Name: bucketName,
IsTruncated: info.IsTruncated,
@ -286,7 +292,8 @@ func encodeListObjectVersionsToResponse(info *layer.ListObjectVersionsInfo, buck
},
Size: ver.ObjectInfo.Size,
VersionID: ver.Version(),
ETag: ver.ObjectInfo.HashSum,
ETag: ver.ObjectInfo.ETag(md5Enabled),
StorageClass: api.DefaultStorageClass,
})
}
// this loop will not run until versioning is implemented


@ -3,10 +3,12 @@ package handler
import (
"net/http"
"net/url"
"sort"
"strconv"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/layer/encryption"
"github.com/stretchr/testify/require"
)
@ -56,6 +58,36 @@ func TestListObjectNullVersions(t *testing.T) {
require.Equal(t, data.UnversionedObjectVersionID, result.Version[1].VersionID)
}
func TestListObjectsPaging(t *testing.T) {
hc := prepareHandlerContext(t)
bktName := "bucket-versioning-enabled"
createTestBucket(hc, bktName)
n := 10
var objects []string
for i := 0; i < n; i++ {
objects = append(objects, "objects"+strconv.Itoa(i))
putObjectContent(hc, bktName, objects[i], "content")
}
sort.Strings(objects)
result := &ListObjectsVersionsResponse{IsTruncated: true}
for result.IsTruncated {
result = listObjectsVersions(hc, bktName, "", "", result.NextKeyMarker, result.NextVersionIDMarker, n/3)
for i, version := range result.Version {
if objects[i] != version.Key {
t.Errorf("expected: '%s', got: '%s'", objects[i], version.Key)
}
}
objects = objects[len(result.Version):]
}
require.Empty(t, objects)
}
func TestS3CompatibilityBucketListV2BothContinuationTokenStartAfter(t *testing.T) {
tc := prepareHandlerContext(t)
@ -64,7 +96,7 @@ func TestS3CompatibilityBucketListV2BothContinuationTokenStartAfter(t *testing.T
bktInfo, _ := createBucketAndObject(tc, bktName, objects[0])
for _, objName := range objects[1:] {
createTestObject(tc, bktInfo, objName)
createTestObject(tc, bktInfo, objName, encryption.Params{})
}
listV2Response1 := listObjectsV2(tc, bktName, "", "", "bar", "", 1)
@ -81,6 +113,36 @@ func TestS3CompatibilityBucketListV2BothContinuationTokenStartAfter(t *testing.T
require.Equal(t, "quxx", listV2Response2.Contents[1].Key)
}
func TestS3BucketListV2EncodingBasic(t *testing.T) {
hc := prepareHandlerContext(t)
bktName := "bucket-for-listing-v1-encoding"
bktInfo := createTestBucket(hc, bktName)
objects := []string{"foo+1/bar", "foo/bar/xyzzy", "quux ab/thud", "asdf+b"}
for _, objName := range objects {
createTestObject(hc, bktInfo, objName, encryption.Params{})
}
query := make(url.Values)
query.Add("delimiter", "/")
query.Add("encoding-type", "url")
w, r := prepareTestFullRequest(hc, bktName, "", query, nil)
hc.Handler().ListObjectsV2Handler(w, r)
assertStatus(hc.t, w, http.StatusOK)
listV2Response := &ListObjectsV2Response{}
parseTestResponse(hc.t, w, listV2Response)
require.Equal(t, "/", listV2Response.Delimiter)
require.Len(t, listV2Response.Contents, 1)
require.Equal(t, "asdf%2Bb", listV2Response.Contents[0].Key)
require.Len(t, listV2Response.CommonPrefixes, 3)
require.Equal(t, "foo%2B1/", listV2Response.CommonPrefixes[0].Prefix)
require.Equal(t, "foo/", listV2Response.CommonPrefixes[1].Prefix)
require.Equal(t, "quux%20ab/", listV2Response.CommonPrefixes[2].Prefix)
}
func TestS3BucketListDelimiterBasic(t *testing.T) {
tc := prepareHandlerContext(t)
@ -89,7 +151,7 @@ func TestS3BucketListDelimiterBasic(t *testing.T) {
bktInfo, _ := createBucketAndObject(tc, bktName, objects[0])
for _, objName := range objects[1:] {
createTestObject(tc, bktInfo, objName)
createTestObject(tc, bktInfo, objName, encryption.Params{})
}
listV1Response := listObjectsV1(tc, bktName, "", "/", "", -1)
@ -108,7 +170,7 @@ func TestS3BucketListV2DelimiterPercentage(t *testing.T) {
bktInfo, _ := createBucketAndObject(tc, bktName, objects[0])
for _, objName := range objects[1:] {
createTestObject(tc, bktInfo, objName)
createTestObject(tc, bktInfo, objName, encryption.Params{})
}
listV2Response := listObjectsV2(tc, bktName, "", "%", "", "", -1)
@ -128,7 +190,7 @@ func TestS3BucketListV2DelimiterPrefix(t *testing.T) {
bktInfo, _ := createBucketAndObject(tc, bktName, objects[0])
for _, objName := range objects[1:] {
createTestObject(tc, bktInfo, objName)
createTestObject(tc, bktInfo, objName, encryption.Params{})
}
var empty []string
@ -149,6 +211,41 @@ func TestS3BucketListV2DelimiterPrefix(t *testing.T) {
validateListV2(t, tc, bktName, prefix, delim, "", 2, false, true, []string{"boo/bar"}, []string{"boo/baz/"})
}
func TestMintVersioningListObjectVersionsVersionIDContinuation(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "mint-bucket-for-listing-versions", "objName"
createTestBucket(hc, bktName)
putBucketVersioning(t, hc, bktName, true)
length := 10
objects := make([]string, length)
for i := 0; i < length; i++ {
objects[i] = objName
putObject(hc, bktName, objName)
}
maxKeys := 5
page1 := listObjectsVersions(hc, bktName, "", "", "", "", maxKeys)
require.Len(t, page1.Version, maxKeys)
checkVersionsNames(t, page1, objects)
require.Equal(t, page1.Version[maxKeys-1].VersionID, page1.NextVersionIDMarker)
require.True(t, page1.IsTruncated)
page2 := listObjectsVersions(hc, bktName, "", "", page1.NextKeyMarker, page1.NextVersionIDMarker, maxKeys)
require.Len(t, page2.Version, maxKeys)
checkVersionsNames(t, page1, objects)
require.Empty(t, page2.NextVersionIDMarker)
require.False(t, page2.IsTruncated)
}
func checkVersionsNames(t *testing.T, versions *ListObjectsVersionsResponse, names []string) {
for i, v := range versions.Version {
require.Equal(t, names[i], v.Key)
}
}
func listObjectsV2(hc *handlerContext, bktName, prefix, delimiter, startAfter, continuationToken string, maxKeys int) *ListObjectsV2Response {
query := prepareCommonListObjectsQuery(prefix, delimiter, maxKeys)
if len(startAfter) != 0 {
@ -215,3 +312,20 @@ func listObjectsV1(hc *handlerContext, bktName, prefix, delimiter, marker string
parseTestResponse(hc.t, w, res)
return res
}
func listObjectsVersions(hc *handlerContext, bktName, prefix, delimiter, keyMarker, versionIDMarker string, maxKeys int) *ListObjectsVersionsResponse {
query := prepareCommonListObjectsQuery(prefix, delimiter, maxKeys)
if len(keyMarker) != 0 {
query.Add("key-marker", keyMarker)
}
if len(versionIDMarker) != 0 {
query.Add("version-id-marker", versionIDMarker)
}
w, r := prepareTestFullRequest(hc, bktName, "", query, nil)
hc.Handler().ListBucketObjectVersionsHandler(w, r)
assertStatus(hc.t, w, http.StatusOK)
res := &ListObjectsVersionsResponse{}
parseTestResponse(hc.t, w, res)
return res
}


@ -6,7 +6,6 @@ import (
"encoding/base64"
"encoding/json"
"encoding/xml"
errorsStd "errors"
"fmt"
"io"
"net"
@ -214,6 +213,9 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
if expires := r.Header.Get(api.Expires); len(expires) > 0 {
metadata[api.Expires] = expires
}
if contentLanguage := r.Header.Get(api.ContentLanguage); len(contentLanguage) > 0 {
metadata[api.ContentLanguage] = contentLanguage
}
encryptionParams, err := formEncryptionParams(r)
if err != nil {
@ -242,6 +244,7 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
Size: size,
Header: metadata,
Encryption: encryptionParams,
ContentMD5: r.Header.Get(api.ContentMD5),
}
params.CopiesNumbers, err = h.pickCopiesNumbers(metadata, bktInfo.LocationConstraint)
@ -324,7 +327,8 @@ func (h *handler) PutObjectHandler(w http.ResponseWriter, r *http.Request) {
addSSECHeaders(w.Header(), r.Header)
}
w.Header().Set(api.ETag, objInfo.HashSum)
w.Header().Set(api.ETag, objInfo.ETag(h.cfg.MD5Enabled()))
middleware.WriteSuccessResponseHeadersOnly(w)
}
@ -348,7 +352,7 @@ func (h *handler) getBodyReader(r *http.Request) (io.ReadCloser, error) {
}
r.Header.Set(api.ContentEncoding, strings.Join(resultContentEncoding, ","))
if !chunkedEncoding && !h.cfg.Kludge.BypassContentEncodingInChunks() {
if !chunkedEncoding && !h.cfg.BypassContentEncodingInChunks() {
return nil, fmt.Errorf("%w: request is not chunk encoded, encodings '%s'",
errors.GetAPIError(errors.ErrInvalidEncodingMethod), strings.Join(encodings, ","))
}
@ -371,16 +375,38 @@ func (h *handler) getBodyReader(r *http.Request) (io.ReadCloser, error) {
}
func formEncryptionParams(r *http.Request) (enc encryption.Params, err error) {
sseCustomerAlgorithm := r.Header.Get(api.AmzServerSideEncryptionCustomerAlgorithm)
sseCustomerKey := r.Header.Get(api.AmzServerSideEncryptionCustomerKey)
sseCustomerKeyMD5 := r.Header.Get(api.AmzServerSideEncryptionCustomerKeyMD5)
return formEncryptionParamsBase(r, false)
}
func formCopySourceEncryptionParams(r *http.Request) (enc encryption.Params, err error) {
return formEncryptionParamsBase(r, true)
}
func formEncryptionParamsBase(r *http.Request, isCopySource bool) (enc encryption.Params, err error) {
var sseCustomerAlgorithm, sseCustomerKey, sseCustomerKeyMD5 string
if isCopySource {
sseCustomerAlgorithm = r.Header.Get(api.AmzCopySourceServerSideEncryptionCustomerAlgorithm)
sseCustomerKey = r.Header.Get(api.AmzCopySourceServerSideEncryptionCustomerKey)
sseCustomerKeyMD5 = r.Header.Get(api.AmzCopySourceServerSideEncryptionCustomerKeyMD5)
} else {
sseCustomerAlgorithm = r.Header.Get(api.AmzServerSideEncryptionCustomerAlgorithm)
sseCustomerKey = r.Header.Get(api.AmzServerSideEncryptionCustomerKey)
sseCustomerKeyMD5 = r.Header.Get(api.AmzServerSideEncryptionCustomerKeyMD5)
}
if len(sseCustomerAlgorithm) == 0 && len(sseCustomerKey) == 0 && len(sseCustomerKeyMD5) == 0 {
return
}
if r.TLS == nil {
return enc, errorsStd.New("encryption available only when TLS is enabled")
return enc, errors.GetAPIError(errors.ErrInsecureSSECustomerRequest)
}
if len(sseCustomerKey) > 0 && len(sseCustomerAlgorithm) == 0 {
return enc, errors.GetAPIError(errors.ErrMissingSSECustomerAlgorithm)
}
if len(sseCustomerAlgorithm) > 0 && len(sseCustomerKey) == 0 {
return enc, errors.GetAPIError(errors.ErrMissingSSECustomerKey)
}
if sseCustomerAlgorithm != layer.AESEncryptionAlgorithm {
@ -389,10 +415,16 @@ func formEncryptionParams(r *http.Request) (enc encryption.Params, err error) {
key, err := base64.StdEncoding.DecodeString(sseCustomerKey)
if err != nil {
if isCopySource {
return enc, errors.GetAPIError(errors.ErrInvalidSSECustomerParameters)
}
return enc, errors.GetAPIError(errors.ErrInvalidSSECustomerKey)
}
if len(key) != layer.AESKeySize {
if isCopySource {
return enc, errors.GetAPIError(errors.ErrInvalidSSECustomerParameters)
}
return enc, errors.GetAPIError(errors.ErrInvalidSSECustomerKey)
}
@ -433,7 +465,7 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
if tagging := auth.MultipartFormValue(r, "tagging"); tagging != "" {
buffer := bytes.NewBufferString(tagging)
tagSet, err = readTagSet(buffer)
tagSet, err = h.readTagSet(buffer)
if err != nil {
h.logAndSendError(w, "could not read tag set", reqInfo, err)
return
@ -559,7 +591,7 @@ func (h *handler) PostObject(w http.ResponseWriter, r *http.Request) {
resp := &PostResponse{
Bucket: objInfo.Bucket,
Key: objInfo.Name,
ETag: objInfo.HashSum,
ETag: objInfo.ETag(h.cfg.MD5Enabled()),
}
w.WriteHeader(status)
if _, err = w.Write(middleware.EncodeResponse(resp)); err != nil {
@ -742,7 +774,7 @@ func (h *handler) CreateBucketHandler(w http.ResponseWriter, r *http.Request) {
return
}
createParams, err := parseLocationConstraint(r)
createParams, err := h.parseLocationConstraint(r)
if err != nil {
h.logAndSendError(w, "could not parse body", reqInfo, err)
return
@ -797,7 +829,7 @@ func (h *handler) CreateBucketHandler(w http.ResponseWriter, r *http.Request) {
}
func (h handler) setPolicy(prm *layer.CreateBucketParams, locationConstraint string, userPolicies []*accessbox.ContainerPolicy) error {
prm.Policy = h.cfg.Policy.DefaultPlacementPolicy()
prm.Policy = h.cfg.DefaultPlacementPolicy()
prm.LocationConstraint = locationConstraint
if locationConstraint == "" {
@ -811,7 +843,7 @@ func (h handler) setPolicy(prm *layer.CreateBucketParams, locationConstraint str
}
}
if policy, ok := h.cfg.Policy.PlacementPolicy(locationConstraint); ok {
if policy, ok := h.cfg.PlacementPolicy(locationConstraint); ok {
prm.Policy = policy
return nil
}
@ -859,13 +891,13 @@ func isAlphaNum(char int32) bool {
return 'a' <= char && char <= 'z' || '0' <= char && char <= '9'
}
func parseLocationConstraint(r *http.Request) (*createBucketParams, error) {
func (h *handler) parseLocationConstraint(r *http.Request) (*createBucketParams, error) {
if r.ContentLength == 0 {
return new(createBucketParams), nil
}
params := new(createBucketParams)
if err := xml.NewDecoder(r.Body).Decode(params); err != nil {
if err := h.cfg.NewXMLDecoder(r.Body).Decode(params); err != nil {
return nil, errors.GetAPIError(errors.ErrMalformedXML)
}
return params, nil


@ -3,14 +3,14 @@ package handler
import (
"bytes"
"context"
"crypto/rand"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/json"
"errors"
"io"
"mime/multipart"
"net/http"
"net/http/httptest"
"runtime"
"strconv"
"strings"
"testing"
@ -176,22 +176,35 @@ func TestPutObjectWithStreamBodyError(t *testing.T) {
checkNotFound(t, tc, bktName, objName, emptyVersion)
}
func TestPutObjectWithWrapReaderDiscardOnError(t *testing.T) {
func TestPutObjectWithInvalidContentMD5(t *testing.T) {
tc := prepareHandlerContext(t)
tc.config.md5Enabled = true
bktName, objName := "bucket-for-put", "object-for-put"
createTestBucket(tc, bktName)
content := make([]byte, 128*1024)
_, err := rand.Read(content)
require.NoError(t, err)
content := []byte("content")
w, r := prepareTestPayloadRequest(tc, bktName, objName, bytes.NewReader(content))
tc.tp.SetObjectPutError(objName, errors.New("some error"))
numGoroutineBefore := runtime.NumGoroutine()
r.Header.Set(api.ContentMD5, base64.StdEncoding.EncodeToString([]byte("invalid")))
tc.Handler().PutObjectHandler(w, r)
numGoroutineAfter := runtime.NumGoroutine()
require.Equal(t, numGoroutineBefore, numGoroutineAfter, "goroutines shouldn't leak during put object")
assertS3Error(t, w, s3errors.GetAPIError(s3errors.ErrInvalidDigest))
checkNotFound(t, tc, bktName, objName, emptyVersion)
}
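
// For contrast with the invalid digest above, a client forms a valid
// Content-MD5 from the same payload roughly like this (a sketch; it
// mirrors what the gateway recomputes server-side):
//
//	sum := md5.Sum(content)
//	r.Header.Set(api.ContentMD5, base64.StdEncoding.EncodeToString(sum[:]))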
func TestPutObjectWithEnabledMD5(t *testing.T) {
tc := prepareHandlerContext(t)
tc.config.md5Enabled = true
bktName, objName := "bucket-for-put", "object-for-put"
createTestBucket(tc, bktName)
content := []byte("content")
md5Hash := md5.New()
md5Hash.Write(content)
w, r := prepareTestPayloadRequest(tc, bktName, objName, bytes.NewReader(content))
tc.Handler().PutObjectHandler(w, r)
require.Equal(t, hex.EncodeToString(md5Hash.Sum(nil)), w.Header().Get(api.ETag))
}
func TestPutObjectWithStreamBodyAWSExample(t *testing.T) {
@ -230,7 +243,7 @@ func TestPutChunkedTestContentEncoding(t *testing.T) {
hc.Handler().PutObjectHandler(w, req)
assertS3Error(t, w, s3errors.GetAPIError(s3errors.ErrInvalidEncodingMethod))
hc.kludge.bypassContentEncodingInChunks = true
hc.config.bypassContentEncodingInChunks = true
w, req, _ = getChunkedRequest(hc.context, t, bktName, objName)
req.Header.Set(api.ContentEncoding, "gzip")
hc.Handler().PutObjectHandler(w, req)
@ -292,7 +305,7 @@ func getChunkedRequest(ctx context.Context, t *testing.T, bktName, objName strin
}))
req = req.WithContext(middleware.SetBoxData(req.Context(), &accessbox.Box{
Gate: &accessbox.GateData{
AccessKey: AWSSecretAccessKey,
SecretKey: AWSSecretAccessKey,
},
}))
@ -344,3 +357,18 @@ func getObjectAttribute(obj *object.Object, attrName string) string {
}
return ""
}
func TestPutObjectWithContentLanguage(t *testing.T) {
tc := prepareHandlerContext(t)
expectedContentLanguage := "en"
bktName, objName := "bucket-1", "object-1"
createTestBucket(tc, bktName)
w, r := prepareTestRequest(tc, bktName, objName, nil)
r.Header.Set(api.ContentLanguage, expectedContentLanguage)
tc.Handler().PutObjectHandler(w, r)
tc.Handler().HeadObjectHandler(w, r)
require.Equal(t, expectedContentLanguage, w.Header().Get(api.ContentLanguage))
}


@ -110,7 +110,7 @@ type Object struct {
Owner *Owner `xml:"Owner,omitempty"`
// Class of storage used to store the object.
StorageClass string `xml:"StorageClass,omitempty"`
StorageClass string `xml:"StorageClass"`
}
// ObjectVersionResponse container for object version in the response of ListBucketObjectVersionsHandler.
@ -121,7 +121,7 @@ type ObjectVersionResponse struct {
LastModified string `xml:"LastModified"`
Owner Owner `xml:"Owner"`
Size uint64 `xml:"Size"`
StorageClass string `xml:"StorageClass,omitempty"` // is empty!!
StorageClass string `xml:"StorageClass"`
VersionID string `xml:"VersionId"`
}


@ -199,7 +199,7 @@ func newSignV4ChunkedReader(req *http.Request) (io.ReadCloser, error) {
return nil, errs.GetAPIError(errs.ErrAuthorizationHeaderMalformed)
}
currentCredentials := credentials.NewStaticCredentials(authHeaders.AccessKeyID, box.Gate.AccessKey, "")
currentCredentials := credentials.NewStaticCredentials(authHeaders.AccessKeyID, box.Gate.SecretKey, "")
seed, err := hex.DecodeString(authHeaders.SignatureV4)
if err != nil {
return nil, errs.GetAPIError(errs.ErrSignatureDoesNotMatch)


@ -1,7 +1,6 @@
package handler
import (
"encoding/xml"
"io"
"net/http"
"sort"
@ -29,7 +28,7 @@ func (h *handler) PutObjectTaggingHandler(w http.ResponseWriter, r *http.Request
ctx := r.Context()
reqInfo := middleware.GetReqInfo(ctx)
tagSet, err := readTagSet(r.Body)
tagSet, err := h.readTagSet(r.Body)
if err != nil {
h.logAndSendError(w, "could not read tag set", reqInfo, err)
return
@ -153,7 +152,7 @@ func (h *handler) DeleteObjectTaggingHandler(w http.ResponseWriter, r *http.Requ
func (h *handler) PutBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
reqInfo := middleware.GetReqInfo(r.Context())
tagSet, err := readTagSet(r.Body)
tagSet, err := h.readTagSet(r.Body)
if err != nil {
h.logAndSendError(w, "could not read tag set", reqInfo, err)
return
@ -208,9 +207,9 @@ func (h *handler) DeleteBucketTaggingHandler(w http.ResponseWriter, r *http.Requ
w.WriteHeader(http.StatusNoContent)
}
func readTagSet(reader io.Reader) (map[string]string, error) {
func (h *handler) readTagSet(reader io.Reader) (map[string]string, error) {
tagging := new(Tagging)
if err := xml.NewDecoder(reader).Decode(tagging); err != nil {
if err := h.cfg.NewXMLDecoder(reader).Decode(tagging); err != nil {
return nil, errors.GetAPIError(errors.ErrMalformedXML)
}
@ -220,6 +219,9 @@ func readTagSet(reader io.Reader) (map[string]string, error) {
tagSet := make(map[string]string, len(tagging.TagSet))
for _, tag := range tagging.TagSet {
if _, ok := tagSet[tag.Key]; ok {
return nil, errors.GetAPIError(errors.ErrInvalidTagKeyUniqueness)
}
tagSet[tag.Key] = tag.Value
}


@ -1,9 +1,11 @@
package handler
import (
"net/http"
"strings"
"testing"
apiErrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"github.com/stretchr/testify/require"
)
@ -44,3 +46,66 @@ func TestTagsValidity(t *testing.T) {
}
}
}
func TestPutObjectTaggingCheckUniqueness(t *testing.T) {
hc := prepareHandlerContext(t)
bktName, objName := "bucket-1", "object-1"
createBucketAndObject(hc, bktName, objName)
for _, tc := range []struct {
name string
body *Tagging
error bool
}{
{
name: "Two tags with unique keys",
body: &Tagging{
TagSet: []Tag{
{
Key: "key-1",
Value: "val-1",
},
{
Key: "key-2",
Value: "val-2",
},
},
},
error: false,
},
{
name: "Two tags with the same keys",
body: &Tagging{
TagSet: []Tag{
{
Key: "key-1",
Value: "val-1",
},
{
Key: "key-1",
Value: "val-2",
},
},
},
error: true,
},
} {
t.Run(tc.name, func(t *testing.T) {
w, r := prepareTestRequest(hc, bktName, objName, tc.body)
hc.Handler().PutObjectTaggingHandler(w, r)
if tc.error {
assertS3Error(t, w, apiErrors.GetAPIError(apiErrors.ErrInvalidTagKeyUniqueness))
return
}
assertStatus(t, w, http.StatusOK)
tagging := getObjectTagging(t, hc, bktName, objName, emptyVersion)
require.Len(t, tagging.TagSet, 2)
require.Equal(t, "key-1", tagging.TagSet[0].Key)
require.Equal(t, "val-1", tagging.TagSet[0].Value)
require.Equal(t, "key-2", tagging.TagSet[1].Key)
require.Equal(t, "val-2", tagging.TagSet[1].Value)
})
}
}


@ -3,6 +3,7 @@ package handler
import (
"context"
"errors"
"fmt"
"net/http"
"strconv"
"strings"
@ -15,6 +16,7 @@ import (
frosterrors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/frostfs/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
"go.opentelemetry.io/otel/trace"
"go.uber.org/zap"
)
@ -27,6 +29,7 @@ func (h *handler) reqLogger(ctx context.Context) *zap.Logger {
}
func (h *handler) logAndSendError(w http.ResponseWriter, logText string, reqInfo *middleware.ReqInfo, err error, additional ...zap.Field) {
err = handleDeleteMarker(w, err)
code := middleware.WriteErrorResponse(w, reqInfo, transformToS3Error(err))
fields := []zap.Field{
zap.Int("status", code),
@ -37,20 +40,20 @@ func (h *handler) logAndSendError(w http.ResponseWriter, logText string, reqInfo
zap.String("description", logText),
zap.Error(err)}
fields = append(fields, additional...)
if traceID, err := trace.TraceIDFromHex(reqInfo.TraceID); err == nil && traceID.IsValid() {
fields = append(fields, zap.String("trace_id", reqInfo.TraceID))
}
h.log.Error(logs.RequestFailed, fields...) // consider using h.reqLogger (it requires accept context.Context or http.Request)
}
func (h *handler) logAndSendErrorNoHeader(w http.ResponseWriter, logText string, reqInfo *middleware.ReqInfo, err error, additional ...zap.Field) {
middleware.WriteErrorResponseNoHeader(w, reqInfo, transformToS3Error(err))
fields := []zap.Field{
zap.String("request_id", reqInfo.RequestID),
zap.String("method", reqInfo.API),
zap.String("bucket", reqInfo.BucketName),
zap.String("object", reqInfo.ObjectName),
zap.String("description", logText),
zap.Error(err)}
fields = append(fields, additional...)
h.log.Error(logs.RequestFailed, fields...) // consider using h.reqLogger (it requires accept context.Context or http.Request)
func handleDeleteMarker(w http.ResponseWriter, err error) error {
var target layer.DeleteMarkerError
if !errors.As(err, &target) {
return err
}
w.Header().Set(api.AmzDeleteMarker, "true")
return fmt.Errorf("%w: %s", s3errors.GetAPIError(target.ErrorCode), err)
}
func transformToS3Error(err error) error {


@ -1,7 +1,6 @@
package handler
import (
"encoding/xml"
"net/http"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
@ -14,7 +13,7 @@ func (h *handler) PutBucketVersioningHandler(w http.ResponseWriter, r *http.Requ
reqInfo := middleware.GetReqInfo(r.Context())
configuration := new(VersioningConfiguration)
if err := xml.NewDecoder(r.Body).Decode(configuration); err != nil {
if err := h.cfg.NewXMLDecoder(r.Body).Decode(configuration); err != nil {
h.logAndSendError(w, "couldn't decode versioning configuration", reqInfo, errors.GetAPIError(errors.ErrIllegalVersioningConfigurationException))
return
}


@ -61,11 +61,16 @@ const (
AmzObjectAttributes = "X-Amz-Object-Attributes"
AmzMaxParts = "X-Amz-Max-Parts"
AmzPartNumberMarker = "X-Amz-Part-Number-Marker"
AmzStorageClass = "X-Amz-Storage-Class"
AmzServerSideEncryptionCustomerAlgorithm = "x-amz-server-side-encryption-customer-algorithm"
AmzServerSideEncryptionCustomerKey = "x-amz-server-side-encryption-customer-key"
AmzServerSideEncryptionCustomerKeyMD5 = "x-amz-server-side-encryption-customer-key-MD5"
AmzCopySourceServerSideEncryptionCustomerAlgorithm = "x-amz-copy-source-server-side-encryption-customer-algorithm"
AmzCopySourceServerSideEncryptionCustomerKey = "x-amz-copy-source-server-side-encryption-customer-key"
AmzCopySourceServerSideEncryptionCustomerKeyMD5 = "x-amz-copy-source-server-side-encryption-customer-key-MD5"
OwnerID = "X-Owner-Id"
ContainerID = "X-Container-Id"
ContainerName = "X-Container-Name"
@ -89,6 +94,8 @@ const (
DefaultLocationConstraint = "default"
StreamingContentSHA256 = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD"
DefaultStorageClass = "STANDARD"
)
// S3 request query params.
@ -114,6 +121,7 @@ var SystemMetadata = map[string]struct{}{
ContentType: {},
LastModified: {},
ETag: {},
ContentLanguage: {},
}
func IsSignedStreamingV4(r *http.Request) bool {


@ -59,6 +59,7 @@ func (n *layer) containerInfo(ctx context.Context, idCnr cid.ID) (*data.BucketIn
}
info.Created = container.CreatedAt(cnr)
info.LocationConstraint = cnr.Attribute(attributeLocationConstraint)
info.HomomorphicHashDisabled = container.IsHomomorphicHashingDisabled(cnr)
attrLockEnabled := cnr.Attribute(AttributeLockEnabled)
if len(attrLockEnabled) > 0 {
@ -122,7 +123,7 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
})
}
idCnr, err := n.frostFS.CreateContainer(ctx, PrmContainerCreate{
res, err := n.frostFS.CreateContainer(ctx, PrmContainerCreate{
Creator: bktInfo.Owner,
Policy: p.Policy,
Name: p.Name,
@ -134,7 +135,8 @@ func (n *layer) createContainer(ctx context.Context, p *CreateBucketParams) (*da
return nil, fmt.Errorf("create container: %w", err)
}
bktInfo.CID = idCnr
bktInfo.CID = res.ContainerID
bktInfo.HomomorphicHashDisabled = res.HomomorphicHashDisabled
if err = n.setContainerEACLTable(ctx, bktInfo.CID, p.EACL, p.SessionEACL); err != nil {
return nil, fmt.Errorf("set container eacl: %w", err)


@ -3,7 +3,6 @@ package layer
import (
"bytes"
"context"
"encoding/xml"
errorsStd "errors"
"fmt"
"io"
@ -25,7 +24,7 @@ func (n *layer) PutBucketCORS(ctx context.Context, p *PutCORSParams) error {
cors = &data.CORSConfiguration{}
)
if err := xml.NewDecoder(tee).Decode(cors); err != nil {
if err := p.NewDecoder(tee).Decode(cors); err != nil {
return fmt.Errorf("xml decode cors: %w", err)
}
@ -45,7 +44,7 @@ func (n *layer) PutBucketCORS(ctx context.Context, p *PutCORSParams) error {
CopiesNumber: p.CopiesNumbers,
}
_, objID, _, err := n.objectPutAndHash(ctx, prm, p.BktInfo)
_, objID, _, _, err := n.objectPutAndHash(ctx, prm, p.BktInfo)
if err != nil {
return fmt.Errorf("put system object: %w", err)
}


@ -10,6 +10,7 @@ import (
"fmt"
"io"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"github.com/minio/sio"
)
@ -100,8 +101,11 @@ func (p Params) HMAC() ([]byte, []byte, error) {
// MatchObjectEncryption checks if encryption params are valid for provided object.
func (p Params) MatchObjectEncryption(encInfo ObjectEncryption) error {
if p.Enabled() != encInfo.Enabled {
return errorsStd.New("invalid encryption view")
if p.Enabled() && !encInfo.Enabled {
return errors.GetAPIError(errors.ErrInvalidEncryptionParameters)
}
if !p.Enabled() && encInfo.Enabled {
return errors.GetAPIError(errors.ErrSSEEncryptedObject)
}
if !encInfo.Enabled {
@ -122,7 +126,7 @@ func (p Params) MatchObjectEncryption(encInfo ObjectEncryption) error {
mac.Write(hmacSalt)
expectedHmacKey := mac.Sum(nil)
if !bytes.Equal(expectedHmacKey, hmacKey) {
return errorsStd.New("mismatched hmac key")
return errors.GetAPIError(errors.ErrInvalidSSECustomerParameters)
}
return nil
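
The split is easiest to read as a mapping from each mismatch direction to its own S3 error; a sketch, assuming the zero Params value reports Enabled() == false (the key literal is an arbitrary 32-byte value):

func exampleMatchErrors() {
	key := []byte("12345678901234567890123456789012")
	params, _ := NewParams(key)

	// Request carries SSE-C params but the stored object is plaintext:
	_ = params.MatchObjectEncryption(ObjectEncryption{Enabled: false})
	// -> errors.GetAPIError(errors.ErrInvalidEncryptionParameters)

	// Request carries no SSE-C params but the stored object is encrypted:
	_ = Params{}.MatchObjectEncryption(ObjectEncryption{Enabled: true})
	// -> errors.GetAPIError(errors.ErrSSEEncryptedObject)
}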


@ -43,6 +43,12 @@ type PrmContainerCreate struct {
AdditionalAttributes [][2]string
}
// ContainerCreateResult is a result parameter of FrostFS.CreateContainer operation.
type ContainerCreateResult struct {
ContainerID cid.ID
HomomorphicHashDisabled bool
}
// PrmAuth groups authentication parameters for the FrostFS operation.
type PrmAuth struct {
// Bearer token to be used for the operation. Overlaps PrivateKey. Optional.
@ -114,6 +120,12 @@ type PrmObjectCreate struct {
// Enables client side object preparing.
ClientCut bool
// Disables using Tillich-Zémor hash for payload.
WithoutHomomorphicHash bool
// Sets max buffer size to read payload.
BufferMaxSize uint64
}
// PrmObjectDelete groups parameters of FrostFS.DeleteObject operation.
@ -162,7 +174,7 @@ type FrostFS interface {
//
// It returns exactly one non-zero value. It returns any error encountered which
// prevented the container from being created.
CreateContainer(context.Context, PrmContainerCreate) (cid.ID, error)
CreateContainer(context.Context, PrmContainerCreate) (*ContainerCreateResult, error)
// Container reads a container from FrostFS by ID.
//


@ -27,6 +27,11 @@ import (
type FeatureSettingsMock struct {
clientCut bool
md5Enabled bool
}
func (k *FeatureSettingsMock) BufferMaxSizeForPut() uint64 {
return 0
}
func (k *FeatureSettingsMock) ClientCut() bool {
@ -37,6 +42,14 @@ func (k *FeatureSettingsMock) SetClientCut(clientCut bool) {
k.clientCut = clientCut
}
func (k *FeatureSettingsMock) MD5Enabled() bool {
return k.md5Enabled
}
func (k *FeatureSettingsMock) SetMD5Enabled(md5Enabled bool) {
k.md5Enabled = md5Enabled
}
type TestFrostFS struct {
FrostFS
@ -114,7 +127,7 @@ func (t *TestFrostFS) ContainerID(name string) (cid.ID, error) {
return cid.ID{}, fmt.Errorf("not found")
}
func (t *TestFrostFS) CreateContainer(_ context.Context, prm PrmContainerCreate) (cid.ID, error) {
func (t *TestFrostFS) CreateContainer(_ context.Context, prm PrmContainerCreate) (*ContainerCreateResult, error) {
var cnr container.Container
cnr.Init()
cnr.SetOwner(prm.Creator)
@ -141,14 +154,14 @@ func (t *TestFrostFS) CreateContainer(_ context.Context, prm PrmContainerCreate)
b := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, b); err != nil {
return cid.ID{}, err
return nil, err
}
var id cid.ID
id.SetSHA256(sha256.Sum256(b))
t.containers[id.EncodeToString()] = &cnr
return id, nil
return &ContainerCreateResult{ContainerID: id}, nil
}
func (t *TestFrostFS) DeleteContainer(_ context.Context, cnrID cid.ID, _ *session.Container) error {


@ -4,6 +4,7 @@ import (
"context"
"crypto/ecdsa"
"crypto/rand"
"encoding/xml"
"fmt"
"io"
"net/url"
@ -48,6 +49,8 @@ type (
FeatureSettings interface {
ClientCut() bool
BufferMaxSizeForPut() uint64
MD5Enabled() bool
}
layer struct {
@ -117,6 +120,8 @@ type (
Lock *data.ObjectLock
Encryption encryption.Params
CopiesNumbers []uint32
CompleteMD5Hash string
ContentMD5 string
}
PutCombinedObjectParams struct {
@ -145,6 +150,7 @@ type (
BktInfo *data.BucketInfo
Reader io.Reader
CopiesNumbers []uint32
NewDecoder func(io.Reader) *xml.Decoder
}
// CopyObjectParams stores object copy request parameters.
@ -154,11 +160,12 @@ type (
ScrBktInfo *data.BucketInfo
DstBktInfo *data.BucketInfo
DstObject string
SrcSize uint64
DstSize uint64
Header map[string]string
Range *RangeParams
Lock *data.ObjectLock
Encryption encryption.Params
SrcEncryption encryption.Params
DstEncryption encryption.Params
CopiesNumbers []uint32
}
// CreateBucketParams stores bucket create request parameters.
@ -285,6 +292,13 @@ const (
AttributeFrostfsCopiesNumber = "frostfs-copies-number" // such format to match X-Amz-Meta-Frostfs-Copies-Number header
)
var EncryptionMetadata = map[string]struct{}{
AttributeEncryptionAlgorithm: {},
AttributeDecryptedSize: {},
AttributeHMACSalt: {},
AttributeHMACKey: {},
}
func (t *VersionedObject) String() string {
return t.Name + ":" + t.VersionID
}
@ -577,7 +591,7 @@ func (n *layer) CopyObject(ctx context.Context, p *CopyObjectParams) (*data.Exte
Versioned: p.SrcVersioned,
Range: p.Range,
BucketInfo: p.ScrBktInfo,
Encryption: p.Encryption,
Encryption: p.SrcEncryption,
})
if err != nil {
return nil, fmt.Errorf("get object to copy: %w", err)
@ -586,10 +600,10 @@ func (n *layer) CopyObject(ctx context.Context, p *CopyObjectParams) (*data.Exte
return n.PutObject(ctx, &PutObjectParams{
BktInfo: p.DstBktInfo,
Object: p.DstObject,
Size: p.SrcSize,
Size: p.DstSize,
Reader: objPayload,
Header: p.Header,
Encryption: p.Encryption,
Encryption: p.DstEncryption,
CopiesNumbers: p.CopiesNumbers,
})
}

View file

@ -93,15 +93,22 @@ func newMultiObjectReader(ctx context.Context, cfg multiObjectReaderConfig) (*mu
}
func findStartPart(cfg multiObjectReaderConfig) (index int, offset uint64) {
return findPartByPosition(cfg.off, cfg.parts)
position := cfg.off
for i, part := range cfg.parts {
// Strict inequality when searching for start position to avoid reading zero length part.
if position < part.Size {
return i, position
}
position -= part.Size
}
return -1, 0
}
func findEndPart(cfg multiObjectReaderConfig) (index int, length uint64) {
return findPartByPosition(cfg.off+cfg.ln, cfg.parts)
}
func findPartByPosition(position uint64, parts []partObj) (index int, positionInPart uint64) {
for i, part := range parts {
position := cfg.off + cfg.ln
for i, part := range cfg.parts {
// Non-strict inequality when searching for end position to avoid out of payload range error.
if position <= part.Size {
return i, position
}
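
A worked example pins down the boundary semantics (sizes are illustrative; field names follow the code above):

// With parts of sizes [5, 5, 5] and a range off=5, ln=5 (exactly the
// second part):
//   findStartPart -> (1, 0): the strict `<` skips the zero-length tail
//   of part 0 instead of returning (0, 5).
//   findEndPart   -> (1, 5): the non-strict `<=` ends inside part 1
//   instead of touching a zero-length head of part 2.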


@ -90,6 +90,16 @@ func TestMultiReader(t *testing.T) {
off: parts[0].Size - 4,
ln: parts[1].Size + 8,
},
{
name: "second part",
off: parts[0].Size,
ln: parts[1].Size,
},
{
name: "second and third",
off: parts[0].Size,
ln: parts[1].Size + parts[2].Size,
},
{
name: "offset out of range",
off: uint64(len(fullPayload) + 1),


@ -3,6 +3,8 @@ package layer
import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/json"
"errors"
@ -68,6 +70,7 @@ type (
PartNumber int
Size uint64
Reader io.Reader
ContentMD5 string
}
UploadCopyParams struct {
@ -75,6 +78,7 @@ type (
Info *UploadInfoParams
SrcObjInfo *data.ObjectInfo
SrcBktInfo *data.BucketInfo
SrcEncryption encryption.Params
PartNumber int
Range *RangeParams
}
@ -197,7 +201,7 @@ func (n *layer) UploadPart(ctx context.Context, p *UploadPartParams) (string, er
return "", err
}
return objInfo.HashSum, nil
return objInfo.ETag(n.features.MD5Enabled()), nil
}
func (n *layer) uploadPart(ctx context.Context, multipartInfo *data.MultipartInfo, p *UploadPartParams) (*data.ObjectInfo, error) {
@ -230,10 +234,28 @@ func (n *layer) uploadPart(ctx context.Context, multipartInfo *data.MultipartInf
prm.Attributes[0][0], prm.Attributes[0][1] = UploadIDAttributeName, p.Info.UploadID
prm.Attributes[1][0], prm.Attributes[1][1] = UploadPartNumberAttributeName, strconv.Itoa(p.PartNumber)
size, id, hash, err := n.objectPutAndHash(ctx, prm, bktInfo)
size, id, hash, md5Hash, err := n.objectPutAndHash(ctx, prm, bktInfo)
if err != nil {
return nil, err
}
if len(p.ContentMD5) > 0 {
hashBytes, err := base64.StdEncoding.DecodeString(p.ContentMD5)
if err != nil {
return nil, s3errors.GetAPIError(s3errors.ErrInvalidDigest)
}
if hex.EncodeToString(hashBytes) != hex.EncodeToString(md5Hash) {
prm := PrmObjectDelete{
Object: id,
Container: bktInfo.CID,
}
n.prepareAuthParameters(ctx, &prm.PrmAuth, bktInfo.Owner)
err = n.frostFS.DeleteObject(ctx, prm)
if err != nil {
n.reqLogger(ctx).Debug(logs.FailedToDeleteObject, zap.Stringer("cid", bktInfo.CID), zap.Stringer("oid", id))
}
return nil, s3errors.GetAPIError(s3errors.ErrInvalidDigest)
}
}
if p.Info.Encryption.Enabled() {
size = decSize
}
@ -250,6 +272,7 @@ func (n *layer) uploadPart(ctx context.Context, multipartInfo *data.MultipartInf
Size: size,
ETag: hex.EncodeToString(hash),
Created: prm.CreationTime,
MD5: hex.EncodeToString(md5Hash),
}
oldPartID, err := n.treeService.AddPart(ctx, bktInfo, multipartInfo.ID, partInfo)
@ -274,6 +297,7 @@ func (n *layer) uploadPart(ctx context.Context, multipartInfo *data.MultipartInf
Size: partInfo.Size,
Created: partInfo.Created,
HashSum: partInfo.ETag,
MD5Sum: partInfo.MD5,
}
return objInfo, nil
@ -293,6 +317,7 @@ func (n *layer) UploadPartCopy(ctx context.Context, p *UploadCopyParams) (*data.
if objSize, err := GetObjectSize(p.SrcObjInfo); err == nil {
srcObjectSize = objSize
size = objSize
}
if p.Range != nil {
@ -310,6 +335,7 @@ func (n *layer) UploadPartCopy(ctx context.Context, p *UploadCopyParams) (*data.
Versioned: p.Versioned,
Range: p.Range,
BucketInfo: p.SrcBktInfo,
Encryption: p.SrcEncryption,
})
if err != nil {
return nil, fmt.Errorf("get object to upload copy: %w", err)
@ -347,9 +373,10 @@ func (n *layer) CompleteMultipartUpload(ctx context.Context, p *CompleteMultipar
parts := make([]*data.PartInfo, 0, len(p.Parts))
var completedPartsHeader strings.Builder
md5Hash := md5.New()
for i, part := range p.Parts {
partInfo := partsInfo[part.PartNumber]
if partInfo == nil || part.ETag != partInfo.ETag {
if partInfo == nil || (part.ETag != partInfo.ETag && part.ETag != partInfo.MD5) {
return nil, nil, fmt.Errorf("%w: unknown part %d or etag mismatched", s3errors.GetAPIError(s3errors.ErrInvalidPart), part.PartNumber)
}
delete(partsInfo, part.PartNumber)
@ -376,6 +403,12 @@ func (n *layer) CompleteMultipartUpload(ctx context.Context, p *CompleteMultipar
if _, err = completedPartsHeader.WriteString(partInfoStr); err != nil {
return nil, nil, err
}
bytesHash, err := hex.DecodeString(partInfo.MD5)
if err != nil {
return nil, nil, fmt.Errorf("couldn't decode MD5 checksum of part: %w", err)
}
md5Hash.Write(bytesHash)
}
initMetadata := make(map[string]string, len(multipartInfo.Meta)+1)
@ -417,6 +450,7 @@ func (n *layer) CompleteMultipartUpload(ctx context.Context, p *CompleteMultipar
Size: multipartObjetSize,
Encryption: p.Info.Encryption,
CopiesNumbers: multipartInfo.CopiesNumbers,
CompleteMD5Hash: hex.EncodeToString(md5Hash.Sum(nil)) + "-" + strconv.Itoa(len(p.Parts)),
})
if err != nil {
n.reqLogger(ctx).Error(logs.CouldNotPutCompletedObject,
@ -548,6 +582,10 @@ func (n *layer) ListParts(ctx context.Context, p *ListPartsParams) (*ListPartsIn
return parts[i].PartNumber < parts[j].PartNumber
})
if len(parts) == 0 || p.PartNumberMarker >= parts[len(parts)-1].PartNumber {
res.Parts = make([]*Part, 0)
return &res, nil
}
if p.PartNumberMarker != 0 {
for i, part := range parts {
if part.PartNumber > p.PartNumberMarker {


@ -34,7 +34,7 @@ func (n *layer) PutBucketNotificationConfiguration(ctx context.Context, p *PutBu
CopiesNumber: p.CopiesNumbers,
}
_, objID, _, err := n.objectPutAndHash(ctx, prm, p.BktInfo)
_, objID, _, _, err := n.objectPutAndHash(ctx, prm, p.BktInfo)
if err != nil {
return err
}


@ -1,8 +1,11 @@
package layer
import (
"bytes"
"context"
"crypto/md5"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"encoding/json"
"errors"
@ -77,8 +80,16 @@ type (
Marker string
ContinuationToken string
}
DeleteMarkerError struct {
ErrorCode apiErrors.ErrorCode
}
)
func (e DeleteMarkerError) Error() string {
return "object is delete marker"
}
const (
continuationToken = "<continuation-token>"
)
@ -287,10 +298,23 @@ func (n *layer) PutObject(ctx context.Context, p *PutObjectParams) (*data.Extend
prm.Attributes = append(prm.Attributes, [2]string{k, v})
}
size, id, hash, err := n.objectPutAndHash(ctx, prm, p.BktInfo)
size, id, hash, md5Hash, err := n.objectPutAndHash(ctx, prm, p.BktInfo)
if err != nil {
return nil, err
}
if len(p.ContentMD5) > 0 {
headerMd5Hash, err := base64.StdEncoding.DecodeString(p.ContentMD5)
if err != nil {
return nil, apiErrors.GetAPIError(apiErrors.ErrInvalidDigest)
}
if !bytes.Equal(headerMd5Hash, md5Hash) {
err = n.objectDelete(ctx, p.BktInfo, id)
if err != nil {
n.reqLogger(ctx).Debug(logs.FailedToDeleteObject, zap.Stringer("cid", p.BktInfo.CID), zap.Stringer("oid", id))
}
return nil, apiErrors.GetAPIError(apiErrors.ErrInvalidDigest)
}
}
n.reqLogger(ctx).Debug(logs.PutObject, zap.Stringer("cid", p.BktInfo.CID), zap.Stringer("oid", id))
@ -304,6 +328,11 @@ func (n *layer) PutObject(ctx context.Context, p *PutObjectParams) (*data.Extend
IsUnversioned: !bktSettings.VersioningEnabled(),
IsCombined: p.Header[MultipartObjectSize] != "",
}
if len(p.CompleteMD5Hash) > 0 {
newVersion.MD5 = p.CompleteMD5Hash
} else {
newVersion.MD5 = hex.EncodeToString(md5Hash)
}
if newVersion.ID, err = n.treeService.AddVersion(ctx, p.BktInfo, newVersion); err != nil {
return nil, fmt.Errorf("couldn't add new verion to tree service: %w", err)
@ -340,6 +369,7 @@ func (n *layer) PutObject(ctx context.Context, p *PutObjectParams) (*data.Extend
Headers: p.Header,
ContentType: p.Header[api.ContentType],
HashSum: newVersion.ETag,
MD5Sum: newVersion.MD5,
}
extendedObjInfo := &data.ExtendedObjectInfo{
@ -367,7 +397,7 @@ func (n *layer) headLastVersionIfNotDeleted(ctx context.Context, bkt *data.Bucke
}
if node.IsDeleteMarker() {
return nil, fmt.Errorf("%w: found version is delete marker", apiErrors.GetAPIError(apiErrors.ErrNoSuchKey))
return nil, DeleteMarkerError{ErrorCode: apiErrors.ErrNoSuchKey}
}
meta, err := n.objectHead(ctx, bkt, node.OID)
@ -378,6 +408,7 @@ func (n *layer) headLastVersionIfNotDeleted(ctx context.Context, bkt *data.Bucke
return nil, err
}
objInfo := objectInfoFromMeta(bkt, meta)
objInfo.MD5Sum = node.MD5
extObjInfo := &data.ExtendedObjectInfo{
ObjectInfo: objInfo,
@ -422,6 +453,10 @@ func (n *layer) headVersion(ctx context.Context, bkt *data.BucketInfo, p *HeadOb
return extObjInfo, nil
}
if foundVersion.IsDeleteMarker() {
return nil, DeleteMarkerError{ErrorCode: apiErrors.ErrMethodNotAllowed}
}
meta, err := n.objectHead(ctx, bkt, foundVersion.OID)
if err != nil {
if client.IsErrObjectNotFound(err) {
@ -430,6 +465,7 @@ func (n *layer) headVersion(ctx context.Context, bkt *data.BucketInfo, p *HeadOb
return nil, err
}
objInfo := objectInfoFromMeta(bkt, meta)
objInfo.MD5Sum = foundVersion.MD5
extObjInfo := &data.ExtendedObjectInfo{
ObjectInfo: objInfo,
@ -457,14 +493,18 @@ func (n *layer) objectDelete(ctx context.Context, bktInfo *data.BucketInfo, idOb
// objectPutAndHash prepares auth parameters and invokes frostfs.CreateObject.
// It returns the payload size, the object ID, and the payload SHA-256 and MD5 hashes.
func (n *layer) objectPutAndHash(ctx context.Context, prm PrmObjectCreate, bktInfo *data.BucketInfo) (uint64, oid.ID, []byte, error) {
func (n *layer) objectPutAndHash(ctx context.Context, prm PrmObjectCreate, bktInfo *data.BucketInfo) (uint64, oid.ID, []byte, []byte, error) {
n.prepareAuthParameters(ctx, &prm.PrmAuth, bktInfo.Owner)
prm.ClientCut = n.features.ClientCut()
prm.BufferMaxSize = n.features.BufferMaxSizeForPut()
prm.WithoutHomomorphicHash = bktInfo.HomomorphicHashDisabled
var size uint64
hash := sha256.New()
md5Hash := md5.New()
prm.Payload = wrapReader(prm.Payload, 64*1024, func(buf []byte) {
size += uint64(len(buf))
hash.Write(buf)
md5Hash.Write(buf)
})
id, err := n.frostFS.CreateObject(ctx, prm)
if err != nil {
@ -472,9 +512,9 @@ func (n *layer) objectPutAndHash(ctx context.Context, prm PrmObjectCreate, bktIn
n.reqLogger(ctx).Warn(logs.FailedToDiscardPutPayloadProbablyGoroutineLeaks, zap.Error(errDiscard))
}
return 0, oid.ID{}, nil, err
return 0, oid.ID{}, nil, nil, err
}
return size, id, hash.Sum(nil), nil
return size, id, hash.Sum(nil), md5Hash.Sum(nil), nil
}
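
The single pass above — one read of the payload feeding the size counter, SHA-256, and MD5 at once — can be sketched standalone with io.TeeReader (illustrative; the gateway's wrapReader plays the same role with an explicit callback):

package main

import (
	"crypto/md5"
	"crypto/sha256"
	"fmt"
	"io"
	"strings"
)

func main() {
	payload := strings.NewReader("example payload")
	sha, sum := sha256.New(), md5.New()

	// Every byte the consumer reads is also written to both hashes.
	r := io.TeeReader(io.TeeReader(payload, sha), sum)
	size, _ := io.Copy(io.Discard, r) // stand-in for frostFS.CreateObject

	fmt.Printf("size=%d sha256=%x md5=%x\n", size, sha.Sum(nil), sum.Sum(nil))
}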
// ListObjectsV1 returns objects in a bucket for requests of Version 1.
@ -805,6 +845,7 @@ func (n *layer) objectInfoFromObjectsCacheOrFrostFS(ctx context.Context, bktInfo
}
oi = objectInfoFromMeta(bktInfo, meta)
oi.MD5Sum = node.MD5
n.cache.PutObject(owner, &data.ExtendedObjectInfo{ObjectInfo: oi, NodeVersion: node})
return oi

View file

@ -4,6 +4,7 @@ import (
"bytes"
"crypto/rand"
"crypto/sha256"
"errors"
"io"
"testing"
@ -27,3 +28,25 @@ func TestWrapReader(t *testing.T) {
require.Equal(t, src, dst)
require.Equal(t, h[:], streamHash.Sum(nil))
}
func TestGoroutinesDontLeakInPutAndHash(t *testing.T) {
tc := prepareContext(t)
l, ok := tc.layer.(*layer)
require.True(t, ok)
content := make([]byte, 128*1024)
_, err := rand.Read(content)
require.NoError(t, err)
payload := bytes.NewReader(content)
prm := PrmObjectCreate{
Filepath: tc.obj,
Payload: payload,
}
expErr := errors.New("some error")
tc.testFrostFS.SetObjectPutError(tc.obj, expErr)
_, _, _, _, err = l.objectPutAndHash(tc.ctx, prm, tc.bktInfo)
require.ErrorIs(t, err, expErr)
require.Empty(t, payload.Len(), "body must be read out otherwise goroutines can leak in wrapReader")
}


@ -125,7 +125,7 @@ func (n *layer) putLockObject(ctx context.Context, bktInfo *data.BucketInfo, obj
return oid.ID{}, err
}
_, id, _, err := n.objectPutAndHash(ctx, prm, bktInfo)
_, id, _, _, err := n.objectPutAndHash(ctx, prm, bktInfo)
return id, err
}


@ -5,6 +5,7 @@ import (
"sort"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/data"
s3errors "git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
)
func (n *layer) ListObjectVersions(ctx context.Context, p *ListObjectVersionsParams) (*ListObjectVersionsInfo, error) {
@ -36,29 +37,58 @@ func (n *layer) ListObjectVersions(ctx context.Context, p *ListObjectVersionsPar
}
}
for i, obj := range allObjects {
if obj.ObjectInfo.Name >= p.KeyMarker && obj.ObjectInfo.VersionID() >= p.VersionIDMarker {
allObjects = allObjects[i:]
break
}
if allObjects, err = filterVersionsByMarker(allObjects, p); err != nil {
return nil, err
}
res.CommonPrefixes, allObjects = triageExtendedObjects(allObjects)
if len(allObjects) > p.MaxKeys {
res.IsTruncated = true
res.NextKeyMarker = allObjects[p.MaxKeys].ObjectInfo.Name
res.NextVersionIDMarker = allObjects[p.MaxKeys].ObjectInfo.VersionID()
res.NextKeyMarker = allObjects[p.MaxKeys-1].ObjectInfo.Name
res.NextVersionIDMarker = allObjects[p.MaxKeys-1].ObjectInfo.VersionID()
allObjects = allObjects[:p.MaxKeys]
res.KeyMarker = allObjects[p.MaxKeys-1].ObjectInfo.Name
res.VersionIDMarker = allObjects[p.MaxKeys-1].ObjectInfo.VersionID()
res.KeyMarker = p.KeyMarker
res.VersionIDMarker = p.VersionIDMarker
}
res.Version, res.DeleteMarker = triageVersions(allObjects)
return res, nil
}
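
The index change is easiest to see with numbers; an illustrative walk-through derived from the code above:

// With 10 versions of one key and MaxKeys=5, NextKeyMarker and
// NextVersionIDMarker are now taken from allObjects[4] (the last version
// actually returned) rather than allObjects[5] (the first one withheld),
// and the request's own markers are echoed back in KeyMarker and
// VersionIDMarker.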
func filterVersionsByMarker(objects []*data.ExtendedObjectInfo, p *ListObjectVersionsParams) ([]*data.ExtendedObjectInfo, error) {
if p.KeyMarker == "" {
return objects, nil
}
for i, obj := range objects {
if obj.ObjectInfo.Name == p.KeyMarker {
for j := i; j < len(objects); j++ {
if objects[j].ObjectInfo.Name != obj.ObjectInfo.Name {
if p.VersionIDMarker == "" {
return objects[j:], nil
}
break
}
if objects[j].ObjectInfo.VersionID() == p.VersionIDMarker {
return objects[j+1:], nil
}
}
return nil, s3errors.GetAPIError(s3errors.ErrInvalidVersion)
} else if obj.ObjectInfo.Name > p.KeyMarker {
if p.VersionIDMarker != "" {
return nil, s3errors.GetAPIError(s3errors.ErrInvalidVersion)
}
return objects[i:], nil
}
}
// don't use nil as the empty slice, to stay consistent with
// `return objects[j+1:], nil` above, which can also be empty
return []*data.ExtendedObjectInfo{}, nil
}
func triageVersions(objVersions []*data.ExtendedObjectInfo) ([]*data.ExtendedObjectInfo, []*data.ExtendedObjectInfo) {
if len(objVersions) == 0 {
return nil, nil


@ -12,6 +12,7 @@ import (
bearertest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer/test"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/stretchr/testify/require"
@ -153,7 +154,7 @@ func prepareContext(t *testing.T, cachesConfig ...*CachesConfig) *testContext {
tp := NewTestFrostFS(key)
bktName := "testbucket1"
bktID, err := tp.CreateContainer(ctx, PrmContainerCreate{
res, err := tp.CreateContainer(ctx, PrmContainerCreate{
Name: bktName,
})
require.NoError(t, err)
@ -179,7 +180,8 @@ func prepareContext(t *testing.T, cachesConfig ...*CachesConfig) *testContext {
bktInfo: &data.BucketInfo{
Name: bktName,
Owner: owner,
CID: bktID,
CID: res.ContainerID,
HomomorphicHashDisabled: res.HomomorphicHashDisabled,
},
obj: "obj1",
t: t,
@ -310,3 +312,133 @@ func TestNoVersioningDeleteObject(t *testing.T) {
tc.getObject(tc.obj, "", true)
tc.checkListObjects()
}
func TestFilterVersionsByMarker(t *testing.T) {
n := 10
testOIDs := make([]oid.ID, n)
for i := 0; i < n; i++ {
testOIDs[i] = oidtest.ID()
}
for _, tc := range []struct {
name string
objects []*data.ExtendedObjectInfo
params *ListObjectVersionsParams
expected []*data.ExtendedObjectInfo
error bool
}{
{
name: "missed key marker",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[1]}},
},
params: &ListObjectVersionsParams{KeyMarker: "", VersionIDMarker: "dummy"},
expected: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[1]}},
},
},
{
name: "last version id",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[1]}},
},
params: &ListObjectVersionsParams{KeyMarker: "obj0", VersionIDMarker: testOIDs[1].EncodeToString()},
expected: []*data.ExtendedObjectInfo{},
},
{
name: "same name, different versions",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[1]}},
},
params: &ListObjectVersionsParams{KeyMarker: "obj0", VersionIDMarker: testOIDs[0].EncodeToString()},
expected: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[1]}},
},
},
{
name: "different name, different versions",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj1", ID: testOIDs[1]}},
},
params: &ListObjectVersionsParams{KeyMarker: "obj0", VersionIDMarker: testOIDs[0].EncodeToString()},
expected: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj1", ID: testOIDs[1]}},
},
},
{
name: "not matched name alphabetically less",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj1", ID: testOIDs[1]}},
},
params: &ListObjectVersionsParams{KeyMarker: "obj", VersionIDMarker: ""},
expected: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj1", ID: testOIDs[1]}},
},
},
{
name: "not matched name alphabetically less with dummy version id",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
},
params: &ListObjectVersionsParams{KeyMarker: "obj", VersionIDMarker: "dummy"},
error: true,
},
{
name: "not matched name alphabetically greater",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj1", ID: testOIDs[1]}},
},
params: &ListObjectVersionsParams{KeyMarker: "obj2", VersionIDMarker: testOIDs[2].EncodeToString()},
expected: []*data.ExtendedObjectInfo{},
},
{
name: "not found version id",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[1]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj1", ID: testOIDs[2]}},
},
params: &ListObjectVersionsParams{KeyMarker: "obj0", VersionIDMarker: "dummy"},
error: true,
},
{
name: "not found version id, obj last",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[1]}},
},
params: &ListObjectVersionsParams{KeyMarker: "obj0", VersionIDMarker: "dummy"},
error: true,
},
{
name: "not found version id, obj last",
objects: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[0]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj0", ID: testOIDs[1]}},
{ObjectInfo: &data.ObjectInfo{Name: "obj1", ID: testOIDs[2]}},
},
params: &ListObjectVersionsParams{KeyMarker: "obj0", VersionIDMarker: ""},
expected: []*data.ExtendedObjectInfo{
{ObjectInfo: &data.ObjectInfo{Name: "obj1", ID: testOIDs[2]}},
},
},
} {
t.Run(tc.name, func(t *testing.T) {
actual, err := filterVersionsByMarker(tc.objects, tc.params)
if tc.error {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, tc.expected, actual)
}
})
}
}

View file

@ -34,6 +34,7 @@ type (
API string // API name -- GetObject PutObject NewMultipartUpload etc.
BucketName string // Bucket name
ObjectName string // Object name
TraceID string // Trace ID
URL *url.URL // Request url
tags []KeyVal // Any additional info not accommodated by above fields
}
@ -240,12 +241,23 @@ func AddObjectName(l *zap.Logger) Func {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
reqInfo := GetReqInfo(ctx)
reqLogger := reqLogOrDefault(ctx, l)
rctx := chi.RouteContext(ctx)
// trim leading slash (always present)
reqInfo.ObjectName = rctx.RoutePath[1:]
reqLogger := reqLogOrDefault(ctx, l)
if r.URL.RawPath != "" {
// we have to do this because of
// https://github.com/go-chi/chi/issues/641
// https://github.com/go-chi/chi/issues/642
if obj, err := url.PathUnescape(reqInfo.ObjectName); err != nil {
reqLogger.Warn(logs.FailedToUnescapeObjectName, zap.Error(err))
} else {
reqInfo.ObjectName = obj
}
}
r = r.WithContext(SetReqLogger(ctx, reqLogger.With(zap.String("object", reqInfo.ObjectName))))
h.ServeHTTP(w, r)
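The conditional unescaping can be checked with the standard library alone; a double-encoded percent round-trips as expected:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// When r.URL.RawPath is non-empty, chi may return an encoded route path,
	// so the middleware unescapes exactly once.
	obj, err := url.PathUnescape("fix/object%25ac")
	if err != nil {
		panic(err)
	}
	fmt.Println(obj) // fix/object%ac
}
```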

View file

@ -11,6 +11,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/api/errors"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/version"
"go.opentelemetry.io/otel/trace"
"go.uber.org/zap"
)
@ -140,13 +141,6 @@ func WriteErrorResponse(w http.ResponseWriter, reqInfo *ReqInfo, err error) int
return code
}
// WriteErrorResponseNoHeader writes XML encoded error to the response body.
func WriteErrorResponseNoHeader(w http.ResponseWriter, reqInfo *ReqInfo, err error) {
errorResponse := getAPIErrorResponse(reqInfo, err)
encodedErrorResponse := EncodeResponse(errorResponse)
WriteResponseBody(w, encodedErrorResponse)
}
// Write http common headers.
func setCommonHeaders(w http.ResponseWriter) {
w.Header().Set(hdrServerInfo, version.Server)
@ -320,13 +314,17 @@ func LogSuccessResponse(l *zap.Logger) Func {
reqLogger := reqLogOrDefault(ctx, l)
reqInfo := GetReqInfo(ctx)
reqLogger.Info(logs.RequestEnd,
fields := []zap.Field{
zap.String("method", reqInfo.API),
zap.String("bucket", reqInfo.BucketName),
zap.String("object", reqInfo.ObjectName),
zap.Int("status", lw.statusCode),
zap.String("description", http.StatusText(lw.statusCode)),
)
zap.String("description", http.StatusText(lw.statusCode))}
if traceID, err := trace.TraceIDFromHex(reqInfo.TraceID); err == nil && traceID.IsValid() {
fields = append(fields, zap.String("trace_id", reqInfo.TraceID))
}
reqLogger.Info(logs.RequestEnd, fields...)
})
}
}
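The guard around the trace id is worth noting: `trace.TraceIDFromHex` rejects malformed input and `IsValid` filters the all-zero id, so requests that never passed the tracing middleware simply omit the field. A small sketch:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/trace"
)

func main() {
	for _, hexID := range []string{
		"4bf92f3577b34da6a3ce929d0e0e4736", // set by the tracing middleware
		"",                                 // tracing disabled for this request
	} {
		if traceID, err := trace.TraceIDFromHex(hexID); err == nil && traceID.IsValid() {
			fmt.Println("log with trace_id:", hexID)
		} else {
			fmt.Println("log without trace_id")
		}
	}
}
```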

View file

@ -17,6 +17,8 @@ func Tracing() Func {
return func(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
appCtx, span := StartHTTPServerSpan(r, "REQUEST S3")
reqInfo := GetReqInfo(r.Context())
reqInfo.TraceID = span.SpanContext().TraceID().String()
lw := &traceResponseWriter{ResponseWriter: w, ctx: appCtx, span: span}
h.ServeHTTP(lw, r.WithContext(appCtx))
})

View file

@ -82,15 +82,20 @@ func TestRouterObjectEscaping(t *testing.T) {
objName: "fix/object",
},
{
name: "with percentage",
expectedObjName: "fix/object%ac",
objName: "fix/object%ac",
name: "with slash escaped",
expectedObjName: "/foo/bar",
objName: "/foo%2fbar",
},
{
name: "with percentage escaped",
expectedObjName: "fix/object%ac",
objName: "fix/object%25ac",
},
{
name: "with awful mint name",
expectedObjName: "äöüex ®©µÄÆÐÕæŒƕƩDž 01000000 0x40 \u0040 amȡȹɆple&0a!-_.*'()&$@=;:+,?<>.pdf",
objName: "%C3%A4%C3%B6%C3%BCex%20%C2%AE%C2%A9%C2%B5%C3%84%C3%86%C3%90%C3%95%C3%A6%C5%92%C6%95%C6%A9%C7%85%2001000000%200x40%20%40%20am%C8%A1%C8%B9%C9%86ple%260a%21-_.%2A%27%28%29%26%24%40%3D%3B%3A%2B%2C%3F%3C%3E.pdf",
},
} {
t.Run(tc.name, func(t *testing.T) {
target := fmt.Sprintf("/%s/%s", bktName, tc.objName)

View file

@ -282,7 +282,7 @@ func (a *Agent) IssueSecret(ctx context.Context, w io.Writer, options *IssueSecr
ir := &issuingResult{
InitialAccessKeyID: accessKeyID,
AccessKeyID: accessKeyID,
SecretAccessKey: secrets.AccessKey,
SecretAccessKey: secrets.SecretKey,
OwnerPrivateKey: hex.EncodeToString(secrets.EphemeralKey.Bytes()),
WalletPublicKey: hex.EncodeToString(options.FrostFSKey.PublicKey().Bytes()),
ContainerID: id.EncodeToString(),
@ -305,7 +305,7 @@ func (a *Agent) IssueSecret(ctx context.Context, w io.Writer, options *IssueSecr
}
defer file.Close()
if _, err = file.WriteString(fmt.Sprintf("\n[%s]\naws_access_key_id = %s\naws_secret_access_key = %s\n",
profileName, accessKeyID, secrets.AccessKey)); err != nil {
profileName, accessKeyID, secrets.SecretKey)); err != nil {
return fmt.Errorf("fails to write to file: %w", err)
}
}
@ -321,7 +321,7 @@ func (a *Agent) UpdateSecret(ctx context.Context, w io.Writer, options *UpdateSe
return fmt.Errorf("get accessbox: %w", err)
}
secret, err := hex.DecodeString(box.Gate.AccessKey)
secret, err := hex.DecodeString(box.Gate.SecretKey)
if err != nil {
return fmt.Errorf("failed to decode secret key access box: %w", err)
}
@ -358,7 +358,7 @@ func (a *Agent) UpdateSecret(ctx context.Context, w io.Writer, options *UpdateSe
ir := &issuingResult{
AccessKeyID: accessKeyIDFromAddr(addr),
InitialAccessKeyID: accessKeyIDFromAddr(oldAddr),
SecretAccessKey: secrets.AccessKey,
SecretAccessKey: secrets.SecretKey,
OwnerPrivateKey: hex.EncodeToString(secrets.EphemeralKey.Bytes()),
WalletPublicKey: hex.EncodeToString(options.FrostFSKey.PublicKey().Bytes()),
ContainerID: addr.Container().EncodeToString(),
@ -396,7 +396,7 @@ func (a *Agent) ObtainSecret(ctx context.Context, w io.Writer, options *ObtainSe
or := &obtainingResult{
BearerToken: box.Gate.BearerToken,
SecretAccessKey: box.Gate.AccessKey,
SecretAccessKey: box.Gate.SecretKey,
}
enc := json.NewEncoder(w)

View file

@ -15,6 +15,6 @@ func main() {
if cmd, err := modules.Execute(ctx); err != nil {
cmd.PrintErrln("Error:", err.Error())
cmd.PrintErrf("Run '%v --help' for usage.\n", cmd.CommandPath())
os.Exit(1)
os.Exit(modules.ExitCode(err))
}
}

View file

@ -0,0 +1,53 @@
package modules
type (
preparationError struct {
err error
}
frostFSInitError struct {
err error
}
businessLogicError struct {
err error
}
)
func wrapPreparationError(e error) error {
return preparationError{e}
}
func (e preparationError) Error() string {
return e.err.Error()
}
func wrapFrostFSInitError(e error) error {
return frostFSInitError{e}
}
func (e frostFSInitError) Error() string {
return e.err.Error()
}
func wrapBusinessLogicError(e error) error {
return businessLogicError{e}
}
func (e businessLogicError) Error() string {
return e.err.Error()
}
// ExitCode picks corresponding error code depending on the type of error provided.
// Returns 1 if error type is unknown.
func ExitCode(e error) int {
switch e.(type) {
case preparationError:
return 2
case frostFSInitError:
return 3
case businessLogicError:
return 4
}
return 1
}
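One caveat of the classification wrappers: they store the inner error but expose no `Unwrap`, so `errors.Is`/`errors.As` cannot see through them. A possible refinement (a hypothetical addition, not part of this change), one method per wrapper type:

```go
func (e preparationError) Unwrap() error { return e.err }

// With it, callers could still classify and inspect the cause:
//
//	err := wrapPreparationError(fs.ErrNotExist)
//	errors.Is(err, fs.ErrNotExist) // true once Unwrap is defined
```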

View file

@ -76,7 +76,7 @@ func runGeneratePresignedURLCmd(*cobra.Command, []string) error {
SharedConfigState: session.SharedConfigEnable,
})
if err != nil {
return fmt.Errorf("couldn't get aws credentials: %w", err)
return wrapPreparationError(fmt.Errorf("couldn't get aws credentials: %w", err))
}
reqData := auth.RequestData{
@ -94,7 +94,7 @@ func runGeneratePresignedURLCmd(*cobra.Command, []string) error {
req, err := auth.PresignRequest(sess.Config.Credentials, reqData, presignData)
if err != nil {
return err
return wrapBusinessLogicError(err)
}
res := &struct{ URL string }{
@ -104,5 +104,9 @@ func runGeneratePresignedURLCmd(*cobra.Command, []string) error {
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
enc.SetEscapeHTML(false)
return enc.Encode(res)
err = enc.Encode(res)
if err != nil {
return wrapBusinessLogicError(err)
}
return nil
}

View file

@ -92,14 +92,14 @@ func runIssueSecretCmd(cmd *cobra.Command, _ []string) error {
password := wallet.GetPassword(viper.GetViper(), walletPassphraseCfg)
key, err := wallet.GetKeyFromPath(viper.GetString(walletFlag), viper.GetString(addressFlag), password)
if err != nil {
return fmt.Errorf("failed to load frostfs private key: %s", err)
return wrapPreparationError(fmt.Errorf("failed to load frostfs private key: %s", err))
}
var cnrID cid.ID
containerID := viper.GetString(containerIDFlag)
if len(containerID) > 0 {
if err = cnrID.DecodeString(containerID); err != nil {
return fmt.Errorf("failed to parse auth container id: %s", err)
return wrapPreparationError(fmt.Errorf("failed to parse auth container id: %s", err))
}
}
@ -107,35 +107,35 @@ func runIssueSecretCmd(cmd *cobra.Command, _ []string) error {
for _, keyStr := range viper.GetStringSlice(gatePublicKeyFlag) {
gpk, err := keys.NewPublicKeyFromString(keyStr)
if err != nil {
return fmt.Errorf("failed to load gate's public key: %s", err)
return wrapPreparationError(fmt.Errorf("failed to load gate's public key: %s", err))
}
gatesPublicKeys = append(gatesPublicKeys, gpk)
}
lifetime := viper.GetDuration(lifetimeFlag)
if lifetime <= 0 {
return fmt.Errorf("lifetime must be greater 0, current value: %d", lifetime)
return wrapPreparationError(fmt.Errorf("lifetime must be greater 0, current value: %d", lifetime))
}
policies, err := parsePolicies(viper.GetString(containerPolicyFlag))
if err != nil {
return fmt.Errorf("couldn't parse container policy: %s", err.Error())
return wrapPreparationError(fmt.Errorf("couldn't parse container policy: %s", err.Error()))
}
disableImpersonate := viper.GetBool(disableImpersonateFlag)
eaclRules := viper.GetString(bearerRulesFlag)
if !disableImpersonate && eaclRules != "" {
return errors.New("--bearer-rules flag can be used only with --disable-impersonate")
return wrapPreparationError(errors.New("--bearer-rules flag can be used only with --disable-impersonate"))
}
bearerRules, err := getJSONRules(eaclRules)
if err != nil {
return fmt.Errorf("couldn't parse 'bearer-rules' flag: %s", err.Error())
return wrapPreparationError(fmt.Errorf("couldn't parse 'bearer-rules' flag: %s", err.Error()))
}
sessionRules, skipSessionRules, err := getSessionRules(viper.GetString(sessionTokensFlag))
if err != nil {
return fmt.Errorf("couldn't parse 'session-tokens' flag: %s", err.Error())
return wrapPreparationError(fmt.Errorf("couldn't parse 'session-tokens' flag: %s", err.Error()))
}
poolCfg := PoolConfig{
@ -149,7 +149,7 @@ func runIssueSecretCmd(cmd *cobra.Command, _ []string) error {
frostFS, err := createFrostFS(ctx, log, poolCfg)
if err != nil {
return fmt.Errorf("failed to create FrostFS component: %s", err)
return wrapFrostFSInitError(fmt.Errorf("failed to create FrostFS component: %s", err))
}
issueSecretOptions := &authmate.IssueSecretOptions{
@ -170,7 +170,7 @@ func runIssueSecretCmd(cmd *cobra.Command, _ []string) error {
}
if err = authmate.New(log, frostFS).IssueSecret(ctx, os.Stdout, issueSecretOptions); err != nil {
return fmt.Errorf("failed to issue secret: %s", err)
return wrapBusinessLogicError(fmt.Errorf("failed to issue secret: %s", err))
}
return nil
}

View file

@ -58,13 +58,13 @@ func runObtainSecretCmd(cmd *cobra.Command, _ []string) error {
password := wallet.GetPassword(viper.GetViper(), walletPassphraseCfg)
key, err := wallet.GetKeyFromPath(viper.GetString(walletFlag), viper.GetString(addressFlag), password)
if err != nil {
return fmt.Errorf("failed to load frostfs private key: %s", err)
return wrapPreparationError(fmt.Errorf("failed to load frostfs private key: %s", err))
}
gatePassword := wallet.GetPassword(viper.GetViper(), walletGatePassphraseCfg)
gateKey, err := wallet.GetKeyFromPath(viper.GetString(gateWalletFlag), viper.GetString(gateAddressFlag), gatePassword)
if err != nil {
return fmt.Errorf("failed to load s3 gate private key: %s", err)
return wrapPreparationError(fmt.Errorf("failed to load s3 gate private key: %s", err))
}
poolCfg := PoolConfig{
@ -78,7 +78,7 @@ func runObtainSecretCmd(cmd *cobra.Command, _ []string) error {
frostFS, err := createFrostFS(ctx, log, poolCfg)
if err != nil {
return cli.Exit(fmt.Sprintf("failed to create FrostFS component: %s", err), 2)
return wrapFrostFSInitError(cli.Exit(fmt.Sprintf("failed to create FrostFS component: %s", err), 2))
}
obtainSecretOptions := &authmate.ObtainSecretOptions{
@ -87,7 +87,7 @@ func runObtainSecretCmd(cmd *cobra.Command, _ []string) error {
}
if err = authmate.New(log, frostFS).ObtainSecret(ctx, os.Stdout, obtainSecretOptions); err != nil {
return fmt.Errorf("failed to obtain secret: %s", err)
return wrapBusinessLogicError(fmt.Errorf("failed to obtain secret: %s", err))
}
return nil

View file

@ -56,26 +56,26 @@ func runUpdateSecretCmd(cmd *cobra.Command, _ []string) error {
password := wallet.GetPassword(viper.GetViper(), walletPassphraseCfg)
key, err := wallet.GetKeyFromPath(viper.GetString(walletFlag), viper.GetString(addressFlag), password)
if err != nil {
return fmt.Errorf("failed to load frostfs private key: %s", err)
return wrapPreparationError(fmt.Errorf("failed to load frostfs private key: %s", err))
}
gatePassword := wallet.GetPassword(viper.GetViper(), walletGatePassphraseCfg)
gateKey, err := wallet.GetKeyFromPath(viper.GetString(gateWalletFlag), viper.GetString(gateAddressFlag), gatePassword)
if err != nil {
return fmt.Errorf("failed to load s3 gate private key: %s", err)
return wrapPreparationError(fmt.Errorf("failed to load s3 gate private key: %s", err))
}
var accessBoxAddress oid.Address
credAddr := strings.Replace(viper.GetString(accessKeyIDFlag), "0", "/", 1)
if err = accessBoxAddress.DecodeString(credAddr); err != nil {
return fmt.Errorf("failed to parse creds address: %w", err)
return wrapPreparationError(fmt.Errorf("failed to parse creds address: %w", err))
}
var gatesPublicKeys []*keys.PublicKey
for _, keyStr := range viper.GetStringSlice(gatePublicKeyFlag) {
gpk, err := keys.NewPublicKeyFromString(keyStr)
if err != nil {
return fmt.Errorf("failed to load gate's public key: %s", err)
return wrapPreparationError(fmt.Errorf("failed to load gate's public key: %s", err))
}
gatesPublicKeys = append(gatesPublicKeys, gpk)
}
@ -91,7 +91,7 @@ func runUpdateSecretCmd(cmd *cobra.Command, _ []string) error {
frostFS, err := createFrostFS(ctx, log, poolCfg)
if err != nil {
return fmt.Errorf("failed to create FrostFS component: %s", err)
return wrapFrostFSInitError(fmt.Errorf("failed to create FrostFS component: %s", err))
}
updateSecretOptions := &authmate.UpdateSecretOptions{
@ -102,7 +102,7 @@ func runUpdateSecretCmd(cmd *cobra.Command, _ []string) error {
}
if err = authmate.New(log, frostFS).UpdateSecret(ctx, os.Stdout, updateSecretOptions); err != nil {
return fmt.Errorf("failed to update secret: %s", err)
return wrapBusinessLogicError(fmt.Errorf("failed to update secret: %s", err))
}
return nil
}

View file

@ -3,12 +3,14 @@ package main
import (
"context"
"encoding/hex"
"encoding/xml"
"fmt"
"io"
"net/http"
"os"
"os/signal"
"runtime/debug"
"sync"
"sync/atomic"
"syscall"
"time"
@ -26,7 +28,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/version"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/wallet"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/xml"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/pkg/service/tree"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
@ -41,6 +42,8 @@ import (
"google.golang.org/grpc"
)
const awsDefaultNamespace = "http://s3.amazonaws.com/doc/2006-03-01/"
type (
// App is the main application structure.
App struct {
@ -67,11 +70,22 @@ type (
appSettings struct {
logLevel zap.AtomicLevel
policies *placementPolicy
xmlDecoder *xml.DecoderProvider
maxClient maxClientsConfig
bypassContentEncodingInChunks atomic.Bool
clientCut atomic.Bool
defaultMaxAge int
notificatorEnabled bool
resolveZoneList []string
isResolveListAllow bool // True if ResolveZoneList contains allowed zones
mu sync.RWMutex
defaultPolicy netmap.PlacementPolicy
regionMap map[string]netmap.PlacementPolicy
copiesNumbers map[string][]uint32
defaultCopiesNumbers []uint32
defaultXMLNS bool
bypassContentEncodingInChunks bool
clientCut bool
maxBufferSizeForPut uint64
md5Enabled bool
}
maxClientsConfig struct {
@ -83,14 +97,6 @@ type (
logger *zap.Logger
lvl zap.AtomicLevel
}
placementPolicy struct {
mu sync.RWMutex
defaultPolicy netmap.PlacementPolicy
regionMap map[string]netmap.PlacementPolicy
copiesNumbers map[string][]uint32
defaultCopiesNumbers []uint32
}
)
func newApp(ctx context.Context, log *Logger, v *viper.Viper) *App {
@ -119,6 +125,7 @@ func newApp(ctx context.Context, log *Logger, v *viper.Viper) *App {
}
func (a *App) init(ctx context.Context) {
a.setRuntimeParameters()
a.initAPI(ctx)
a.initMetrics()
a.initServers(ctx)
@ -167,31 +174,150 @@ func (a *App) initLayer(ctx context.Context) {
func newAppSettings(log *Logger, v *viper.Viper) *appSettings {
settings := &appSettings{
logLevel: log.lvl,
policies: newPlacementPolicy(log.logger, v),
xmlDecoder: xml.NewDecoderProvider(v.GetBool(cfgKludgeUseDefaultXMLNSForCompleteMultipartUpload)),
maxClient: newMaxClients(v),
defaultXMLNS: v.GetBool(cfgKludgeUseDefaultXMLNS),
defaultMaxAge: fetchDefaultMaxAge(v, log.logger),
notificatorEnabled: v.GetBool(cfgEnableNATS),
}
settings.resolveZoneList = v.GetStringSlice(cfgResolveBucketAllow)
settings.isResolveListAllow = len(settings.resolveZoneList) > 0
if !settings.isResolveListAllow {
settings.resolveZoneList = v.GetStringSlice(cfgResolveBucketDeny)
}
settings.setBypassContentEncodingInChunks(v.GetBool(cfgKludgeBypassContentEncodingCheckInChunks))
settings.setClientCut(v.GetBool(cfgClientCut))
settings.initPlacementPolicy(log.logger, v)
settings.setBufferMaxSizeForPut(v.GetUint64(cfgBufferMaxSizeForPut))
settings.setMD5Enabled(v.GetBool(cfgMD5Enabled))
return settings
}
func (s *appSettings) BypassContentEncodingInChunks() bool {
return s.bypassContentEncodingInChunks.Load()
s.mu.RLock()
defer s.mu.RUnlock()
return s.bypassContentEncodingInChunks
}
func (s *appSettings) setBypassContentEncodingInChunks(bypass bool) {
s.bypassContentEncodingInChunks.Store(bypass)
s.mu.Lock()
s.bypassContentEncodingInChunks = bypass
s.mu.Unlock()
}
func (s *appSettings) ClientCut() bool {
return s.clientCut.Load()
s.mu.RLock()
defer s.mu.RUnlock()
return s.clientCut
}
func (s *appSettings) setClientCut(clientCut bool) {
s.clientCut.Store(clientCut)
s.mu.Lock()
s.clientCut = clientCut
s.mu.Unlock()
}
func (s *appSettings) BufferMaxSizeForPut() uint64 {
s.mu.RLock()
defer s.mu.RUnlock()
return s.maxBufferSizeForPut
}
func (s *appSettings) setBufferMaxSizeForPut(size uint64) {
s.mu.Lock()
s.maxBufferSizeForPut = size
s.mu.Unlock()
}
func (s *appSettings) initPlacementPolicy(l *zap.Logger, v *viper.Viper) {
defaultPolicy := fetchDefaultPolicy(l, v)
regionMap := fetchRegionMappingPolicies(l, v)
defaultCopies := fetchDefaultCopiesNumbers(l, v)
copiesNumbers := fetchCopiesNumbers(l, v)
s.mu.Lock()
defer s.mu.Unlock()
s.defaultPolicy = defaultPolicy
s.regionMap = regionMap
s.defaultCopiesNumbers = defaultCopies
s.copiesNumbers = copiesNumbers
}
func (s *appSettings) DefaultPlacementPolicy() netmap.PlacementPolicy {
s.mu.RLock()
defer s.mu.RUnlock()
return s.defaultPolicy
}
func (s *appSettings) PlacementPolicy(name string) (netmap.PlacementPolicy, bool) {
s.mu.RLock()
policy, ok := s.regionMap[name]
s.mu.RUnlock()
return policy, ok
}
func (s *appSettings) CopiesNumbers(locationConstraint string) ([]uint32, bool) {
s.mu.RLock()
copiesNumbers, ok := s.copiesNumbers[locationConstraint]
s.mu.RUnlock()
return copiesNumbers, ok
}
func (s *appSettings) DefaultCopiesNumbers() []uint32 {
s.mu.RLock()
defer s.mu.RUnlock()
return s.defaultCopiesNumbers
}
func (s *appSettings) NewXMLDecoder(r io.Reader) *xml.Decoder {
dec := xml.NewDecoder(r)
s.mu.RLock()
if s.defaultXMLNS {
dec.DefaultSpace = awsDefaultNamespace
}
s.mu.RUnlock()
return dec
}
func (s *appSettings) useDefaultXMLNamespace(useDefaultNamespace bool) {
s.mu.Lock()
s.defaultXMLNS = useDefaultNamespace
s.mu.Unlock()
}
func (s *appSettings) DefaultMaxAge() int {
return s.defaultMaxAge
}
func (s *appSettings) NotificatorEnabled() bool {
return s.notificatorEnabled
}
func (s *appSettings) ResolveZoneList() []string {
return s.resolveZoneList
}
func (s *appSettings) IsResolveListAllow() bool {
return s.isResolveListAllow
}
func (s *appSettings) MD5Enabled() bool {
s.mu.RLock()
defer s.mu.RUnlock()
return s.md5Enabled
}
func (s *appSettings) setMD5Enabled(md5Enabled bool) {
s.mu.Lock()
s.md5Enabled = md5Enabled
s.mu.Unlock()
}
func (a *App) initAPI(ctx context.Context) {
@ -346,55 +472,6 @@ func getPools(ctx context.Context, logger *zap.Logger, cfg *viper.Viper) (*pool.
return p, treePool, key
}
func newPlacementPolicy(l *zap.Logger, v *viper.Viper) *placementPolicy {
var policies placementPolicy
policies.update(l, v)
return &policies
}
func (p *placementPolicy) DefaultPlacementPolicy() netmap.PlacementPolicy {
p.mu.RLock()
defer p.mu.RUnlock()
return p.defaultPolicy
}
func (p *placementPolicy) PlacementPolicy(name string) (netmap.PlacementPolicy, bool) {
p.mu.RLock()
policy, ok := p.regionMap[name]
p.mu.RUnlock()
return policy, ok
}
func (p *placementPolicy) CopiesNumbers(locationConstraint string) ([]uint32, bool) {
p.mu.RLock()
copiesNumbers, ok := p.copiesNumbers[locationConstraint]
p.mu.RUnlock()
return copiesNumbers, ok
}
func (p *placementPolicy) DefaultCopiesNumbers() []uint32 {
p.mu.RLock()
defer p.mu.RUnlock()
return p.defaultCopiesNumbers
}
func (p *placementPolicy) update(l *zap.Logger, v *viper.Viper) {
defaultPolicy := fetchDefaultPolicy(l, v)
regionMap := fetchRegionMappingPolicies(l, v)
defaultCopies := fetchDefaultCopiesNumbers(l, v)
copiesNumbers := fetchCopiesNumbers(l, v)
p.mu.Lock()
defer p.mu.Unlock()
p.defaultPolicy = defaultPolicy
p.regionMap = regionMap
p.defaultCopiesNumbers = defaultCopies
p.copiesNumbers = copiesNumbers
}
func remove(list []string, element string) []string {
for i, item := range list {
if item == element {
@ -445,6 +522,10 @@ func (a *App) Serve(ctx context.Context) {
srv := new(http.Server)
srv.Handler = chiRouter
srv.ErrorLog = zap.NewStdLog(a.log)
srv.ReadTimeout = a.cfg.GetDuration(cfgWebReadTimeout)
srv.ReadHeaderTimeout = a.cfg.GetDuration(cfgWebReadHeaderTimeout)
srv.WriteTimeout = a.cfg.GetDuration(cfgWebWriteTimeout)
srv.IdleTimeout = a.cfg.GetDuration(cfgWebIdleTimeout)
a.startServices()
@ -453,6 +534,7 @@ func (a *App) Serve(ctx context.Context) {
a.log.Info(logs.StartingServer, zap.String("address", a.servers[i].Address()))
if err := srv.Serve(a.servers[i].Listener()); err != nil && err != http.ErrServerClosed {
a.metrics.MarkUnhealthy(a.servers[i].Address())
a.log.Fatal(logs.ListenAndServe, zap.Error(err))
}
}(i)
@ -507,6 +589,8 @@ func (a *App) configReload(ctx context.Context) {
a.log.Warn(logs.FailedToReloadServerParameters, zap.Error(err))
}
a.setRuntimeParameters()
a.stopServices()
a.startServices()
@ -526,11 +610,13 @@ func (a *App) updateSettings() {
a.settings.logLevel.SetLevel(lvl)
}
a.settings.policies.update(a.log, a.cfg)
a.settings.initPlacementPolicy(a.log, a.cfg)
a.settings.xmlDecoder.UseDefaultNamespaceForCompleteMultipart(a.cfg.GetBool(cfgKludgeUseDefaultXMLNSForCompleteMultipartUpload))
a.settings.useDefaultXMLNamespace(a.cfg.GetBool(cfgKludgeUseDefaultXMLNS))
a.settings.setBypassContentEncodingInChunks(a.cfg.GetBool(cfgKludgeBypassContentEncodingCheckInChunks))
a.settings.setClientCut(a.cfg.GetBool(cfgClientCut))
a.settings.setBufferMaxSizeForPut(a.cfg.GetUint64(cfgBufferMaxSizeForPut))
a.settings.setMD5Enabled(a.cfg.GetBool(cfgMD5Enabled))
}
func (a *App) startServices() {
@ -556,9 +642,11 @@ func (a *App) initServers(ctx context.Context) {
}
srv, err := newServer(ctx, serverInfo)
if err != nil {
a.metrics.MarkUnhealthy(serverInfo.Address)
a.log.Warn(logs.FailedToAddServer, append(fields, zap.Error(err))...)
continue
}
a.metrics.MarkHealthy(serverInfo.Address)
a.servers = append(a.servers, srv)
a.log.Info(logs.AddServer, fields...)
@ -657,25 +745,25 @@ func getAccessBoxCacheConfig(v *viper.Viper, l *zap.Logger) *cache.Config {
}
func (a *App) initHandler() {
cfg := &handler.Config{
Policy: a.settings.policies,
DefaultMaxAge: fetchDefaultMaxAge(a.cfg, a.log),
NotificatorEnabled: a.cfg.GetBool(cfgEnableNATS),
XMLDecoder: a.settings.xmlDecoder,
}
cfg.ResolveZoneList = a.cfg.GetStringSlice(cfgResolveBucketAllow)
cfg.IsResolveListAllow = len(cfg.ResolveZoneList) > 0
if !cfg.IsResolveListAllow {
cfg.ResolveZoneList = a.cfg.GetStringSlice(cfgResolveBucketDeny)
}
cfg.CompleteMultipartKeepalive = a.cfg.GetDuration(cfgKludgeCompleteMultipartUploadKeepalive)
cfg.Kludge = a.settings
var err error
a.api, err = handler.New(a.log, a.obj, a.nc, cfg)
a.api, err = handler.New(a.log, a.obj, a.nc, a.settings)
if err != nil {
a.log.Fatal(logs.CouldNotInitializeAPIHandler, zap.Error(err))
}
}
func (a *App) setRuntimeParameters() {
if len(os.Getenv("GOMEMLIMIT")) != 0 {
// default limit < yaml limit < app env limit < GOMEMLIMIT
a.log.Warn(logs.RuntimeSoftMemoryDefinedWithGOMEMLIMIT)
return
}
softMemoryLimit := fetchSoftMemoryLimit(a.cfg)
previous := debug.SetMemoryLimit(softMemoryLimit)
if softMemoryLimit != previous {
a.log.Info(logs.RuntimeSoftMemoryLimitUpdated,
zap.Int64("new_value", softMemoryLimit),
zap.Int64("old_value", previous))
}
}
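Design note on the settings refactor in this file: the per-field `atomic.Bool`s are replaced by one `RWMutex` guarding every SIGHUP-reloadable field, so related values (the region map and its default policy, for instance) are swapped as a group. The pattern in isolation, as a minimal sketch:

```go
package main

import (
	"fmt"
	"sync"
)

// One RWMutex covers every reloadable field; readers take a shared lock,
// the SIGHUP handler takes the exclusive lock and updates fields together.
type reloadable struct {
	mu  sync.RWMutex
	foo bool
}

func (r *reloadable) Foo() bool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.foo
}

func (r *reloadable) setFoo(v bool) {
	r.mu.Lock()
	r.foo = v
	r.mu.Unlock()
}

func main() {
	var r reloadable
	r.setFoo(true) // what updateSettings does on config reload
	fmt.Println(r.Foo())
}
```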

View file

@ -3,6 +3,7 @@ package main
import (
"encoding/json"
"fmt"
"math"
"os"
"path"
"runtime"
@ -19,12 +20,19 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-s3-gw/internal/version"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"git.frostfs.info/TrueCloudLab/zapjournald"
"github.com/spf13/pflag"
"github.com/spf13/viper"
"github.com/ssgreg/journald"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
const (
destinationStdout = "stdout"
destinationJournald = "journald"
)
const (
defaultRebalanceInterval = 60 * time.Second
defaultHealthcheckTimeout = 15 * time.Second
@ -37,6 +45,11 @@ const (
defaultMaxClientsCount = 100
defaultMaxClientsDeadline = time.Second * 30
defaultSoftMemoryLimit = math.MaxInt64
defaultReadHeaderTimeout = 30 * time.Second
defaultIdleTimeout = 30 * time.Second
)
var defaultCopiesNumbers = []uint32{0}
@ -44,6 +57,7 @@ var defaultCopiesNumbers = []uint32{0}
const ( // Settings.
// Logger.
cfgLoggerLevel = "logger.level"
cfgLoggerDestination = "logger.destination"
// Wallet.
cfgWalletPath = "wallet.path"
@ -127,10 +141,15 @@ const ( // Settings.
cfgApplicationBuildTime = "app.build_time"
// Kludge.
cfgKludgeUseDefaultXMLNSForCompleteMultipartUpload = "kludge.use_default_xmlns_for_complete_multipart"
cfgKludgeCompleteMultipartUploadKeepalive = "kludge.complete_multipart_keepalive"
cfgKludgeUseDefaultXMLNS = "kludge.use_default_xmlns"
cfgKludgeBypassContentEncodingCheckInChunks = "kludge.bypass_content_encoding_check_in_chunks"
// Web.
cfgWebReadTimeout = "web.read_timeout"
cfgWebReadHeaderTimeout = "web.read_header_timeout"
cfgWebWriteTimeout = "web.write_timeout"
cfgWebIdleTimeout = "web.idle_timeout"
// Command line args.
cmdHelp = "help"
cmdVersion = "version"
@ -146,6 +165,8 @@ const ( // Settings.
cfgSetCopiesNumber = "frostfs.set_copies_number"
// Enabling client side object preparing for PUT operations.
cfgClientCut = "frostfs.client_cut"
// Sets max buffer size for read payload in put operations.
cfgBufferMaxSizeForPut = "frostfs.buffer_max_size_for_put"
// List of allowed AccessKeyID prefixes.
cfgAllowedAccessKeyIDPrefixes = "allowed_access_key_id_prefixes"
@ -154,6 +175,12 @@ const ( // Settings.
cfgResolveBucketAllow = "resolve_bucket.allow"
cfgResolveBucketDeny = "resolve_bucket.deny"
// Runtime.
cfgSoftMemoryLimit = "runtime.soft_memory_limit"
// Enable return MD5 checksum in ETag.
cfgMD5Enabled = "features.md5.enabled"
// envPrefix is an environment variables prefix used for configuration.
envPrefix = "S3_GW"
)
@ -230,6 +257,15 @@ func fetchMaxClientsDeadline(cfg *viper.Viper) time.Duration {
return maxClientsDeadline
}
func fetchSoftMemoryLimit(cfg *viper.Viper) int64 {
softMemoryLimit := cfg.GetSizeInBytes(cfgSoftMemoryLimit)
if softMemoryLimit <= 0 {
softMemoryLimit = defaultSoftMemoryLimit
}
return int64(softMemoryLimit)
}
func fetchDefaultPolicy(l *zap.Logger, cfg *viper.Viper) netmap.PlacementPolicy {
var policy netmap.PlacementPolicy
@ -505,6 +541,7 @@ func newSettings() *viper.Viper {
// logger:
v.SetDefault(cfgLoggerLevel, "debug")
v.SetDefault(cfgLoggerDestination, "stdout")
// pool:
v.SetDefault(cfgPoolErrorThreshold, defaultPoolErrorThreshold)
@ -513,11 +550,17 @@ func newSettings() *viper.Viper {
v.SetDefault(cfgPProfAddress, "localhost:8085")
v.SetDefault(cfgPrometheusAddress, "localhost:8086")
// frostfs
v.SetDefault(cfgBufferMaxSizeForPut, 1024*1024) // 1mb
// kludge
v.SetDefault(cfgKludgeUseDefaultXMLNSForCompleteMultipartUpload, false)
v.SetDefault(cfgKludgeCompleteMultipartUploadKeepalive, 10*time.Second)
v.SetDefault(cfgKludgeUseDefaultXMLNS, false)
v.SetDefault(cfgKludgeBypassContentEncodingCheckInChunks, false)
// web
v.SetDefault(cfgWebReadHeaderTimeout, defaultReadHeaderTimeout)
v.SetDefault(cfgWebIdleTimeout, defaultIdleTimeout)
// Bind flags
if err := bindFlags(v, flags); err != nil {
panic(fmt.Errorf("bind flags: %w", err))
@ -703,7 +746,25 @@ func mergeConfig(v *viper.Viper, fileName string) error {
return v.MergeConfig(cfgFile)
}
// newLogger constructs a Logger instance for the current application.
func pickLogger(v *viper.Viper) *Logger {
lvl, err := getLogLevel(v)
if err != nil {
panic(err)
}
dest := v.GetString(cfgLoggerDestination)
switch dest {
case destinationStdout:
return newStdoutLogger(lvl)
case destinationJournald:
return newJournaldLogger(lvl)
default:
panic(fmt.Sprintf("wrong destination for logger: %s", dest))
}
}
// newStdoutLogger constructs a Logger instance for the current application.
// Panics on failure.
//
// Logger contains a logger built from zap's production logging configuration with:
@ -716,12 +777,7 @@ func mergeConfig(v *viper.Viper, fileName string) error {
// Logger records a stack trace for all messages at or above fatal level.
//
// See also zapcore.Level, zap.NewProductionConfig, zap.AddStacktrace.
func newLogger(v *viper.Viper) *Logger {
lvl, err := getLogLevel(v)
if err != nil {
panic(err)
}
func newStdoutLogger(lvl zapcore.Level) *Logger {
c := zap.NewProductionConfig()
c.Level = zap.NewAtomicLevelAt(lvl)
c.Encoding = "console"
@ -740,6 +796,28 @@ func newLogger(v *viper.Viper) *Logger {
}
}
func newJournaldLogger(lvl zapcore.Level) *Logger {
c := zap.NewProductionConfig()
c.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
c.Level = zap.NewAtomicLevelAt(lvl)
encoder := zapcore.NewConsoleEncoder(c.EncoderConfig)
core := zapjournald.NewCore(zap.NewAtomicLevelAt(lvl), encoder, &journald.Journal{}, zapjournald.SyslogFields)
coreWithContext := core.With([]zapcore.Field{
zapjournald.SyslogFacility(zapjournald.LogDaemon),
zapjournald.SyslogIdentifier(),
zapjournald.SyslogPid(),
})
l := zap.New(coreWithContext, zap.AddStacktrace(zap.NewAtomicLevelAt(zap.FatalLevel)))
return &Logger{
logger: l,
lvl: c.Level,
}
}
func getLogLevel(v *viper.Viper) (zapcore.Level, error) {
var lvl zapcore.Level
lvlStr := v.GetString(cfgLoggerLevel)

View file

@ -1,4 +1,4 @@
package xml
package main
import (
"bytes"
@ -35,44 +35,56 @@ func TestDefaultNamespace(t *testing.T) {
`
for _, tc := range []struct {
provider *DecoderProvider
settings *appSettings
input string
err bool
}{
{
provider: NewDecoderProvider(false),
settings: &appSettings{
defaultXMLNS: false,
},
input: xmlBodyWithNamespace,
err: false,
},
{
provider: NewDecoderProvider(false),
settings: &appSettings{
defaultXMLNS: false,
},
input: xmlBody,
err: true,
},
{
provider: NewDecoderProvider(false),
settings: &appSettings{
defaultXMLNS: false,
},
input: xmlBodyWithInvalidNamespace,
err: true,
},
{
provider: NewDecoderProvider(true),
settings: &appSettings{
defaultXMLNS: true,
},
input: xmlBodyWithNamespace,
err: false,
},
{
provider: NewDecoderProvider(true),
settings: &appSettings{
defaultXMLNS: true,
},
input: xmlBody,
err: false,
},
{
provider: NewDecoderProvider(true),
settings: &appSettings{
defaultXMLNS: true,
},
input: xmlBodyWithInvalidNamespace,
err: true,
},
} {
t.Run("", func(t *testing.T) {
model := new(handler.CompleteMultipartUpload)
err := tc.provider.NewCompleteMultipartDecoder(bytes.NewBufferString(tc.input)).Decode(model)
err := tc.settings.NewXMLDecoder(bytes.NewBufferString(tc.input)).Decode(model)
if tc.err {
require.Error(t, err)
} else {

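What `NewXMLDecoder` changes is visible with `encoding/xml` directly: setting `DefaultSpace` makes unadorned elements parse as if they carried the AWS namespace. A self-contained sketch (the struct tag here is assumed, for illustration):

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
)

type CompleteMultipartUpload struct {
	XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CompleteMultipartUpload"`
}

func main() {
	body := []byte(`<CompleteMultipartUpload></CompleteMultipartUpload>`) // no xmlns
	dec := xml.NewDecoder(bytes.NewReader(body))
	dec.DefaultSpace = "http://s3.amazonaws.com/doc/2006-03-01/"
	var model CompleteMultipartUpload
	fmt.Println(dec.Decode(&model)) // <nil>: the namespace is defaulted
}
```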
View file

@ -10,7 +10,7 @@ func main() {
g, _ := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
v := newSettings()
l := newLogger(v)
l := pickLogger(v)
a := newApp(g, l, v)

View file

@ -127,6 +127,8 @@ S3_GW_CORS_DEFAULT_MAX_AGE=600
S3_GW_FROSTFS_SET_COPIES_NUMBER=0
# This flag enables client side object preparing.
S3_GW_FROSTFS_CLIENT_CUT=false
# Sets max buffer size for read payload in put operations.
S3_GW_FROSTFS_BUFFER_MAX_SIZE_FOR_PUT=1048576
# List of allowed AccessKeyID prefixes
# If not set, S3 GW will accept all AccessKeyIDs
@ -136,13 +138,38 @@ S3_GW_ALLOWED_ACCESS_KEY_ID_PREFIXES=Ck9BHsgKcnwfCTUSFm6pxhoNS4cBqgN2NQ8zVgPjqZD
S3_GW_RESOLVE_BUCKET_ALLOW=container
# S3_GW_RESOLVE_BUCKET_DENY=
# Enable using default xml namespace `http://s3.amazonaws.com/doc/2006-03-01/` when parsing `CompleteMultipartUpload` xml body.
S3_GW_KLUDGE_USE_DEFAULT_XMLNS_FOR_COMPLETE_MULTIPART=false
# Set timeout between whitespace transmissions during CompleteMultipartUpload processing.
S3_GW_KLUDGE_COMPLETE_MULTIPART_KEEPALIVE=10s
# Enable using default xml namespace `http://s3.amazonaws.com/doc/2006-03-01/` when parsing xml bodies.
S3_GW_KLUDGE_USE_DEFAULT_XMLNS=false
# Use this flag to be able to use chunked upload approach without having `aws-chunked` value in `Content-Encoding` header.
S3_GW_KLUDGE_BYPASS_CONTENT_ENCODING_CHECK_IN_CHUNKS=false
S3_GW_TRACING_ENABLED=false
S3_GW_TRACING_ENDPOINT="localhost:4318"
S3_GW_TRACING_EXPORTER="otlp_grpc"
S3_GW_RUNTIME_SOFT_MEMORY_LIMIT=1073741824
S3_GW_FEATURES_MD5_ENABLED=false
# ReadTimeout is the maximum duration for reading the entire
# request, including the body. A zero or negative value means
# there will be no timeout.
S3_GW_WEB_READ_TIMEOUT=0
# ReadHeaderTimeout is the amount of time allowed to read
# request headers. The connection's read deadline is reset
# after reading the headers and the Handler can decide what
# is considered too slow for the body. If ReadHeaderTimeout
# is zero, the value of ReadTimeout is used. If both are
# zero, there is no timeout.
S3_GW_WEB_READ_HEADER_TIMEOUT=30s
# WriteTimeout is the maximum duration before timing out
# writes of the response. It is reset whenever a new
# request's header is read. Like ReadTimeout, it does not
# let Handlers make decisions on a per-request basis.
# A zero or negative value means there will be no timeout.
S3_GW_WEB_WRITE_TIMEOUT=0
# IdleTimeout is the maximum amount of time to wait for the
# next request when keep-alives are enabled. If IdleTimeout
# is zero, the value of ReadTimeout is used. If both are
# zero, there is no timeout.
S3_GW_WEB_IDLE_TIMEOUT=30s
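These four settings map one-to-one onto `net/http` server fields; a stdlib-only sketch with the defaults above:

```go
package main

import (
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr:              ":8080",
		ReadTimeout:       0,                // S3_GW_WEB_READ_TIMEOUT
		ReadHeaderTimeout: 30 * time.Second, // S3_GW_WEB_READ_HEADER_TIMEOUT
		WriteTimeout:      0,                // S3_GW_WEB_WRITE_TIMEOUT
		IdleTimeout:       30 * time.Second, // S3_GW_WEB_IDLE_TIMEOUT
	}
	_ = srv.ListenAndServe()
}
```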

View file

@ -43,6 +43,7 @@ listen_domains:
logger:
level: debug
destination: stdout
# RPC endpoint and order of resolving of bucket names
rpc_endpoint: http://morph-chain.frostfs.devenv:30333
@ -152,6 +153,8 @@ frostfs:
set_copies_number: [0]
# This flag enables client side object preparing.
client_cut: false
# Sets max buffer size for read payload in put operations.
buffer_max_size_for_put: 1048576
# List of allowed AccessKeyID prefixes
# If the parameter is omitted, S3 GW will accept all AccessKeyIDs
@ -165,9 +168,41 @@ resolve_bucket:
deny:
kludge:
# Enable using default xml namespace `http://s3.amazonaws.com/doc/2006-03-01/` when parsing `CompleteMultipartUpload` xml body.
use_default_xmlns_for_complete_multipart: false
# Set timeout between whitespace transmissions during CompleteMultipartUpload processing.
complete_multipart_keepalive: 10s
# Enable using default xml namespace `http://s3.amazonaws.com/doc/2006-03-01/` when parsing xml bodies.
use_default_xmlns: false
# Use this flag to be able to use chunked upload approach without having `aws-chunked` value in `Content-Encoding` header.
bypass_content_encoding_check_in_chunks: false
runtime:
soft_memory_limit: 1gb
features:
md5:
enabled: false
web:
# ReadTimeout is the maximum duration for reading the entire
# request, including the body. A zero or negative value means
# there will be no timeout.
read_timeout: 0
# ReadHeaderTimeout is the amount of time allowed to read
# request headers. The connection's read deadline is reset
# after reading the headers and the Handler can decide what
# is considered too slow for the body. If ReadHeaderTimeout
# is zero, the value of ReadTimeout is used. If both are
# zero, there is no timeout.
read_header_timeout: 30s
# WriteTimeout is the maximum duration before timing out
# writes of the response. It is reset whenever a new
# request's header is read. Like ReadTimeout, it does not
# let Handlers make decisions on a per-request basis.
# A zero or negative value means there will be no timeout.
write_timeout: 0
# IdleTimeout is the maximum amount of time to wait for the
# next request when keep-alives are enabled. If IdleTimeout
# is zero, the value of ReadTimeout is used. If both are
# zero, there is no timeout.
idle_timeout: 30s

View file

@ -33,7 +33,7 @@ type ContainerPolicy struct {
// GateData represents gate tokens in AccessBox.
type GateData struct {
AccessKey string
SecretKey string
BearerToken *bearer.Token
SessionTokens []*session.Container
GateKey *keys.PublicKey
@ -77,9 +77,9 @@ func isAppropriateContainerContext(tok *session.Container, verb session.Containe
}
}
// Secrets represents AccessKey and the key to encrypt gate tokens.
// Secrets represents SecretKey and the key to encrypt gate tokens.
type Secrets struct {
AccessKey string
SecretKey string
EphemeralKey *keys.PrivateKey
}
@ -102,7 +102,7 @@ func PackTokens(gatesData []*GateData, secret []byte) (*AccessBox, *Secrets, err
if err != nil {
return nil, nil, fmt.Errorf("create ephemeral key: %w", err)
}
box.OwnerPublicKey = ephemeralKey.PublicKey().Bytes()
box.SeedKey = ephemeralKey.PublicKey().Bytes()
if secret == nil {
secret, err = generateSecret()
@ -120,9 +120,9 @@ func PackTokens(gatesData []*GateData, secret []byte) (*AccessBox, *Secrets, err
// GetTokens returns gate tokens from AccessBox.
func (x *AccessBox) GetTokens(owner *keys.PrivateKey) (*GateData, error) {
sender, err := keys.NewPublicKeyFromBytes(x.OwnerPublicKey, elliptic.P256())
seedKey, err := keys.NewPublicKeyFromBytes(x.SeedKey, elliptic.P256())
if err != nil {
return nil, fmt.Errorf("couldn't unmarshal OwnerPublicKey: %w", err)
return nil, fmt.Errorf("couldn't unmarshal SeedKey: %w", err)
}
ownerKey := owner.PublicKey().Bytes()
for _, gate := range x.Gates {
@ -130,7 +130,7 @@ func (x *AccessBox) GetTokens(owner *keys.PrivateKey) (*GateData, error) {
continue
}
gateData, err := decodeGate(gate, owner, sender)
gateData, err := decodeGate(gate, owner, seedKey)
if err != nil {
return nil, fmt.Errorf("failed to decode gate: %w", err)
}
@ -184,7 +184,7 @@ func (x *AccessBox) addTokens(gatesData []*GateData, ephemeralKey *keys.PrivateK
}
tokens := new(Tokens)
tokens.AccessKey = secret
tokens.SecretKey = secret
tokens.BearerToken = encBearer
tokens.SessionTokens = encSessions
@ -197,25 +197,25 @@ func (x *AccessBox) addTokens(gatesData []*GateData, ephemeralKey *keys.PrivateK
return nil
}
func encodeGate(ephemeralKey *keys.PrivateKey, ownerKey *keys.PublicKey, tokens *Tokens) (*AccessBox_Gate, error) {
func encodeGate(ephemeralKey *keys.PrivateKey, seedKey *keys.PublicKey, tokens *Tokens) (*AccessBox_Gate, error) {
data, err := proto.Marshal(tokens)
if err != nil {
return nil, fmt.Errorf("encode tokens: %w", err)
}
encrypted, err := encrypt(ephemeralKey, ownerKey, data)
encrypted, err := encrypt(ephemeralKey, seedKey, data)
if err != nil {
return nil, fmt.Errorf("ecrypt tokens: %w", err)
}
gate := new(AccessBox_Gate)
gate.GatePublicKey = ownerKey.Bytes()
gate.GatePublicKey = seedKey.Bytes()
gate.Tokens = encrypted
return gate, nil
}
func decodeGate(gate *AccessBox_Gate, owner *keys.PrivateKey, sender *keys.PublicKey) (*GateData, error) {
data, err := decrypt(owner, sender, gate.Tokens)
func decodeGate(gate *AccessBox_Gate, owner *keys.PrivateKey, seedKey *keys.PublicKey) (*GateData, error) {
data, err := decrypt(owner, seedKey, gate.Tokens)
if err != nil {
return nil, fmt.Errorf("decrypt tokens: %w", err)
}
@ -240,7 +240,7 @@ func decodeGate(gate *AccessBox_Gate, owner *keys.PrivateKey, sender *keys.Publi
gateData := NewGateData(owner.PublicKey(), &bearerTkn)
gateData.SessionTokens = sessionTkns
gateData.AccessKey = hex.EncodeToString(tokens.AccessKey)
gateData.SecretKey = hex.EncodeToString(tokens.SecretKey)
return gateData, nil
}
@ -268,8 +268,8 @@ func deriveKey(secret []byte) ([]byte, error) {
return key, err
}
func encrypt(owner *keys.PrivateKey, sender *keys.PublicKey, data []byte) ([]byte, error) {
enc, err := getCipher(owner, sender)
func encrypt(owner *keys.PrivateKey, seedKey *keys.PublicKey, data []byte) ([]byte, error) {
enc, err := getCipher(owner, seedKey)
if err != nil {
return nil, fmt.Errorf("get chiper: %w", err)
}
@ -282,8 +282,8 @@ func encrypt(owner *keys.PrivateKey, sender *keys.PublicKey, data []byte) ([]byt
return enc.Seal(nonce, nonce, data, nil), nil
}
func decrypt(owner *keys.PrivateKey, sender *keys.PublicKey, data []byte) ([]byte, error) {
dec, err := getCipher(owner, sender)
func decrypt(owner *keys.PrivateKey, seedKey *keys.PublicKey, data []byte) ([]byte, error) {
dec, err := getCipher(owner, seedKey)
if err != nil {
return nil, fmt.Errorf("get chiper: %w", err)
}
@ -296,8 +296,8 @@ func decrypt(owner *keys.PrivateKey, sender *keys.PublicKey, data []byte) ([]byt
return dec.Open(nil, nonce, cypher, nil)
}
func getCipher(owner *keys.PrivateKey, sender *keys.PublicKey) (cipher.AEAD, error) {
secret, err := generateShared256(owner, sender)
func getCipher(owner *keys.PrivateKey, seedKey *keys.PublicKey) (cipher.AEAD, error) {
secret, err := generateShared256(owner, seedKey)
if err != nil {
return nil, fmt.Errorf("generate shared key: %w", err)
}
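The framing in `encrypt`/`decrypt` above prepends the nonce to the ciphertext. A sketch of that framing with a fixed key, assuming a ChaCha20-Poly1305 AEAD for illustration (the real key is derived from the shared secret via `generateShared256`):

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

func main() {
	key := make([]byte, chacha20poly1305.KeySize) // all-zero key, illustration only
	aead, err := chacha20poly1305.New(key)
	if err != nil {
		panic(err)
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		panic(err)
	}
	sealed := aead.Seal(nonce, nonce, []byte("tokens"), nil) // nonce || ciphertext
	plain, err := aead.Open(nil, sealed[:aead.NonceSize()], sealed[aead.NonceSize():], nil)
	fmt.Println(string(plain), err) // tokens <nil>
}
```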

View file

@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.28.1
// protoc v3.21.12
// protoc-gen-go v1.30.0
// protoc v3.12.4
// source: creds/accessbox/accessbox.proto
package accessbox
@ -25,7 +25,7 @@ type AccessBox struct {
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
OwnerPublicKey []byte `protobuf:"bytes,1,opt,name=ownerPublicKey,proto3" json:"ownerPublicKey,omitempty"`
SeedKey []byte `protobuf:"bytes,1,opt,name=seedKey,proto3" json:"seedKey,omitempty"`
Gates []*AccessBox_Gate `protobuf:"bytes,2,rep,name=gates,proto3" json:"gates,omitempty"`
ContainerPolicy []*AccessBox_ContainerPolicy `protobuf:"bytes,3,rep,name=containerPolicy,proto3" json:"containerPolicy,omitempty"`
}
@ -62,9 +62,9 @@ func (*AccessBox) Descriptor() ([]byte, []int) {
return file_creds_accessbox_accessbox_proto_rawDescGZIP(), []int{0}
}
func (x *AccessBox) GetOwnerPublicKey() []byte {
func (x *AccessBox) GetSeedKey() []byte {
if x != nil {
return x.OwnerPublicKey
return x.SeedKey
}
return nil
}
@ -88,7 +88,7 @@ type Tokens struct {
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
AccessKey []byte `protobuf:"bytes,1,opt,name=accessKey,proto3" json:"accessKey,omitempty"`
SecretKey []byte `protobuf:"bytes,1,opt,name=secretKey,proto3" json:"secretKey,omitempty"`
BearerToken []byte `protobuf:"bytes,2,opt,name=bearerToken,proto3" json:"bearerToken,omitempty"`
SessionTokens [][]byte `protobuf:"bytes,3,rep,name=sessionTokens,proto3" json:"sessionTokens,omitempty"`
}
@ -125,9 +125,9 @@ func (*Tokens) Descriptor() ([]byte, []int) {
return file_creds_accessbox_accessbox_proto_rawDescGZIP(), []int{1}
}
func (x *Tokens) GetAccessKey() []byte {
func (x *Tokens) GetSecretKey() []byte {
if x != nil {
return x.AccessKey
return x.SecretKey
}
return nil
}
@ -261,41 +261,40 @@ var File_creds_accessbox_accessbox_proto protoreflect.FileDescriptor
var file_creds_accessbox_accessbox_proto_rawDesc = []byte{
0x0a, 0x1f, 0x63, 0x72, 0x65, 0x64, 0x73, 0x2f, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f,
0x78, 0x2f, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f, 0x78, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x12, 0x09, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f, 0x78, 0x22, 0xd5, 0x02, 0x0a,
0x09, 0x41, 0x63, 0x63, 0x65, 0x73, 0x73, 0x42, 0x6f, 0x78, 0x12, 0x26, 0x0a, 0x0e, 0x6f, 0x77,
0x6e, 0x65, 0x72, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01,
0x28, 0x0c, 0x52, 0x0e, 0x6f, 0x77, 0x6e, 0x65, 0x72, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b,
0x65, 0x79, 0x12, 0x2f, 0x0a, 0x05, 0x67, 0x61, 0x74, 0x65, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28,
0x0b, 0x32, 0x19, 0x2e, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f, 0x78, 0x2e, 0x41, 0x63,
0x63, 0x65, 0x73, 0x73, 0x42, 0x6f, 0x78, 0x2e, 0x47, 0x61, 0x74, 0x65, 0x52, 0x05, 0x67, 0x61,
0x74, 0x65, 0x73, 0x12, 0x4e, 0x0a, 0x0f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,
0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x61,
0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f, 0x78, 0x2e, 0x41, 0x63, 0x63, 0x65, 0x73, 0x73, 0x42,
0x6f, 0x78, 0x2e, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x50, 0x6f, 0x6c, 0x69,
0x63, 0x79, 0x52, 0x0f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x50, 0x6f, 0x6c,
0x69, 0x63, 0x79, 0x1a, 0x44, 0x0a, 0x04, 0x47, 0x61, 0x74, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x74,
0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x74, 0x6f, 0x6b,
0x65, 0x6e, 0x73, 0x12, 0x24, 0x0a, 0x0d, 0x67, 0x61, 0x74, 0x65, 0x50, 0x75, 0x62, 0x6c, 0x69,
0x63, 0x4b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0d, 0x67, 0x61, 0x74, 0x65,
0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x1a, 0x59, 0x0a, 0x0f, 0x43, 0x6f, 0x6e,
0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x12, 0x2e, 0x0a, 0x12,
0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x73, 0x74, 0x72, 0x61, 0x69,
0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x12, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x73, 0x74, 0x72, 0x61, 0x69, 0x6e, 0x74, 0x12, 0x16, 0x0a, 0x06,
0x70, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x70, 0x6f,
0x6c, 0x69, 0x63, 0x79, 0x22, 0x6e, 0x0a, 0x06, 0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x12, 0x1c,
0x0a, 0x09, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x4b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x09, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x4b, 0x65, 0x79, 0x12, 0x20, 0x0a, 0x0b,
0x62, 0x65, 0x61, 0x72, 0x65, 0x72, 0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x0b, 0x62, 0x65, 0x61, 0x72, 0x65, 0x72, 0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x12, 0x24,
0x0a, 0x0d, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x18,
0x03, 0x20, 0x03, 0x28, 0x0c, 0x52, 0x0d, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x54, 0x6f,
0x6b, 0x65, 0x6e, 0x73, 0x42, 0x46, 0x5a, 0x44, 0x67, 0x69, 0x74, 0x2e, 0x66, 0x72, 0x6f, 0x73,
0x74, 0x66, 0x73, 0x2e, 0x69, 0x6e, 0x66, 0x6f, 0x2f, 0x54, 0x72, 0x75, 0x65, 0x43, 0x6c, 0x6f,
0x75, 0x64, 0x4c, 0x61, 0x62, 0x2f, 0x66, 0x72, 0x6f, 0x73, 0x74, 0x66, 0x73, 0x2d, 0x73, 0x33,
0x2d, 0x67, 0x77, 0x2f, 0x63, 0x72, 0x65, 0x64, 0x73, 0x2f, 0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x62,
0x6f, 0x78, 0x3b, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f, 0x78, 0x62, 0x06, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x33,
0x6f, 0x12, 0x09, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f, 0x78, 0x22, 0xc7, 0x02, 0x0a,
0x09, 0x41, 0x63, 0x63, 0x65, 0x73, 0x73, 0x42, 0x6f, 0x78, 0x12, 0x18, 0x0a, 0x07, 0x73, 0x65,
0x65, 0x64, 0x4b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x73, 0x65, 0x65,
0x64, 0x4b, 0x65, 0x79, 0x12, 0x2f, 0x0a, 0x05, 0x67, 0x61, 0x74, 0x65, 0x73, 0x18, 0x02, 0x20,
0x03, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f, 0x78, 0x2e,
0x41, 0x63, 0x63, 0x65, 0x73, 0x73, 0x42, 0x6f, 0x78, 0x2e, 0x47, 0x61, 0x74, 0x65, 0x52, 0x05,
0x67, 0x61, 0x74, 0x65, 0x73, 0x12, 0x4e, 0x0a, 0x0f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
0x65, 0x72, 0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24,
0x2e, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f, 0x78, 0x2e, 0x41, 0x63, 0x63, 0x65, 0x73,
0x73, 0x42, 0x6f, 0x78, 0x2e, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x50, 0x6f,
0x6c, 0x69, 0x63, 0x79, 0x52, 0x0f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x50,
0x6f, 0x6c, 0x69, 0x63, 0x79, 0x1a, 0x44, 0x0a, 0x04, 0x47, 0x61, 0x74, 0x65, 0x12, 0x16, 0x0a,
0x06, 0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x74,
0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x12, 0x24, 0x0a, 0x0d, 0x67, 0x61, 0x74, 0x65, 0x50, 0x75, 0x62,
0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0d, 0x67, 0x61,
0x74, 0x65, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x1a, 0x59, 0x0a, 0x0f, 0x43,
0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x12, 0x2e,
0x0a, 0x12, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x73, 0x74, 0x72,
0x61, 0x69, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x12, 0x6c, 0x6f, 0x63, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x43, 0x6f, 0x6e, 0x73, 0x74, 0x72, 0x61, 0x69, 0x6e, 0x74, 0x12, 0x16,
0x0a, 0x06, 0x70, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06,
0x70, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x22, 0x6e, 0x0a, 0x06, 0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x73,
0x12, 0x1c, 0x0a, 0x09, 0x73, 0x65, 0x63, 0x72, 0x65, 0x74, 0x4b, 0x65, 0x79, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0c, 0x52, 0x09, 0x73, 0x65, 0x63, 0x72, 0x65, 0x74, 0x4b, 0x65, 0x79, 0x12, 0x20,
0x0a, 0x0b, 0x62, 0x65, 0x61, 0x72, 0x65, 0x72, 0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x18, 0x02, 0x20,
0x01, 0x28, 0x0c, 0x52, 0x0b, 0x62, 0x65, 0x61, 0x72, 0x65, 0x72, 0x54, 0x6f, 0x6b, 0x65, 0x6e,
0x12, 0x24, 0x0a, 0x0d, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x54, 0x6f, 0x6b, 0x65, 0x6e,
0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0c, 0x52, 0x0d, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e,
0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x42, 0x46, 0x5a, 0x44, 0x67, 0x69, 0x74, 0x2e, 0x66, 0x72,
0x6f, 0x73, 0x74, 0x66, 0x73, 0x2e, 0x69, 0x6e, 0x66, 0x6f, 0x2f, 0x54, 0x72, 0x75, 0x65, 0x43,
0x6c, 0x6f, 0x75, 0x64, 0x4c, 0x61, 0x62, 0x2f, 0x66, 0x72, 0x6f, 0x73, 0x74, 0x66, 0x73, 0x2d,
0x73, 0x33, 0x2d, 0x67, 0x77, 0x2f, 0x63, 0x72, 0x65, 0x64, 0x73, 0x2f, 0x74, 0x6f, 0x6b, 0x65,
0x6e, 0x62, 0x6f, 0x78, 0x3b, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x62, 0x6f, 0x78, 0x62, 0x06,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (

View file

@ -17,13 +17,13 @@ message AccessBox {
bytes policy = 2;
}
bytes ownerPublicKey = 1 [json_name = "ownerPublicKey"];
bytes seedKey = 1 [json_name = "seedKey"];
repeated Gate gates = 2 [json_name = "gates"];
repeated ContainerPolicy containerPolicy = 3 [json_name = "containerPolicy"];
}
message Tokens {
bytes accessKey = 1 [json_name = "accessKey"];
bytes secretKey = 1 [json_name = "secretKey"];
bytes bearerToken = 2 [json_name = "bearerToken"];
repeated bytes sessionTokens = 3 [json_name = "sessionTokens"];
}

View file

@ -27,6 +27,7 @@ potentially).
3. [Obtainment of a secret](#obtaining-credential-secrets)
4. [Generate presigned url](#generate-presigned-url)
5. [Update secrets](#update-secret)
6. [Exit codes](#exit-codes)
## Generation of wallet
@ -371,3 +372,14 @@ Enter password for s3-wallet.json >
"container_id": "HwrdXgetdGcEWAQwi68r1PMvw4iSm1Y5Z1fsFNSD6sQP"
}
```
## Exit codes
Several non-zero exit codes are currently defined.
| Code | Description |
|-------|--------------------------------------------------------------------------------------------|
| 1     | Any unknown errors, including errors generated by the command-line parameter parser.        |
| 2 | Preparation errors: malformed configuration, issues with input data parsing. |
| 3 | FrostFS errors: connectivity problems, misconfiguration. |
| 4 | Business logic errors: `authmate` could not execute its task because of some restrictions. |
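As an illustration of the contract above, here is a minimal Go sketch of mapping error classes to these exit codes; the sentinel errors and the `run` entry point are hypothetical stand-ins, not `authmate`'s actual code.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// Hypothetical sentinel errors standing in for authmate's error classes.
var (
	errPreparation   = errors.New("preparation error")     // exit code 2
	errFrostFS       = errors.New("frostfs error")         // exit code 3
	errBusinessLogic = errors.New("business logic error")  // exit code 4
)

// exitCode maps an error to the documented exit codes, falling back to 1
// for unknown errors and command-line parsing failures.
func exitCode(err error) int {
	switch {
	case err == nil:
		return 0
	case errors.Is(err, errPreparation):
		return 2
	case errors.Is(err, errFrostFS):
		return 3
	case errors.Is(err, errBusinessLogic):
		return 4
	default:
		return 1
	}
}

func main() {
	err := run() // hypothetical entry point
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	os.Exit(exitCode(err))
}

func run() error { return nil }
```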


@ -185,6 +185,9 @@ There are some custom types used for brevity:
| `frostfs` | [Parameters of requests to FrostFS](#frostfs-section) |
| `resolve_bucket` | [Bucket name resolving configuration](#resolve_bucket-section) |
| `kludge` | [Different kludge configuration](#kludge-section) |
| `runtime` | [Runtime configuration](#runtime-section) |
| `features` | [Features configuration](#features-section) |
| `web` | [Web server configuration](#web-section) |
### General section
@ -352,11 +355,13 @@ server:
```yaml
logger:
level: debug
destination: stdout
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|-----------|----------|---------------|---------------|----------------------------------------------------------------------------------------------------|
|---------------|----------|---------------|---------------|----------------------------------------------------------------------------------------------------|
| `level` | `string` | yes | `debug` | Logging level.<br/>Possible values: `debug`, `info`, `warn`, `error`, `dpanic`, `panic`, `fatal`. |
| `destination` | `string` | no | `stdout` | Destination for logger: `stdout` or `journald`. |
### `cache` section
@ -508,12 +513,14 @@ header for `PutObject`, `CopyObject`, `CreateMultipartUpload`.
```yaml
frostfs:
set_copies_number: [0]
client_cut: false
buffer_max_size_for_put: 1048576 # 1mb
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|---------------------|------------|---------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|---------------------------|------------|---------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `set_copies_number` | `[]uint32` | yes | `[0]` | Numbers of the object copies (for each replica) to consider PUT to FrostFS successful. <br/>The default value `[0]` or an empty list means that the object will be processed according to the container's placement policy |
| `client_cut` | `bool` | yes | `false` | This flag enables client-side object preparation. |
| `buffer_max_size_for_put` | `uint64` | yes | `1048576` | Sets the maximum buffer size for reading the payload in PUT operations. |
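For reference, a small sketch (not the gateway's actual wiring) of reading these values with `viper`, which the project already depends on per `go.mod`; the key paths follow the YAML above, the defaults are illustrative.

```go
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	v := viper.New()

	// Defaults mirroring the table above; a real setup would read a config file.
	v.SetDefault("frostfs.set_copies_number", []uint32{0})
	v.SetDefault("frostfs.client_cut", false)
	v.SetDefault("frostfs.buffer_max_size_for_put", uint64(1048576)) // 1mb

	fmt.Println(v.Get("frostfs.set_copies_number"))
	fmt.Println(v.GetBool("frostfs.client_cut"))
	fmt.Println(v.GetUint64("frostfs.buffer_max_size_for_put"))
}
```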
### `resolve_bucket` section
@ -537,13 +544,54 @@ Workarounds for non-standard use cases.
```yaml
kludge:
use_default_xmlns_for_complete_multipart: false
complete_multipart_keepalive: 10s
use_default_xmlns: false
bypass_content_encoding_check_in_chunks: false
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|--------------------------------------------|------------|---------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `use_default_xmlns_for_complete_multipart` | `bool` | yes | false | Enable using default xml namespace `http://s3.amazonaws.com/doc/2006-03-01/` when parse `CompleteMultipartUpload` xml body. |
| `complete_multipart_keepalive` | `duration` | no | 10s | Set timeout between whitespace transmissions during CompleteMultipartUpload processing. |
|-------------------------------------------|------------|---------------|---------------|---------------------------------------------------------------------------------------------------------------------------------|
| `use_default_xmlns` | `bool` | yes | false | Enable using the default XML namespace `http://s3.amazonaws.com/doc/2006-03-01/` when parsing XML bodies. |
| `bypass_content_encoding_check_in_chunks` | `bool` | yes | false | Use this flag to allow the [chunked upload approach](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html) without requiring the `aws-chunked` value in the `Content-Encoding` header. |
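A minimal sketch of what `use_default_xmlns` does under the hood, mirroring the `DecoderProvider` removed later in this compare: set the decoder's `DefaultSpace` so bodies sent without an explicit `xmlns` still match namespace-qualified structs. The request type here is trimmed down for illustration.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

const awsDefaultNamespace = "http://s3.amazonaws.com/doc/2006-03-01/"

// Trimmed-down request type; the real handler types are more elaborate.
type completeMultipartUpload struct {
	XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CompleteMultipartUpload"`
}

func main() {
	// Body without an explicit xmlns, as some SDKs send it.
	body := `<CompleteMultipartUpload></CompleteMultipartUpload>`

	dec := xml.NewDecoder(strings.NewReader(body))
	dec.DefaultSpace = awsDefaultNamespace // what use_default_xmlns enables

	var req completeMultipartUpload
	fmt.Println(dec.Decode(&req)) // <nil>: the namespace check passes
}
```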
### `runtime` section
Contains runtime parameters.
```yaml
runtime:
soft_memory_limit: 1gb
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|---------------------|--------|---------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `soft_memory_limit` | `size` | yes | maxint64 | Soft memory limit for the runtime. A zero or missing value means no limit. If the `GOMEMLIMIT` environment variable is set, the value from the configuration file is ignored. |
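A hedged sketch of how such a soft limit can be applied with the standard library (`runtime/debug.SetMemoryLimit`, Go 1.19+); the log messages echo the constants added in this compare, while the helper itself is illustrative rather than the gateway's actual code.

```go
package main

import (
	"fmt"
	"math"
	"os"
	"runtime/debug"
)

func setSoftMemoryLimit(limitBytes int64) {
	if os.Getenv("GOMEMLIMIT") != "" {
		// The environment variable wins; the config value is skipped.
		fmt.Println("soft runtime memory defined with GOMEMLIMIT environment variable, config value skipped")
		return
	}
	if limitBytes <= 0 {
		limitBytes = math.MaxInt64 // effectively no limit
	}
	prev := debug.SetMemoryLimit(limitBytes)
	fmt.Printf("soft runtime memory limit value updated: %d -> %d\n", prev, limitBytes)
}

func main() {
	setSoftMemoryLimit(1 << 30) // 1gb, as in the YAML above
}
```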
### `features` section
Contains parameters for enabling features.
```yaml
features:
md5:
enabled: false
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|---------------|--------|---------------|---------------|----------------------------------------------------------------|
| `md5.enabled` | `bool` | yes | false | Flag to enable returning the MD5 checksum in ETag headers and fields. |
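As a rough illustration (not the gateway's code path), enabling the flag amounts to exposing a hex-encoded MD5 digest of the payload as the ETag value, which is what AWS-compatible clients expect for simple uploads:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// etagMD5 returns the hex-encoded MD5 digest of the payload.
func etagMD5(payload []byte) string {
	sum := md5.Sum(payload)
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(etagMD5([]byte("hello"))) // 5d41402abc4b2a76b9719d911017c592
}
```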
### `web` section
Contains web server configuration parameters.
```yaml
web:
read_timeout: 0
read_header_timeout: 30s
write_timeout: 0
idle_timeout: 30s
```
| Parameter | Type | SIGHUP reload | Default value | Description |
|-----------------------|------------|---------------|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `read_timeout` | `duration` | no | `0` | The maximum duration for reading the entire request, including the body. A zero or negative value means there will be no timeout. |
| `read_header_timeout` | `duration` | no | `30s` | The amount of time allowed to read request headers. If `read_header_timeout` is zero, the value of `read_timeout` is used. If both are zero, there is no timeout. |
| `write_timeout` | `duration` | no | `0` | The maximum duration before timing out writes of the response. A zero or negative value means there will be no timeout. |
| `idle_timeout` | `duration` | no | `30s` | The maximum amount of time to wait for the next request when keep-alives are enabled. If `idle_timeout` is zero, the value of `read_timeout` is used. If both are zero, there is no timeout. |
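These parameters map directly onto `net/http`'s `Server` fields, whose semantics the table restates; a minimal sketch with the defaults above (the handler and wiring are illustrative):

```go
package main

import (
	"net/http"
	"time"
)

func newServer(handler http.Handler) *http.Server {
	return &http.Server{
		Handler:           handler,
		ReadTimeout:       0,                // entire request, including body; zero means no timeout
		ReadHeaderTimeout: 30 * time.Second, // headers only; falls back to ReadTimeout when zero
		WriteTimeout:      0,                // response writes; zero means no timeout
		IdleTimeout:       30 * time.Second, // keep-alive wait; falls back to ReadTimeout when zero
	}
}

func main() {
	srv := newServer(http.NotFoundHandler())
	_ = srv // srv.ListenAndServe() would start serving with these timeouts
}
```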


@ -0,0 +1,126 @@
# Release instructions
## Pre-release checks
These should run successfully:
* `make all`;
* `make test`;
* `make lint` (should not change any files);
* `go mod tidy` (should not change any files);
## Make release commit
Use `vX.Y.Z` tag for releases and `vX.Y.Z-rc.N` for release candidates
following the [semantic versioning](https://semver.org/) standard.
Create release branch from the master branch of the origin repository:
```shell
$ git checkout -b release/<vX.Y.Z>
```
### Update versions
Write new revision number into the root `VERSION` file:
```shell
$ echo <vX.Y.Z> > VERSION
```
### Writing changelog
Use [keepachangelog](https://keepachangelog.com/en/1.1.0/) as a reference.
Add an entry to the `CHANGELOG.md` following the style established there.
* copy the `Unreleased` section (the next steps apply to the copied section below `Unreleased`)
* replace `Unreleased` link with the new revision number
* update `Unreleased...new` and `new...old` diff-links at the bottom of the file
* add optional codename and release date in the heading
* make sure all changes have references to issues in `#123` format (if possible)
* check master branch and milestone page for missing changes
* remove all empty subsections such as `Added`, `Removed`, etc.
* clean up `Unreleased` section and leave it empty
### Make release commit
Stage changed files for commit using `git add`. Commit the changes:
```shell
$ git commit -s -m 'Release <vX.Y.Z>'
```
### Open pull request
Push release branch:
```shell
$ git push <origin> release/<vX.Y.Z>
```
Open a pull request to the master branch of the origin repository so that the
maintainers can check the changes. Remove the release branch after the merge.
## Tag the release
Pull the master branch with the release commit created in the previous step.
```shell
$ git checkout master && git pull
$ git tag -a <vX.Y.Z>
```
Write a short description for the tag, e.g. `Release vX.Y.Z`
## Push the release tag
```shell
$ git push <upstream> <vX.Y.Z>
```
## Post-release
### Prepare and push images to Docker Hub (if not automated)
Create Docker images for all applications and push them into Docker Hub
(requires [organization](https://hub.docker.com/u/truecloudlab) privileges):
```shell
$ git checkout <vX.Y.Z>
$ make image
$ docker push truecloudlab/frostfs-s3-gw:<X.Y.Z>
```
### Make public release page (if not automated)
Create a new
[release page](https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/releases/new)
and copy description from `CHANGELOG.md`. Build release binaries and attach them
to the release. Publish the release.
### Update development environments
Prepare pull-request in
[frostfs-devenv](https://git.frostfs.info/TrueCloudLab/frostfs-dev-env)
with new versions.
Prepare pull-request in
[frostfs-aio](https://git.frostfs.info/TrueCloudLab/frostfs-aio)
with new versions.
### Close milestone
Look up Forgejo
[milestones](https://git.frostfs.info/TrueCloudLab/frostfs-s3-gw/milestones)
and close the release one if it exists.
### Create support branch
For major or minor release, create support branch in the upstream if it does
not exist yet.
```shell
$ git checkout <vX.Y.0>
$ git checkout -b support/<vX.Y>
$ git push <upstream> support/<vX.Y>
```

go.mod

@ -5,7 +5,8 @@ go 1.20
require (
git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.15.1-0.20230802075510-964c3edb3f44
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230821090303-202412230a05
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20231003164722-60463871dbc2
git.frostfs.info/TrueCloudLab/zapjournald v0.0.0-20231018083019-2b6d84de9a3d
github.com/aws/aws-sdk-go v1.44.6
github.com/bluele/gcache v0.0.2
github.com/go-chi/chi/v5 v5.0.8
@ -19,6 +20,7 @@ require (
github.com/spf13/cobra v1.7.0
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.15.0
github.com/ssgreg/journald v1.0.0
github.com/stretchr/testify v1.8.3
github.com/urfave/cli/v2 v2.3.0
go.opentelemetry.io/otel v1.16.0

go.sum

@ -44,14 +44,16 @@ git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0 h1:FxqFDhQYYgpe41qsIHVOcdzSV
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0/go.mod h1:RUIKZATQLJ+TaYQa60X2fTDwfuhMfm8Ar60bQ5fr+vU=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6 h1:aGQ6QaAnTerQ5Dq5b2/f9DUQtSqPkZZ/bkMx/HKuLCo=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6/go.mod h1:W8Nn08/l6aQ7UlIbpF7FsQou7TVpcRD1ZT1KG4TrFhE=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230821090303-202412230a05 h1:OuViMF54N87FXmaBEpYw3jhzaLrJ/EWOlPL1wUkimE0=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230821090303-202412230a05/go.mod h1:t1akKcUH7iBrFHX8rSXScYMP17k2kYQXMbZooiL5Juw=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20231003164722-60463871dbc2 h1:PHZX/Gh59ZPNG10JtTjBkmKbhKNq84CKu+dJpbzPVOc=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20231003164722-60463871dbc2/go.mod h1:t1akKcUH7iBrFHX8rSXScYMP17k2kYQXMbZooiL5Juw=
git.frostfs.info/TrueCloudLab/hrw v1.2.1 h1:ccBRK21rFvY5R1WotI6LNoPlizk7qSvdfD8lNIRudVc=
git.frostfs.info/TrueCloudLab/hrw v1.2.1/go.mod h1:C1Ygde2n843yTZEQ0FP69jYiuaYV0kriLvP4zm8JuvM=
git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0 h1:M2KR3iBj7WpY3hP10IevfIB9MURr4O9mwVfJ+SjT3HA=
git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0/go.mod h1:okpbKfVYf/BpejtfFTfhZqFP+sZ8rsHrP8Rr/jYPNRc=
git.frostfs.info/TrueCloudLab/tzhash v1.8.0 h1:UFMnUIk0Zh17m8rjGHJMqku2hCgaXDqjqZzS4gsb4UA=
git.frostfs.info/TrueCloudLab/tzhash v1.8.0/go.mod h1:dhY+oy274hV8wGvGL4MwwMpdL3GYvaX1a8GQZQHvlF8=
git.frostfs.info/TrueCloudLab/zapjournald v0.0.0-20231018083019-2b6d84de9a3d h1:Z9UuI+jxzPtwQZUMmATdTuA8/8l2jzBY1rVh/gwBDsw=
git.frostfs.info/TrueCloudLab/zapjournald v0.0.0-20231018083019-2b6d84de9a3d/go.mod h1:rQFJJdEOV7KbbMtQYR2lNfiZk+ONRDJSbMCTWxKt8Fw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/CityOfZion/neo-go v0.62.1-pre.0.20191114145240-e740fbe708f8/go.mod h1:MJCkWUBhi9pn/CrYO1Q3P687y2KeahrOPS9BD9LDGb0=
@ -443,6 +445,8 @@ github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.15.0 h1:js3yy885G8xwJa6iOISGFwd+qlUo5AvyXb7CiihdtiU=
github.com/spf13/viper v1.15.0/go.mod h1:fFcTBJxvhhzSJiZy8n+PeW6t8l+KeT/uTARa0jHOQLA=
github.com/ssgreg/journald v1.0.0 h1:0YmTDPJXxcWDPba12qNMdO6TxvfkFSYpFIJ31CwmLcU=
github.com/ssgreg/journald v1.0.0/go.mod h1:RUckwmTM8ghGWPslq2+ZBZzbb9/2KgjzYZ4JEP+oRt0=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=


@ -57,12 +57,16 @@ func (x *AuthmateFrostFS) CreateContainer(ctx context.Context, prm authmate.PrmC
basicACL.AllowOp(acl.OpObjectHead, acl.RoleOthers)
basicACL.AllowOp(acl.OpObjectSearch, acl.RoleOthers)
return x.frostFS.CreateContainer(ctx, layer.PrmContainerCreate{
res, err := x.frostFS.CreateContainer(ctx, layer.PrmContainerCreate{
Creator: prm.Owner,
Policy: prm.Policy,
Name: prm.FriendlyName,
BasicACL: basicACL,
})
if err != nil {
return cid.ID{}, err
}
return res.ContainerID, nil
}
// GetCredsPayload implements authmate.FrostFS interface method.


@ -106,7 +106,7 @@ var basicACLZero acl.Basic
// CreateContainer implements frostfs.FrostFS interface method.
//
// If prm.BasicACL is zero, 'eacl-public-read-write' is used.
func (x *FrostFS) CreateContainer(ctx context.Context, prm layer.PrmContainerCreate) (cid.ID, error) {
func (x *FrostFS) CreateContainer(ctx context.Context, prm layer.PrmContainerCreate) (*layer.ContainerCreateResult, error) {
if prm.BasicACL == basicACLZero {
prm.BasicACL = acl.PublicRWExtended
}
@ -137,7 +137,7 @@ func (x *FrostFS) CreateContainer(ctx context.Context, prm layer.PrmContainerCre
err := pool.SyncContainerWithNetwork(ctx, &cnr, x.pool)
if err != nil {
return cid.ID{}, handleObjectError("sync container with the network state", err)
return nil, handleObjectError("sync container with the network state", err)
}
prmPut := pool.PrmContainerPut{
@ -150,7 +150,10 @@ func (x *FrostFS) CreateContainer(ctx context.Context, prm layer.PrmContainerCre
// send request to save the container
idCnr, err := x.pool.PutContainer(ctx, prmPut)
return idCnr, handleObjectError("save container via connection pool", err)
return &layer.ContainerCreateResult{
ContainerID: idCnr,
HomomorphicHashDisabled: container.IsHomomorphicHashingDisabled(cnr),
}, handleObjectError("save container via connection pool", err)
}
// UserContainers implements frostfs.FrostFS interface method.
@ -244,6 +247,8 @@ func (x *FrostFS) CreateObject(ctx context.Context, prm layer.PrmObjectCreate) (
prmPut.SetPayload(prm.Payload)
prmPut.SetCopiesNumberVector(prm.CopiesNumber)
prmPut.SetClientCut(prm.ClientCut)
prmPut.WithoutHomomorphicHash(prm.WithoutHomomorphicHash)
prmPut.SetBufferMaxSize(prm.BufferMaxSize)
if prm.BearerToken != nil {
prmPut.UseBearer(*prm.BearerToken)


@ -75,6 +75,7 @@ const (
ResolveBucket = "resolve bucket" // Info in ../../api/layer/layer.go
CouldntDeleteCorsObject = "couldn't delete cors object" // Error in ../../api/layer/cors.go
PutObject = "put object" // Debug in ../../api/layer/object.go
FailedToDeleteObject = "failed to delete object" // Debug in ../../api/layer/object.go
FailedToDiscardPutPayloadProbablyGoroutineLeaks = "failed to discard put payload, probably goroutine leaks" // Warn in ../../api/layer/object.go
FailedToSubmitTaskToPool = "failed to submit task to pool" // Warn in ../../api/layer/object.go
CouldNotFetchObjectMeta = "could not fetch object meta" // Warn in ../../api/layer/object.go
@ -94,6 +95,7 @@ const (
FailedToPassAuthentication = "failed to pass authentication" // Error in ../../api/middleware/auth.go
FailedToResolveCID = "failed to resolve CID" // Debug in ../../api/middleware/metrics.go
RequestStart = "request start" // Info in ../../api/middleware/reqinfo.go
FailedToUnescapeObjectName = "failed to unescape object name" // Warn in ../../api/middleware/reqinfo.go
CouldNotHandleMessage = "could not handle message" // Error in ../../api/notifications/controller.go
CouldNotACKMessage = "could not ACK message" // Error in ../../api/notifications/controller.go
CouldntMarshalAnEvent = "couldn't marshal an event" // Error in ../../api/notifications/controller.go
@ -112,4 +114,6 @@ const (
ListenAndServe = "listen and serve" // Fatal in ../../cmd/s3-gw/app.go
NoHealthyServers = "no healthy servers" // Fatal in ../../cmd/s3-gw/app.go
CouldNotInitializeAPIHandler = "could not initialize API handler" // Fatal in ../../cmd/s3-gw/app.go
RuntimeSoftMemoryDefinedWithGOMEMLIMIT = "soft runtime memory defined with GOMEMLIMIT environment variable, config value skipped" // Warn in ../../cmd/s3-gw/app.go
RuntimeSoftMemoryLimitUpdated = "soft runtime memory limit value updated" // Info in ../../cmd/s3-gw/app.go
)


@ -1,38 +0,0 @@
package xml
import (
"encoding/xml"
"io"
"sync"
)
const awsDefaultNamespace = "http://s3.amazonaws.com/doc/2006-03-01/"
type DecoderProvider struct {
mu sync.RWMutex
defaultXMLNSForCompleteMultipart bool
}
func NewDecoderProvider(defaultNamespace bool) *DecoderProvider {
return &DecoderProvider{
defaultXMLNSForCompleteMultipart: defaultNamespace,
}
}
func (d *DecoderProvider) NewCompleteMultipartDecoder(r io.Reader) *xml.Decoder {
dec := xml.NewDecoder(r)
d.mu.RLock()
if d.defaultXMLNSForCompleteMultipart {
dec.DefaultSpace = awsDefaultNamespace
}
d.mu.RUnlock()
return dec
}
func (d *DecoderProvider) UseDefaultNamespaceForCompleteMultipart(useDefaultNamespace bool) {
d.mu.Lock()
d.defaultXMLNSForCompleteMultipart = useDefaultNamespace
d.mu.Unlock()
}


@ -84,3 +84,19 @@ func (m *AppMetrics) Statistic() *APIStatMetrics {
func (m *AppMetrics) Gather() ([]*dto.MetricFamily, error) {
return m.gate.Gather()
}
func (m *AppMetrics) MarkHealthy(endpoint string) {
if !m.isEnabled() {
return
}
m.gate.HTTPServer.MarkHealthy(endpoint)
}
func (m *AppMetrics) MarkUnhealthy(endpoint string) {
if !m.isEnabled() {
return
}
m.gate.HTTPServer.MarkUnhealthy(endpoint)
}


@ -134,6 +134,16 @@ var appMetricsDesc = map[string]map[string]Description{
VariableLabels: []string{"direction"},
},
},
serverSubsystem: {
httpHealthMetric: Description{
Type: dto.MetricType_GAUGE,
Namespace: namespace,
Subsystem: serverSubsystem,
Name: httpHealthMetric,
Help: "HTTP Server endpoint health",
VariableLabels: []string{"endpoint"},
},
},
}
type Description struct {


@ -21,6 +21,7 @@ type GateMetrics struct {
Pool *poolMetricsCollector
Billing *billingMetrics
Stats *APIStatMetrics
HTTPServer *httpServerMetrics
}
func NewGateMetrics(scraper StatisticScraper) *GateMetrics {
@ -38,12 +39,16 @@ func NewGateMetrics(scraper StatisticScraper) *GateMetrics {
statsMetric := newAPIStatMetrics()
registry.MustRegister(statsMetric)
serverMetric := newHTTPServerMetrics()
registry.MustRegister(serverMetric)
return &GateMetrics{
registry: registry,
State: stateMetric,
Pool: poolMetric,
Billing: billingMetric,
Stats: statsMetric,
HTTPServer: serverMetric,
}
}
@ -52,6 +57,7 @@ func (g *GateMetrics) Unregister() {
g.registry.Unregister(g.Pool)
g.Billing.Unregister()
g.registry.Unregister(g.Stats)
g.registry.Unregister(g.HTTPServer)
}
func (g *GateMetrics) Handler() http.Handler {

metrics/http.go

@ -0,0 +1,34 @@
package metrics
import "github.com/prometheus/client_golang/prometheus"
const (
serverSubsystem = "server"
httpHealthMetric = "health"
)
type httpServerMetrics struct {
endpointHealth *prometheus.GaugeVec
}
func newHTTPServerMetrics() *httpServerMetrics {
return &httpServerMetrics{
endpointHealth: mustNewGaugeVec(appMetricsDesc[serverSubsystem][httpHealthMetric]),
}
}
func (m *httpServerMetrics) Collect(ch chan<- prometheus.Metric) {
m.endpointHealth.Collect(ch)
}
func (m *httpServerMetrics) Describe(desc chan<- *prometheus.Desc) {
m.endpointHealth.Describe(desc)
}
func (m *httpServerMetrics) MarkHealthy(endpoint string) {
m.endpointHealth.WithLabelValues(endpoint).Set(float64(1))
}
func (m *httpServerMetrics) MarkUnhealthy(endpoint string) {
m.endpointHealth.WithLabelValues(endpoint).Set(float64(0))
}


@ -81,6 +81,7 @@ const (
partNumberKV = "Number"
sizeKV = "Size"
etagKV = "ETag"
md5KV = "MD5"
// keys for lock.
isLockKV = "IsLock"
@ -185,6 +186,7 @@ func newNodeVersionFromTreeNode(filePath string, treeNode *treeNode) *data.NodeV
_, isDeleteMarker := treeNode.Get(isDeleteMarkerKV)
_, isCombined := treeNode.Get(isCombinedKV)
eTag, _ := treeNode.Get(etagKV)
md5, _ := treeNode.Get(md5KV)
version := &data.NodeVersion{
BaseNodeVersion: data.BaseNodeVersion{
@ -193,6 +195,7 @@ func newNodeVersionFromTreeNode(filePath string, treeNode *treeNode) *data.NodeV
OID: treeNode.ObjID,
Timestamp: treeNode.TimeStamp,
ETag: eTag,
MD5: md5,
Size: treeNode.Size,
FilePath: filePath,
},
@ -302,6 +305,8 @@ func newPartInfo(node NodeResponse) (*data.PartInfo, error) {
return nil, fmt.Errorf("invalid created timestamp: %w", err)
}
partInfo.Created = time.UnixMilli(utcMilli)
case md5KV:
partInfo.MD5 = value
}
}
@ -578,7 +583,7 @@ func (c *Tree) GetVersions(ctx context.Context, bktInfo *data.BucketInfo, filepa
}
func (c *Tree) GetLatestVersion(ctx context.Context, bktInfo *data.BucketInfo, objectName string) (*data.NodeVersion, error) {
meta := []string{oidKV, isUnversionedKV, isDeleteMarkerKV, etagKV, sizeKV}
meta := []string{oidKV, isUnversionedKV, isDeleteMarkerKV, etagKV, sizeKV, md5KV}
path := pathFromName(objectName)
p := &GetNodesParams{
@ -586,7 +591,7 @@ func (c *Tree) GetLatestVersion(ctx context.Context, bktInfo *data.BucketInfo, o
TreeID: versionTree,
Path: path,
Meta: meta,
LatestOnly: true,
LatestOnly: false,
AllAttrs: false,
}
nodes, err := c.service.GetNodes(ctx, p)
@ -594,11 +599,43 @@ func (c *Tree) GetLatestVersion(ctx context.Context, bktInfo *data.BucketInfo, o
return nil, err
}
if len(nodes) == 0 {
latestNode, err := getLatestNode(nodes)
if err != nil {
return nil, err
}
return newNodeVersion(objectName, latestNode)
}
func getLatestNode(nodes []NodeResponse) (NodeResponse, error) {
var (
maxCreationTime uint64
targetIndexNode = -1
)
for i, node := range nodes {
currentCreationTime := node.GetTimestamp()
if checkExistOID(node.GetMeta()) && currentCreationTime > maxCreationTime {
maxCreationTime = currentCreationTime
targetIndexNode = i
}
}
if targetIndexNode == -1 {
return nil, layer.ErrNodeNotFound
}
return newNodeVersion(objectName, nodes[0])
return nodes[targetIndexNode], nil
}
func checkExistOID(meta []Meta) bool {
for _, kv := range meta {
if kv.GetKey() == "OID" {
return true
}
}
return false
}
// pathFromName splits name by '/'.
@ -992,6 +1029,7 @@ func (c *Tree) AddPart(ctx context.Context, bktInfo *data.BucketInfo, multipartN
sizeKV: strconv.FormatUint(info.Size, 10),
createdKV: strconv.FormatInt(info.Created.UTC().UnixMilli(), 10),
etagKV: info.ETag,
md5KV: info.MD5,
}
for _, part := range parts {
@ -1124,6 +1162,9 @@ func (c *Tree) addVersion(ctx context.Context, bktInfo *data.BucketInfo, treeID
if len(version.ETag) > 0 {
meta[etagKV] = version.ETag
}
if len(version.MD5) > 0 {
meta[md5KV] = version.MD5
}
if version.IsDeleteMarker() {
meta[isDeleteMarkerKV] = "true"
@ -1168,7 +1209,7 @@ func (c *Tree) clearOutdatedVersionInfo(ctx context.Context, bktInfo *data.Bucke
}
func (c *Tree) getVersions(ctx context.Context, bktInfo *data.BucketInfo, treeID, filepath string, onlyUnversioned bool) ([]*data.NodeVersion, error) {
keysToReturn := []string{oidKV, isUnversionedKV, isDeleteMarkerKV, etagKV, sizeKV}
keysToReturn := []string{oidKV, isUnversionedKV, isDeleteMarkerKV, etagKV, sizeKV, md5KV}
path := pathFromName(filepath)
p := &GetNodesParams{
BktInfo: bktInfo,


@ -168,3 +168,127 @@ func TestTreeServiceAddVersion(t *testing.T) {
require.Len(t, versions, 1)
require.Equal(t, storedNode, versions[0])
}
func TestGetLatestNode(t *testing.T) {
for _, tc := range []struct {
name string
nodes []NodeResponse
exceptedNodeID uint64
error bool
}{
{
name: "empty",
nodes: []NodeResponse{},
error: true,
},
{
name: "one node of the object version",
nodes: []NodeResponse{
nodeResponse{
nodeID: 1,
parentID: 0,
timestamp: 1,
meta: []nodeMeta{
{
key: oidKV,
value: []byte(oidtest.ID().String()),
},
},
},
},
exceptedNodeID: 1,
},
{
name: "one node of the object version and one node of the secondary object",
nodes: []NodeResponse{
nodeResponse{
nodeID: 2,
parentID: 0,
timestamp: 3,
meta: []nodeMeta{},
},
nodeResponse{
nodeID: 1,
parentID: 0,
timestamp: 1,
meta: []nodeMeta{
{
key: oidKV,
value: []byte(oidtest.ID().String()),
},
},
},
},
exceptedNodeID: 1,
},
{
name: "all nodes represent a secondary object",
nodes: []NodeResponse{
nodeResponse{
nodeID: 2,
parentID: 0,
timestamp: 3,
meta: []nodeMeta{},
},
nodeResponse{
nodeID: 4,
parentID: 0,
timestamp: 5,
meta: []nodeMeta{},
},
},
error: true,
},
{
name: "several nodes of different types and with different timestamp",
nodes: []NodeResponse{
nodeResponse{
nodeID: 1,
parentID: 0,
timestamp: 1,
meta: []nodeMeta{
{
key: oidKV,
value: []byte(oidtest.ID().String()),
},
},
},
nodeResponse{
nodeID: 3,
parentID: 0,
timestamp: 3,
meta: []nodeMeta{},
},
nodeResponse{
nodeID: 4,
parentID: 0,
timestamp: 4,
meta: []nodeMeta{
{
key: oidKV,
value: []byte(oidtest.ID().String()),
},
},
},
nodeResponse{
nodeID: 6,
parentID: 0,
timestamp: 6,
meta: []nodeMeta{},
},
},
exceptedNodeID: 4,
},
} {
t.Run(tc.name, func(t *testing.T) {
actualNode, err := getLatestNode(tc.nodes)
if tc.error {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tc.exceptedNodeID, actualNode.GetNodeID())
})
}
}