Refactor 'Pool' #316
Reference: TrueCloudLab/frostfs-sdk-go#316
Close #300
This PR splits the functionality of Pool into the following components:

- Pool: Provides the external interface for interaction.
- connectionManager: Implements the internal functionality for managing clients.
- healthCheck: Handles client health monitoring.

@@ -0,0 +12,4 @@
	"sync/atomic"
	"time"

	sdkClient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
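For orientation, the three-way split described above could be sketched roughly like this. This is a minimal, hypothetical sketch: the field names and the connection method are assumptions, not the PR's actual code.

```go
package main

import (
	"errors"
	"fmt"
)

// client stands in for the SDK client wrapper (hypothetical).
type client struct{ endpoint string }

// connectionManager implements the internal client management.
type connectionManager struct{ clients []client }

// connection returns a ready-to-use client, or an error if none exist.
func (cm *connectionManager) connection() (client, error) {
	if len(cm.clients) == 0 {
		return client{}, errors.New("no healthy clients")
	}
	return cm.clients[0], nil
}

// healthCheck handles client health monitoring (stubbed here).
type healthCheck struct{}

// Pool provides the external interface and delegates internals.
type Pool struct {
	manager *connectionManager
	health  *healthCheck
}

func main() {
	p := &Pool{manager: &connectionManager{clients: []client{{endpoint: "grpc://node1:8080"}}}}
	c, err := p.manager.connection()
	fmt.Println(c.endpoint, err)
}
```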
nitpick: consider removing the uppercase from sdkClient in all files.

It is copy-paste; I would leave it as-is.
@@ -1967,3 +1948,3 @@
)

-// NewPool creates connection pool using parameters.
+// NewPool creates cnnectionManager using parameters.

Typo. Overall, I would change the comment a bit, e.g. "NewPool inits a connectionManager and returns a new Pool instance". Same for newConnectionManager.

If NewPool creates a connectionManager, what does newConnectionManager do?

Fixed. I think the comment for NewPool should not contain information about private components of Pool.

// newConnectionManager returns an instance of connectionManager configured according to the parameters.
@@ -2688,19 +2190,19 @@ func (p *Pool) GetObject(ctx context.Context, prm PrmObjectGet) (ResGetObject, e
//
// Main return value MUST NOT be processed on an erroneous return.

question: Are these comments (here and in other methods) necessary? I believe it's clear that we should not process the result if an error occurs.

Fixed
In general, LGTM, but I have a feeling that these entities (Pool, connectionManager and healthCheck) are still tightly coupled: they know a lot about each other and share responsibility.

The healthCheck struct knows about buffers and node params:

func (h *healthCheck) startRebalance(ctx context.Context, callback func(ctx context.Context, buffers [][]float64), nodesParams []*nodesParam) {

though it just runs the callback on a ticker:

case <-ticker.C:
	callback(ctx, buffers)
	ticker.Reset(h.clientRebalanceInterval)

The Pool struct knows a lot about connectionManager. Let's look at PutObject:

func (p *Pool) PutObject(ctx context.Context, prm PrmObjectPut) (ResPutObject, error) {

As I understand it, the main idea is getting a ready-to-use client from the connection manager. I would expect we form some parameters (prmContext is just fine) and invoke a method (initCallContext is also just fine) to get a client. But:

p.manager.fillAppropriateKey(&prm.prmCommon)
if err := p.manager.openDefaultSession(ctx, &ctxCall); err != nil {
ni.SetCurrentEpoch(p.manager.cache.Epoch())
p.manager.checkSessionTokenErr(err, ctxCall.endpoint)

I suggest we make all these structs more loosely coupled. To make the responsibilities easier to understand, we could place (maybe just during development) each struct into a separate package, so callers know nothing about the inner structure and use only exported methods.
8cce85337b to 2df73145df
2df73145df to b306cf06e4
@dkirillov

- Now the callback only receives ctx.
- Moved the call fillAppropriateKey(&prm.prmCommon) to initCall.
- Moved manager.cache.Epoch() to manager.Epoch().

fixed (diff)
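The last point, hiding manager.cache.Epoch() behind manager.Epoch(), is a small delegation method. A minimal sketch under assumptions (the cache type and its field here are hypothetical):

```go
package main

import "fmt"

// cache is a stand-in for the manager's internal cache; the real
// type and fields are assumptions based on the review comment.
type cache struct{ epoch uint64 }

func (c *cache) Epoch() uint64 { return c.epoch }

type connectionManager struct{ cache *cache }

// Epoch delegates to the internal cache, so callers such as Pool
// no longer reach into p.manager.cache directly.
func (cm *connectionManager) Epoch() uint64 { return cm.cache.Epoch() }

func main() {
	cm := &connectionManager{cache: &cache{epoch: 42}}
	fmt.Println(cm.Epoch())
}
```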
Maybe we can keep all the initCall logic in Pool, and connectionManager will just return a healthy client/connection?

b306cf06e4 to 7a2b153c11
@dkirillov Ok, all session management logic is left in Pool. connectionManager returns only client and statistic.
Since we are doing a refactor, can we also move clientWrapper (with all its methods and related structs: clientStatusMonitor, wrapperPrm) to a separate file (client.go, for example)?

@@ -0,0 +261,4 @@
// Iterate iterates over all clients in the connectionManager. It returns false if there are no more clients.
// User MUST fully consume the iterator until false is received.
func (cm *connectionManager) iterate() func() (client, bool) {

Why don't we use something like:

You are right, your solution is more elegant.

Final diff
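For reference, the quoted signature can be implemented as a closure that walks the inner pools. This is one possible sketch (innerPool and its fields are assumptions), not necessarily the variant the reviewer proposed:

```go
package main

import "fmt"

type client struct{ addr string }

type innerPool struct{ clients []client }

type connectionManager struct{ innerPools []*innerPool }

// iterate returns a closure that yields each client exactly once
// and then reports false, matching the signature in the review.
func (cm *connectionManager) iterate() func() (client, bool) {
	i, j := 0, 0 // i indexes inner pools, j indexes clients within one
	return func() (client, bool) {
		for i < len(cm.innerPools) {
			pool := cm.innerPools[i]
			if j < len(pool.clients) {
				c := pool.clients[j]
				j++
				return c, true
			}
			i++
			j = 0
		}
		return client{}, false
	}
}

func main() {
	cm := &connectionManager{innerPools: []*innerPool{
		{clients: []client{{"a"}, {"b"}}},
		{clients: []client{{"c"}}},
	}}
	next := cm.iterate()
	for c, ok := next(); ok; c, ok = next() {
		fmt.Println(c.addr)
	}
}
```

The caller must drain the iterator until it returns false, as the doc comment demands; otherwise the closure's indices are simply abandoned with the closure itself.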
@@ -0,0 +334,4 @@
stat := Statistic{}
for _, inner := range cm.innerPools {
	nodes := make([]string, 0, len(inner.clients))
	inner.lock.RLock()

It seems we don't need to use a mutex here. Each client is thread-safe, and the mutex protects only inner.sampler.

Done
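The point about lock scope can be illustrated as follows: the RWMutex guards only the sampler, while the clients slice is read without locking because clients are themselves thread-safe. A hedged sketch with assumed field names:

```go
package main

import (
	"fmt"
	"sync"
)

type client struct{ addr string }

type sampler struct{ weights []float64 }

// innerPool: per the review, the mutex protects only the sampler;
// the clients slice is immutable after construction and each
// client is assumed thread-safe.
type innerPool struct {
	lock    sync.RWMutex
	sampler *sampler
	clients []client
}

// nodes reads clients without locking: the slice never changes.
func (p *innerPool) nodes() []string {
	nodes := make([]string, 0, len(p.clients))
	for _, c := range p.clients {
		nodes = append(nodes, c.addr)
	}
	return nodes
}

// weights takes the read lock only around sampler access.
func (p *innerPool) weights() []float64 {
	p.lock.RLock()
	defer p.lock.RUnlock()
	return append([]float64(nil), p.sampler.weights...)
}

func main() {
	p := &innerPool{
		sampler: &sampler{weights: []float64{0.5, 0.5}},
		clients: []client{{"n1"}, {"n2"}},
	}
	fmt.Println(p.nodes(), p.weights())
}
```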
7a2b153c11 to ad36d13b3b
ad36d13b3b to 43c6924116
43c6924116 to 656727e79b
656727e79b to 70a543746b
LGTM, but what do you think about #316 (comment)?

70a543746b to 143be39782
New commits pushed, approval review dismissed automatically according to repository settings
143be39782 to be26e50fb5
be26e50fb5 to 48262e937f
48262e937f to 6a3bf114ea
6a3bf114ea to fc8aebdb25
⬆️ Add containerListStream

fc8aebdb25 to ac8b8b4b47

⬆️ Rename field Account to OwnerID in PrmContainerList

ac8b8b4b47 to e554b6b255
e554b6b255 to 55fd24e156
What is with this commit message 55fd24e156?

55fd24e156 to 7017ae6b28