[#203] Add pool docs
All checks were successful
DCO / DCO (pull_request) Successful in 48s
Tests and linters / Tests (1.22) (pull_request) Successful in 1m3s
Tests and linters / Tests (1.23) (pull_request) Successful in 1m2s
Tests and linters / Lint (pull_request) Successful in 1m41s

Signed-off-by: Nikita Zinkevich <n.zinkevich@yadro.com>
Nikita Zinkevich 2024-08-29 15:52:21 +03:00
parent f0b9493ce3
commit 3c00f4eeac
4 changed files with 34 additions and 4 deletions

doc/image/pool.png (new binary file, 194 KiB)

doc/pool.md (new file, 27 lines)

@@ -0,0 +1,27 @@
# Node connection pool
* Distributes requests between a fixed number of nodes
* Wraps
The distribution of requests between the nodes in the connection pool is based on the priority and
weight parameters from the `NodeParam` struct. The distribution model is presented below; in this
scheme, nodes with the same priority have the same color.
![](./image/pool.png "Pool connections distribution model")
## Priority
The pool forwards requests to the nodes with the highest priority (the lower the value, the higher
the priority). In scenario (I) from the image, `node 1` is healthy and has the highest priority (1),
so the pool forwards requests to it. There are no other nodes with priority 1, so `node 1` receives
all requests. In scenario (II), `node 1` becomes unhealthy. In that case the pool tries to connect
to the nodes next in priority, i.e. `node 2` and `node 4`. If all of them become unhealthy too, the
pool sends requests to the nodes with priority 3, as in scenario (III), and so on.
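To make this concrete, here is a minimal sketch of configuring the tiers from the diagram. The
addresses and the throwaway key are placeholder assumptions, and the import path is assumed to be
this repository's; `InitParameters`, `NewNodeParam`, `NewPool`, `Dial`, and `Close` are the pool
API used by this SDK:

```go
package main

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"log"

	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool" // assumed import path
)

func main() {
	// Throwaway key for the sketch; a real client loads its own key.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// Mirror the diagram: one priority-1 node and two priority-2 fallbacks
	// with weights 2 and 8.
	var prm pool.InitParameters
	prm.SetKey(key)
	prm.AddNode(pool.NewNodeParam(1, "grpc://192.168.130.71:8080", 1)) // node 1
	prm.AddNode(pool.NewNodeParam(2, "grpc://192.168.130.72:8080", 2)) // node 2
	prm.AddNode(pool.NewNodeParam(2, "grpc://192.168.130.74:8080", 8)) // node 4

	p, err := pool.NewPool(prm)
	if err != nil {
		log.Fatal(err)
	}
	if err := p.Dial(context.Background()); err != nil {
		log.Fatal(err)
	}
	defer p.Close()
}
```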
## Weights
If there are several healthy nodes with the same priority, requests are distributed randomly
between them according to their weights: each node's share is the ratio of its weight to the sum
of the weights within that priority. For example, for `node 2` and `node 4` with weights 2 and 8,
the distribution would be 20% and 80% respectively.
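As a quick sanity check of that arithmetic, a node's share is its weight divided by the total
weight of its priority tier. A hypothetical helper, not part of the pool API:

```go
// share returns the fraction of requests a node receives among nodes of the
// same priority: weight / sum of all weights in the tier.
// For the example above: share(2, []float64{2, 8}) == 0.2 and
// share(8, []float64{2, 8}) == 0.8.
func share(weight float64, tier []float64) float64 {
	var total float64
	for _, w := range tier {
		total += w
	}
	return weight / total
}
```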

@@ -7,8 +7,7 @@ a weighted random selection of the underlying client to make requests.
 Create pool instance with 3 nodes connection.
 This InitParameters will make pool use 192.168.130.71 node while it is healthy. Otherwise, it will make the pool use
-192.168.130.72 for 90% of requests and 192.168.130.73 for remaining 10%.
-:
+192.168.130.72 for 90% of requests and 192.168.130.73 for remaining 10%:
 var prm pool.InitParameters
 prm.SetKey(key)
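For context, such a setup typically continues along the following lines (a sketch rather than the
verbatim file contents; weights 9 and 1 on the two priority-2 nodes produce the 90%/10% split
described above):

```go
prm.AddNode(pool.NewNodeParam(1, "192.168.130.71", 1))
prm.AddNode(pool.NewNodeParam(2, "192.168.130.72", 9))
prm.AddNode(pool.NewNodeParam(2, "192.168.130.73", 1))

p, err := pool.NewPool(prm)
if err != nil {
	// handle error
}

if err = p.Dial(ctx); err != nil {
	// handle error
}
```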

@@ -293,7 +293,7 @@ func (x *wrapperPrm) setErrorThreshold(threshold uint32) {
 	x.errorThreshold = threshold
 }
-// SetGracefulCloseOnSwitchTimeout specifies the timeout after which unhealthy client be closed during rebalancing
+// setGracefulCloseOnSwitchTimeout specifies the timeout after which an unhealthy client will be closed during rebalancing
 // if it becomes healthy again.
 //
 // See also setErrorThreshold.
@@ -1450,6 +1450,9 @@ func (x *NodeParam) SetPriority(priority int) {
 }
 // Priority returns priority of the node.
+// Requests will be served by the subset of nodes with the highest priority (the smaller the value,
+// the higher the priority). If there are no healthy nodes in the subsets with the current or a
+// higher priority, requests will be served by a subset of nodes with lower priority.
 func (x *NodeParam) Priority() int {
 	return x.priority
 }
@@ -1465,6 +1468,7 @@ func (x *NodeParam) Address() string {
 }
 // SetWeight specifies weight of the node.
+// Weights are used to adjust the distribution of requests between nodes with the same priority.
 func (x *NodeParam) SetWeight(weight float64) {
 	x.weight = weight
 }
@@ -1508,7 +1512,7 @@ func (x *WaitParams) checkForPositive() {
 	}
 }
-// CheckForValid checks if all wait params are non-negative.
+// CheckValidity checks that all wait params are positive.
 func (x *WaitParams) CheckValidity() error {
 	if x.Timeout <= 0 {
 		return errors.New("timeout cannot be negative")