[#1248] node: Do not update cache twice #30
Adopted https://github.com/nspcc-dev/neofs-node/pull/2151.
@ -29,1 +29,3 @@
ttl time.Duration
m sync.RWMutex // protects progMap
progMap map[K]chan struct{} // contains fetch-in-progress keys
ttl time.Duration
We already have valueWithTime, which can be extended. Can we avoid having a map and add a mutex to valueWithTime instead?
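Presumably the suggestion is something along these lines (a sketch only; the mutex field is the hypothetical addition, the rest is a guess at the existing shape of valueWithTime):

```go
import (
	"sync"
	"time"
)

// valueWithTime roughly as it exists in the cache, plus the suggested mutex.
type valueWithTime[V any] struct {
	v V
	t time.Time
	m sync.RWMutex // hypothetical addition: would serialize refreshes of this entry
}
```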
@ -29,1 +29,3 @@
ttl time.Duration
m sync.RWMutex // protects progMap
progMap map[K]chan struct{} // contains fetch-in-progress keys
ttl time.Duration
what if multiple routines come and request the same value that does not exist yet?
@ -29,1 +29,3 @@
ttl time.Duration
m sync.RWMutex // protects progMap
progMap map[K]chan struct{} // contains fetch-in-progress keys
ttl time.Duration
I think it's about something like this:
map[key]{*value, keyMutex}
First get the object from this map under the global mutex (this can be optimized by sharding on the hashed key), then lock keyMutex to get the value.
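If I read this right, the idea is roughly the following (a sketch; keyedCache and entry are made-up names, and the TTL part is left out):

```go
import "sync"

// entry pairs a cached value with the mutex that guards its (re)loading.
type entry[V any] struct {
	m   sync.Mutex // held by the single goroutine that fetches the value
	val *V         // nil until the first fetch succeeds
}

// keyedCache: a global mutex (or shards keyed by a hash of the key) protects
// the map, a per-key mutex protects the value.
type keyedCache[K comparable, V any] struct {
	global  sync.Mutex
	entries map[K]*entry[V]
}

func (c *keyedCache[K, V]) get(key K, fetch func(K) (V, error)) (V, error) {
	// First, get (or create) the per-key entry under the global mutex.
	c.global.Lock()
	e, ok := c.entries[key]
	if !ok {
		e = &entry[V]{}
		c.entries[key] = e
	}
	c.global.Unlock()

	// Then lock the per-key mutex: exactly one goroutine performs the fetch,
	// the rest block here and reuse the stored value.
	e.m.Lock()
	defer e.m.Unlock()
	if e.val != nil {
		return *e.val, nil
	}
	v, err := fetch(key)
	if err != nil {
		var zero V
		return zero, err
	}
	e.val = &v
	return v, nil
}
```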
@fyrchik, @dstepanov-yadro, let's continue that.
@fyrchik, the whole PR is useless then and no new optimization will be merged.
@dstepanov-yadro, updated the PR, please check. i think i may have gotten you wrong.
@ -48,3 +59,4 @@
}
}
func (c *ttlNetCache[K, V]) actualValue(val any) (*valueWithTime[V], bool) {
can we replace any with a generic type?
hm, yeah, actualValue is useless with generics, in fact. dropped
@ -51,0 +68,4 @@
return nil, false
}
func (c *ttlNetCache[K, V]) waitForUpdate(key K, vip *valueInProgress[V]) (V, error) {
key is redundant. Also, why is it a method of ttlNetCache?
used to be used, dropped
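For reference, with the redundant parameter and the receiver dropped, the helper presumably collapses to something like this (a sketch, reusing the valueInProgress type from the hunk below):

```go
// waitForUpdate blocks until the fetching goroutine releases the write lock
// it holds for the duration of the network call, then returns the result
// that goroutine stored into vip.
func waitForUpdate[V any](vip *valueInProgress[V]) (V, error) {
	vip.m.RLock()
	defer vip.m.RUnlock()
	return vip.v, vip.e
}
```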
@ -27,0 +29,4 @@
type valueInProgress[V any] struct {
v V
e error
m sync.RWMutex
not sure if it's a common practice, but I normally see the mutex declared right before the thing it guards.
do you mean field order? (if yes, fixed)
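i.e. the conventional layout would be something like this (only the field order changes):

```go
type valueInProgress[V any] struct {
	m sync.RWMutex // held for writing while v and e below are being filled in
	v V
	e error
}
```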
@ -28,2 +36,3 @@
type ttlNetCache[K comparable, V any] struct {
ttl time.Duration
m sync.RWMutex // protects progMap
progMap map[K]*valueInProgress[V] // contains fetch-in-progress keys
is it simpler to use https://pkg.go.dev/sync#Map and avoid all the mutexes?
Sure, but this usage seems to fall under the description in the next paragraph after the one you are quoting, doesn't it? And type safety is a non-concern given that the parent type is polymorphic.
did not think about it because i've never used it and we do not have it in our repos. added an example (separate commit), may be squashed or dropped @fyrchik @acid-ant @aarifullin
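Not the actual commit, but a sync.Map-based variant would presumably look roughly like this (a sketch; only the fetch-in-progress bookkeeping is shown, the TTL cache itself is omitted, and netFetcher is a made-up name):

```go
import "sync"

type valueInProgress[V any] struct {
	m sync.RWMutex
	v V
	e error
}

type netFetcher[K comparable, V any] struct {
	progMap sync.Map // K -> *valueInProgress[V]
	netRdr  func(K) (V, error)
}

func (c *netFetcher[K, V]) get(key K) (V, error) {
	fresh := &valueInProgress[V]{}
	fresh.m.Lock() // lock before publishing so waiters block until the fetch finishes

	if actual, loaded := c.progMap.LoadOrStore(key, fresh); loaded {
		// Somebody else is already fetching this key: wait for their result.
		// (fresh stays locked, is never stored anywhere and is simply collected.)
		vip := actual.(*valueInProgress[V])
		vip.m.RLock()
		defer vip.m.RUnlock()
		return vip.v, vip.e
	}

	// We won the race and are the fetching goroutine.
	v, err := c.netRdr(key)
	fresh.v, fresh.e = v, err
	c.progMap.Delete(key)
	fresh.m.Unlock()
	return v, err
}
```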
There are delete calls. This point is closer to how this cache will be used, but does not fully correspond to it.
ok, dropped
34c43b18a7 to 0a01917d83
@ -64,3 +80,3 @@
}
v, err := c.netRdr(key)
valInProg := &valueInProgress[V]{}
if you happen to keep this version, it probably should be
well, it seems like it is wrong to me: the whole PR is about restricting simultaneous network cache updates, so we need an "updating" routine to fill the map with an already locked mutex; otherwise, other routines are able to get it and RLock its mutex despite the fact that the value has not been fetched yet. so i think it should be either LoadOrStore with an already held mutex, or Load + Store if we don't want to have useless mutexes
I see. But what I had in mind is that if you are calling LoadOrStore and there happened to be a value already (locked or otherwise), the mutex you just created and locked is never actually unlocked, is it? (the one from the fresh val that didn't happen to be stored in the map).
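The "Load + Store with an already held mutex" option from this exchange could be sketched like this (loadOrPublish is an illustrative name; it relies on c.m and c.progMap from the hunks above and never creates a locked mutex only to discard it):

```go
// loadOrPublish either returns an existing in-progress entry (fetcher == false)
// or publishes a new one whose mutex is already write-locked (fetcher == true).
func (c *ttlNetCache[K, V]) loadOrPublish(key K) (vip *valueInProgress[V], fetcher bool) {
	c.m.Lock()
	defer c.m.Unlock()
	if existing, ok := c.progMap[key]; ok {
		return existing, false
	}
	vip = &valueInProgress[V]{}
	vip.m.Lock() // waiters will block on RLock until the fetch finishes
	c.progMap[key] = vip
	return vip, true
}
```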
0a01917d83 to 261bf48b97
261bf48b97 to dfa39203e8
@ -70,0 +110,4 @@
valInProg.m.Unlock()
c.m.Lock()
delete(c.progMap, key)
Why do we unconditionally delete something from the map we have just written to?
if we are here (that line), we are the routine (the only one per value) that failed to get the value from the cache and decided to fetch it from the network. when the work is done, we remove the record from "the scheduler" (progMap) because it is finished
@ -68,1 +103,4 @@
c.m.Unlock()
v, err := c.netRdr(key)
c.set(key, v, err)
Do we still need this line?
yes, it's still a cache. why did you doubt it? got the value (not concurrently with that PR) -- updated the cache
Because below we explicitly set valInProg values
it is for the concurrent routines (those that wait while the network job is done) only. all the others will take the value directly from the cache because no cache miss will be faced after an update (well, for 30s only, but anyway)
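Putting the thread together, the miss path presumably ends up shaped roughly like this (a sketch; fetchOrWait is an illustrative name, and it reuses the loadOrPublish sketch from the earlier comment):

```go
func (c *ttlNetCache[K, V]) fetchOrWait(key K) (V, error) {
	vip, fetcher := c.loadOrPublish(key)
	if !fetcher {
		// Another goroutine is fetching: block until it unlocks vip.m.
		vip.m.RLock()
		defer vip.m.RUnlock()
		return vip.v, vip.e
	}

	v, err := c.netRdr(key)
	c.set(key, v, err) // callers arriving after this point hit the refreshed cache

	vip.v, vip.e = v, err
	vip.m.Unlock() // wakes the goroutines blocked above

	c.m.Lock()
	delete(c.progMap, key) // the fetch is finished, nothing is "in progress" anymore
	c.m.Unlock()
	return v, err
}
```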
dfa39203e8 to f23af21064