* cache: add sharded cache implementation

  Add the Cache implementation and a few tests. The cache is 256-way sharded, mainly so that each shard has its own lock. The main cache structure is a read-only jump plane into the right shard. This should remove contention on a single main lock and allow more concurrent throughput, although this has not been tested or measured yet. The key into the cache was made a uint32 (hash/fnv) and the hashing no longer uses strings.ToLower, removing garbage in that code path.

* here too
* Minimum shard size
* typos
* blurp
* small cleanups, no defer
* typo
* Add freq based on John's idea
* cherry-pick conflict resolve
* typo
* update from early code review from John
* add prefetch to the cache
* mw/cache: add prefetch
* remove println
* remove comment
* Fix tests
* Test prefetch in setup
* Add start of cache
* try adding different cache options
* Add hacky test case
* not needed
* allow the use of a percentage for prefetch

  If the TTL falls below xx% do a prefetch, if the record was popular. Some other fixes, and correctly prefetch only popular records.
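As a rough illustration of the layout described in the first commit above, here is a minimal sketch of a 256-way sharded cache, assuming one map plus sync.RWMutex per shard and fnv-32 keys. The exported names (New, Add, Get, Len) are taken from the tests below, but the internals, the value type, and how the size argument maps onto shard capacity are assumptions, not the actual implementation:

package cache

import (
	"hash/fnv"
	"sync"
)

// numShards is the fan-out of the cache; the commit message mentions
// 256-way sharding so that each shard can carry its own lock.
const numShards = 256

// shard is one independently locked segment of the cache.
type shard struct {
	sync.RWMutex
	items map[uint32]interface{}
}

// Cache is a read-only "jump plane" into the shards: the Cache itself is
// never locked, only the shard that owns a given key.
type Cache struct {
	shards [numShards]*shard
}

// New returns a sharded cache. How size relates to per-shard capacity
// (and the minimum shard size the commits mention) is assumed here.
func New(size int) *Cache {
	c := &Cache{}
	for i := range c.shards {
		c.shards[i] = &shard{items: make(map[uint32]interface{}, size)}
	}
	return c
}

// Add stores el in the shard that owns key.
func (c *Cache) Add(key uint32, el interface{}) {
	s := c.shards[key%numShards]
	s.Lock()
	s.items[key] = el
	s.Unlock()
}

// Get looks up key, taking only the owning shard's read lock.
func (c *Cache) Get(key uint32) (interface{}, bool) {
	s := c.shards[key%numShards]
	s.RLock()
	el, found := s.items[key]
	s.RUnlock()
	return el, found
}

// Len walks all shards; the total is approximate under concurrent writers.
func (c *Cache) Len() int {
	n := 0
	for _, s := range c.shards {
		s.RLock()
		n += len(s.items)
		s.RUnlock()
	}
	return n
}

// Key hashes a query name to a uint32 with hash/fnv; per the commit
// message the hot path avoids strings.ToLower, so the caller is expected
// to pass an already lower-cased name.
func Key(name string) uint32 {
	h := fnv.New32()
	h.Write([]byte(name))
	return h.Sum32()
}

Keeping the top-level structure immutable after New is what makes it a pure jump plane: a lookup only ever contends on the one shard that owns the key.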
package cache

import "testing"

func TestCacheAddAndGet(t *testing.T) {
	c := New(4)
	c.Add(1, 1)

	if _, found := c.Get(1); !found {
		t.Fatal("Failed to find inserted record")
	}
}

func TestCacheLen(t *testing.T) {
	c := New(4)

	c.Add(1, 1)
	if l := c.Len(); l != 1 {
		t.Fatalf("Cache size should be %d, got %d", 1, l)
	}

	// Adding the same key again must not grow the cache.
	c.Add(1, 1)
	if l := c.Len(); l != 1 {
		t.Fatalf("Cache size should be %d, got %d", 1, l)
	}

	// A new key should grow the cache by one.
	c.Add(2, 2)
	if l := c.Len(); l != 2 {
		t.Fatalf("Cache size should be %d, got %d", 2, l)
	}
}
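The percentage-based prefetch from the last commit ("If the TTL falls below xx% do a prefetch, if the record was popular") boils down to a check like the hypothetical shouldPrefetch below. The threshold constants and the item fields are illustrative assumptions, not the plugin's real configuration or types:

package cache

import "time"

// Hypothetical thresholds; the real percentage and hit count are meant to
// be configurable, these values are only for illustration.
const (
	prefetchPercentage = 10 // prefetch when less than 10% of the TTL is left
	prefetchThreshold  = 3  // a record counts as "popular" after 3 hits
)

// item is a minimal stand-in for a cached record: when it was stored,
// its original TTL, and how often it has been requested.
type item struct {
	stored time.Time
	ttl    time.Duration
	freq   int
}

// shouldPrefetch reports whether a record is both popular enough and close
// enough to expiry to be refreshed ahead of time.
func shouldPrefetch(it *item, now time.Time) bool {
	if it.freq < prefetchThreshold {
		return false // not popular enough
	}
	left := it.ttl - now.Sub(it.stored)
	if left <= 0 {
		return false // already expired, a normal refresh will happen
	}
	return float64(left) < float64(it.ttl)*prefetchPercentage/100
}

Gating on the hit count first is what keeps the prefetcher from refreshing records that were only asked for once.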