neo-go/pkg/core/storage/memcached_store.go


package storage

import (
	"bytes"
	"context"
	"sort"
	"strings"
	"sync"

	"github.com/nspcc-dev/neo-go/pkg/util/slice"
)
storage: allow accessing MemCachedStore during Persist

Persist by its definition doesn't change the MemCachedStore's visible state: all KV pairs that were accessible via it before Persist remain accessible after Persist. The only thing it does is flush the current set of KV pairs from memory to the persistent store. To do that it needs read-only access to the current KV pair set, but technically it then replaces the maps, so we had to take a full write lock, which made MemCachedStore inaccessible for the duration of Persist. And Persist can take a lot of time; it's about disk access for regular DBs.

What we do here is create new in-memory maps for MemCachedStore before flushing the old ones to the persistent store. Then a fake persistent store is created, which actually is a MemCachedStore with the old maps, so it has exactly the same visible state. This Store is never accessed for writes, so we can read it without taking any internal locks, and at the same time we no longer need write locks on the original MemCachedStore, since we're not using its maps. All of this makes it possible to use MemCachedStore as usual: reads are handled by going down to whatever level is needed, and writes are handled by the new maps. So while Persist for (*Blockchain).dao does its most time-consuming work, we can process other blocks (reading data for transactions and persisting storeBlock caches to (*Blockchain).dao).

The change was tested for performance with neo-bench (single node, 10 workers, LevelDB) on two machines, and with block dump processing (RC4 testnet up to 62800 with VerifyBlocks set to false) on i7-8565U.

Reference results (bbe4e9cd7bb33428633586f080f64494cd6ac9cf):

Ryzen 9 5950X:
RPS    23616.969 22817.086 23222.378 ≈ 23218 ± 1.72%
TPS    23047.316 22608.578 22735.540 ≈ 22797 ± 0.99%
CPU %  23.434    25.553    23.848    ≈ 24.3  ± 4.63%
Mem MB 600.636   503.060   582.043   ≈ 562   ± 9.22%

Core i7-8565U:
RPS    6594.007 6499.501 6572.902 ≈ 6555 ± 0.76%
TPS    6561.680 6444.545 6510.120 ≈ 6505 ± 0.90%
CPU %  58.452   60.568   62.474   ≈ 60.5 ± 3.33%
Mem MB 234.893  285.067  269.081  ≈ 263  ± 9.75%

DB restore:
real 0m22.237s 0m23.471s 0m23.409s ≈ 23.04 ± 3.02%
user 0m35.435s 0m38.943s 0m39.247s ≈ 37.88 ± 5.59%
sys  0m3.085s  0m3.360s  0m3.144s  ≈ 3.20  ± 4.53%

After the change:

Ryzen 9 5950X:
RPS    27747.349 27407.726 27520.210 ≈ 27558 ± 0.63% ↑ 18.69%
TPS    26992.010 26993.468 27010.966 ≈ 26999 ± 0.04% ↑ 18.43%
CPU %  28.928    28.096    29.105    ≈ 28.7  ± 1.88% ↑ 18.1%
Mem MB 760.385   726.320   756.118   ≈ 748   ± 2.48% ↑ 33.10%

Core i7-8565U:
RPS    7783.229 7628.409 7542.340 ≈ 7651 ± 1.60% ↑ 16.72%
TPS    7708.436 7607.397 7489.459 ≈ 7602 ± 1.44% ↑ 16.85%
CPU %  74.899   71.020   72.697   ≈ 72.9 ± 2.67% ↑ 20.50%
Mem MB 438.047  436.967  416.350  ≈ 430  ± 2.84% ↑ 63.50%

DB restore:
real 0m20.838s 0m21.895s 0m21.794s ≈ 21.51 ± 2.71% ↓ 6.64%
user 0m39.091s 0m40.565s 0m41.493s ≈ 40.38 ± 3.00% ↑ 6.60%
sys  0m3.184s  0m2.923s  0m3.062s  ≈ 3.06  ± 4.27% ↓ 4.38%

It obviously uses more memory now and utilizes the CPU more aggressively, but at the same time it improves all relevant metrics and finally reaches a situation where we process 50K transactions in less than a second on Ryzen 9 5950X (going higher than 25K TPS). The other observation is a much more stable block time; on Ryzen 9 it's as close to 1 second as it can be.
2021-07-30 20:35:03 +00:00
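The map-swap trick described in the commit message above can be illustrated with a small, self-contained sketch. This is not the real MemCachedStore API: `toyCache`, its `prev` field and string values are illustrative simplifications (the real code routes reads to the old maps through a nested store in `ps` rather than a dedicated field), and the separately locked `disk` map stands in for a persistent store that does its own synchronization.

```go
package main

import (
	"fmt"
	"sync"
)

// toyCache mimics the layering: pending writes in mem, a read-only
// snapshot in prev while a flush is running, and disk as the slow
// persistent layer (standing in for ps, which locks itself).
type toyCache struct {
	mu   sync.RWMutex // guards mem and prev
	mem  map[string]string
	prev map[string]string
	dmu  sync.Mutex // the "persistent store" does its own locking
	disk map[string]string
}

func newToyCache() *toyCache {
	return &toyCache{mem: map[string]string{}, disk: map[string]string{}}
}

func (c *toyCache) Put(k, v string) {
	c.mu.Lock()
	c.mem[k] = v
	c.mu.Unlock()
}

// Get sees exactly the same KV set before, during and after Persist:
// fresh writes first, then the snapshot being flushed, then disk.
func (c *toyCache) Get(k string) (string, bool) {
	c.mu.RLock()
	if v, ok := c.mem[k]; ok {
		c.mu.RUnlock()
		return v, true
	}
	if v, ok := c.prev[k]; ok {
		c.mu.RUnlock()
		return v, true
	}
	c.mu.RUnlock()
	c.dmu.Lock()
	defer c.dmu.Unlock()
	v, ok := c.disk[k]
	return v, ok
}

// Persist holds the write lock only for the map swap; the slow flush
// itself runs without blocking readers or writers of the cache.
func (c *toyCache) Persist() int {
	c.mu.Lock()
	c.prev = c.mem              // old writes become a read-only snapshot
	c.mem = map[string]string{} // new writes land in a fresh map
	snap := c.prev
	c.mu.Unlock()

	c.dmu.Lock() // slow "disk" write, no cache lock held
	for k, v := range snap {
		c.disk[k] = v
	}
	c.dmu.Unlock()

	c.mu.Lock()
	c.prev = nil // snapshot is fully on disk now, drop it
	c.mu.Unlock()
	return len(snap)
}

func main() {
	c := newToyCache()
	c.Put("a", "1")
	fmt.Println(c.Persist()) // 1
	v, ok := c.Get("a")
	fmt.Println(v, ok) // 1 true
}
```

The essential property is visible in `Persist`: the write lock is held only while pointers are swapped, so readers and writers are blocked for nanoseconds instead of the whole disk flush.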
// MemCachedStore is a wrapper around a persistent store that caches all changes
// being made to it, for them to be later flushed in one batch.
type MemCachedStore struct {
	MemoryStore
	// plock protects Persist from double entrance.
	plock sync.Mutex
	// Persistent Store.
	ps Store
}
type (
	// KeyValue represents a key-value pair.
	KeyValue struct {
		Key   []byte
		Value []byte
	}

	// KeyValueExists represents a key-value pair with an indicator of whether
	// the item exists in the persistent storage.
	KeyValueExists struct {
		KeyValue

		Exists bool
	}

	// MemBatch represents a changeset to be persisted.
	MemBatch struct {
		Put     []KeyValueExists
		Deleted []KeyValueExists
	}
)
// NewMemCachedStore creates a new MemCachedStore object.
func NewMemCachedStore(lower Store) *MemCachedStore {
	return &MemCachedStore{
		MemoryStore: *NewMemoryStore(),
		ps:          lower,
	}
}
// Get implements the Store interface.
func (s *MemCachedStore) Get(key []byte) ([]byte, error) {
	s.mut.RLock()
	defer s.mut.RUnlock()
	k := string(key)
	if val, ok := s.mem[k]; ok {
		return val, nil
	}
	if _, ok := s.del[k]; ok {
		return nil, ErrKeyNotFound
	}
	return s.ps.Get(key)
}
// GetBatch returns the currently accumulated changeset.
func (s *MemCachedStore) GetBatch() *MemBatch {
	s.mut.RLock()
	defer s.mut.RUnlock()

	var b MemBatch

	b.Put = make([]KeyValueExists, 0, len(s.mem))
	for k, v := range s.mem {
		key := []byte(k)
		_, err := s.ps.Get(key)
		b.Put = append(b.Put, KeyValueExists{KeyValue: KeyValue{Key: key, Value: v}, Exists: err == nil})
	}

	b.Deleted = make([]KeyValueExists, 0, len(s.del))
	for k := range s.del {
		key := []byte(k)
		_, err := s.ps.Get(key)
		b.Deleted = append(b.Deleted, KeyValueExists{KeyValue: KeyValue{Key: key}, Exists: err == nil})
	}
	return &b
}
// Seek implements the Store interface.
func (s *MemCachedStore) Seek(key []byte, f func(k, v []byte)) {
	seekres := s.SeekAsync(context.Background(), key, false)
	for kv := range seekres {
		f(kv.Key, kv.Value)
	}
}
// SeekAsync returns a non-buffered channel with matching KeyValue pairs. Key and
// value slices may not be copied and may be modified. SeekAsync can guarantee
// that key-value items are sorted by key in ascending way.
func (s *MemCachedStore) SeekAsync(ctx context.Context, key []byte, cutPrefix bool) chan KeyValue {
	// Create a snapshot of the memory store's `mem` and `del` so as not to hold the lock.
	var memRes []KeyValueExists
	sk := string(key)
	s.mut.RLock()
	for k, v := range s.MemoryStore.mem {
		if strings.HasPrefix(k, sk) {
			memRes = append(memRes, KeyValueExists{
				KeyValue: KeyValue{
					Key:   []byte(k),
					Value: v,
				},
				Exists: true,
			})
		}
	}
	for k := range s.MemoryStore.del {
		if strings.HasPrefix(k, sk) {
			memRes = append(memRes, KeyValueExists{
				KeyValue: KeyValue{
					Key: []byte(k),
				},
			})
		}
	}
	ps := s.ps
	s.mut.RUnlock()
	// Sort memRes items for further comparison with ps items.
	sort.Slice(memRes, func(i, j int) bool {
		return bytes.Compare(memRes[i].Key, memRes[j].Key) < 0
	})

	var (
		data2   = make(chan KeyValue)
		seekres = make(chan KeyValue)
	)
	// Seek over persistent store.
	go func() {
		var done bool
		ps.Seek(key, func(k, v []byte) {
			if done {
				return
			}
			select {
			case <-ctx.Done():
				done = true
			default:
				// Must copy here, #1468.
				data2 <- KeyValue{
					Key:   slice.Copy(k),
					Value: slice.Copy(v),
				}
			}
		})
		close(data2)
	}()
	// Merge results of seek operations in ascending order.
	go func() {
		var (
			kvMem   KeyValueExists
			haveMem bool
			iMem    int
		)
		if iMem < len(memRes) {
			kvMem = memRes[iMem]
			haveMem = true
			iMem++
		}
		kvPs, havePs := <-data2
		for {
			if !haveMem && !havePs {
				break
			}
			var isMem = haveMem && (!havePs || (bytes.Compare(kvMem.Key, kvPs.Key) < 0))
			if isMem {
				if kvMem.Exists {
					if cutPrefix {
						kvMem.Key = kvMem.Key[len(key):]
					}
					seekres <- KeyValue{
						Key:   kvMem.Key,
						Value: kvMem.Value,
					}
				}
				if iMem < len(memRes) {
					kvMem = memRes[iMem]
					haveMem = true
					iMem++
				} else {
					haveMem = false
				}
			} else {
				if !bytes.Equal(kvMem.Key, kvPs.Key) {
					if cutPrefix {
						kvPs.Key = kvPs.Key[len(key):]
					}
					seekres <- kvPs
				}
				kvPs, havePs = <-data2
			}
		}
		close(seekres)
	}()
	return seekres
}
// Persist flushes all the MemoryStore contents into the (supposedly) persistent
// store ps.
func (s *MemCachedStore) Persist() (int, error) {
	var err error
	var keys, dkeys int
	s.plock.Lock()
	defer s.plock.Unlock()
	s.mut.Lock()

	keys = len(s.mem)
	dkeys = len(s.del)
	if keys == 0 && dkeys == 0 {
		s.mut.Unlock()
		return 0, nil
	}
	// tempstore technically copies current s in lower layer while real s
	// starts using fresh new maps. This tempstore is only known here and
	// nothing ever changes it, therefore accesses to it (reads) can go
	// unprotected while writes are handled by s proper.
	var tempstore = &MemCachedStore{MemoryStore: MemoryStore{mem: s.mem, del: s.del}, ps: s.ps}
	s.ps = tempstore
	s.mem = make(map[string][]byte)
	s.del = make(map[string]bool)
	s.mut.Unlock()
storage: introduce PutChangeSet and use it for Persist

We're using batches in a wrong way during persist: we already have all changes accumulated in two maps, and then we move them into a batch which is then applied. For some DBs like BoltDB this batch is just another MemoryStore, so we essentially just shuffle the changeset from one map to another; for others like LevelDB the batch is a serialized set of KV pairs, which doesn't help much on the subsequent PutBatch, we just duplicate the changeset again.

So introduce PutChangeSet, which allows passing the two maps with sets and deletes directly. It also allows simplifying the MemCachedStore logic.

neo-bench for single node with 10 workers, LevelDB:

Reference:
RPS    30189.132 30556.448 30390.482 ≈ 30379 ± 0.61%
TPS    29427.344 29418.687 29434.273 ≈ 29427 ± 0.03%
CPU %  33.304    27.179    33.860    ≈ 31.45 ± 11.79%
Mem MB 800.677   798.389   715.042   ≈ 771   ± 6.33%

Patched:
RPS    30264.326 30386.364 30166.231 ≈ 30272 ± 0.36% ⇅
TPS    29444.673 29407.440 29452.478 ≈ 29435 ± 0.08% ⇅
CPU %  34.012    32.597    33.467    ≈ 33.36 ± 2.14% ⇅
Mem MB 549.126   523.656   517.684   ≈ 530   ± 3.15% ↓ 31.26%

BoltDB:

Reference:
RPS    31937.647 31551.684 31850.408 ≈ 31780 ± 0.64%
TPS    31292.049 30368.368 31307.724 ≈ 30989 ± 1.74%
CPU %  33.792    22.339    35.887    ≈ 30.67 ± 23.78%
Mem MB 1271.687  1254.472  1215.639  ≈ 1247  ± 2.30%

Patched:
RPS    31746.818 30859.485 31689.761 ≈ 31432 ± 1.58% ⇅
TPS    31271.499 30340.726 30342.568 ≈ 30652 ± 1.75% ⇅
CPU %  34.611    34.414    31.553    ≈ 33.53 ± 5.11% ⇅
Mem MB 1262.960  1231.389  1335.569  ≈ 1277  ± 4.18% ⇅
2021-08-12 10:35:09 +00:00
	err = tempstore.ps.PutChangeSet(tempstore.mem, tempstore.del)
	s.mut.Lock()
	if err == nil {
		// tempstore.mem and tempstore.del are completely flushed now
		// to tempstore.ps, so all KV pairs are the same and this
		// substitution has no visible effects.
		s.ps = tempstore.ps
	} else {
		// We're toast. We'll try to still keep proper state, but OOM
		// killer will get to us eventually.
		for k := range s.mem {
			tempstore.put(k, s.mem[k])
		}
		for k := range s.del {
			tempstore.drop(k)
		}
		s.ps = tempstore.ps
		s.mem = tempstore.mem
		s.del = tempstore.del
	}
	s.mut.Unlock()
	return keys, err
}
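The PutChangeSet idea used by Persist above (pass the accumulated put/delete maps straight to the lower layer instead of copying them into an intermediate batch) can be sketched minimally. `mapStore` is an illustrative in-memory stand-in, not the real Store implementation; only the method shape mirrors the call in Persist.

```go
package main

import "fmt"

// mapStore is a toy in-memory "persistent" layer.
type mapStore struct {
	data map[string][]byte
}

// PutChangeSet applies a set of puts and deletes in one call, avoiding
// an intermediate batch object that would just duplicate the same maps.
func (m *mapStore) PutChangeSet(puts map[string][]byte, dels map[string]bool) error {
	for k, v := range puts {
		m.data[k] = v
	}
	for k := range dels {
		delete(m.data, k)
	}
	return nil
}

func main() {
	s := &mapStore{data: map[string][]byte{"old": []byte("x")}}
	// One call moves the whole changeset down: "new" is added, "old" removed.
	_ = s.PutChangeSet(
		map[string][]byte{"new": []byte("y")},
		map[string]bool{"old": true},
	)
	fmt.Println(len(s.data), string(s.data["new"])) // 1 y
}
```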
// Close implements Store interface, clears up memory and closes the lower layer
// Store.
func (s *MemCachedStore) Close() error {
	// It's always successful.
	_ = s.MemoryStore.Close()
	return s.ps.Close()
}