How to solve the problem of concurrent cache access in Go?
In concurrent programming, caching is a common optimization: by keeping data in memory, we reduce trips to the underlying store and improve performance. Under concurrent access, however, problems such as cache contention and cache penetration arise. This article shows how to handle concurrent cache access in Go, with concrete code examples.
The simplest approach is to guard the cache with a `sync.Mutex`:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	cache map[string]string
	mutex sync.Mutex
)

func main() {
	cache = make(map[string]string)

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(index int) {
			defer wg.Done()
			key := fmt.Sprintf("key-%d", index)
			value, ok := getFromCache(key)
			if ok {
				fmt.Printf("Read from cache: %s -> %s\n", key, value)
			} else {
				value = expensiveCalculation(key)
				setToCache(key, value)
				fmt.Printf("Write to cache: %s -> %s\n", key, value)
			}
		}(i)
	}
	wg.Wait()
}

func getFromCache(key string) (string, bool) {
	mutex.Lock()
	defer mutex.Unlock()
	value, ok := cache[key]
	return value, ok
}

func setToCache(key string, value string) {
	mutex.Lock()
	defer mutex.Unlock()
	cache[key] = value
}

func expensiveCalculation(key string) string {
	// Simulate an expensive operation.
	return fmt.Sprintf("value-%s", key)
}
```
In the code above, we acquire a mutex inside the getFromCache and setToCache operations, so only one goroutine can read or write the underlying map at a time. This makes each individual cache access safe. Note, however, that the check-then-write sequence in main is not atomic: two goroutines that miss on the same key can both run expensiveCalculation and both write the result.
To allow concurrent readers, we can replace the mutex with a read-write lock:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	cache   map[string]string
	rwmutex sync.RWMutex
)

func main() {
	cache = make(map[string]string)

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(index int) {
			defer wg.Done()
			key := fmt.Sprintf("key-%d", index)
			value, ok := getFromCache(key)
			if ok {
				fmt.Printf("Read from cache: %s -> %s\n", key, value)
			} else {
				value = expensiveCalculation(key)
				setToCache(key, value)
				fmt.Printf("Write to cache: %s -> %s\n", key, value)
			}
		}(i)
	}
	wg.Wait()
}

func getFromCache(key string) (string, bool) {
	rwmutex.RLock()
	defer rwmutex.RUnlock()
	value, ok := cache[key]
	return value, ok
}

func setToCache(key string, value string) {
	rwmutex.Lock()
	defer rwmutex.Unlock()
	cache[key] = value
}

func expensiveCalculation(key string) string {
	// Simulate an expensive operation.
	return fmt.Sprintf("value-%s", key)
}
```
In the code above we use a read-write lock, sync.RWMutex: getFromCache takes the shared read lock with RLock, and setToCache takes the exclusive write lock with Lock. Multiple goroutines can read the cache at the same time, while writes still get exclusive access. For read-heavy workloads this noticeably improves concurrency over a plain mutex.
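A common refinement of the read-write-lock version is double-checked locking: try a cheap lookup under the read lock first, and on a miss take the write lock and check again before computing, since another goroutine may have filled the key in between. The getOrCompute helper below is an illustrative sketch, not code from the original article:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	cache   = make(map[string]string)
	rwmutex sync.RWMutex
)

// getOrCompute first tries a fast lookup under the shared read lock.
// On a miss it takes the exclusive write lock and re-checks the key,
// so the compute function runs at most once per key.
func getOrCompute(key string, compute func(string) string) string {
	// Fast path: many goroutines may hold the read lock at once.
	rwmutex.RLock()
	value, ok := cache[key]
	rwmutex.RUnlock()
	if ok {
		return value
	}

	// Slow path: exclusive access for the computation and the write.
	rwmutex.Lock()
	defer rwmutex.Unlock()
	// Re-check: another goroutine may have filled the key between the locks.
	if v, ok := cache[key]; ok {
		return v
	}
	value = compute(key)
	cache[key] = value
	return value
}

func main() {
	compute := func(k string) string { return "value-" + k }
	fmt.Println(getOrCompute("key-1", compute)) // prints: value-key-1
	fmt.Println(getOrCompute("key-1", compute)) // second call is served from the cache
}
```

With this pattern, cache hits never block each other, and misses on the same key still trigger only a single computation.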
By using a mutex or a read-write lock, we can safely handle concurrent cache access in Go. In practice, choose the mechanism that fits the workload: a plain sync.Mutex is simpler and adequate for mixed read/write traffic, while sync.RWMutex pays off when reads heavily outnumber writes. Either way, remember that a lock around individual map operations does not by itself make a check-then-write sequence atomic.
The above is the detailed content of How to solve the problem of concurrent cache access in Go language?. For more information, please follow other related articles on the PHP Chinese website!