
How to solve the Redis cache avalanche problem

WBOY
Release: 2023-06-03 09:46:02

The cache layer absorbs a large number of requests and effectively protects the storage layer. However, if a large number of cached keys become invalid at the same time, or the entire cache layer cannot provide service, all of those requests reach the storage layer and its load rises sharply (a large number of requests query the database directly). This is the cache avalanche scenario.

To solve the cache avalanche problem, you can start from the following points:

1. Keep the cache layer highly available

Deploy Redis in Sentinel mode or as a Redis Cluster, so that even if individual Redis nodes go offline, the cache layer as a whole can still serve requests. In addition, Redis can be deployed across multiple data centers, so that even if one data center goes down, the cache layer remains highly available.
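
As a rough sketch of this setup with the Jedis client (the master name "mymaster" and the Sentinel addresses below are placeholders, not values from this article), the application connects through Sentinel, so a failover to a new master is transparent to the calling code:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

import java.util.HashSet;
import java.util.Set;

public class SentinelClient {
    public static void main(String[] args) {
        // Sentinel addresses; replace with the addresses of your own deployment
        Set<String> sentinels = new HashSet<>();
        sentinels.add("192.168.1.10:26379");
        sentinels.add("192.168.1.11:26379");
        sentinels.add("192.168.1.12:26379");

        // "mymaster" must match the master name configured in sentinel.conf
        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
             Jedis jedis = pool.getResource()) {
            // Reads and writes go to whichever node Sentinel currently reports as master
            jedis.set("hello", "world");
            System.out.println(jedis.get("hello"));
        }
    }
}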

2. Rate limiting and degradation components

Both the cache layer and the storage layer can fail, and each can be regarded as a resource. In a distributed system with a large amount of concurrency, if one resource becomes unavailable, every thread that tries to obtain it may throw exceptions, which can make the entire system unavailable. Degradation is perfectly normal in high-concurrency systems. For example, in a recommendation service, if the personalized recommendation component is unavailable, you can degrade to serving hotspot (popular) data so that the recommendation service as a whole does not go down. Common rate limiting and degradation components include Hystrix, Sentinel, and so on.
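
A minimal degradation sketch with Hystrix (the class name, group key, and suppliers below are illustrative and not taken from this article): the command runs the primary lookup, and when that resource fails, times out, or the circuit breaker is open, Hystrix serves the fallback instead.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

import java.util.function.Supplier;

// Wraps a call to a fragile resource; on failure Hystrix runs getFallback()
public class GetValueCommand extends HystrixCommand<String> {

    private final Supplier<String> primary;   // e.g. the personalized recommendation lookup
    private final Supplier<String> fallback;  // e.g. precomputed hotspot data

    public GetValueCommand(Supplier<String> primary, Supplier<String> fallback) {
        super(HystrixCommandGroupKey.Factory.asKey("RecommendGroup"));
        this.primary = primary;
        this.fallback = fallback;
    }

    @Override
    protected String run() {
        // Normal path: call the protected resource
        return primary.get();
    }

    @Override
    protected String getFallback() {
        // Degraded path: return the hotspot data instead of failing outright
        return fallback.get();
    }
}

Calling new GetValueCommand(primaryLookup, hotspotLookup).execute() returns the primary result when the resource is healthy and falls back to the hotspot data when it is not.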

3. The cache does not expire

The keys saved in Redis are given no expiration time, so there is no risk of a large number of cache entries becoming invalid at the same moment; the trade-off is that Redis needs more memory to hold them.
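
A small sketch of this approach, assuming the Jedis client and an illustrative product cache: the key is written without any TTL, so Redis never expires it on its own, and the application refreshes it on every write to the underlying data.

import redis.clients.jedis.Jedis;

public class NoExpireCache {

    private final Jedis redis = new Jedis("localhost", 6379); // address is illustrative

    // Write path: persist to the database first, then refresh the cache.
    // SET without EX/PX leaves the key with no TTL, so it never expires by itself.
    public void updateProduct(String productId, String productJson) {
        // db.save(productId, productJson);  // database write, omitted in this sketch
        redis.set("product:" + productId, productJson);
    }

    // Read path: the key is always present (unless evicted by a maxmemory policy)
    public String getProduct(String productId) {
        return redis.get("product:" + productId);
    }
}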

4. Optimize the cache expiration time

When designing the cache, choose an appropriate expiration time for each key so that large numbers of keys do not expire at the same time and trigger a cache avalanche.
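
One common way to do this (a sketch; the 30-minute base TTL and 5-minute jitter range below are arbitrary example values) is to add a random offset to each key's expiration time, so keys written at the same moment do not all expire at the same moment:

import redis.clients.jedis.Jedis;

import java.util.concurrent.ThreadLocalRandom;

public class JitteredExpire {

    private final Jedis redis = new Jedis("localhost", 6379); // address is illustrative

    // Cache a value for roughly 30 minutes plus up to 5 minutes of random jitter
    public void setWithJitter(String key, String value) {
        int baseSeconds = 30 * 60;
        int jitterSeconds = ThreadLocalRandom.current().nextInt(0, 5 * 60);
        redis.setex(key, baseSeconds + jitterSeconds, value);
    }
}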

5. Use mutex lock to rebuild cache

In high-concurrency scenarios, to prevent a large number of requests from reaching the storage layer to query the data and rebuild the cache at the same time, you can use a mutex lock: a request first queries the cache layer by key; on a cache miss, it locks that key, queries the data from the storage layer, writes the data into the cache layer, and finally releases the lock. Threads that fail to acquire the lock sleep for a short period and then retry. As for the lock type, in a single-machine environment you can use a Lock from the java.util.concurrent package; in a distributed environment you can use a distributed lock (for example, Redis's SETNX command).

Pseudocode for rebuilding the cache with a mutex lock in a distributed environment:

/**
 * Rebuild the cache under a mutex lock
 *
 **/
public String get(String key) {
   // Look up the value for this key in Redis
   String value = redis.get(key);
   // Cache miss
   if (value == null) {
      // Mutex lock key
      String key_mutex_lock = "mutex:lock" + key;
      // Lock acquired successfully (SETNX returns 1 when the key was set, 0 when it already exists)
      if (redis.setnx(key_mutex_lock, "1") == 1) {
          try {
              // Set a timeout on the mutex lock itself (this is the lock's expiration,
              // not the cached key's expiration), so a crashed thread cannot hold it forever
              redis.expire(key_mutex_lock, 3 * 60);
              // Query the database
              value = db.get(key);
              // Write the data back into the cache
              redis.set(key, value);
          } finally {
              // Release the lock
              if (redis.exists(key_mutex_lock)) {
                  redis.delete(key_mutex_lock);
              }
          }
      } else {
          // Failed to acquire the lock: sleep for 50 ms and retry
          Thread.sleep(50);
          return get(key);
      }
   }
   // Return the cached (or freshly rebuilt) value
   return value;
}

Using a Redis distributed lock to rebuild the cache in a distributed environment has the advantage of a simple design that keeps the cached data consistent. The disadvantage is increased code complexity and possible waiting for users: under high concurrency, while the key is locked for rebuilding, if 1,000 requests arrive concurrently, 999 of them are blocked, leaving 999 users waiting for the lock holder to finish.
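
Note that SETNX followed by a separate EXPIRE, as in the pseudocode above, is not atomic: if the process crashes between the two calls, the lock may never expire. A safer variant (a sketch assuming the Jedis client; the lock value and the 180-second timeout are illustrative) sets the lock and its expiration in a single command:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class MutexLock {

    private final Jedis redis = new Jedis("localhost", 6379); // address is illustrative

    // Acquire the lock and set its expiration atomically (SET key value NX EX 180),
    // so a crash between "lock" and "expire" cannot leave the lock stuck forever
    public boolean tryLock(String lockKey) {
        String result = redis.set(lockKey, "1", SetParams.setParams().nx().ex(180));
        return "OK".equals(result);
    }

    public void unlock(String lockKey) {
        redis.del(lockKey);
    }
}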

6. Asynchronous reconstruction of the cache

This scheme rebuilds the cache asynchronously: a thread is taken from a thread pool to rebuild the cache in the background, so requests never reach the storage layer directly. Each Redis key in this solution carries a logical expiration time stored alongside its value. When the logical expiration time is earlier than the current time, the cache is considered expired and should be refreshed; otherwise, the cached value is returned directly. For example, the key's expiration time in Redis can be set to 60 minutes while the logical expiration time stored in the value is set to 30 minutes. When the key passes the 30-minute logical expiration, it can be refreshed asynchronously, and during the refresh the old cached value remains available. This asynchronous rebuilding approach effectively prevents a large number of keys from becoming unusable at the same time.

/**
  * Rebuild the cache asynchronously: ValueObject is the wrapper entity that stores
  * the cached value together with its logical expiration timestamp
  *
  **/
public String get(String key) {
    // Query the ValueObject for this key from the cache
    ValueObject valueObject = redis.get(key);
    // The cached value itself
    String value = valueObject.getValue();
    // Logical expiration timestamp stored in the entity:
    // timeOut = time the cache was written + TTL (e.g. 30 seconds, 60 seconds, ...)
    long logicTimeOut = valueObject.getTimeOut(); // converted to a long (milliseconds)
    // The cache is logically expired
    if (logicTimeOut <= System.currentTimeMillis()) {
        // Refresh the cache asynchronously
        threadPool.execute(new Runnable() {
            @Override
            public void run() {
                String key_mutex_lock = "mutex_lock" + key;
                // Lock acquired successfully (SETNX returns 1 when the key was set, 0 when it already exists)
                if (redis.setnx(key_mutex_lock, "1") == 1) {
                    try {
                        // Set a timeout on the mutex lock itself (the lock's expiration,
                        // not the cached key's expiration)
                        redis.expire(key_mutex_lock, 3 * 60);
                        // Query the database
                        String dbValue = db.get(key);
                        // Write the data back into the cache
                        redis.set(key, dbValue);
                    } finally {
                        // Release the lock
                        if (redis.exists(key_mutex_lock)) {
                            redis.delete(key_mutex_lock);
                        }
                    }
                } else {
                    // Another thread is already rebuilding this key; nothing to do
                }
            }
        });
    }
    return value; // Return the (possibly stale) cached value immediately
}

The above is the detailed content of How to solve the Redis cache avalanche problem. For more information, please follow other related articles on the PHP Chinese website!
