Cause of a cache avalanche:
Simply put, a cache avalanche happens when the existing cache expires (or the data was never loaded into the cache) and the new cache entries have not yet been written. During this window, every request that would normally be served from the cache (ordinarily read from Redis) queries the database instead, putting enormous pressure on the database's CPU and memory. In severe cases this can bring the database down and cause the whole system to collapse.
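To make the failure mode concrete, here is a minimal sketch (all names illustrative) of a cache-aside read where every miss falls through to the database. A dict stands in for Redis; when the cache is cold, many concurrent requests can reach the database before any of them writes the value back:

```python
import threading

cache = {}            # stands in for Redis
db_hits = 0           # counts how many requests reached the database
counter_lock = threading.Lock()

def query_db(key):
    global db_hits
    with counter_lock:        # protects only the counter, not the DB access
        db_hits += 1
    return f"value-for-{key}"

def get(key):
    if key in cache:          # cache hit: cheap
        return cache[key]
    value = query_db(key)     # cache miss: expensive database query
    cache[key] = value
    return value

# 100 concurrent requests arrive while the cache is cold: in the worst
# case every one of them queries the database before any write lands.
threads = [threading.Thread(target=get, args=("hot-key",)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# db_hits ends up anywhere between 1 and 100; that gap is the avalanche.
```

The solutions below all aim to shrink that worst case toward a single database query.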
The basic solution is as follows:
First, most system designers use locks or queues to ensure that a large number of threads do not read and write the database at the same time, avoiding excessive pressure on the database when the cache fails. This alleviates the database load to some extent, but it also reduces system throughput.
Second, analyze user behavior and try to spread cache expiration times evenly.
Third, if a cache server goes down, consider a primary/backup setup, for example a Redis primary with a backup. Double caching, however, involves update transactions: an update may read dirty data, and this needs to be handled.
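A hedged sketch of the third point: read from a backup cache when the primary is unavailable. Two dicts stand in for two Redis instances, and a flag simulates the outage; the double-write in `cache_set` is also where the dirty-data problem mentioned above can arise:

```python
primary, backup = {}, {}
primary_up = True

def cache_set(key, value):
    # Double write: both caches are updated. If one write succeeds and the
    # other fails, a later read may return stale ("dirty") data -- the
    # update-transaction problem noted above.
    primary[key] = value
    backup[key] = value

def cache_get(key):
    if primary_up:
        return primary.get(key)
    return backup.get(key)       # fail over to the backup

cache_set("k", "v")
primary_up = False               # simulate the primary going down
assert cache_get("k") == "v"     # the backup still serves the value
```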
Solutions to the Redis avalanche effect:
1. Use distributed locks (local locks for a stand-alone deployment)
2. Message middleware
3. A first-level and second-level cache (Redis + Ehcache)
4. Evenly distributed Redis key expiration times
Explanation:
1. When a large number of requests suddenly hit the database server, restrict the requests. With this mechanism, only one thread (request) performs the operation at a time; the rest queue and wait (a distributed lock in a cluster, a local lock on a single machine). Note that this reduces server throughput and efficiency.
Add lock!
Ensure that only one thread can enter: in effect, only one request performs the query operation.
A rate-limiting strategy can also be applied here. ~
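The lock approach (stand-alone version) can be sketched as a local mutex plus a double check, so exactly one thread rebuilds the cache while the others wait and then read the value it wrote. A cluster would swap `threading.Lock` for a distributed lock (for example one built on Redis `SETNX`), which this sketch does not show:

```python
import threading

cache = {}
rebuild_lock = threading.Lock()
db_hits = 0

def query_db(key):
    global db_hits
    db_hits += 1              # safe here: only ever called under the lock
    return f"value-for-{key}"

def get(key):
    value = cache.get(key)
    if value is not None:     # fast path: cache hit, no lock needed
        return value
    with rebuild_lock:        # the queue: one thread at a time
        value = cache.get(key)    # double check: a peer may have filled it
        if value is None:
            value = query_db(key)
            cache[key] = value
    return value

threads = [threading.Thread(target=get, args=("hot-key",)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# db_hits is now exactly 1: only one request reached the database.
```

The double check inside the lock is what prevents every queued thread from re-running the query after the first one finishes.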
2. Use message middleware to solve
This is the most robust of these solutions!
Message middleware is well suited to absorbing high concurrency!!!
If a large number of requests arrive and Redis has no value, the query results are stored through the message middleware (taking advantage of MQ's asynchronous delivery).
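A hedged sketch of the middleware idea, with Python's standard `queue.Queue` standing in for a real MQ (RabbitMQ, Kafka, etc.): cache misses are published to a queue, and a single consumer performs the database query and writes the cache, so the database sees a steady serial stream instead of a burst:

```python
import queue
import threading

cache = {}
miss_queue = queue.Queue()
db_hits = 0

def consumer():
    global db_hits
    while True:
        key = miss_queue.get()
        if key is None:                 # shutdown signal
            break
        if key not in cache:            # skip keys an earlier message filled
            db_hits += 1                # the one serialized database query
            cache[key] = f"value-for-{key}"
        miss_queue.task_done()

worker = threading.Thread(target=consumer)
worker.start()

def get(key):
    if key in cache:
        return cache[key]
    miss_queue.put(key)   # publish the miss; the value arrives asynchronously
    return None           # caller retries or waits, per application policy

for _ in range(100):      # 100 requests for the same cold key
    get("hot-key")
miss_queue.join()         # wait until the consumer has drained the queue
miss_queue.put(None)
worker.join()
# db_hits is 1: the consumer de-duplicated the whole burst.
```

The trade-off is that a request hitting a cold key gets its value asynchronously rather than immediately, which the application has to tolerate.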
3. Build a second-level cache. A1 is the original cache and A2 is the copy. When A1 expires, A2 can be read instead: set A1's expiration time short and A2's long. (This point is supplementary.)
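A minimal sketch of the two-level cache, with dicts of `(value, expires_at)` pairs standing in for Redis (A1) and a local cache such as Ehcache (A2); the TTL values are illustrative:

```python
import time

a1, a2 = {}, {}   # A1: short-lived original; A2: long-lived copy

def put(key, value, ttl_a1=1.0, ttl_a2=60.0):
    now = time.time()
    a1[key] = (value, now + ttl_a1)
    a2[key] = (value, now + ttl_a2)

def get(key):
    now = time.time()
    for level in (a1, a2):           # try A1 first, then fall back to A2
        entry = level.get(key)
        if entry and entry[1] > now:
            return entry[0]
    return None   # both levels expired: only now go to the database

put("k", "v", ttl_a1=0.05)
assert get("k") == "v"   # served from A1
time.sleep(0.1)          # let A1 expire
assert get("k") == "v"   # A1 is gone, but A2 still answers
```

Because A2 outlives A1, the expiry of A1 alone never sends the request to the database.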
4. Set different expiration times for different keys so that cache expiry points are spread as evenly as possible.
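Point 4 can be sketched as adding a random jitter to each key's TTL so keys do not all expire at the same instant; the base TTL and jitter range below are illustrative values, not prescriptions:

```python
import random

def ttl_with_jitter(base_ttl=3600, jitter=300):
    # e.g. expire somewhere between 3600 and 3900 seconds from now,
    # instead of every key expiring at exactly base_ttl.
    return base_ttl + random.randint(0, jitter)

ttls = [ttl_with_jitter() for _ in range(1000)]
# The expiry times are spread across the jitter window rather than
# all landing on the same instant.
```

In Redis this would translate to passing the jittered value to `EXPIRE` or `SET ... EX` when the key is written.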