This article discusses the prevention and mitigation of Redis cache penetration and cache avalanche. It may serve as a useful reference; I hope it is helpful to you.
Cache penetration
Understanding
Cache penetration refers to querying data that is guaranteed not to exist. On a cache miss, the query falls through to the database; since the data cannot be found there either, nothing is written back to the cache. As a result, every request for this non-existent data hits the database, which is cache penetration.
Solution
Store all possible query parameters in hashed form and validate requests at the control layer first, discarding any that do not match. The most common approach is a Bloom filter: hash all possibly-existing data into a sufficiently large bitmap. A key that definitely does not exist is intercepted by the bitmap, shielding the underlying storage system from the query pressure.
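The Bloom filter idea above can be sketched in a few lines. This is a minimal illustrative implementation, not a production library: `m_bits`, `k`, and the SHA-256-based hashing scheme are my own choices for the example.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k hash positions over an m-bit bitmap."""

    def __init__(self, m_bits=1024, k=3):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key):
        # Derive k independent bit positions by salting the key with an index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # False means the key is definitely absent (safe to reject the request);
        # True may occasionally be a false positive.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))
```

At startup you would load every valid key into the filter; the control layer then rejects any request whose key fails `might_contain`, so the query never reaches the database.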
You can also use a simpler, cruder method: if a query returns an empty result (whether because the data does not exist or because the system failed), cache the empty result anyway, but with a short expiration time, no more than five minutes.
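The empty-result caching trick looks like this in practice. The `TTLCache` class below is a tiny in-memory stand-in for Redis `SETEX`/`GET` so the sketch is self-contained; the sentinel value and TTL constants are illustrative choices, not fixed conventions.

```python
import time

class TTLCache:
    """Tiny in-memory stand-in for Redis SETEX/GET, for illustration only."""

    def __init__(self):
        self._store = {}

    def setex(self, key, ttl_seconds, value):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires = item
        if time.monotonic() >= expires:
            del self._store[key]
            return None
        return value

EMPTY = "__EMPTY__"   # sentinel marking "database has no such row"
NORMAL_TTL = 3600     # one hour for real data
EMPTY_TTL = 300       # no more than five minutes for misses

cache = TTLCache()

def get_user(user_id, db):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return None if cached == EMPTY else cached
    row = db.get(user_id)   # simulated database lookup
    if row is None:
        cache.setex(key, EMPTY_TTL, EMPTY)   # cache the miss, but briefly
    else:
        cache.setex(key, NORMAL_TTL, row)
    return row
```

Because the miss is cached, repeated requests for the same missing key stop hitting the database, while the short TTL ensures the key becomes visible soon after it is actually created.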
Cache avalanche
Understanding
If many cache entries expire within a short window, the resulting flood of cache misses sends all queries to the database at once, causing a cache avalanche.
There is no perfect solution to this, but you can analyze user behavior and try to spread expiration times as evenly as possible. Most system designers use locks or queues to guarantee single-threaded (single-process) writes to the cache, preventing a large number of concurrent requests from landing on the underlying storage system when entries expire.
Solution
After a cache entry expires, limit the number of threads that read the database and rebuild the cache by locking or queuing. For example, allow only one thread to query the data and write the cache for a given key, while other threads wait.
You can also use a cache-reload mechanism to refresh entries in advance, manually triggering the load before a large burst of concurrent traffic arrives.
Set different expiration times for different keys so that cache entries expire as evenly as possible. Another option is a secondary (double) cache strategy: A1 is the primary cache and A2 is a copy. When A1 expires, requests fall back to A2; A1 is given a short expiration time and A2 a long one.
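Both ideas above, spreading expiration times and the A1/A2 double cache, can be sketched as follows. The `TTLCache` class is a minimal in-memory stand-in for Redis `SETEX`/`GET`; the `:copy` key suffix and all TTL values are illustrative assumptions.

```python
import random
import time

class TTLCache:
    """Tiny in-memory stand-in for Redis SETEX/GET, for illustration only."""

    def __init__(self):
        self._store = {}

    def setex(self, key, ttl_seconds, value):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        item = self._store.get(key)
        if item and time.monotonic() < item[1]:
            return item[0]
        self._store.pop(key, None)
        return None

def jittered_ttl(base_seconds, jitter_seconds):
    # Each key gets base + a random offset, so expirations spread out
    # instead of clustering at one moment.
    return base_seconds + random.randint(0, jitter_seconds)

def write_double_cache(cache, key, value, short_ttl=600, long_ttl=3600, jitter=120):
    """Write the primary entry (A1, short TTL) and a backup copy (A2, long TTL)."""
    cache.setex(key, jittered_ttl(short_ttl, jitter), value)
    cache.setex(key + ":copy", long_ttl, value)

def read_double_cache(cache, key):
    # Fall back to the long-lived copy when the primary has expired.
    return cache.get(key) or cache.get(key + ":copy")
```

When A1 expires, readers are served from A2 while a background job (or the rebuild lock described earlier) refreshes A1, so the database never sees the full read load.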
The above is the detailed content of A brief discussion on the prevention and solution of cache penetration and cache invalidation in redis. For more information, please follow other related articles on the PHP Chinese website!