What should you do when you run into a Redis cache exception? This article walks through common Redis cache exceptions and their solutions. I hope it will be helpful to you!
Cache avalanche
Cache avalanche refers to a large portion of the cache expiring at the same time, so subsequent requests all fall on the database, causing the database to collapse under a large number of requests in a short period of time.
Solution
1. Randomize the expiration time of cached data to prevent a large amount of data from expiring at the same moment.
2. When concurrency is not especially high, the most commonly used solution is lock queuing.
3. Add a cache tag to each cached entry to record whether the cache has expired; if the tag indicates expiration, refresh the data in the cache.
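Point 1 above can be sketched in a few lines. This is an illustrative helper (the function name and the 20% jitter ratio are assumptions, not from the original text); with a real Redis client the result would be passed as the `ex` argument of `set`.

```python
import random

# Spread cache expirations so keys do not all expire at the same moment.
# base_ttl is the nominal lifetime in seconds; jitter adds a random
# offset of up to jitter_ratio (here 20%).
def ttl_with_jitter(base_ttl: int, jitter_ratio: float = 0.2) -> int:
    jitter = random.randint(0, int(base_ttl * jitter_ratio))
    return base_ttl + jitter

# With a real client this would be: client.set(key, value, ex=ttl_with_jitter(3600))
ttls = [ttl_with_jitter(3600) for _ in range(5)]
print(ttls)  # five TTLs between 3600 and 4320 seconds
```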
Cache penetration
Cache penetration refers to requests for data that exists neither in the cache nor in the database, so every request falls through to the database, which collapses under the large number of requests in a short period of time.
Solution
1. Add validation at the interface layer, such as user authentication; perform basic checks on the id, and directly intercept requests with id <= 0;
2. When data can be found in neither the cache nor the database, you can still write the key-value pair as key-null and set a short cache expiration time, such as 30 seconds (setting it too long would affect normal use). This prevents an attacker from repeatedly hammering the database with the same id;
3. Use a Bloom filter: hash all possibly existing data into a sufficiently large bitmap. Data that definitely does not exist will be intercepted by this bitmap, avoiding query pressure on the underlying storage system.
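The Bloom filter in point 3 can be sketched with nothing but the standard library. This is a minimal illustration, not a production implementation: the class name, the bitmap size, and the choice of k = 4 salted blake2b hashes are all assumptions for the example.

```python
import hashlib

# Minimal Bloom filter sketch: k hash positions per item over a bitmap.
class BloomFilter:
    def __init__(self, size_bits: int = 1 << 16, k: int = 4):
        self.size = size_bits
        self.k = k
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k independent hash positions by salting the input.
        for i in range(self.k):
            h = hashlib.blake2b((str(i) + item).encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        # False => definitely absent; True => possibly present.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))   # True
print(bf.might_contain("user:999"))  # almost certainly False
```

In a penetration defense, every key loaded into the database is also added to the filter, and requests whose key fails `might_contain` are rejected before touching storage.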
Additional
Bitmaps and Bloom filters are the most space-efficient structures for this purpose.
Bitmap: a typical implementation is a hash table.
Its disadvantage is that a bitmap can only record 1 bit of information per element; any additional functionality can only be achieved by sacrificing more space and time.
Bloom filter (recommended)
It introduces k (k > 1) independent hash functions to complete duplicate detection within a given space budget and false-positive rate.
Its advantage is that its space efficiency and query time far exceed those of general-purpose algorithms; its disadvantages are a certain false-positive rate and difficulty supporting deletion.
The core idea of the Bloom-Filter algorithm is to use multiple different Hash functions to resolve "conflicts".
Hashing has a collision problem: two different URLs may produce the same hash value. To reduce collisions, we can introduce several more hash functions. If any one of the hash values shows that an element is not in the set, then the element is definitely not in the set. Only when all hash functions indicate that the element is in the set can we say it probably exists (with a small false-positive rate). This is the basic idea of the Bloom filter.
Bloom-Filter is generally used to determine whether an element exists in a large data collection.
Cache breakdown
Cache breakdown refers to data that is not in the cache but does exist in the database (usually because its cache entry has expired). Under high concurrency, many requests miss the cache for the same key at the same time and all fetch the data from the database, instantly putting the database under excessive pressure. It differs from a cache avalanche: cache breakdown is concurrent querying of the same piece of data, while a cache avalanche means many different keys expire at once, so many lookups miss the cache and hit the database.
Solution
1. Set hotspot data to never expire
2. Add a mutex lock so that only one request rebuilds the cache while the others wait.
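The mutex approach in point 2 can be sketched in-process with Python's `threading.Lock`. Everything here is illustrative: the dict `cache` stands in for Redis and `load_from_db` for the real database query. The double-check after acquiring the lock is what keeps all but one thread off the database.

```python
import threading
import time

cache = {}
rebuild_lock = threading.Lock()
db_calls = 0

def load_from_db(key):
    global db_calls
    db_calls += 1
    time.sleep(0.01)  # simulate a slow database query
    return f"value-of-{key}"

def get(key):
    value = cache.get(key)
    if value is not None:
        return value
    with rebuild_lock:             # only one thread rebuilds the entry
        value = cache.get(key)     # re-check: another thread may have filled it
        if value is None:
            value = load_from_db(key)
            cache[key] = value
        return value

# Ten concurrent requests for the same expired hot key:
threads = [threading.Thread(target=get, args=("hot",)) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(db_calls)  # 1: the database was queried only once
```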
Cache preheating
Cache preheating means loading relevant data into the cache system right after the system goes online. This avoids the problem of first querying the database and then caching the data when a user requests it: users directly hit cached data that has been preheated.
Solution
1. Write a cache-refresh page and trigger it manually when going online;
2. If the amount of data is not large, load it automatically when the project starts;
3. Refresh the cache on a schedule.
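Point 2 (loading at startup) amounts to a few lines. This sketch uses an in-memory dict in place of Redis, and the function and key names (`fetch_hot_products`, `product:<id>`) are made up for the example.

```python
cache = {}

def fetch_hot_products():
    # Stand-in for a DB query such as: SELECT id, name FROM products WHERE hot = 1
    return [(1, "phone"), (2, "laptop"), (3, "router")]

def preheat_cache():
    # Call once at application startup, before traffic arrives.
    for product_id, name in fetch_hot_products():
        cache[f"product:{product_id}"] = name

preheat_cache()
print(cache["product:1"])  # phone
```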
Cache downgrade
When traffic surges and a service runs into problems (slow or no response), or when non-core services affect the performance of the core flow, it is still necessary to keep the service available, even in a degraded form. The system can downgrade automatically based on key metrics, or expose switches for manual downgrade.
The ultimate goal of cache downgrade is to ensure that core services are available, even if they are lossy. And some services cannot be downgraded (such as adding to shopping cart, checkout).
Before downgrading, sort out the system to decide which parts can be sacrificed to protect the core, that is, what must be defended at all costs and what can be downgraded. For example, you can refer to log-level conventions:
1. General: for example, some services occasionally time out due to network jitter or while the service is coming online; these can be downgraded automatically;
2. Warning: some services have a fluctuating success rate over a period of time (such as between 95% and 100%); they can be downgraded automatically or manually, and an alarm should be sent;
3. Error: For example, the availability rate is lower than 90%, or the database connection pool is exhausted, or the number of visits suddenly increases. When it reaches the maximum threshold that the system can withstand, it can be automatically downgraded or manually downgraded according to the situation;
4. Serious error: For example, if the data is wrong due to special reasons, emergency manual downgrade is required.
The purpose of service downgrade is to prevent Redis service failure from causing avalanche problems in the database. Therefore, for unimportant cached data, a service downgrade strategy can be adopted. For example, a common approach is that if there is a problem with Redis, instead of querying the database, it directly returns the default value to the user.
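The "return a default instead of querying the database" approach can be sketched as a small wrapper. `FailingCache` here simulates a Redis outage; the names and the `"N/A"` default are illustrative, not from the original text.

```python
# Service-downgrade sketch: if the cache layer raises, return a default
# for non-core data instead of falling through to the database.
class FailingCache:
    def get(self, key):
        raise ConnectionError("cache unavailable")

def get_with_degrade(cache, key, default="N/A"):
    try:
        return cache.get(key)
    except ConnectionError:
        # Degrade: do not pass the load on to the database.
        return default

print(get_with_degrade(FailingCache(), "banner:home"))  # N/A
```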
Cache hotspot key
When a key in the cache (such as a promotional product) expires at a certain point in time, there may be a large number of concurrent requests for that key at exactly that moment. When these requests find that the cache has expired, they typically load the data from the back-end DB and write it back to the cache, and the burst of concurrent requests may instantly overwhelm the back-end DB.
Solution
Lock the cache query: if the key does not exist, acquire a lock, load the data from the DB into the cache, and then release the lock. If other processes find the lock held, they wait; after the lock is released, they either return the newly cached data or query the DB themselves.
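This lock-then-rebuild pattern can be sketched with an in-memory stand-in for Redis. All names here are illustrative; with a real client, acquiring the lock would be something like `client.set(lock_key, token, nx=True, ex=10)` so the lock expires even if the holder crashes.

```python
import time

store = {}  # stands in for Redis: holds both data keys and lock keys

def acquire_lock(lock_key):
    # Stands in for SET lock_key token NX EX 10 (atomic in real Redis).
    if lock_key not in store:
        store[lock_key] = "locked"
        return True
    return False

def release_lock(lock_key):
    store.pop(lock_key, None)

def get_hot(key, load_from_db):
    while True:
        value = store.get(key)
        if value is not None:
            return value                      # cache hit
        if acquire_lock("lock:" + key):
            try:
                value = load_from_db(key)     # only the lock holder hits the DB
                store[key] = value
                return value
            finally:
                release_lock("lock:" + key)
        time.sleep(0.05)  # another worker holds the lock; wait, then re-check

print(get_hot("promo:1", lambda k: "discount-90"))  # discount-90
```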