What is Redis cache?
Redis is an open-source key-value database written in ANSI C. It supports network access, can run entirely in memory or persist data to disk, and provides client APIs in multiple languages.
What is the role of the Redis cache?
Using Redis as a cache greatly improves application performance and efficiency, especially for data queries. At the same time, it introduces some problems, the most critical of which is data consistency. Strictly speaking, this problem has no complete solution: if the consistency requirements on the data are very strict, caching simply cannot be used.
Other typical problems are cache penetration, cache avalanche and cache breakdown, for which the industry already has fairly popular solutions. This article does not set out to solve these three problems more perfectly, nor to overturn the popular solutions. Instead, it demonstrates the three phenomena with actual code. The reason is that it is hard to form a vivid picture of these issues from academic explanations alone; working through real code deepens our understanding of them.
Cache penetration
Cache penetration means querying for data that does not exist in the database at all. The normal caching flow is roughly: a data query first checks the cache; if the key is missing or has expired, the database is queried and the resulting object is put into the cache. If the database query returns nothing, nothing is placed in the cache.
Code flow
1. The parameter passed in is the primary key ID of the object
2. Get the object from the cache based on the key
3. If the object is not empty, return it directly
4. If the object is empty, query the database
5. If the object queried from the database is not empty, put it into the cache (with an expiration time)
Now imagine: what happens if the parameter passed in is -1? An ID of -1 is an object that definitely does not exist, so the database is queried every time, every query comes back empty, and nothing is ever cached. A malicious attacker can exploit this to put pressure on the database and even bring it down. Even if UUIDs are used as keys, it is easy to find a non-existent key and attack with it.
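The sketch below walks through these five steps, assuming redis-py and a hypothetical query_db() helper backed by a stand-in FAKE_DB dictionary; any ID not in it (such as -1) skips the cache and hits the "database" on every call.

```python
# Minimal sketch of the five-step flow above (redis-py assumed).
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

FAKE_DB = {1: "iPhone", 2: "MacBook"}   # stand-in for a real product table

def query_db(product_id):
    # Hypothetical database lookup; returns None when the row does not exist.
    return FAKE_DB.get(product_id)

def get_product(product_id):
    key = f"product:{product_id}"
    cached = r.get(key)               # step 2: look up the cache by key
    if cached is not None:            # step 3: cache hit, return directly
        return cached
    value = query_db(product_id)      # step 4: cache miss, query the database
    if value is not None:             # step 5: only non-empty results are cached
        r.setex(key, 3600, value)     # cache for one hour
    return value

# Every call with an ID that does not exist (e.g. -1) lands on the database:
get_product(-1)
```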
In the editor's own work, the fix is to cache null values as well, which changes step 5 of the code flow above: if the object queried from the database is empty, it is still put into the cache, just with a shorter expiration time, for example 60 seconds.
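A minimal sketch of this null-value caching, under the same assumptions as the previous sketch (the "NULL" marker string is just a placeholder chosen for the example):

```python
def get_product_cache_null(product_id):
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        # A cached "NULL" marker means we already know the row does not exist.
        return None if cached == "NULL" else cached
    value = query_db(product_id)
    if value is not None:
        r.setex(key, 3600, value)   # normal objects: cache for an hour
    else:
        r.setex(key, 60, "NULL")    # empty result: cache the marker for 60 seconds
    return value
```

Repeated requests for a non-existent ID now hit Redis instead of the database for at least 60 seconds at a time.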
Cache avalanche
Cache avalanche refers to a large number of cached keys expiring together within a short period of time.
One cause of an avalanche: as this article is being written it is almost midnight before Double 12, and a wave of panic buying is about to start. That wave of products gets written into the cache in a concentrated burst, say with a one-hour expiration. Then at one o'clock in the morning the cache for this whole batch of products expires at once, all of the queries for those products fall on the database, and the database sees a periodic pressure spike.
When the editor works on e-commerce projects, products in different categories are cached for different lengths of time, and products within the same category get an additional random factor, as in the sketch below. This spreads the expiration times out as much as possible. In addition, products in popular categories are cached longer and products in unpopular categories shorter, which also saves resources on the cache service.
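A minimal sketch of staggering the expiration times, reusing the same connection as before; the per-category base TTLs and the jitter range are assumptions picked for illustration:

```python
import random

# Hypothetical base TTLs (seconds): hotter categories are cached longer.
BASE_TTL = {"hot": 2 * 3600, "normal": 3600, "cold": 30 * 60}

def cache_product(product_id, category, value):
    # Random jitter of up to 5 minutes keeps keys written in the same burst
    # from all expiring in the same instant.
    ttl = BASE_TTL.get(category, 3600) + random.randint(0, 300)
    r.setex(f"product:{product_id}", ttl, value)
```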
In fact, concentrated expiration is not the most fatal case. The more fatal cache avalanche happens when a cache server node goes down or loses its network connection. A naturally formed avalanche means the cache was built up intensively during some period, so the database was able to withstand the load then and can withstand it again when it recurs; it is merely periodic pressure on the database. When a cache node goes down, however, the pressure on the database server is unpredictable, and the database may well be crushed in an instant.
Cache breakdown
Cache breakdown refers to a single very hot key that is constantly hit with large concurrency, with all of that traffic focused on one point. When the key expires, the continuous concurrent requests break through the cache and go straight to the database, like punching a hole in a barrier.
When the editor works on e-commerce projects, this kind of product is what we call a "hot item".
In fact, in most cases this kind of burst can hardly put crushing pressure on the database server; very few companies see that level of traffic. So the pragmatic editor prepares in advance for the main products by making their cache keys never expire. Even when some product becomes hot on its own, simply setting its key to never expire is enough.
As for the mutex key (mutex lock) approach that is often recommended, to put it simply, it is really not that useful in practice.
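For reference, here is roughly what the two options look like, again assuming the redis-py connection and query_db() helper from the earlier sketches; the lock key name, timeouts and retry interval are all assumptions:

```python
import time

# Option 1: the hot key simply never expires (no TTL at all).
r.set("product:hot-item", "...")

# Option 2: the mutex key approach, sketched for comparison. Only the caller
# that wins the lock rebuilds the cache; the others wait briefly and retry.
def get_with_mutex(product_id):
    key = f"product:{product_id}"
    lock = f"lock:{key}"
    while True:
        cached = r.get(key)
        if cached is not None:
            return cached
        # SET NX EX: only one caller acquires the lock; it auto-expires after
        # 10 seconds in case the holder crashes before releasing it.
        if r.set(lock, "1", nx=True, ex=10):
            try:
                value = query_db(product_id)
                if value is not None:
                    r.setex(key, 3600, value)
                return value
            finally:
                r.delete(lock)
        time.sleep(0.05)  # lost the race: wait briefly, then re-check the cache
```

Whether the extra lock round-trips and retries are worth it depends on how hot the key really is, which is why the simpler never-expire approach is preferred here.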