Several years of online business have left no doubt about the high performance and stability of redis as an in-memory database. However, we stuffed too much data into redis, and the memory footprint grew too large; when something went wrong, it brought us disaster (I suspect many companies have run into the same thing). Here are some of the problems we encountered:
When the master goes down, our most common disaster-recovery strategy is to promote a replacement. Specifically, we pick one slave from the cluster's remaining slaves and upgrade it to master; the remaining slaves are then re-attached under the new master as its slaves, and the master-slave cluster structure is restored. That is the complete recovery process, and its most costly step is remounting the slaves, not switching the master.
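As a rough sketch, the promotion and remounting steps above can be driven by hand with redis-cli (the host names here are made-up placeholders, and in practice this is usually automated by sentinel or in-house tooling):

```shell
# On the slave chosen as the new master: stop replicating, become a master.
redis-cli -h replica1.internal -p 6379 REPLICAOF NO ONE

# On every remaining slave: re-attach it under the new master.
# Re-attaching can trigger a (possibly full) resync, which is why
# remounting the slaves is the most expensive part of the recovery.
redis-cli -h replica2.internal -p 6379 REPLICAOF replica1.internal 6379
redis-cli -h replica3.internal -p 6379 REPLICAOF replica1.internal 6379
```

Note that REPLICAOF is the Redis 5+ name; older versions use the equivalent SLAVEOF command.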
Solution
The solution, of course, is to use as little memory as possible. Under normal circumstances, we do the following:
1 Set the expiration time
Set an expiration time on time-sensitive keys, and let redis' own expired-key cleanup strategy reclaim the memory they use. This also saves the business side trouble: no periodic cleanup jobs are needed.
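A minimal redis-cli example (the key names are invented for illustration):

```shell
# Store a session for one hour; redis reclaims it automatically on expiry.
redis-cli SET session:42 "serialized-session-data" EX 3600

# Check the remaining lifetime, or add a TTL to an already-existing key.
redis-cli TTL session:42
redis-cli EXPIRE cache:hot-page 600
```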
2 Do not store garbage in redis
This sounds obvious, but is there anyone else who has made the same mistake we did?

3 Clean up useless data in a timely manner
For example, suppose one redis instance carries the data of three businesses, and two of them go offline after a period of time; you can then clean up the data belonging to those two businesses.
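Assuming the offline businesses' keys share a prefix (the `bizA:` prefix here is hypothetical), one low-impact way to sweep them out is SCAN plus UNLINK rather than KEYS plus DEL, so the server is not blocked while it works:

```shell
# --scan iterates the keyspace incrementally instead of blocking like KEYS.
# UNLINK (Redis 4.0+) frees the memory in a background thread, unlike DEL.
redis-cli --scan --pattern 'bizA:*' | xargs -L 100 redis-cli UNLINK
```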
4 Try to compress the data

For example, for some long text data, compression can greatly reduce memory usage.
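A sketch of the idea using Python's standard zlib module (the payload is invented; any client-side compression scheme works, as long as every reader decompresses on the way out):

```python
import zlib

def compress_text(value: str) -> bytes:
    # Compress before writing to redis; level 6 is the usual speed/size trade-off.
    return zlib.compress(value.encode("utf-8"), 6)

def decompress_text(blob: bytes) -> str:
    # Decompress after reading from redis.
    return zlib.decompress(blob).decode("utf-8")

# Repetitive long text (logs, serialized JSON) compresses dramatically.
payload = '{"user": 42, "action": "view", "page": "/home"}\n' * 1000
blob = compress_text(payload)
print(f"{len(payload.encode('utf-8'))} bytes -> {len(blob)} bytes")
assert decompress_text(blob) == payload
```

The trade-off is extra CPU on every read and write, which is usually cheap compared with the memory saved on large values.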
5 Pay attention to memory growth and locate large-capacity keys

Whether you are a DBA or a developer, if you use redis you must pay attention to its memory; otherwise you are not doing your job. Analyze which keys in a redis instance are relatively large to help the business quickly locate abnormal keys (keys with unexpected growth are often the source of problems).
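redis ships two built-in starting points for this kind of analysis (the key name below is made up):

```shell
# Sample the whole keyspace and report the biggest key of each type.
redis-cli --bigkeys

# Estimate the memory footprint of one suspicious key (Redis 4.0+).
redis-cli MEMORY USAGE user:42:timeline

# Overall memory breakdown, fragmentation ratio, and peak usage.
redis-cli INFO memory
```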
6 pika

If you really don't want to be this tired, migrate the business to the open-source pika. Then you no longer need to pay such close attention to memory, and the problems caused by oversized redis memory stop being problems.
The above is the detailed content of What to do if the amount of redis data is too large. For more information, please follow other related articles on the PHP Chinese website!