Table of Contents
Background
1. Hot key detection
1.1 QPS monitoring of each slot in the cluster
1.2 Statistics at the proxy layer, the unified traffic entry point
1.3 Redis's LFU-based hot key discovery mechanism
1.4 Detection based on the Redis client
2. Hot key solutions
2.1 Rate limiting for a specific key or slot
2.2 Use a second-level (local) cache
2.3 Key splitting
2.4 Another idea: a configuration-center style local cache
2.5 Other measures that can be prepared in advance
Some integrated solutions
Summary

Let's talk about how to deal with the cache hot key problem in Redis? Commonly used solution sharing

Feb 10, 2022, 06:45 PM
redis

How should you deal with the hot key problem in Redis? This article introduces common solutions to the Redis cache hot key problem. I hope it will be helpful to you!


When building C-side (consumer-facing) business, it is almost inevitable to introduce a first-level cache to take pressure off the database and reduce response time. However, every time a middleware is introduced to solve one problem, it inevitably brings a number of new issues that need attention, such as how to achieve cache consistency, which was discussed in the previous article "Database and Cache Consistency in Practice". When Redis is used as a first-level cache, other problems can also arise, such as hot keys and big keys. In this article we focus on the hot key problem and how to solve it reasonably.

Background

What is the hot key problem and how is it caused?

Generally speaking, the Redis cache we use is a multi-node cluster. When reading or writing a key, the corresponding slot is calculated from the hash of the key, and that slot determines the shard (a group of Redis instances consisting of one master and several replicas) that serves the K-V access. In practice, however, for certain businesses or certain periods of time (for example, flash sale activities in e-commerce), a large number of requests may access the same key. All of these requests (typically with a very high read-to-write ratio) land on the same Redis server, which puts a heavy load on that instance. Adding new Redis instances to the cluster does not help, because according to the hash algorithm requests for the same key will still land on the same node, which remains the system bottleneck and may even bring the whole cluster down. If the value of the hot key is also large, the network card of that node can become a bottleneck as well. This is known as the "hot key" problem.
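As a side note, the mapping from key to slot is straightforward: Redis Cluster computes slot = CRC16(key) mod 16384, and the slot determines the shard. A minimal sketch, assuming Jedis 3.x or later is on the classpath (JedisClusterCRC16 is Jedis's helper class; other clients expose the same calculation elsewhere):

    import redis.clients.jedis.util.JedisClusterCRC16;

    public class SlotDemo {
        public static void main(String[] args) {
            // Redis Cluster: slot = CRC16(key) mod 16384
            String key = "good_100";
            int slot = JedisClusterCRC16.getSlot(key);
            System.out.println("key " + key + " maps to slot " + slot);
        }
    }

Because the mapping depends only on the key, every request for the same hot key ends up on the same shard no matter how many nodes the cluster has.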

Figures 1 and 2 below show, respectively, key access in a plain Redis cluster and key access in a Redis cluster fronted by a proxy layer.

[Figure 1: key access in a plain Redis cluster]

[Figure 2: key access in a Redis cluster behind a proxy layer]

As mentioned above, a hot key puts extremely high load on a small number of nodes in the cluster. If it is not handled properly, those nodes may go down and affect the operation of the whole cache cluster. Therefore we must detect hot keys and resolve hot key problems in time.

1. Hot key detection

For hot key detection: given how a hot key defeats the even distribution of the Redis cluster and the significant impact this causes, we can work from coarse-grained to fine-grained approaches to build a hot key detection solution.

1.1 QPS monitoring of each slot in the cluster

The most obvious symptom of a hot key is that, even though the overall QPS of the Redis cluster is not that high, traffic is distributed very unevenly across the slots of the cluster. So the first thing we can think of is to monitor the traffic of each slot, report it, and compare the slots against each other; when a hot key appears, the affected slots stand out. Although this kind of monitoring is the easiest to set up, its granularity is too coarse: it is suitable as an early cluster-level monitoring solution, but not for pinpointing hot keys precisely.
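A minimal sketch of this coarse-grained idea, assuming every key access is funneled through a per-slot counter and a slot is flagged when its traffic far exceeds the cluster average (the threshold factor and reporting style are arbitrary assumptions, not part of any particular monitoring product):

    import java.util.concurrent.atomic.LongAdder;

    public class SlotTrafficMonitor {
        private static final int SLOT_COUNT = 16384;
        private final LongAdder[] slotCounters = new LongAdder[SLOT_COUNT];

        public SlotTrafficMonitor() {
            for (int i = 0; i < SLOT_COUNT; i++) {
                slotCounters[i] = new LongAdder();
            }
        }

        // Called for every key access; the slot is CRC16(key) mod 16384 as shown earlier.
        public void record(int slot) {
            slotCounters[slot].increment();
        }

        // Periodically compare each slot against the average and flag outliers.
        public void reportSkewedSlots(double thresholdFactor) {
            long total = 0;
            for (LongAdder counter : slotCounters) {
                total += counter.sum();
            }
            double average = (double) total / SLOT_COUNT;
            for (int i = 0; i < SLOT_COUNT; i++) {
                long count = slotCounters[i].sum();
                if (average > 0 && count > average * thresholdFactor) {
                    System.out.println("slot " + i + " looks hot: " + count + " accesses");
                }
            }
        }
    }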

1.2 Statistics at the proxy layer, the unified traffic entry point

If we use the proxy-based Redis cluster mode shown in Figure 2, all requests pass through the proxy before reaching the actual slot node, so hot key detection and statistics can be done in the proxy. In the proxy, each key is counted over a time sliding window, and keys that exceed the configured threshold are reported. To avoid keeping too many redundant counters, you can also restrict the statistics to keys matching certain prefixes or types. This approach requires at least a proxy layer, so it places requirements on your Redis architecture.
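A minimal sketch of such a per-key sliding-window counter, assuming a millisecond-based window and an arbitrary threshold; a real proxy would call record() for every command it forwards and would use fixed buckets to bound memory:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class HotKeyDetector {
        private final long windowMillis;
        private final long threshold;
        // key -> timestamps of recent accesses within the sliding window
        private final Map<String, Deque<Long>> accessLog = new ConcurrentHashMap<>();

        public HotKeyDetector(long windowMillis, long threshold) {
            this.windowMillis = windowMillis;
            this.threshold = threshold;
        }

        // Returns true once the key's access count inside the window exceeds the threshold.
        public boolean record(String key) {
            long now = System.currentTimeMillis();
            Deque<Long> times = accessLog.computeIfAbsent(key, k -> new ArrayDeque<>());
            synchronized (times) {                       // ArrayDeque is not thread-safe
                times.addLast(now);
                while (!times.isEmpty() && now - times.peekFirst() > windowMillis) {
                    times.pollFirst();                   // drop accesses that fell out of the window
                }
                return times.size() > threshold;
            }
        }
    }

For example, new HotKeyDetector(10_000, 5_000) flags any key accessed more than 5,000 times within a 10-second window.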

1.3 Redis's LFU-based hot key discovery mechanism

Redis 4.0 and above support an LFU-based hot key discovery mechanism on each node: simply add the --hotkeys option when executing redis-cli, i.e. redis-cli --hotkeys (note that this requires an LFU-based maxmemory-policy to be configured). You can run this command on each node periodically to discover its hot keys.

As shown in the figure below, the output of redis-cli --hotkeys includes statistics for each hot key. The command takes a relatively long time to execute, so you can schedule it to run periodically to collect these statistics.

[Figure: sample output of redis-cli --hotkeys]

1.4 Detection based on the Redis client

Since every Redis command is issued by a client, we can do the statistics in the Redis client code: each client counts key accesses over a time sliding window, and once a key exceeds a certain threshold it reports the key to a server side, which then pushes the hot key list to all clients together with a corresponding expiration time.
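A rough sketch of the client-side variant, reusing the HotKeyDetector above; HotKeyReporter is a hypothetical interface standing in for whatever reporting channel (HTTP, MQ, etc.) your infrastructure provides:

    public class HotKeyAwareClient {
        // Hypothetical interface standing in for the real reporting channel.
        public interface HotKeyReporter {
            void report(String key);
        }

        private final HotKeyDetector detector = new HotKeyDetector(10_000, 5_000);
        private final HotKeyReporter reporter;

        public HotKeyAwareClient(HotKeyReporter reporter) {
            this.reporter = reporter;
        }

        // Wraps the real Redis GET: count the access locally and report the key once it turns hot.
        public String get(String key) {
            if (detector.record(key)) {
                reporter.report(key);
            }
            return doRedisGet(key);
        }

        private String doRedisGet(String key) {
            // Placeholder for the actual call through Jedis/Lettuce/etc.
            return null;
        }
    }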

This approach looks elegant, but it is not a good fit for every scenario, because instrumenting the client adds memory overhead to the running service. More concretely, for languages with automatic memory management such as Java and Go, objects are created more frequently, which triggers GC and increases interface response time in ways that are hard to predict.

In the end, each company can choose the approach that fits its own infrastructure.

2. Hot key solutions

With the methods above we can detect the hot keys or hot slots; the next step is to actually solve the hot key problem. There are several approaches, so let's go through them one by one.

2.1 Rate limiting for a specific key or slot

The simplest and crudest approach is to rate limit a specific slot or hot key. This obviously hurts the business, so it is recommended only as an emergency stop-loss measure when an online incident is already happening.
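For illustration, a minimal sketch using Guava's RateLimiter to cap reads of a single known hot key; the 1,000 permits per second and the fail-fast fallback are arbitrary assumptions:

    import com.google.common.util.concurrent.RateLimiter;

    public class HotKeyRateLimit {
        // Allow at most 1000 requests per second for this particular hot key.
        private static final RateLimiter HOT_KEY_LIMITER = RateLimiter.create(1000.0);

        public String getWithLimit(String key) {
            if ("good_100".equals(key) && !HOT_KEY_LIMITER.tryAcquire()) {
                // Over the limit: fail fast here; returning a degraded default is another option.
                throw new IllegalStateException("hot key rate limited: " + key);
            }
            return doRedisGet(key);
        }

        private String doRedisGet(String key) {
            // Placeholder for the actual Redis call.
            return null;
        }
    }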

2.2 Use a second-level (local) cache

A local cache is also one of the most common solutions. Since the first-level cache cannot withstand this much pressure on its own, we add a second-level cache. Because every request is issued by the service, it is natural to place this second-level cache on the service side: each time the service fetches a hot key, it keeps a copy in the local cache and only queries Redis again after the local copy expires, which reduces the pressure on the Redis cluster. Taking Java as an example, Guava's cache is a ready-made tool, as in the following example (the loadFromRedis call is a placeholder for your own fallback logic):

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import java.util.List;
    import java.util.concurrent.TimeUnit;

    // Local cache initialization and construction
    private static LoadingCache<String, List<Object>> configCache
            = CacheBuilder.newBuilder()
            .concurrencyLevel(8)                     // concurrency level for writes; the CPU core count is a common choice
            .expireAfterWrite(10, TimeUnit.SECONDS)  // entries expire this long after being written
            .initialCapacity(10)                     // initial capacity of the cache
            .maximumSize(10)                         // maximum number of entries in the cache
            .recordStats()
            // A CacheLoader passed to build() loads the value automatically when it is not in the cache
            .build(new CacheLoader<String, List<Object>>() {
                @Override
                public List<Object> load(String hotKey) throws Exception {
                    // On a local-cache miss, fall back to the remote cache (or database);
                    // loadFromRedis is a placeholder for your own lookup logic.
                    return loadFromRedis(hotKey);
                }
            });

    // Reading from the local cache
    List<Object> result = configCache.get(key);

The biggest issue a local cache introduces is data inconsistency: the cache expiration time we set is the upper bound on how long online data can stay inconsistent. This expiration time has to be weighed against your cluster's pressure and the maximum inconsistency window the business can accept.

2.3 Key splitting

How can we avoid the hot key problem while keeping data as consistent as possible? Splitting the key into several copies is also a good solution.

When writing to the cache, we split the cache key of the corresponding business into several different keys. As shown in the figure below, on the cache-update side we first split the key into N copies. For example, a key named "good_100" can be split into four copies: "good_100_copy1", "good_100_copy2", "good_100_copy3" and "good_100_copy4". Every update or insert must modify all N of these keys. This step is the key splitting.

On the read side, we need a way to spread the traffic evenly, i.e. to decide which suffix to append to the hot key about to be accessed. There are a couple of options: hash the machine's IP or MAC address and take the remainder modulo the number of split keys, so that each machine is pinned to one copy; or generate a random number at service startup and take the remainder modulo the number of split keys. Either way, each instance reads a fixed copy, the copies land on different shards, and the load is spread out; a sketch follows the figure below.

[Figure: splitting the hot key good_100 into good_100_copy1 … good_100_copyN so that reads spread across different shards]
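A minimal sketch of the read-side suffix selection and the write-side fan-out, assuming N = 4 copies and the random-number-at-startup variant; redisGet/redisSet are placeholders for your actual Redis client calls:

    import java.util.concurrent.ThreadLocalRandom;

    public class SplitKeyAccessor {
        private static final int COPY_COUNT = 4;
        // Chosen once at service startup: this instance always reads the same copy.
        private static final int COPY_INDEX = ThreadLocalRandom.current().nextInt(COPY_COUNT) + 1;

        // Read: append the suffix picked at startup, e.g. good_100 -> good_100_copy3.
        public String get(String baseKey) {
            return redisGet(baseKey + "_copy" + COPY_INDEX);
        }

        // Write: every update must refresh all N copies so they stay consistent.
        public void set(String baseKey, String value) {
            for (int i = 1; i <= COPY_COUNT; i++) {
                redisSet(baseKey + "_copy" + i, value);
            }
        }

        private String redisGet(String key) { return null; }  // placeholder for the real Redis GET
        private void redisSet(String key, String value) { }   // placeholder for the real Redis SET
    }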

2.4 Another idea: a configuration-center style local cache

For those familiar with microservice configuration centers, we can borrow the way a configuration center keeps configuration consistent. Take Nacos as an example: how does it achieve distributed configuration consistency with fast response? We can treat the cached hot key value the same way a configuration center treats a piece of configuration, and do the following.

Long polling plus locally stored configuration: when the service starts, it initializes all of its configuration, then periodically issues a long-polling request to check whether any configuration it watches has changed. If something has changed, the long-polling request returns immediately and the local configuration is updated; if nothing has changed, all business code keeps using the configuration held in local memory. This guarantees both the timeliness and the consistency of distributed configuration, and the same mechanism can keep a local hot key cache in sync.
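A rough sketch of the long-polling loop (Java 16+ for the record type), where ConfigServerClient.poll(version, timeout) is a hypothetical API standing in for the configuration center or hot key service; it blocks until newer data exists or the timeout elapses:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class LongPollingCache {
        // Hypothetical client of the central service; poll() blocks until data newer
        // than 'version' exists or the timeout elapses, returning null on timeout.
        public interface ConfigServerClient {
            ChangeSet poll(long version, long timeoutMillis);
        }

        public record ChangeSet(long newVersion, Map<String, String> changedEntries) {}

        private final Map<String, String> localCache = new ConcurrentHashMap<>();
        private volatile long version = 0;
        private final ConfigServerClient configServer;

        public LongPollingCache(ConfigServerClient configServer) {
            this.configServer = configServer;
        }

        public void start() {
            Thread poller = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    ChangeSet change = configServer.poll(version, 30_000);
                    if (change != null) {
                        localCache.putAll(change.changedEntries()); // apply the delta immediately
                        version = change.newVersion();
                    }
                    // On timeout (null), simply poll again; business code keeps reading local memory.
                }
            }, "config-long-poller");
            poller.setDaemon(true);
            poller.start();
        }

        // Business code always reads from local memory.
        public String get(String key) {
            return localCache.get(key);
        }
    }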

2.5 Other measures that can be prepared in advance

Each of the solutions above tackles the hot key problem relatively independently. When facing real business requirements we usually have plenty of time to design the overall scheme. For hot key problems caused by extreme flash-sale scenarios, if the budget allows, the related business can be isolated on its own services and Redis cache cluster so that normal business is not affected, and better disaster-recovery and rate-limiting measures can be adopted temporarily at the same time.

Some integrated solutions

There are already quite a few complete application-level hotKey solutions on the market. Among them, JD.com has open-sourced a hotkey tool in this area: its principle is to detect on the client side and report candidate hotkeys; once the server side confirms a hotkey, it pushes it to the corresponding service instances for local caching, and this local cache is updated in sync when the remote key is updated. It is currently a relatively mature solution for automatic hot key detection combined with distributed, consistent caching: JD Retail hotkey.

[Figure: architecture of the JD Retail hotkey framework]

Summary

The above are the approaches to dealing with hot keys that the author roughly understands or has practiced, covering the two key problems of discovering hot keys and solving them. Each solution has its own advantages and disadvantages, such as business-side inconsistency or implementation difficulty, and you can adjust and adapt them based on the characteristics of your own business and your company's current infrastructure.

