
Why is Redis so fast using a single thread?


Why does Redis use a single thread?

    Overhead of multi-threading

Without careful system design, adding threads usually produces the result shown in the figure below (note the y-axis): as you first increase the number of threads, system throughput rises, but as you add more threads, throughput grows slowly or even drops.

[Figure: system throughput vs. number of threads]

The key bottleneck is that systems usually contain shared resources accessed by multiple threads at the same time. To keep those shared resources correct, extra mechanisms such as locking are needed to guarantee thread safety, and these mechanisms bring additional overhead.

Take the most commonly used List type as an example. Suppose Redis adopted a multi-threaded design and two threads, A and B, performed LPUSH and LPOP on the same list. To get the same result on every execution, namely that thread B pops exactly the data thread A pushed, the two operations have to be executed serially. This is the concurrent access control problem that the multi-threaded programming model faces for shared resources.
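The same cost shows up in any multi-threaded program that shares a data structure. Below is a minimal Python sketch (not Redis code, purely an illustration of why the accesses must be serialized with a lock):

```python
import threading
from collections import deque

shared_list = deque()      # stands in for the shared List key
lock = threading.Lock()    # extra mechanism needed only because of multi-threading

def producer():
    # Thread A: an "LPUSH"-like operation
    with lock:             # every access is serialized through the lock
        shared_list.appendleft("value-from-A")

def consumer():
    # Thread B: an "LPOP"-like operation
    with lock:
        return shared_list.popleft() if shared_list else None

a = threading.Thread(target=producer)
b = threading.Thread(target=consumer)
a.start(); b.start()
a.join(); b.join()
```

Even in this tiny example, both threads spend part of their time waiting on the lock; the work on the shared list itself is still effectively serial.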


Concurrent access control has always been a hard problem in multi-threaded development: if you simply use a mutex, then even as threads are added, most of them just wait to acquire the lock, parallel execution degenerates into serial execution, and system throughput does not grow with the number of threads.

At the same time, adding concurrent access control also reduces the readability and maintainability of the code, so Redis simply adopts a single-threaded model.


Using a single thread is the result of the Redis designers weighing many factors:

• Most Redis operations are completed in memory

• It uses efficient data structures, such as hash tables and skip lists (see the sketch after this list)

• It adopts an IO multiplexing mechanism, so a single thread can handle a large number of client requests concurrently during network IO and achieve high throughput
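As a hedged illustration of those data structures in use, here is a short sketch with the redis-py client (assuming redis-py 3.5+ and a local Redis instance on the default port); hash commands are served by the hash table, and sorted-set commands by the skip list:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)  # assumed local instance

# Hash commands are backed by a hash table: O(1) field access.
r.hset("user:1", mapping={"name": "alice", "age": "30"})
print(r.hget("user:1", "name"))

# Sorted-set commands are backed by a skip list (plus a hash table):
# range queries by score stay logarithmic.
r.zadd("leaderboard", {"alice": 120, "bob": 95})
print(r.zrangebyscore("leaderboard", 100, 200))
```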

Since Redis uses a single thread for IO, that thread cannot multiplex if it ever blocks. So it is not hard to imagine that Redis must have been designed around the potential blocking points in network and IO operations.

Potential blocking points in network and IO operations

In network communication, to process a GET request the server has to listen for client connections (bind/listen), establish a connection with the client (accept), read the request from the socket (recv), parse the request the client sent (parse), and finally return the result to the client (send).

The most basic single-threaded implementation simply performs these operations in sequence, as sketched below.
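Here is a minimal Python sketch of this naive, fully blocking loop (an illustration only, not Redis source code; parse and execute are placeholder stubs):

```python
import socket

def parse(raw: bytes) -> bytes:
    # Placeholder for protocol parsing (real Redis parses RESP here).
    return raw.strip()

def execute(command: bytes) -> bytes:
    # Placeholder for command execution (real Redis consults its in-memory store).
    return b"+OK\r\n"

# bind/listen: start listening for client connections
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 7379))   # arbitrary port for the sketch
server.listen()

while True:
    conn, _ = server.accept()      # blocks until a client connects
    data = conn.recv(4096)         # blocks until this client sends data
    reply = execute(parse(data))   # parse, then run the command
    conn.sendall(reply)            # send the result back to the client
    conn.close()
```

While this loop is stuck in accept() or recv(), no other client can be served at all.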

[Figure: bind/listen → accept → recv → parse → send, with accept and recv marked in red]

    The accept and recv operations marked in red above are potential blocking points:

• When Redis is listening for connection requests but a connection cannot be established successfully, it blocks in accept(), and no other client can connect to Redis during that time

• When Redis reads data from a client through recv(), it blocks until the data arrives

High performance based on the IO multiplexing model

To solve the blocking problem in IO, Redis uses the Linux IO multiplexing mechanism (select/epoll), which allows multiple listening sockets and connected sockets to exist in the kernel at the same time.

The kernel constantly monitors these sockets for connection requests or data requests, and Redis processes whatever arrives, so one thread handles multiple IO streams, as in the sketch below.
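A minimal sketch of the same server rebuilt around Linux epoll through Python's select module (Linux-only; the +OK reply is a placeholder for real command handling):

```python
import select
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 7379))
listener.listen()
listener.setblocking(False)

epoll = select.epoll()                        # Linux-only multiplexing facility
epoll.register(listener.fileno(), select.EPOLLIN)
connections = {}                              # fd -> connected socket

while True:
    # One thread waits on many sockets at once; the kernel reports the ready ones.
    for fd, events in epoll.poll():
        if fd == listener.fileno():           # a new connection is ready to accept
            conn, _ = listener.accept()
            conn.setblocking(False)
            epoll.register(conn.fileno(), select.EPOLLIN)
            connections[conn.fileno()] = conn
        else:                                 # a connected client has data to read
            conn = connections[fd]
            data = conn.recv(4096)
            if data:
                conn.sendall(b"+OK\r\n")      # placeholder reply, no real command execution
            else:                             # empty read: client closed the connection
                epoll.unregister(fd)
                conn.close()
                del connections[fd]
```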


The Redis thread is then never blocked on one particular client request, so it can accept connections from multiple clients at the same time and process their requests.

    Callback mechanism

Once select/epoll detects that a request has arrived on an FD, it triggers the corresponding event and places it in a queue. The Redis thread keeps consuming this event queue, which is how event-based callbacks are realized.

For example, Redis registers accept and get callback functions for the Accept and Read events. When the Linux kernel detects a connection request or a read request, it triggers the Accept or Read event and calls back the corresponding accept or get function in Redis to handle it.
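A hedged sketch of that callback pattern with Python's selectors module, which wraps epoll on Linux; accept_handler and read_handler are illustrative stand-ins for Redis's internal accept and get handlers:

```python
import selectors
import socket

sel = selectors.DefaultSelector()            # epoll under the hood on Linux

def accept_handler(listener):
    # Callback registered for the "Accept" event on the listening socket.
    conn, _ = listener.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read_handler)

def read_handler(conn):
    # Callback registered for the "Read" event on a connected socket.
    data = conn.recv(4096)
    if data:
        conn.sendall(b"+OK\r\n")             # placeholder for real command handling
    else:
        sel.unregister(conn)
        conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 7379))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, accept_handler)

while True:
    for key, _ in sel.select():              # ready events come back as a batch
        callback = key.data                  # the function registered for this socket
        callback(key.fileobj)                # dispatch the event to its callback
```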

    Performance bottlenecks of Redis

The analysis above shows that, although the multiplexing mechanism lets a single thread monitor multiple client requests at the same time, Redis still has some performance bottlenecks, and these are exactly the situations we need to avoid in everyday programming.

    1. Time-consuming operations

If any single request takes a long time inside Redis, it affects the performance of the whole server: every subsequent request has to wait until the time-consuming request finishes before it can be processed.

We need to avoid this when designing business scenarios; Redis's lazy-free mechanism likewise pushes the time-consuming work of releasing memory into an asynchronous thread.
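As a hedged client-side illustration with redis-py (the key names are made up): prefer the incremental SCAN over a blocking KEYS, and UNLINK over DEL for large keys, so that memory is reclaimed by the lazy-free background thread:

```python
import redis

r = redis.Redis(host="localhost", port=6379)   # assumed local instance

# KEYS walks the whole keyspace in one command and can block the server;
# SCAN iterates in small batches, so no single call holds the thread for long.
for key in r.scan_iter(match="session:*", count=500):
    pass  # process each key here

# DEL frees a big key's memory synchronously on the command thread;
# UNLINK only removes the key from the keyspace and lets the lazy-free
# background thread reclaim the memory.
r.unlink("some:very:large:hash")
```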

    2. High concurrency scenario

When the amount of concurrency is very large, a single thread reading and writing client IO data becomes a bottleneck: even with IO multiplexing, one thread can only read client data sequentially and cannot make use of multiple CPU cores.

Starting with Redis 6.0, multiple threads can use several CPU cores to read and write client data, but only the reads and writes for clients are parallel; the actual execution of each command is still single-threaded.
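This behaviour is opt-in via redis.conf; a sketch of the relevant Redis 6.0 settings (the thread count is only an example value):

```
# redis.conf (Redis 6.0+)
io-threads 4              # number of IO threads used for writing replies to clients
io-threads-do-reads yes   # also use the IO threads for reading and parsing client requests
```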

    Other interesting questions related to Redis

Taking this opportunity, here are a few other interesting questions related to Redis.


• Why use Redis at all? Wouldn't accessing local memory directly be good enough?

The boundary here is actually not clearly defined. Data that rarely changes can simply be kept in local memory; it does not have to live in Redis. However, when such data is updated, consistency problems appear: only one server's copy gets modified, because the data exists only in that server's local memory. Having every server access a shared Redis instance solves this consistency problem, which is a reason to use Redis.

• What if there is too much data to fit in memory? For example, what should I do if I want to cache 100 GB of data?

A small advertisement here: Tair is Taobao's open-source distributed KV cache system. It inherits Redis's rich set of operations, its total data volume is theoretically unlimited, and its availability, scalability, and reliability have also been upgraded. Interested readers can check it out.
