
How is Nginx designed for performance and scale?

NGINX's outstanding performance in network applications comes from its distinctive design. Where many web and application servers are built on simple threaded or process-based architectures, NGINX stands out with a sophisticated event-driven architecture that can handle thousands of concurrent connections on modern hardware.
The NGINX Internals infographic starts at the top of the process architecture and works downward, revealing how NGINX handles many connections within a single process and exploring how each piece works.
Setting the scene: the NGINX process model

To understand this design, you first need to understand how NGINX runs. NGINX has a master process that performs privileged operations such as reading the configuration and binding to ports, plus a set of worker processes and helper processes.

# service nginx restart
 * Restarting nginx
# ps -ef --forest | grep nginx
root     32475     1  0 13:36 ?  00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx    32476 32475  0 13:36 ?  00:00:00  \_ nginx: worker process
nginx    32477 32475  0 13:36 ?  00:00:00  \_ nginx: worker process
nginx    32479 32475  0 13:36 ?  00:00:00  \_ nginx: worker process
nginx    32480 32475  0 13:36 ?  00:00:00  \_ nginx: worker process
nginx    32481 32475  0 13:36 ?  00:00:00  \_ nginx: cache manager process
nginx    32482 32475  0 13:36 ?  00:00:00  \_ nginx: cache loader process


On this quad-core server, the master process has created four worker processes and a pair of cache helper processes that manage the on-disk content cache.
Why is the architecture so important?
The foundation of any Unix application is a thread or process. (To the Linux operating system, threads and processes are mostly the same thing; the biggest difference is the degree to which they share memory.) A thread or process is a self-contained set of instructions that the operating system can schedule to run on a CPU core. Most complex applications run multiple threads or processes in parallel, for two reasons:
  • Applications can use multiple CPU cores of the computer at the same time

  • Threads and processes make it easy to do things in parallel, such as handling many connections at the same time


But processes and threads consume resources. Each one uses memory and other operating system resources, and they need to be swapped on and off the CPU cores (an operation called a context switch). Today's servers must handle thousands of small, active threads or processes at the same time, and once memory is exhausted or the read/write load grows too heavy, wholesale context switching sets in and performance degrades badly.
The usual design approach is for a network application to assign one thread or process to each connection. Such an architecture is simple and easy to implement, but it scales poorly when the application must handle thousands of connections at once.
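A quick way to see this cost on a Linux server is to watch the context-switch counter while the machine is under load. This is generic Linux tooling, not anything NGINX-specific:

# Print system-wide statistics once per second; the 'cs' column
# reports context switches per second, which climbs sharply as
# the number of runnable threads and processes grows
vmstat 1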
    How does NGINX work?
NGINX uses a predictable process model that is tuned to the available hardware resources:
  • The master process performs privileged operations such as reading the configuration and binding to ports, and creates a small number of child processes (the next three types)

  • The cache loader process runs at startup, loading the disk cache into memory, and then exits. It is scheduled conservatively, so its resource overhead is low

  • The cache manager process runs periodically, pruning entries from the disk cache to keep it within its configured size

  • The worker processes do all the work: they handle network connections, read from and write to disk, and communicate with upstream servers


The configuration recommended by NGINX is one worker process per CPU core, which makes the most effective use of hardware resources. Set the worker_processes directive to auto in the configuration file:

    worker_processes auto;


When an NGINX server is active, only the worker processes are busy, and each worker handles multiple connections in a non-blocking fashion, reducing the number of context switches.
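As a minimal illustration, the events block in nginx.conf caps how many simultaneous connections each worker may hold open; the value below is just an example, not a tuning recommendation:

# In nginx.conf: each worker process may handle up to 1024
# simultaneous connections (clients and upstream connections alike)
events {
    worker_connections 1024;
}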
Each worker process is single-threaded and runs independently, grabbing new connections and processing them. The processes communicate through shared memory for shared cache data, session persistence data, and other shared resources. NGINX 1.7.11 and later offer an optional thread pool to which worker processes can offload blocking operations; for details, see "Nginx introduces thread pool to improve performance by 9 times". For NGINX Plus users, these features will appear in Release 7 this year.
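For illustration, a minimal thread-pool setup might look like the sketch below. The thread_pool and aio directives are the actual directives for this feature; the pool name and sizes shown are NGINX's documented defaults:

# Define a pool of 32 threads with room for up to 65536 queued tasks
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location / {
            # Offload blocking file reads to the thread pool
            aio threads=default;
        }
    }
}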
Inside the NGINX worker process

Each NGINX worker process is initialized with the configuration, and the master process provides it with a set of listening sockets.
The worker process starts by waiting for events on the listening sockets (coordinated by accept_mutex or by kernel socket sharding). Events are triggered by new incoming connections, and each connection is then dispatched to a state machine. The HTTP state machine is the most commonly used, but NGINX also implements a state machine for stream (raw TCP) traffic and state machines for mail protocols (SMTP, IMAP, and POP3).

The state machine is essentially the set of instructions that tells NGINX how to process each request. Most web servers that offer the same functionality as NGINX use a similar state machine; the difference lies in the implementation.
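As a rough sketch of how these different state machines appear in configuration (the addresses and ports are placeholders, and the stream and mail modules must be compiled in):

http {    # drives the HTTP state machine
    server { listen 80; }
}

stream {  # drives the raw TCP/UDP state machine (NGINX 1.9.0+)
    server {
        listen 12345;
        proxy_pass 127.0.0.1:8080;  # placeholder backend
    }
}

mail {    # drives the SMTP/IMAP/POP3 state machines
    # simplified: a real mail proxy also needs auth_http and related settings
    server { listen 25; protocol smtp; }
}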
    Scheduling a state machine
Think of the state machine as the rules of chess, and a single HTTP transaction as a game of chess. On one side of the board is the web server, a grandmaster who makes decisions very quickly; on the other side is the remote client, a web browser accessing the site or application over a relatively slow network.
However, the rules of the game can be very complicated. For example, the web server may need to communicate with a third party or with an authentication server, and third-party modules inside the server can even extend the rules of the game.
    Blocking state machine
Recall the earlier description: a process or thread is a set of instructions that the operating system schedules to run on a CPU core. Most web servers and web applications play the game of chess with a one-process-per-connection or one-thread-per-connection model, and each process or thread contains the instructions to play one game through to the end. In that time, the process spends most of its life blocked, waiting for the client to complete its next move.

  • The web server process listens on its sockets for new connections, that is, for new games initiated by clients.

  • Once it gets a new game, it plays that game through to the end, blocking after every move while it waits for the client's reply.

  • Once the game is over, the web server process checks whether the client wants to start another game (this corresponds to a keepalive connection). Once the connection is closed (the client leaves or a timeout occurs), the web server process returns to listening for new games.


Remember the important point: every active HTTP connection (every chess game) needs a dedicated process or thread (a chess master) to play it. This architecture is simple and easy to extend with third-party modules (new rules). But there is an extreme imbalance here: a lightweight HTTP connection, represented by a single file descriptor and a small amount of memory, is mapped onto a separate thread or process, which is a heavyweight operating-system object. It is convenient for programming, but the waste is enormous.
NGINX is a true grandmaster
Perhaps you have heard of simultaneous exhibition games, in which one chess master plays a dozen opponents at the same time.

That is how an NGINX worker process plays "chess". Each worker process (remember: usually one worker per CPU core) is a grandmaster that can play thousands of games at the same time.

  • The worker process waits for events on the listening sockets and the connection sockets;

  • When an event occurs on a socket, the worker process handles it immediately:

    • An event on a listening socket means a client has started a new game; the worker process accepts it, creating a new connection socket.

    • An event on a connection socket means the client has made a move; the worker process responds promptly.


A worker process never blocks on network traffic waiting for its opponent (the client) to reply. After making a move, the worker immediately moves on to other games where moves are waiting, or welcomes new players through the door.
    Why is it faster than blocking, multi-process frameworks?
NGINX scales very well because one worker process can handle thousands of connections. Each new connection creates another file descriptor and consumes a small amount of additional worker-process memory, so the per-connection overhead is very low. Processes can stay pinned to CPUs, so context switches are relatively infrequent and occur only when there is no work to be done. (Translator's note: CPU binding means pinning one or more processes to one or more processors.)
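For instance, on the quad-core server from the ps listing earlier, the workers can be pinned explicitly with the worker_cpu_affinity directive; the bitmasks below assume exactly four workers and four cores:

worker_processes 4;
# One bitmask per worker: worker 1 runs on core 0, worker 2 on core 1, and so on
worker_cpu_affinity 0001 0010 0100 1000;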
With the blocking, one-connection-per-process approach, each connection requires a large amount of additional resources and overhead, and context switches are very frequent.
(For more details, see Andrew Alexeev's article on NGINX architecture; the author is an NGINX co-founder and its VP of corporate development.)
As long as the system is properly tuned, each NGINX worker process can handle tens of thousands of concurrent HTTP connections and absorb traffic spikes without missing a beat; in other words, it can play even more games of chess at once.
Updating configuration and upgrading NGINX
With only a small number of worker processes, NGINX's process architecture makes updating the configuration, and even the NGINX binary itself, very efficient.

Updating the NGINX configuration is a simple, lightweight, and reliable operation. It typically just means running the nginx -s reload command, which checks the configuration on disk and sends a SIGHUP signal to the master process.
When the master process receives a SIGHUP, it does two things:
  • It reloads the configuration and forks a new set of worker processes. These new workers immediately start accepting connections and processing traffic (using the new configuration).

  • It signals the old worker processes to exit gracefully, and those workers stop accepting new connections. As each in-flight HTTP request completes, the worker closes its connection; once all of its connections are closed, the worker process exits.


This reload process causes a small spike in CPU and memory usage, but the overhead is negligible compared with the resources consumed by active connections; you can reload the configuration many times per second. In rare cases, many generations of worker processes waiting for connections to close can pile up, but even then the problem resolves itself quickly. In practice a reload looks like the two commands below.
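Both commands are standard NGINX command-line usage:

# Test the configuration on disk for syntax errors first
nginx -t
# Then signal the master process to reload (equivalent to sending SIGHUP)
nginx -s reload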
Upgrading the NGINX binary achieves remarkably high availability: you can upgrade the software on the fly, without any dropped connections, downtime, or interruption of service. (Translator's note: "on the fly" means the work is done while the program keeps running.)

The binary upgrade process is similar to a graceful configuration reload: a new NGINX master process starts and runs in parallel with the original one, sharing the listening sockets. Both processes are active and handle their own connections; you then signal the original master process and its workers to exit gracefully.
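The standard signal sequence for an on-the-fly upgrade, as documented in the NGINX control guide, is sketched below; <pid> stands for the old master process's ID (left as a placeholder here):

kill -USR2 <pid>    # spawn a new master process running the new binary
kill -WINCH <pid>   # gracefully shut down the old master's worker processes
kill -QUIT <pid>    # finally, gracefully shut down the old master itself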
For a detailed description of the process, see the NGINX control documentation.
Conclusion
The NGINX Internals infographic gives a high-level panorama of how NGINX works, but behind this simple explanation lie more than ten years of continuous innovation and optimization. That work lets NGINX deliver the best possible performance on a wide range of hardware, even for modern web applications that must remain secure and reliable.
If you want to learn more about NGINX optimization, here are some good resources:
  • NGINX installation, performance tuning

  • NGINX performance tuning

  • "Nginx introduces thread pool to improve performance by 9 times"

  • NGINX - open source application framework

  • Socket sharding in NGINX release 1.9.1


  • Reprinted from: Python developer

