nginx core architecture overview


Before graduating, after finishing my project, I spent a while on socket programming and used the Qt framework in C++ to write toy TCP and UDP communication clients. On a phone call, a senior colleague advised me to dig deeper into sockets and aim for the back-end or architect track. When I asked how to dig deeper, the answer was: study source code. For socket-related knowledge, server source code is the most suitable material. As for which server, after some consideration and investigation I concluded that, compared with the heavier and bulkier Apache, nginx is smaller and more elegant. So before formally digging into the source, I first did some groundwork, summarized below.

1. Process model

First of all, by default, like other servers, nginx on Unix runs continuously in the background as a daemon. Background mode can be turned off for debugging so that nginx runs in the foreground, and through configuration you can even cancel the master process (explained in detail later) so that nginx works as a single process. None of this, however, has much to do with the architecture nginx is famous for, so I won't dwell on it here. Likewise, although nginx also supports multi-threading, we will focus on understanding its default multi-process mode.

After startup, nginx creates one master process (main process) and several worker processes (slave processes). The master process is mainly responsible for managing the worker processes: it receives signals from the administrator and forwards them to the appropriate worker processes, monitors the workers' status, and re-creates and starts a new worker when one terminates abnormally. The worker processes handle the basic network events. The workers have equal priority and are independent of one another; they compete fairly for requests from clients, and each request is handled by exactly one worker process. The nginx process model is sketched in Figure 1.

Figure 1: nginx process model (schematic diagram)
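
To make the master/worker split concrete, here is a minimal C sketch of the pattern (an illustration of the idea only, not nginx's source; the worker body and the worker count are placeholders):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define WORKERS 4  /* typically matched to the number of CPU cores */

static void worker_loop(void) {
    for (;;) pause();  /* a real worker runs an event loop handling requests */
}

int main(void) {
    for (int i = 0; i < WORKERS; i++)
        if (fork() == 0) { worker_loop(); exit(0); }

    /* master: monitor workers and respawn any that die abnormally */
    for (;;) {
        pid_t dead = wait(NULL);
        if (dead < 0) break;  /* no children left */
        fprintf(stderr, "worker %d exited, respawning\n", (int)dead);
        if (fork() == 0) { worker_loop(); exit(0); }
    }
    return 0;
}
```

A real master also installs signal handlers so that administrator signals can be relayed to the workers, as described next.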

The number of worker processes is configurable and is generally set to match the number of CPU cores; the reason is tied to the event processing model, which we will come back to later.
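
For example, in nginx.conf the worker count is set with the worker_processes directive; the auto value asks nginx to match the number of CPU cores:

```nginx
# nginx.conf (fragment)
worker_processes  auto;   # or an explicit count, e.g. 4
```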

2. Signals and requests

nginx interacts with the outside world through two interfaces: signals from the administrator and requests from clients. Below we use examples to illustrate how nginx handles each.

To control nginx, the administrator communicates with the master process: it is enough to send a signal to the master. For example, before version 0.8, nginx was restarted with the command kill -HUP [pid]. Restarting this way gives a graceful restart with no interruption of service: on receiving HUP, the master process first reloads the configuration file, then starts new worker processes and sends a stop signal to the old ones. From that moment the new workers accept network requests while the old workers accept no new ones; once its in-flight requests are finished, each old worker exits and is destroyed. Since version 0.8, nginx has offered command-line options that make the server easier to manage, such as ./nginx -s reload and ./nginx -s stop, which restart and stop nginx respectively. Executing such a command actually starts a new nginx process, which parses the arguments and itself sends the corresponding signal to the master process, achieving the same effect as sending the signal by hand.
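
As a hedged illustration, ./nginx -s reload boils down to roughly the following C sketch: read the master's pid from the pid file and send it SIGHUP. (The pid-file path below is a common default assumed here, not something the article specifies.)

```c
#include <signal.h>
#include <stdio.h>

int main(void) {
    /* assumption: pid file at the conventional default location */
    FILE *f = fopen("/usr/local/nginx/logs/nginx.pid", "r");
    if (!f) { perror("pid file"); return 1; }
    int pid;
    if (fscanf(f, "%d", &pid) != 1) { fclose(f); return 1; }
    fclose(f);
    /* SIGHUP asks the master to reload config and roll workers gracefully */
    if (kill(pid, SIGHUP) != 0) { perror("kill"); return 1; }
    return 0;
}
```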

3. Requests and events

Servers most often handle HTTP requests on port 80, so let's take that as the example for how nginx processes a request. First, every worker process is forked from the master process. In the master process, the socket to be listened on (an IP address plus port number) is established first, along with the corresponding listenfd (listening file descriptor, or handle). We know that each endpoint in socket communication must be bound to a port number, and this socket setup for the worker processes is done by the master process. The listenfd of every worker process becomes readable when a new connection arrives. To ensure that only one worker handles the connection, each worker must first grab the accept_mutex (the accept-connection mutual-exclusion lock) before registering a read event for listenfd. The worker that successfully grabs the connection then reads the request, parses it, processes it, and sends the response data back to the client.
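
A rough C sketch of the accept_mutex idea follows. It is not nginx's implementation: a file lock (flock) stands in for nginx's shared lock, and request handling is elided.

```c
#include <sys/file.h>
#include <sys/socket.h>
#include <unistd.h>

/* Each worker runs this loop; listenfd and lockfd are inherited from
 * the master. Only the lock holder accepts, so exactly one worker
 * takes each new connection. */
void worker_accept_loop(int listenfd, int lockfd) {
    for (;;) {
        if (flock(lockfd, LOCK_EX | LOCK_NB) == 0) {  /* try to grab the "accept mutex" */
            int conn = accept(listenfd, NULL, NULL);
            flock(lockfd, LOCK_UN);                   /* release before handling */
            if (conn >= 0) {
                /* read, parse, process the request; write the response */
                close(conn);
            }
        }
        /* a real worker would wait in its event loop here instead of spinning */
    }
}
```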

4. Process model analysis

nginx uses, though not exclusively, the multi-process request handling model (PPC, process per connection). Each worker process handles only one request at a time, so the resources used by different requests are independent: no locking between requests is needed, and requests are processed in parallel without affecting one another. A failed request may crash a worker process without interrupting the service, since the master process immediately starts a new worker in its place; this lowers the server's overall risk and makes the service more stable. Compared with the multi-threaded model (TPC, thread per connection), however, the system overhead is somewhat larger and the efficiency somewhat lower, which has to be made up by other means.

5. nginx's high-concurrency mechanism: asynchronous, non-blocking events

IIS's event handling mechanism is multi-threaded: each request gets a dedicated worker thread. Multi-threading uses more memory, and the CPU overhead of context switching between threads (repeatedly saving and restoring register sets) is also considerable. When a multi-threaded server faces thousands of concurrent requests, this puts great pressure on the system, and high-concurrency performance suffers. Of course, if the hardware is good enough to provide sufficient system resources, system pressure ceases to be a problem.

Let's go deeper, to the system level, and discuss the differences between multi-process and multi-threading, and between blocking and non-blocking mechanisms.

Readers familiar with operating systems will recall that multi-threading exists to schedule and use the CPU more fully when resources are sufficient, and it is especially good at raising the utilization of multi-core CPUs. However, a thread is the smallest unit of CPU scheduling, while a process is the smallest unit of resource allocation. This hands multi-threading a big problem: as threads multiply and demand more resources, their parent process may be unable to obtain enough resources for all of its threads at once. When the system cannot satisfy a process's request, it makes the whole process wait; even if the available resources would let some threads work normally, the parent process cannot claim them, so all of its threads wait together. Bluntly put: with multi-threading, the threads inside a process can be scheduled flexibly (at the cost of deadlock risk and thread-switching overhead), but there is no guarantee that the parent process, as it grows bigger and heavier, will still be scheduled reasonably by the system. So multi-threading does raise CPU utilization, but it is not the ideal answer to highly concurrent requests on a server, and keeping CPU utilization high under heavy concurrency is difficult anyway. This is IIS's multi-threaded, synchronous, blocking event mechanism.

nginx's multi-process mechanism ensures that each request applies for system resources independently; as soon as the conditions are met, each request can be processed immediately, that is, asynchronous non-blocking processing. However, creating a process costs more than creating a thread, so to keep the number of processes down, nginx layers event scheduling on top of the multi-process mechanism, so that I/O event handling does not rely on multiple processes alone but becomes an asynchronous, non-blocking, multi-process mechanism. Next we introduce nginx's asynchronous non-blocking event handling in detail.

6. epoll

Under Linux, high-performance, high-concurrency networking cannot do without epoll, and nginx likewise uses the epoll model as its mechanism for handling network events. Let's first look at how epoll came about.

The earliest scheduling scheme was asynchronous busy polling: continuously polling I/O events, i.e., traversing the socket set to check each one's status. Obviously this scheme wastes CPU when the server is idle. Later, select and poll appeared as agents of the scheduling process and improved CPU utilization. Literally, one "selects" and the other "polls", but they are essentially the same: both poll the socket set to collect and handle requests. The difference from before is that they can monitor I/O events; the polling thread blocks when idle and is woken when one or more I/O events arrive, shedding the "busy" in "busy polling" and becoming an asynchronous polling method. The select/poll model still scans the entire FD (file descriptor) set, i.e., the socket set, so network event handling efficiency decreases linearly with the number of concurrent requests, which is why a macro is used to cap the maximum number of concurrent connections. Moreover, select/poll communicates between kernel space and user space by memory copying, which brings high overhead. These shortcomings led to the creation of a new model.
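
A minimal C sketch of the select flavor of this model shows both weaknesses just mentioned: the whole fd set is rebuilt, copied to the kernel, and scanned linearly on every iteration, and FD_SETSIZE (commonly 1024) caps how many descriptors can be watched. (Accept handling and connection bookkeeping are elided; this is an illustration, not production code.)

```c
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* conns holds the fds of nconns open connections */
void select_loop(int listenfd, int *conns, int nconns) {
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listenfd, &rfds);
        int maxfd = listenfd;
        for (int i = 0; i < nconns; i++) {        /* re-register every fd, every time */
            FD_SET(conns[i], &rfds);
            if (conns[i] > maxfd) maxfd = conns[i];
        }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(listenfd, &rfds))
            accept(listenfd, NULL, NULL);         /* new connection (handling elided) */
        for (int i = 0; i < nconns; i++)          /* linear scan to find the ready fds */
            if (FD_ISSET(conns[i], &rfds)) {
                char buf[4096];
                read(conns[i], buf, sizeof buf);  /* process the request (elided) */
            }
    }
}
```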

epoll can be read as an abbreviation of "event poll". It is the Linux kernel's improvement of poll for handling large batches of file descriptors, an enhanced version of Linux's multiplexed I/O interface select/poll, and it can significantly raise CPU utilization when a program has only a few active connections among a huge number of concurrent ones. First, epoll places no hard limit on the number of concurrent connections: the ceiling is the maximum number of open files, which depends on memory size (on a machine with 1GB of RAM it is roughly 100,000). Second, and most significantly, epoll operates only on "active" sockets: only sockets that the kernel has asynchronously woken for I/O read/write events are put into the ready queue for a worker process to handle, which saves a great deal of polling overhead in real production environments and greatly improves event-handling efficiency. Finally, epoll uses shared memory (mmap) for communication between kernel space and user space, eliminating the overhead of memory copies. In addition, nginx uses epoll's ET (edge-triggered) working mode, the fast mode. ET mode supports only non-blocking sockets: when an FD becomes ready, the kernel sends a notification through epoll; after some operation makes the FD not ready and it later becomes ready again, another notification is sent; but if no I/O operation ever causes the FD to become not ready, no further notifications arrive. In short, nginx under Linux is event-based and uses epoll to handle network events.
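
For comparison, here is a minimal C sketch of an edge-triggered epoll loop of the kind described above (a simplified single-process illustration, not nginx's code): epoll_wait returns only the fds that are actually ready, and because of ET semantics every fd is made non-blocking and drained until it would block.

```c
#include <fcntl.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

static void set_nonblocking(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

void epoll_et_loop(int listenfd) {
    int epfd = epoll_create1(0);
    set_nonblocking(listenfd);
    struct epoll_event ev;
    ev.events = EPOLLIN | EPOLLET;                /* ET: the fast, edge-triggered mode */
    ev.data.fd = listenfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

    struct epoll_event ready[64];
    for (;;) {
        int n = epoll_wait(epfd, ready, 64, -1);  /* only "active" fds come back */
        for (int i = 0; i < n; i++) {
            int fd = ready[i].data.fd;
            if (fd == listenfd) {
                int conn;                         /* drain: accept until it would block */
                while ((conn = accept(listenfd, NULL, NULL)) >= 0) {
                    set_nonblocking(conn);        /* ET mode requires non-blocking fds */
                    ev.events = EPOLLIN | EPOLLET;
                    ev.data.fd = conn;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
                }
            } else {
                char buf[4096];                   /* drain reads until they would block */
                while (read(fd, buf, sizeof buf) > 0)
                    ;
            }
        }
    }
}
```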


The above has given an overview of nginx's core architecture; I hope it is helpful to interested readers.
