Why is nginx performance good?

藏色散人
Published: 2019-06-12


After nginx starts, it runs in the background as a daemon on Unix systems. This background setup consists of one master process and multiple worker processes. We can also turn daemon mode off and run nginx in the foreground, and we can even configure nginx to drop the master process entirely, so that it runs in single-process mode.

Obviously, we would never do this in a production environment, so turning off daemon mode is generally reserved for debugging. Later chapters explain in detail how to debug nginx.
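
To make this concrete, here is what those two switches look like in nginx.conf. Both `daemon` and `master_process` are real directives; this is a debugging-only sketch, not a production configuration:

```nginx
# Debugging-only settings: run in the foreground, without the
# master/worker split. Never use master_process off in production.
daemon off;          # stay in the foreground instead of daemonizing
master_process off;  # single-process mode: no master, no forked workers
```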

So we can see that nginx works in a multi-process fashion. nginx does also support multi-threading, but the mainstream approach, and nginx's default, is multi-process. The multi-process mode brings many benefits, so that is what I will mainly explain.

As just mentioned, after nginx starts there is one master process and multiple worker processes. The master process mainly manages the workers: it receives signals from the outside world, forwards signals to each worker, monitors each worker's running state, and automatically starts a new worker when one exits abnormally.

The basic network events are handled in the worker processes. The workers are peers: they compete on equal terms for requests from clients, and each process is independent of the others. A request is processed in exactly one worker, and a worker cannot handle another process's requests. The number of workers is configurable, and we generally set it to match the machine's CPU core count, for reasons inseparable from nginx's process model and event-handling model.
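
For reference, the worker count is controlled by the `worker_processes` directive; a minimal snippet (the value 4 is only an example):

```nginx
# Match the worker count to the machine's CPU cores.
worker_processes 4;        # e.g. a 4-core machine
# Recent versions can detect the core count automatically:
# worker_processes auto;
```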

After nginx has started, what do we do if we want to control it?

From the above we know that the master manages the worker processes, so we only need to talk to the master process. The master receives signals from the outside world and does different things depending on the signal. So to control nginx, we just send the master process a signal with kill. For example, kill -HUP pid tells nginx to restart gracefully; we usually use this signal to restart nginx or reload its configuration, and because the restart is graceful, service is never interrupted. What does the master process do after receiving HUP?

First, on receiving the signal, the master process reloads the configuration file, then starts new worker processes and sends signals to all the old workers telling them they may retire with honor.

Once the new workers are up, they start accepting new requests, while the old workers stop accepting new requests as soon as they receive the master's signal, finish every request still in flight in their process, and then exit.

Of course, sending signals directly to the master process is the older way of operating nginx. Starting with version 0.8, nginx introduced a series of command-line arguments to make management easier, for example ./nginx -s reload to gracefully restart nginx (reload its configuration) and ./nginx -s stop to stop it.

How does this work?

Take reload as an example. When we run the command, a new nginx process is started; after parsing the reload argument, it knows our intent is to have nginx reload its configuration file. It sends a signal to the master process, and from then on everything proceeds exactly as if we had sent the signal to the master ourselves.
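
As a rough illustration (not nginx's actual source), the whole -s reload dance boils down to something like the C sketch below; the pid-file path is an assumption and depends on how nginx was built and configured:

```c
/* A minimal sketch of what `nginx -s reload` amounts to: read the
 * master's pid from the pid file and send it SIGHUP. */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* assumed default path; check your build's --pid-path */
    const char *pid_path = "/usr/local/nginx/logs/nginx.pid";
    FILE *f = fopen(pid_path, "r");
    long pid;

    if (f == NULL || fscanf(f, "%ld", &pid) != 1) {
        perror("read pid file");
        return 1;
    }
    fclose(f);

    /* SIGHUP asks the master to reload its configuration gracefully */
    if (kill((pid_t) pid, SIGHUP) != 0) {
        perror("kill");
        return 1;
    }
    return 0;
}
```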

Now we know what nginx does internally when we operate it. But how do the worker processes handle requests? As mentioned earlier, the workers are equals, and each has the same opportunity to handle a request. When we serve HTTP on port 80 and a connection request arrives, any of the processes might handle that connection. How is that achieved?

First of all, every worker process is forked from the master process. In the master, the listening socket (listenfd) is set up first, and then the workers are forked. The listenfd of every worker becomes readable when a new connection arrives. To ensure that only one process handles the connection, all workers try to grab accept_mutex before registering the listenfd read event; the process that grabs the mutex registers the read event and calls accept in its handler to accept the connection.
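
Below is a heavily condensed C sketch of that idea. It is not nginx's real code (nginx uses its own shared-memory lock and registers a read event instead of blocking in accept), but it shows the fork-then-compete pattern under those assumptions:

```c
/* Sketch: master creates the listening socket, forks workers, and a
 * worker only accepts while holding a process-shared mutex. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* listening socket created once in the master, inherited by workers */
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(80),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(listenfd, (struct sockaddr *) &addr, sizeof(addr));
    listen(listenfd, 511);

    /* a mutex in shared memory plays the role of accept_mutex */
    pthread_mutex_t *accept_mutex = mmap(NULL, sizeof(*accept_mutex),
                                         PROT_READ | PROT_WRITE,
                                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(accept_mutex, &attr);

    for (int i = 0; i < 4; i++) {           /* fork 4 workers */
        if (fork() == 0) {                  /* in a worker process */
            for (;;) {
                if (pthread_mutex_trylock(accept_mutex) != 0) {
                    usleep(1000);           /* lost the race: back off */
                    continue;
                }
                int conn = accept(listenfd, NULL, NULL);
                pthread_mutex_unlock(accept_mutex);
                if (conn >= 0)
                    close(conn);            /* real handling elided */
            }
        }
    }
    pause();                                /* master just waits here */
    return 0;
}
```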

After a worker accepts a connection, it reads the request, parses it, processes it, generates a response, returns the data to the client, and finally closes the connection. That is one complete request. As you can see, a request is handled entirely by a worker process, and within a single worker only.
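
A toy sketch of that per-connection sequence (real nginx parses HTTP incrementally and never blocks like this):

```c
/* Sketch of a worker's per-connection work: read, respond, close. */
#include <string.h>
#include <unistd.h>

static void handle_connection(int conn)
{
    char buf[4096];
    const char *resp = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";

    if (read(conn, buf, sizeof(buf)) > 0)    /* read (part of) the request */
        write(conn, resp, strlen(resp));     /* send a canned response */
    close(conn);                             /* disconnect */
}
```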

Multi-threading model vs. multi-process model: that is the question!

So what are the benefits of nginx adopting this process model? Plenty. First, each worker is an independent process, so it needs no locks for its own work; that removes locking overhead and also makes programming and troubleshooting much easier. Second, independent processes do not affect one another: if one process exits, the others keep working, the service is not interrupted, and the master quickly starts a new worker. Granted, a worker exiting abnormally means there is a bug in the program, and the abnormal exit fails all requests on that worker, but it does not affect every request, so the risk is contained. There are many more benefits, which you can come to appreciate over time.

That is a lot about nginx's process model; next, let's look at how nginx handles events.

Someone may ask: nginx handles requests with multiple workers, each worker has only one main thread, so the concurrency it can handle is surely limited, at most one request per worker; how can that achieve high concurrency? Not so, and here lies the brilliance of nginx: it processes requests asynchronously and without blocking. In other words, nginx can handle thousands of requests at the same time.

Think of Apache's common working mode by comparison (Apache also has an asynchronous non-blocking variant, but it conflicts with some of Apache's own modules, so it is rarely used): every request occupies one worker thread. When concurrency climbs into the thousands, thousands of threads are handling requests at once. That is a serious challenge for the operating system: the memory occupied by the threads is huge, thread context switches impose a heavy CPU cost, performance naturally cannot improve, and all that overhead accomplishes nothing.

Synchronous blocking vs. asynchronous non-blocking

Why can nginx process requests asynchronously and without blocking, and what does asynchronous non-blocking actually mean? Let's go back to the beginning and look at the complete lifecycle of a request: the request arrives, a connection is established, data is received, and after processing, data is sent back. Down at the system level, these are read and write events, and when a read or write event is not ready, nothing can be done with it. If you do not call in a non-blocking way, you must make blocking calls: when the event is not ready you can only wait, and continue once it becomes ready. A blocking call enters the kernel and waits there while the CPU is handed to someone else. For a single-threaded worker this is obviously unsuitable: when there are many network events, everyone is waiting, the CPU sits idle with no one using it, CPU utilization cannot rise, and high concurrency is out of the question.

Fine, you say, just add more processes; but then how does this differ from Apache's thread model, beyond adding unnecessary context switches? So in nginx, blocking system calls are the biggest taboo. If not blocking, then non-blocking: a non-blocking call returns EAGAIN immediately when the event is not ready, telling you the event is not ready yet, do not panic, come back later. A while later you check the event again, and keep doing so until it is ready; in the meantime you can do other things first. So you are no longer blocked, but you have to poll the event's status from time to time: you can get more done, yet the overhead is not small. That is why asynchronous non-blocking event-handling mechanisms exist, namely system calls such as select/poll/epoll/kqueue.
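
At this level, non-blocking looks like the small sketch below, assuming a plain POSIX file descriptor:

```c
/* Sketch: put the fd in non-blocking mode, so read() returns -1 with
 * errno EAGAIN instead of blocking when no data is ready. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* returns bytes read, 0 if the peer closed, -1 if not ready yet */
static ssize_t try_read(int fd, char *buf, size_t len)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    ssize_t n = read(fd, buf, len);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -1;    /* not ready: go do something else, retry later */
    return n;
}
```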

They provide a mechanism that lets you monitor multiple events at the same time. Calling them blocks, but you may set a timeout, and within that timeout the call returns as soon as some event is ready. This mechanism solves exactly our two problems above. Take epoll as an example (in the examples that follow we often use epoll to stand for this family of calls): when an event is not ready, we put it in epoll; when the event becomes ready, we read or write it; and when a read or write returns EAGAIN, we add it back to epoll. This way, as long as some event is ready we process it, and only when no event at all is ready do we wait inside epoll. Thus we can handle large numbers of concurrent requests, although these concurrent requests are merely in flight, not finished. There is only one thread, so only one request can actually be processed at any instant; we simply keep switching between requests, and every switch happens because we voluntarily yielded when an asynchronous event was not ready. Switching here costs nothing; you can think of it as looping over a batch of ready events, which is in fact what happens.
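
The skeleton of such an event loop, sketched with epoll (handle_event is a hypothetical stand-in for the read/write-and-re-arm logic just described):

```c
/* A skeletal epoll event loop; a sketch, not nginx's code. */
#include <sys/epoll.h>

#define MAX_EVENTS 512

void handle_event(int fd, unsigned int what);   /* hypothetical handler */

void event_loop(int epfd)
{
    struct epoll_event events[MAX_EVENTS];

    for (;;) {
        /* block until at least one registered event is ready */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);

        /* process every ready event; on EAGAIN a handler leaves the
         * fd registered and we simply return to epoll_wait */
        for (int i = 0; i < n; i++)
            handle_event(events[i].data.fd, events[i].events);
    }
}
```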

Compared with multi-threading, this style of event handling has great advantages: no threads need to be created, each request occupies very little memory, there is no context switching, and event handling is extremely lightweight. However high the concurrency goes, it causes no needless waste of resources (context switches); more concurrency simply consumes a bit more memory. I have load-tested connection counts before: on a machine with 24G of memory, the number of concurrent requests being handled exceeded two million. Today's network servers basically all work this way, and it is the main reason nginx's performance is so high.

We said earlier that the recommended number of workers equals the number of CPU cores, and now it is easy to see why: more workers would only make processes compete for CPU resources and trigger unnecessary context switches.

Moreover, to exploit multi-core hardware better, nginx provides a CPU affinity binding option: we can pin a given process to a given core, avoiding the cache misses that process migration would cause. Small optimizations like this are everywhere in nginx, and they show the care the nginx author took. For example, when comparing 4-byte strings, nginx converts the 4 characters into an int and compares them in one go, cutting down the number of CPU instructions, and so on.
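
(The binding option referred to here is the worker_cpu_affinity directive.) The 4-byte trick can be illustrated as follows; nginx does it with macros over pointer casts, while this sketch uses memcpy to stay portable:

```c
/* Sketch of the 4-byte comparison trick: compare a 4-char token
 * against "GET " with one integer comparison. */
#include <stdint.h>
#include <string.h>

static int is_get(const char *p)
{
    uint32_t word, get;

    memcpy(&word, p, 4);
    memcpy(&get, "GET ", 4);
    return word == get;    /* one compare instead of four byte compares */
}
```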

Now we know why nginx chose this process model and this event model. For a basic web server, there are usually three kinds of events: network events, signals, and timers. From the explanation above we know that network events are handled nicely by the asynchronous non-blocking approach. What about signals and timers?

First, signal processing.

For nginx, certain specific signals carry specific meanings. A signal interrupts the program's current execution and resumes it after a state change; if a system call was in progress, it may cause that call to fail and need to be reissued. On signal handling in general, you can consult the specialist books; I will not go into detail here. For nginx in particular, if nginx is waiting for events (inside epoll_wait) when the program receives a signal, then after the signal handler completes, epoll_wait returns an error, and the program can simply enter the epoll_wait call again.
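
The retry pattern being described is the classic EINTR loop; a minimal sketch:

```c
/* Sketch: when a handled signal interrupts epoll_wait, it returns -1
 * with errno EINTR, and we simply call it again. */
#include <errno.h>
#include <sys/epoll.h>

int wait_for_events(int epfd, struct epoll_event *events, int max)
{
    for (;;) {
        int n = epoll_wait(epfd, events, max, -1);
        if (n < 0 && errno == EINTR)
            continue;    /* interrupted by a signal handler: retry */
        return n;        /* ready-event count, or a genuine error */
    }
}
```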

Next, the timers. Since epoll_wait and its relatives accept a timeout argument, nginx uses that timeout to implement timers. Timer events inside nginx live in a red-black tree that maintains all timers; each time before entering epoll_wait, nginx fetches the earliest expiry among all timer events from the tree, computes epoll_wait's timeout from it, and then enters epoll_wait.

So when no event occurs and no signal interrupts, epoll_wait times out, which means a timer event has come due. At that point, nginx checks all timed-out events, marks their status as timed out, and then handles the network events. From this we can see that when writing nginx code, the first thing a network event's callback usually does is check the timeout, and only then handle the network event itself.
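
Putting events and timers together, one cycle looks roughly like the sketch below; the timer helpers are hypothetical stand-ins for nginx's red-black-tree operations:

```c
/* Sketch of one event-and-timer cycle: derive the epoll_wait timeout
 * from the nearest timer, then expire timers before handling events. */
#include <sys/epoll.h>

/* hypothetical helpers standing in for the timer tree */
int  nearest_timer_ms(void);   /* ms until the earliest timer, -1 if none */
void expire_timers(void);      /* fire every timer that has come due */
void process_events(struct epoll_event *ev, int n);

void cycle(int epfd)
{
    struct epoll_event events[512];

    for (;;) {
        int timeout = nearest_timer_ms();   /* -1 makes the wait unbounded */
        int n = epoll_wait(epfd, events, 512, timeout);

        expire_timers();        /* n == 0 means we woke up for a timer */
        if (n > 0)
            process_events(events, n);
    }
}
```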


