Does Nginx use multi-threading internally?
某草草 2017-05-16 17:21:17

Nginx divides the processing of a request into multiple phases. Can IO block during these phases? If blocking occurs, Nginx will switch to other requests, but is there any priority among them (are requests that have already reached a later phase handled first)? Also, does Nginx use a thread pool at each phase, or does a request keep the same thread from beginning to end?


Replies (1)
伊谢尔伦

Run `cat /proc/3776/status | grep Threads` (where 3776 is the PID of an Nginx worker process) and you can see that the worker process has only 1 thread. In addition, since version 1.7.11 Nginx has supported an AIO thread pool, which can read and send large files in separate threads so that the worker process is not blocked (use sendfile for small files and the AIO thread pool for large files). To enable thread pool support, you need to explicitly add the --with-threads option when running configure.
https://www.nginx.com/blog/thread-pools-boost-performance-9x/
http://nginx.org/en/docs/ngx_core_module.html#thread_pool
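As a sketch, enabling the AIO thread pool (after building with --with-threads) looks roughly like this; the pool size, location path, and directio threshold below are illustrative, not values from the answer above:

```nginx
# nginx.conf
# Define a named thread pool (a "default" pool with 32 threads exists implicitly).
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location /videos/ {
            sendfile  on;
            aio       threads=default;  # offload blocking file reads to the pool
            directio  8m;               # files over 8m bypass the page cache and use AIO
        }
    }
}
```

With this combination, small files are served via sendfile while reads of large files go through the thread pool, so a slow disk read does not block the worker's event loop.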

Reposted:
When listen_fd has a new connection to accept(), the operating system wakes up all child processes, because they all call epoll_wait() on the same listen_fd and the kernel has no way to decide which one should accept, so it simply wakes them all. In the end only one process succeeds in accept(); the others fail. Since all child processes are "startled awake", this is called the thundering herd problem.

The listening sockets are initialized at startup, and the worker processes accept connections, read requests, and write responses through them. Nginx does not use the master process to distribute requests the way PHP-FPM does; that work is done by the operating system kernel, so the thundering herd can occur: when listen_fd has a new connection to accept(), the operating system wakes up all child processes.

Nginx's classic idea for solving the thundering herd: avoid it in the first place.
http://nginx.org/en/docs/ngx_core_module.html#accept_mutex
The specific measure is a global mutex (accept_mutex on): each worker process tries to acquire the lock before calling epoll_wait(); if it gets the lock it continues processing, otherwise it waits. A load-balancing rule is layered on top (when a worker's connection count reaches 7/8 of its configured limit, it stops trying to acquire the lock) to balance the load across processes.
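A minimal sketch of the relevant directives (note that accept_mutex was on by default in older Nginx versions and is off by default since 1.11.3; the values below are illustrative):

```nginx
# nginx.conf (events context)
events {
    accept_mutex       on;     # serialize accept() among workers to avoid the thundering herd
    accept_mutex_delay 500ms;  # how long a worker waits before retrying the lock
    worker_connections 1024;   # the 7/8 threshold is relative to this limit
}
```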

Nginx's newer way to solve the thundering herd: use the SO_REUSEPORT socket option provided by the kernel.
NGINX 1.9.1 supports socket sharding:
http://nglua.com/docs/sharding.html
NGINX 1.9.1 supports the SO_REUSEPORT socket option, which is available in newer versions of many operating systems, including DragonFly BSD and Linux (kernel 3.9+). This option allows multiple sockets to listen on the same IP address and port combination; the kernel load-balances incoming connections across these sockets, effectively sharding them. Without SO_REUSEPORT, a listening socket by default notifies one process when a connection arrives; but with accept_mutex off, all worker processes are woken and compete for the connection, which is the thundering herd phenomenon (if you use epoll without the lock, a readable event on the listening port wakes every worker). With SO_REUSEPORT enabled, each process has its own independent listening socket, and the kernel decides which socket (and thus which process) gets the connection. This reduces latency and improves worker performance; the trade-off is that a worker may be handed new connections before it is ready to handle them.
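In nginx.conf this corresponds to the reuseport parameter of the listen directive (available since 1.9.1); the server name below is a placeholder:

```nginx
# Each worker process gets its own listening socket for 80;
# the kernel distributes incoming connections among them.
http {
    server {
        listen      80 reuseport;
        server_name example.com;
    }
}
```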

By default nginx works in multi-process mode, with one master process and multiple worker processes. The master process mainly manages the workers; the workers compete equally for client requests. One worker process can handle many requests, but cannot handle requests belonging to another worker. Each worker process has only one main thread and, with epoll support, handles requests in an asynchronous, non-blocking way to achieve high concurrency: epoll monitors multiple events (socket polling); while an event is not ready it sits in epoll, and once it is ready it is read or written.

Compared with multi-threading, this event-driven model has great advantages: no threads need to be created, each request uses very little memory, there is no context switching, and event handling is very lightweight. No matter how high the concurrency, it causes no unnecessary waste of resources (context switches); more concurrency just uses more memory. By contrast, httpd's common working model gives each request its own worker thread; when concurrency reaches the thousands, thousands of threads are handling requests at the same time, which is a big challenge for the operating system: the memory footprint of the threads is very large, thread context switching costs a lot of CPU, and httpd's performance naturally cannot keep up. The Tengine team has tested connection counts: on a machine with 24 GB of memory, Nginx handled more than 2 million concurrent requests (on average, 1 GB of memory handles more than 80,000 requests).

Nginx also supports binding a worker process to a specific CPU core (CPU affinity), which avoids cache misses caused by processes migrating between cores. It is usually recommended to configure as many worker processes as there are CPU cores; note, however, that too many workers only makes processes compete for CPU resources and causes unnecessary context switching, so more worker processes is not always better. For details, see:
http://tengine.taobao.org/book/chapter_02.html
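A minimal sketch of the corresponding directives; the affinity masks below assume a 4-core machine:

```nginx
# nginx.conf (main context)
worker_processes    4;                     # one worker per CPU core
worker_cpu_affinity 0001 0010 0100 1000;   # pin worker N to core N
# Newer versions also accept "worker_processes auto;" and
# (since 1.9.10) "worker_cpu_affinity auto;" to size and pin workers automatically.
```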
