What is socket sharding in the Nginx server


The 1.9.1 release of nginx introduces a new feature: support for the so_reuseport socket option, which is available in newer versions of many operating systems, including DragonFly BSD and Linux (kernel 3.9 and later). This socket option allows multiple sockets to listen on the same combination of IP address and port, and the kernel load balances incoming connections across them. (For nginx plus customers, this feature will appear in release 7, due by the end of the year.)

The so_reuseport option has many potential practical applications. Other services can use it to implement rolling upgrades while running (nginx already supports rolling upgrades). For nginx, enabling this option can reduce lock contention in certain scenarios and improve performance.

As shown in the first figure below, when the so_reuseport option is not enabled, a single listening socket notifies the worker processes of incoming connections, and each worker process attempts to accept the connection.

(Figure: without so_reuseport, a single listening socket and worker processes competing to accept connections)

When the so_reuseport option is enabled, there are multiple listening sockets for each IP address and port combination, one for each worker process. The kernel determines which listening socket (and by implication, which worker process) gets a given connection. This reduces lock contention between worker processes when accepting new connections (translator's note: contention between worker processes for a mutually exclusive resource lock) and can improve performance on multi-core systems. However, it also means that when a worker process is stalled by a blocking operation, the block affects not only the worker process that has already accepted the connection, but also blocks the connection requests that the kernel has already scheduled to assign to that worker process.

(Figure: with so_reuseport, the kernel distributes incoming connections across per-worker listening sockets)

Setting up a shared socket

To make the so_reuseport socket option work, add the new reuseport parameter directly to the listen directive for http or tcp (stream module) traffic, as in the following example:



http {
    server {
        listen 80 reuseport;
    }
}
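The listen directive in the stream module accepts the same parameter for TCP traffic. A minimal sketch, with the port number and backend address chosen only for illustration:

stream {
    server {
        listen 12345 reuseport;
        proxy_pass 127.0.0.1:1025;   # illustrative backend address, not from the original article
    }
}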

With the reuseport parameter included, the accept_mutex directive is ignored for that socket, because the mutex is redundant with reuseport. It can still be worth setting accept_mutex for ports that do not use reuseport, as in the sketch below.
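A minimal sketch of mixing the two, assuming hypothetical port numbers: accept_mutex (set in the events context) continues to govern the listen socket without reuseport, while the sharded socket ignores it:

events {
    accept_mutex on;
}

http {
    server {
        listen 80 reuseport;   # accept_mutex is ignored for this sharded socket
        listen 8080;           # accept_mutex still applies to accepts on this socket
    }
}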

Benchmarking reuseport performance

I ran a benchmarking tool against 4 nginx worker processes on a 36-core AWS instance. To reduce the effect of the network, both the client and nginx ran on the same machine, and nginx returned the string OK instead of a file. I compared three nginx configurations: the default (equivalent to accept_mutex on), accept_mutex off, and reuseport. As shown in the figure, reuseport increases requests per second by two to three times compared with the others, and also reduces both latency and its standard deviation.
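A minimal sketch of the kind of server configuration described above; the worker count matches the text, but the port and the use of the return directive are assumptions rather than the author's exact test setup:

worker_processes 4;

events {}

http {
    server {
        listen 80 reuseport;

        # Return a short string directly instead of serving a file,
        # so the benchmark measures connection handling rather than disk I/O.
        location / {
            return 200 "OK";
        }
    }
}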


I ran another related performance test with the client and nginx on different machines and nginx returning an HTML file. As shown in the table below, the latency reduction with reuseport is similar to the previous test, and the reduction in the latency standard deviation is even more dramatic (nearly tenfold). Other results (not shown in the table) were also encouraging. With reuseport, the load is distributed evenly across the worker processes. Under the default configuration (equivalent to accept_mutex on), some workers receive a higher share of the load, while with accept_mutex off all workers experience high load.



                     Latency (ms)   Latency stdev (ms)   CPU load
    default              15.65            26.59             0.3
    accept_mutex off     15.59            26.48             10
    reuseport            12.35             3.15             0.3

In these performance tests, the rate of connection requests is very high, but the requests require little processing. Other benchmarks indicate that reuseport also improves performance significantly when application traffic fits this profile. (The reuseport parameter is not available for the listen directive in the mail context, for example for email, because email traffic definitely does not match this profile.) We encourage you to test first rather than applying it at scale right away.

