The 1.9.1 release of nginx introduces a new feature: support for the SO_REUSEPORT socket option, which is available in recent versions of many operating systems, including DragonFly BSD and Linux (kernel 3.9 and later). This socket option allows multiple sockets to listen on the same combination of IP address and port, and the kernel load balances incoming connections across them. (For NGINX Plus customers, this feature will appear in Release 7, due out by the end of the year.)
The SO_REUSEPORT option has many potential practical applications. Other services can use it to implement rolling upgrades while running (nginx already supports rolling upgrades by other means). For nginx, enabling this option can reduce lock contention in certain scenarios and improve performance.
As described in the figure below, when the SO_REUSEPORT option is not enabled, a single listening socket notifies the worker processes of incoming connections, and each worker tries to accept the connection.
When the SO_REUSEPORT option is enabled, there are multiple socket listeners for each IP address and port combination, one for each worker process. The kernel determines which listener (and, implicitly, which worker process) gets an incoming connection. This reduces lock contention between worker processes accepting new connections (competition among workers for the mutually exclusive accept lock) and can improve performance on multicore systems. However, it also means that when a worker process is stalled by a blocking operation, the block affects not only the connections that worker has already accepted, but also the connection requests the kernel has assigned to that worker since it became blocked.
Setting up the shared socket
To enable the SO_REUSEPORT socket option, include the new reuseport parameter in the listen directive for HTTP or TCP (stream module) traffic, as in this example:
http {
    server {
        listen 80 reuseport;
    }
}
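The parameter works the same way for TCP traffic handled by the stream module. The sketch below is illustrative; the port number 12345 and the backend address 127.0.0.1:8080 are hypothetical choices, not taken from the original article:

```nginx
stream {
    server {
        # Each worker process gets its own listening socket for this port
        listen 12345 reuseport;

        # Forward accepted TCP connections to a (hypothetical) backend
        proxy_pass 127.0.0.1:8080;
    }
}
```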
Including the reuseport parameter also disables the accept_mutex directive for the socket in question, because the mutex is redundant with reuseport. For ports that do not use reuseport, setting accept_mutex can still be worthwhile.
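To illustrate the interplay, here is a hypothetical configuration (not from the original article): accept_mutex is ignored on a listening socket that carries reuseport, but still governs any other listening sockets.

```nginx
events {
    accept_mutex on;          # still applies to sockets without reuseport
}

http {
    server {
        listen 80 reuseport;  # accept_mutex is ignored for this socket
    }
    server {
        listen 8080;          # accept_mutex still serializes accepts here
    }
}
```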
Benchmarking the performance of reuseport
I ran a benchmark tool with 4 nginx worker processes on a 36-core AWS instance. To reduce network effects, both the client and nginx ran locally, and nginx returned an OK string rather than a file. I compared three nginx configurations: the default (equivalent to accept_mutex on), accept_mutex off, and reuseport. As shown in the figure, reuseport achieves two to three times the requests per second of the others, and both the latency and its standard deviation also drop.
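The test setup described above might look something like the following. This is a hypothetical reconstruction for illustration, not the configuration actually used in the benchmark:

```nginx
worker_processes 4;           # 4 worker processes, as in the test

events {}

http {
    server {
        listen 80 reuseport;  # the reuseport variant; drop the parameter
                              # (or set accept_mutex off) for the other runs
        location / {
            return 200 "OK";  # return a short string instead of a file
        }
    }
}
```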
Configuration      Latency (ms)   Latency stdev (ms)   CPU load
accept_mutex off   15.59          26.48                10
reuseport          12.35          3.15                 0.3
In these performance tests, the rate of connection requests is high, but the requests themselves require little processing. Other benchmarks indicate that reuseport also improves performance substantially when application traffic fits this profile. (The reuseport parameter is not available for the listen directive in the mail context, for example for email, because email traffic definitely does not match this profile.) We encourage you to benchmark first rather than deploy it widely right away.
The above is the detailed content of What is Socket segmentation in Nginx server. For more information, please follow other related articles on the PHP Chinese website!