Some of the configurations described below require a relatively recent Linux kernel (2.6 or above). The author uses CentOS 7.4 with kernel 3.10; if your kernel does not meet the requirements, it is best to upgrade, since backporting patches is a thankless task. For system-level tuning we usually only modify the file descriptor limits, the connection queue lengths, and the ephemeral port range.
Each TCP connection occupies a file descriptor, so once the file descriptors are exhausted, new connections fail with a "Too many open files" error. To improve performance, we need to raise the limits at two levels:
1. System-level limit: edit the file /etc/sysctl.conf and add the following content:
fs.file-max = 10000000
fs.nr_open = 10000000
2. User-level limit: edit the file /etc/security/limits.conf and add the following content:
* hard nofile 1000000
* soft nofile 1000000
We need to make sure that the user-level limit does not exceed the system-level limit (fs.nr_open); otherwise it will become impossible to log in via SSH. After making the changes, apply the sysctl settings with the following command:
$ sysctl -p
You can check whether the modification is successful by executing the command ulimit -a.
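As a quick sanity check (a minimal sketch; these are standard Linux commands, and the limits.conf changes only take effect for new login sessions), you can verify both levels after logging in again:

$ cat /proc/sys/fs/file-max   # system-wide limit set via fs.file-max
$ ulimit -Sn                  # current soft nofile limit for the user
$ ulimit -Hn                  # current hard nofile limit for the user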
Next, to enlarge the TCP connection queues, edit the file /etc/sysctl.conf and add the following content:
# The length of the SYN queue
net.ipv4.tcp_max_syn_backlog = 65535
# The length of the TCP accept queue
net.core.somaxconn = 65535
Here, tcp_max_syn_backlog specifies the length of the half-open (SYN) queue. When a new connection arrives, the system checks this queue; if it is full, the SYN request cannot be processed, and the drop is counted in the ListenOverflows and ListenDrops statistics in /proc/net/netstat. somaxconn specifies the length of the fully established (accept) queue. When that queue is full, the ACK sent by the client is not processed correctly, the client sees a "connection reset by peer" error, and Nginx records an error log such as "no live upstreams while connecting to upstreams". If these errors occur, you need to consider increasing these two settings.
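If you want to check whether either queue is actually overflowing, one option (a sketch; the exact wording of the netstat summary lines varies between kernel versions) is to look at the kernel's listen-queue statistics and at the backlog of the listening sockets:

$ netstat -s | grep -i -E "listen|overflow"   # e.g. "times the listen queue of a socket overflowed"
$ ss -lnt                                     # for listening sockets, Send-Q is the configured backlog, Recv-Q the current accept-queue length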
Since Nginx is used as a proxy, each TCP connection to an upstream web service occupies an ephemeral (local) port, so we need to adjust the ip_local_port_range parameter. Edit the /etc/sysctl.conf file and add the following content:
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_local_reserved_ports = 8080,8081,9000-9010
The ip_local_reserved_ports parameter specifies reserved ports that are excluded from ephemeral allocation. This prevents a service port from being taken as an ephemeral port, which would leave the service unable to start.
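After running sysctl -p again, you can verify the new range and get a rough feel for how many ephemeral ports are currently tied up (a sketch using standard iproute2/procps commands):

$ sysctl net.ipv4.ip_local_port_range net.ipv4.ip_local_reserved_ports
$ ss -tan state time-wait | wc -l   # rough count of local ports held in TIME_WAIT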
Nginx parameter optimization mainly revolves around the nginx.conf configuration file; all of the directives discussed below go into this file, so this will not be repeated for each of them.
An important reason for Nginx’s powerful performance is that it adopts a multi-process non-blocking I/O model, so we must make good use of this:
worker_processes: by default, Nginx has only one master process and one worker process, so we need to change this. The value can be set to a specific number or to auto, which means the number of CPU cores on the system. Raising the worker count beyond the number of cores makes the processes compete for CPU and causes unnecessary context switches, so we simply set it to the number of cores: worker_processes auto
worker_connections: the number of concurrent connections each worker can handle. The default value of 512 is not really enough, so we increase it appropriately: worker_connections 4096
Nginx supports the following I/O multiplexing methods for handling connections: select, poll, kqueue, epoll, rtsig, /dev/poll, eventport. Different operating systems use different mechanisms; on Linux, epoll is the most efficient.
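Putting these three points together, a minimal sketch of the corresponding nginx.conf fragment (values as discussed above; adjust to your own hardware) could look like this:

worker_processes auto;

events {
    use epoll;                 # the most efficient multiplexing method on Linux
    worker_connections 4096;   # concurrent connections per worker
}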
To avoid Nginx frequently establishing and tearing down connections to the upstream web services, we can enable the keepalive (persistent connection) feature supported since HTTP/1.1, which significantly reduces CPU and network overhead. In our practice it was also the single biggest performance improvement. keepalive must be used together with proxy_http_version and proxy_set_header. A reference configuration is as follows:
upstream BACKEND {
    keepalive 300;
    server 127.0.0.1:8081;
}

server {
    listen 8080;
    location / {
        proxy_pass http://BACKEND;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
Note that keepalive here is neither a timeout nor the size of a connection pool. The official documentation explains it as follows:
The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed.
In other words, it means the maximum number of idle persistent connections: idle connections beyond this number are closed. When the request rate is stable and smooth, the number of idle persistent connections is very small (close to 0), but in reality the request rate is never perfectly smooth. When the number of requests fluctuates, the number of idle persistent connections fluctuates too:
When the number of idle persistent connections exceeds the configured value, the excess connections are closed and recycled;
When there are not enough persistent connections, new ones are established.
If this value is too small, the connection pool will constantly recycle, allocate, and recycle connections again. To avoid this, adjust the value to your actual situation. In our case, the target QPS is 6000 and the web service response time is about 200 ms, so roughly 6000 × 0.2 s ≈ 1200 persistent connections are needed; taking the keepalive value as 10%–30% of that number is enough, so we use 300. If you do not want to do the calculation, setting it directly to 1000 also works.
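As a rough way to observe how many connections this host is actually holding to the upstream (a sketch that assumes the upstream address 127.0.0.1:8081 from the reference configuration above):

$ ss -tn state established dst 127.0.0.1:8081 | tail -n +2 | wc -l   # skip the header line, then count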
Writing logs has a relatively high I/O overhead. Fortunately, Nginx supports log buffering, and we can use this feature to lower the frequency of writes to the log file and thereby improve performance. The buffer and flush parameters are used together to control the buffering behaviour:
access_log /var/logs/nginx-access.log buffer=64k gzip flush=1m
Here, buffer specifies the buffer size: when the buffered data reaches the size given by buffer, Nginx writes the buffered log entries to the file. flush specifies the buffering timeout: when the time given by flush elapses, the buffered log entries are also flushed to the file.
Nginx has a corresponding configuration directive for the file descriptor limit as well: worker_rlimit_nofile. In theory this value should be the value in /etc/security/limits.conf divided by worker_processes, but in practice the descriptors cannot be distributed evenly across the processes, so it is enough to simply set it to the same value as in /etc/security/limits.conf:
worker_rlimit_nofile 1000000;
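To confirm the limit actually applied to a running worker, one way (a sketch; it assumes the standard nginx process name and procfs) is to inspect the worker's limits file:

$ ps -o pid=,cmd= -C nginx                          # list nginx master and worker PIDs
$ grep "Max open files" /proc/<worker_pid>/limits   # replace <worker_pid> with one of the worker PIDs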