


Configuring nginx for high concurrency: a better approach (code analysis)
This article walks through an nginx configuration tuned for high concurrency, analyzing each directive. I hope you find it a useful reference.
1. The optimization discussed here mainly means tuning the nginx configuration. Generally speaking, the following items in the nginx configuration file are the most effective to optimize:
The number of nginx worker processes. It is recommended to set this according to the number of CPUs; it is generally the same as the number of CPU cores or a multiple of it.
worker_processes 8;
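To pick this value, you can first check how many cores the machine has (the output below assumes an 8-core machine like the one in this example):
[root@test-huanqiu ~]# grep -c processor /proc/cpuinfo
8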
Allocate CPUs to the worker processes. In the example above, 8 processes are bound to 8 CPUs. You can of course write several masks, or bind one process to multiple CPUs, as sketched below.
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
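As a sketch of binding one process to multiple CPUs: with 2 worker processes on a 4-core machine, each mask can cover two cores (in each mask, the rightmost bit is CPU 0):
worker_processes 2;
worker_cpu_affinity 0011 1100;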
The following directive sets the maximum number of file descriptors an nginx process may open. The theoretical value is the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly across workers, so it is best to keep this value consistent with ulimit -n.
worker_rlimit_nofile 65535;
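You can check the current per-process open-file limit with ulimit -n (the 65535 below assumes the limit has already been raised, e.g. in /etc/security/limits.conf):
[root@test-huanqiu ~]# ulimit -n
65535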
Use the epoll I/O model; this model handles asynchronous events efficiently.
use epoll;
The maximum number of connections allowed per worker process. Theoretically, the maximum number of connections an nginx server can handle is worker_processes * worker_connections; in this example that is 8 × 65535 = 524280.
worker_connections 65535;
The http keep-alive timeout, 60s by default. It keeps a client's connection to the server valid for the set time, so that a subsequent request to the server can reuse the connection instead of establishing a new one. Remember not to set this parameter too large! Otherwise many idle http connections will occupy nginx's connection slots and eventually exhaust them.
keepalive_timeout 60;
The buffer size for the client request header. This can be set according to your system's page size: the headers of a request generally do not exceed 1k, but since the system page size is generally larger than 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
client_header_buffer_size 4k;
The following parameter enables a cache for open file descriptors; it is not enabled by default. max specifies the number of cached entries and is recommended to match the number of open files. inactive is how long a file can go unrequested before its cache entry is removed.
open_file_cache max=102400 inactive=20s;
The following specifies how often to check the validity of the cached entries.
open_file_cache_valid 30s;
The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. As in the example above, if a file is not used even once within the inactive time, it is removed.
open_file_cache_min_uses 1;
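Taken together, the three open_file_cache directives above sit in the http block like this:
open_file_cache max=102400 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 1;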
Hide the web server (nginx) version number in response headers and on error pages, which is good for security.
server_tokens off;
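With server_tokens off, the Server response header carries no version number, which you can confirm with curl (the output below is illustrative):
[root@test-huanqiu ~]# curl -sI http://127.0.0.1:8080 | grep -i ^server
Server: nginx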
Enables sendfile(). sendfile() copies data between the disk and a TCP socket (or between any two file descriptors). Before sendfile existed, transmitting a file meant allocating a buffer in user space, copying the data from the file into it with read(), and then writing the buffer to the network with write(). sendfile() instead reads the data from disk into the OS cache and sends it from there; because the copying is done inside the kernel, sendfile() is more efficient than the read()/write() combination with its extra user-space buffer.
sendfile on;
Tell nginx to send the response headers in one packet instead of one after another. That is, packets are not transmitted immediately, but sent in one batch once full, which helps avoid network congestion. (tcp_nopush only takes effect when sendfile is on.)
tcp_nopush on;
Tell nginx not to buffer the data but to send it piece by piece; this disables Nagle's algorithm. Set this when data must be delivered promptly, so that small writes are sent immediately instead of sitting in the buffer while the client waits for a response.
tcp_nodelay on;
For example:
http {
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    ......
}
A bit more on client_header_buffer_size: the headers of a request generally do not exceed 1k, but since the system page size is generally larger than 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE:
[root@test-huanqiu ~]# getconf PAGESIZE
4096
client_header_buffer_size can also be set above 4k, but its value should be an integral multiple of the system page size.
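For the occasional request whose headers do exceed this buffer (long cookies or URLs), nginx falls back to large_client_header_buffers, which also appears in the full configuration below. A sketch keeping both values at multiples of a 4k page size:
client_header_buffer_size 4k;
large_client_header_buffers 4 8k;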
Specify a cache for open files; it is not enabled by default. max specifies the number of cached entries and is recommended to match the number of open files. inactive is how long a file can go unrequested before its cache entry is deleted.
open_file_cache max=65535 inactive=60s;
The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to remain cached. As in the example above, if a file is not used even once within the inactive time, it is removed.
open_file_cache_min_uses 1;
Specify how often to check the validity of the cached entries.
open_file_cache_valid 80s;
The following is a simple nginx configuration file that has been used in practice:
[root@dev-huanqiu ~]# cat /usr/local/nginx/conf/nginx.conf
user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 65535;
}

http {
    include mime.types;
    default_type application/octet-stream;
    charset utf-8;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;

    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 16k;
    fastcgi_buffers 16 16k;
    fastcgi_busy_buffers_size 16k;
    fastcgi_temp_file_write_size 16k;
    fastcgi_cache TEST;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_use_stale error timeout invalid_header http_500;

    open_file_cache max=204800 inactive=20s;
    open_file_cache_min_uses 1;
    open_file_cache_valid 30s;

    tcp_nodelay on;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';

    server {
        listen 8080;
        server_name huan.wangshibo.com;
        index index.php index.htm;
        root /www/html/;

        location /status {
            stub_status on;
        }

        location ~ .*\.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fcgi.conf;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$ {
            expires 30d;
        }

        access_log /www/log/access.log access;
    }
}
2. Several directives related to FastCGI
This directive specifies a path for the FastCGI cache, the directory hierarchy levels, the name and memory size of the key zone, and the inactive deletion time.
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
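With levels=1:2, nginx stores each cached response under a one-character and then a two-character subdirectory taken from the end of the MD5 hash of the cache key, so cached files land in paths like this (the hash shown is made up for illustration):
/usr/local/nginx/fastcgi_cache/c/29/b7f54b2df7773722d382f4809d65029c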
Specify the timeout for connecting to the backend FastCGI server.
fastcgi_connect_timeout 300;
The timeout for sending a request to FastCGI; this is the timeout for transmitting the request to FastCGI after the connection handshake has been completed.
fastcgi_send_timeout 300;
The timeout for receiving the FastCGI response; likewise, this is measured after the connection handshake has been completed.
fastcgi_read_timeout 300;
Specifies the buffer size for reading the first part of the FastCGI response, which contains the response headers. This can be set to the buffer size given by the fastcgi_buffers directive; the directive above allocates one 16k buffer to read the first part of the response. Response headers are usually quite small (under 1k), but if you have specified a buffer size in fastcgi_buffers, nginx will allocate a buffer of that size here as well.
fastcgi_buffer_size 16k;
Specifies how many buffers of what size to allocate locally to buffer FastCGI responses. As configured above, if a page produced by a PHP script is 256k, sixteen 16k buffers are allocated to cache it; if the page is larger than 256k, the part above 256k is cached in the path specified by fastcgi_temp. That is of course unwise for server load, because data is handled faster in memory than on disk. Usually this value should be chosen around the median size of the pages produced by the PHP scripts on your site: if most pages are around 256k, you could set it to 16 16k, 4 64k, or 64 4k, but the latter two are clearly not good choices. If a page is only 32k, with 4 64k one 64k buffer is allocated to cache it, and with 64 4k eight 4k buffers are allocated, whereas with 16 16k two 16k buffers are allocated, which looks more reasonable.
fastcgi_buffers 16 16k;
This directive limits how much buffered data may be busy being sent to the client while the response has not yet been read in full; by default it is twice the fastcgi_buffers buffer size.
fastcgi_busy_buffers_size 32k;
The size of the data blocks used when writing to fastcgi_temp_path; the default is twice the fastcgi_buffers buffer size.
fastcgi_temp_file_write_size 32k;
Enable the FastCGI cache and assign it a name. I find enabling the cache very useful: it can effectively reduce CPU load and prevent 502 errors. But this cache can also cause many problems, because it caches dynamic pages; whether to use it depends on your own needs.
fastcgi_cache TEST;
Specify the cache time for the given response codes. In the example above, 200 and 302 responses are cached for one hour, 301 responses for one day, and everything else for one minute.
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
The minimum number of times a cached item must be used within the inactive period of the fastcgi_cache_path directive. In the example above, if a file is not used even once within 5 minutes, it will be removed.
fastcgi_cache_min_uses 1;
This parameter specifies the cases in which nginx may serve a stale cached response: here, when an error, a timeout, or an invalid header occurs while communicating with the FastCGI server, or when the backend returns HTTP 500.
fastcgi_cache_use_stale error timeout invalid_header http_500;