Adjusting the limit on the maximum number of socket connections for web servers and cache servers under high concurrency:
1. Change the limit on the maximum number of files that a user process can open.
Effective immediately: ulimit -n xxx
Effective permanently:
echo "ulimit-HSn65536">>/etc/rc.local
echo "ulimit-HSn65536">>/root/.bash_profile
ulimit-HSn65536
2. Change the network kernel's limit on the maximum number of TCP connections.
/etc/sysctl.conf
1. Modify the limit on the maximum number of files a user process can open
On the Linux platform, whether you are writing a client program or a server program, the highest number of concurrent TCP connections you can handle is limited by the number of files the system allows a single user process to open at the same time (the system creates a socket handle for every TCP connection, and every socket handle is also a file handle). For example, a redis program running as a single process can by default open only 1024 files, so it can hold at most about 1024 TCP connections (1024 TCP connections = 1024 socket handles = 1024 file handles).
You can use the ulimit command to check the limit on the number of files the system allows the current user process to open:
$ ulimit -n
1024
This means that each process of the current user may open at most 1024 files at the same time. From these 1024 files you must subtract the files every process has to keep open anyway, such as standard input, standard output, standard error, the server's listening socket, and the Unix domain sockets used for inter-process communication, which leaves roughly 1024 - 10 = 1014 file descriptors for client socket connections. In other words, by default a Linux-based communication program can support at most about 1014 concurrent TCP connections.
For a communication program that needs to support more concurrent TCP connections, you must change the limits Linux places on the number of files the current user's processes may open at the same time. There are two limits:
Soft limit: the limit Linux actually enforces on the number of files a user may open at the same time, within what the current system can bear;
Hard limit: the maximum number of simultaneously open files the system can support, estimated from the hardware resources (mainly system memory).
The soft limit must be less than or equal to the hard limit.
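To see the two limits separately for the current shell, you can use the -S and -H options of the bash ulimit builtin, for example:
ulimit -Sn    # current soft limit on the number of open files
ulimit -Hn    # current hard limit on the number of open files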
The easiest way to change the limit on the maximum number of files opened by a single process is to use the ulimit command:
[speng@as4 ~]$ ulimit -n <file_num>
In the above command, <file_num> is the maximum number of files a single process is allowed to open. If the system responds with something like "Operation not permitted", the change failed, because the value you specified exceeds the Linux system's soft or hard limit on the number of files the user may open. In that case you have to raise the system's soft and hard limits on the number of open files for that user.
The first step is to change the /etc/security/limits.conf file and add the following lines to the file:
speng soft nofile 10240
speng hard nofile 10240
Here speng specifies the user whose open-file limit is being changed; the '*' sign can be used instead to change the limit for all users.
soft or hard specifies whether the soft limit or the hard limit is being changed, and 10240 is the new limit value, that is, the maximum number of open files (note that the soft limit value must be less than or equal to the hard limit). Save the file after making the changes.
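For example, a hypothetical pair of entries that raises the limit for all users to 65535 instead of only for speng would look like this (the value 65535 is just an illustration, not a recommendation from the text above):
* soft nofile 65535
* hard nofile 65535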
The second step is to change the /etc/pam.d/login file and add the following lines to the file:
session required /lib/security/pam_limits.so
This tells Linux to call the pam_limits.so module after the user logs in, in order to set the system's limits on the various resources the user may use (including the limit on the number of files the user may open). The pam_limits.so module reads these limit values from the /etc/security/limits.conf file. Save this file after making the change.
The third step is to check the Linux system-level limit on the maximum number of open files, use the following command:
[speng@as4 ~]$ cat /proc/sys/fs/file-max
12158
This shows that this Linux system allows at most 12158 files to be open at the same time (counting the files opened by all users together), which is the Linux system-level hard limit. No user-level limit on the number of open files should exceed this value. This system-level hard limit is normally the optimal value the kernel computes at boot time from the hardware resources, and it should not be changed unless you really need to set a user-level limit above it.
The way to change this hard limit is to change the /etc/rc.local script and add the following line to the script:
echo 22158 > /proc/sys/fs/file-max
This forces Linux to set the system-level hard limit on the number of open files to 22158 after boot. Save this file after the change.
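As an alternative sketch that is not part of the steps above: on systems where the sysctl mechanism is available, the same system-wide limit can be set through /etc/sysctl.conf, which also survives reboots without editing rc.local:
# in /etc/sysctl.conf
fs.file-max = 22158
# then apply it with:
sysctl -p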
After completing the above steps, restart the system. Under normal circumstances the maximum number of files a single process of the specified user may open at the same time is now set to the chosen value. If after restarting the limit reported by the ulimit -n command is still lower than the value set in the steps above, it is probably because a ulimit -n command in the user login script /etc/profile has already limited the number of files the user may open. Since a value newly set with ulimit -n can only be less than or equal to the value set by a previous ulimit -n call, it is impossible to use this command to raise the limit again in the same session.
So if this problem exists, open the /etc/profile script and check whether it uses ulimit -n to limit the maximum number of files the user may open. If it does, delete that line or change the value to an appropriate one, save the file, and have the user log out and log in again. With these steps, the system's limits on the number of open files are lifted for a communication program that has to handle high-concurrency TCP connections.
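To find such a line quickly, you can search the login scripts, for example:
grep -n "ulimit" /etc/profile ~/.bash_profile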
2. Modify the network kernel’s restrictions on TCP connections
When writing a client communication program on Linux that supports high-concurrency TCP connections, you may find that even though the system's limit on the number of files a user may open has been lifted, new TCP connections can no longer be established once the number of concurrent TCP connections rises to a certain level. There are several possible causes.
The first possible cause is that the Linux network kernel limits the range of local port numbers. In this case, further analysis of why the TCP connection cannot be established shows that the problem is that the connect() call returns failure, and the system error message is "Can't assign requested address". If you monitor the network with the tcpdump tool at the same time, you will find no SYN packets from the client at all when the connection is attempted. This indicates that the problem lies in a limit inside the local Linux kernel.
In fact, the root cause is that the TCP/IP protocol implementation in the Linux kernel restricts the range of local port numbers used by client TCP connections (for example, the kernel may limit local ports to the range 1024-32768). When too many client TCP connections exist in the system at the same time, each of them occupies a unique local port number (within the system's local port range). If the existing client connections have used up all the local ports, no local port can be allocated for a new client connection; in that situation connect() returns failure and the error message is set to "Can't assign requested address".
You can inspect this control logic in the Linux kernel source. Taking the Linux 2.6 kernel as an example, look at the following function in the file tcp_ipv4.c:
static int tcp_v4_hash_connect(struct sock *sk)
Note how the variable sysctl_local_port_range is accessed in the function above. sysctl_local_port_range is initialized in the following function in the file tcp.c:
void __init tcp_init(void)
The local port range compiled into the kernel by default may be too small, so this local port range limit needs to be enlarged.
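Before changing it, you can look at the range the running kernel is actually using, for example:
cat /proc/sys/net/ipv4/ip_local_port_range
# or, equivalently:
sysctl net.ipv4.ip_local_port_range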
Step 1: edit the /etc/sysctl.conf file and add the following line:
net.ipv4.ip_local_port_range = 1024 65000
This sets the system's local port range to 1024-65000. Note that the minimum value of the local port range must be greater than or equal to 1024, and the maximum value must be less than or equal to 65535. Save the file after the change.
Step 2: run the sysctl command:
[speng@as4 ~]$ sysctl -p
If the system reports no errors, the new local port range has been set successfully. With the range above, a single process can in theory establish more than 60000 TCP client connections at the same time.
The second possible cause of failing to establish TCP connections is that the connection-tracking firewall in the Linux network kernel limits the maximum number of tracked TCP connections. In this case the program appears to block in the connect() call, as if it had hung, and monitoring the network with tcpdump again shows no SYN packets from the client at all. The firewall tracks the state of every TCP connection in the kernel, and the tracking information is kept in the conntrack database in kernel memory, which has a limited size. When too many TCP connections exist, the database fills up, iptables cannot create tracking entries for new TCP connections, and the program blocks in connect(). In that case you must raise the kernel's limit on the maximum number of tracked TCP connections; the method is similar to changing the local port range limit:
Step 1: edit the /etc/sysctl.conf file and add the following line:
net.ipv4.ip_conntrack_max=10240
This sets the system's limit on the maximum number of tracked TCP connections to 10240. Note that this value should be kept as small as possible to save kernel memory.
Step 2: run the sysctl command:
[speng@as4 ~]$ sysctl -p
If the system reports no errors, the new limit on the maximum number of tracked TCP connections has been applied successfully. With the setting above, a single process can in theory maintain more than 10000 TCP client connections at the same time.
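On kernels of that era the current conntrack limit and usage can usually be inspected under /proc; the exact paths differ between kernel versions (newer kernels expose nf_conntrack instead of ip_conntrack), so treat these paths as an assumption to verify on your own system:
cat /proc/sys/net/ipv4/ip_conntrack_max    # configured maximum
wc -l < /proc/net/ip_conntrack             # connections currently being tracked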
3. Use programming techniques that support high-concurrency network I/O
When writing high-concurrency TCP connection applications on Linux, you must use an appropriate network I/O technique and I/O event dispatching mechanism. The available I/O techniques are synchronous I/O, non-blocking synchronous I/O (also called reactive I/O), and asynchronous I/O. Under high TCP concurrency, synchronous I/O will severely block the program unless a separate thread is created for the I/O of each TCP connection.
However, too many threads in turn create enormous overhead from thread scheduling. Therefore, using synchronous I/O under high TCP concurrency is not advisable; instead, consider non-blocking synchronous I/O or asynchronous I/O. Non-blocking synchronous I/O techniques include the select(), poll(), and epoll mechanisms; asynchronous I/O means using AIO.
From the point of view of the I/O event dispatching mechanism, select() is unsuitable because the number of concurrent connections it supports is limited (usually no more than 1024). If performance matters, poll() is also unsuitable: although it can support a larger number of concurrent TCP connections, it uses a polling mechanism, so its efficiency is quite low at high concurrency, and I/O events may be dispatched unevenly, starving the I/O on some TCP connections. epoll and AIO do not have these problems (although the early Linux kernel AIO implementation created a kernel thread for every I/O request, which also had serious performance problems under high-concurrency TCP connections).
In summary, when developing Linux applications that support high-concurrency TCP connections, you should use epoll or AIO to handle the I/O on the concurrent TCP connections; this provides an effective I/O guarantee for supporting a large number of concurrent connections.
Optimizing the kernel parameters in sysctl.conf
/etc/sysctl.conf is the configuration file used to control Linux networking. It is very important for programs that depend on the network, such as web servers and cache servers, and the defaults shipped with RHEL are best adjusted.
Recommended settings (clear the original contents of /etc/sysctl.conf and copy the following in):
cp /etc/sysctl.conf /etc/sysctl.conf.bak
echo "" > /etc/sysctl.conf
vim /etc/sysctl.conf
net.ipv4.ip_local_port_range=1024 65535
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
net.ipv4.tcp_fin_timeout=10
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_window_scaling=0
net.ipv4.tcp_sack=0
net.core.netdev_max_backlog=30000
net.ipv4.tcp_no_metrics_save=1
net.core.somaxconn=10240
net.ipv4.tcp_syncookies=0
net.ipv4.tcp_max_orphans=262144
net.ipv4.tcp_max_syn_backlog=262144
net.ipv4.tcp_synack_retries=2
net.ipv4.tcp_syn_retries=2
This configuration is based on the recommended configuration for the varnish cache server and the recommended system tuning for the SunOne server.
However, the configuration recommended by varnish has a problem: in practice the setting "net.ipv4.tcp_fin_timeout=3" frequently makes pages fail to load, and when visitors use the IE6 browser, after accessing the site for a while all pages stop loading until the browser is restarted. Perhaps networks abroad are simply faster; under our conditions "net.ipv4.tcp_fin_timeout=10" is needed, and with 10s everything works fine (conclusion from actual operation).
After making the changes, run:
sysctl -p /etc/sysctl.conf
sysctl -w net.ipv4.route.flush=1
to make the settings take effect. To be on the safe side, you can also reboot the system.
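To confirm that the key values were actually applied, you can query a few of them directly, for example:
sysctl net.ipv4.ip_local_port_range net.core.somaxconn net.ipv4.tcp_fin_timeout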
Adjust the maximum number of open file handles (maximum TCP connections per process = maximum socket connections per process):
After the network has been tuned, the Linux system must also raise the number of files it is allowed to open in order to support high concurrency; the default of 1024 is far from enough.
Run the following commands:
echo"ulimit-HSn65536">>/etc/rc.local
echo"ulimit-HSn65536">>/root/.bash_profile
ulimit-HSn65535
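After logging in again, you can verify that the new limit is in effect for your shell and, on reasonably recent kernels, for an already running process (the PID 12345 below is only a placeholder):
ulimit -Sn; ulimit -Hn
grep "open files" /proc/12345/limits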
The above is the detailed procedure for adjusting the maximum number of socket connection limits for web servers and cache servers under high concurrency.