In Linux, high concurrency describes a situation in which a system receives a large number of operation requests within a short period of time. It mainly occurs when a web system is hit by a concentrated burst of traffic, forcing the system to perform many operations during that period, such as resource requests and database operations.
The operating environment of this tutorial: Linux 7.3 system, Dell G3 computer.
# 1. High Concurrency Concept

## 1.1 Concept

High concurrency is one of the factors that must be considered when designing an Internet distributed system architecture. It usually means designing the system so that it can handle many requests in parallel at the same time. High concurrency arises when a system encounters a large number of operation requests in a short period of time, typically when a web system receives a concentrated burst of accesses (for example, ticket grabbing on 12306, or Tmall's Double Eleven event). When this happens, the system must perform a large number of operations during that period, such as resource requests and database operations.
## 1.2 Related metrics

- Response Time: the time the system takes to respond to a request. For example, if the system takes 200 ms to process an HTTP request, that 200 ms is the system's response time.
- Throughput: the number of requests processed per unit of time.
- QPS (Queries Per Second): the number of requests responded to per second. In the Internet field, the distinction between this metric and throughput is not very sharp.
- Number of concurrent users: the number of users simultaneously making normal use of the system's functions. For example, in an instant-messaging system, the number of users online at the same time reflects, to some degree, the system's number of concurrent users.
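As a toy illustration of how these metrics relate, the shell snippet below (with made-up numbers) computes QPS as a request count divided by the measurement interval:

```shell
#!/bin/sh
# Toy example: computing QPS from a request count and a time window.
# The numbers here are made up purely for illustration.
requests=12000   # requests handled during the interval
seconds=60       # length of the interval in seconds
qps=$((requests / seconds))
echo "QPS: ${qps}"   # prints "QPS: 200"
```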
## 1.3 High-concurrency optimization

- The limit on the maximum number of files a single process can open
- Kernel TCP parameters
- The I/O event allocation mechanism
# 2. Improving the System's Concurrency Capability

## 2.1 Vertical scaling
Improve single-machine processing capability:

- Enhance single-machine hardware performance, for example: increase the number of CPU cores (e.g., to 32 cores), upgrade to a better network card (e.g., 10 GbE), upgrade to a better disk (e.g., SSD), expand disk capacity (e.g., to 2 TB), or expand system memory (e.g., to 128 GB).
- Improve single-machine architectural performance, for example: use caching to reduce the number of I/O operations, use asynchronous processing to increase single-service throughput, or use lock-free data structures to reduce response time.
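When evaluating vertical scaling, it helps to know the machine's current capacity. A small read-only sketch that pulls CPU and memory figures from /proc (Linux only):

```shell
#!/bin/sh
# Inspect single-machine capacity by reading /proc (works on any Linux).
cores=$(grep -c '^processor' /proc/cpuinfo)   # number of logical CPU cores
echo "CPU cores: ${cores}"
grep MemTotal /proc/meminfo                   # total system memory in kB
```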
## 2.2 Horizontal scaling

Add more servers; system performance can then be scaled out roughly linearly.
## 2.3 Common layered Internet architecture

(1) Client layer: the typical caller is a browser or a mobile app
(2) Reverse proxy layer: the system entrance; performs reverse proxying
(3) Site application layer: implements the core application logic and returns HTML or JSON
(4) Service layer: present if the system has been split into services
(5) Data cache layer: a cache that accelerates access to storage
(6) Data database layer: the database for durable data storage
## 2.4 Horizontal scaling architecture

Horizontal scaling can be applied at each layer of the architecture, starting with horizontal scaling of the reverse proxy layer.
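As one possible sketch of horizontal scaling behind the reverse proxy layer, an nginx upstream block can distribute requests across several site-layer servers; the addresses below are hypothetical:

```nginx
# Hypothetical nginx reverse-proxy fragment: requests arriving on port 80
# are load-balanced across two site-layer servers. Adding entries to the
# upstream block scales capacity out horizontally.
upstream site_layer {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://site_layer;
    }
}
```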
# 3. Improving Concurrency on a Single Linux Server

## 3.1 iptables

- Close the iptables firewall and prevent the kernel from loading the iptables module
## 3.2 Maximum number of open files per process

The default maximum number of files a single process can open is 1024. Raise it for the current shell session with:

```
ulimit -n 65535
```
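Before changing anything, the limits currently in effect can be checked with a read-only sketch; note that `ulimit -n` only affects the current session:

```shell
#!/bin/sh
# Show the current soft and hard limits on open files for this shell.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft limit: ${soft}"
echo "hard limit: ${hard}"
```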
To make the change persistent, modify the soft and hard limits on the number of open files for users:

```
vim /etc/security/limits.conf
```

```
* soft nofile 65535   # '*' applies the limit to all users
* hard nofile 65535
```
Ensure /etc/security/limits.conf is read after a user completes login:

```
vim /etc/pam.d/login
```

```
session required /lib/security/pam_limits.so
```
## 3.3 Kernel TCP parameters

After a TCP connection is closed, it remains in the TIME_WAIT state for a period of time before the port is released. When there are too many concurrent requests, a large number of TIME_WAIT connections accumulate; if they are not released promptly, they tie up a large number of ports and server resources.

```
# Count connections in the TIME_WAIT state
netstat -n | grep tcp | grep TIME_WAIT | wc -l
```
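On systems without netstat, the same count can be taken directly from /proc/net/tcp, where state code 06 denotes TIME_WAIT. This is a sketch and counts IPv4 sockets only:

```shell
#!/bin/sh
# Count IPv4 sockets in TIME_WAIT by reading /proc/net/tcp directly.
# Field 4 ("st") holds the socket state; 06 is TIME_WAIT.
count=$(awk 'NR > 1 && $4 == "06"' /proc/net/tcp | wc -l)
echo "TIME_WAIT sockets: ${count}"
```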
Tune the relevant kernel parameters:

```
vim /etc/sysctl.conf
```

```
net.ipv4.tcp_syncookies = 1    # enable SYN cookies: when the SYN wait queue overflows, use cookies to handle it, defending against small-scale SYN attacks (default 0 = off)
net.ipv4.tcp_tw_reuse = 1      # allow TIME_WAIT sockets to be reused for new TCP connections (default 0 = off)
net.ipv4.tcp_tw_recycle = 1    # enable fast recycling of TIME_WAIT sockets (default 0 = off)
net.ipv4.tcp_fin_timeout = 30  # shorten the system's default FIN timeout
```

Note that `tcp_tw_recycle` is known to break connections from clients behind NAT and was removed entirely in Linux 4.12; avoid it on modern kernels.
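After editing /etc/sysctl.conf, the settings are applied with `sysctl -p` (root required). Current values can be verified without root by reading /proc/sys, as in this sketch:

```shell
#!/bin/sh
# Read a current TCP setting from /proc/sys (no root needed).
fin_timeout=$(cat /proc/sys/net/ipv4/tcp_fin_timeout)
echo "tcp_fin_timeout: ${fin_timeout}"
# To apply edits made in /etc/sysctl.conf (requires root):
#   sysctl -p
```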
The above is the detailed content of "What does Linux high concurrency mean?". For more information, please follow other related articles on the PHP Chinese website!