nginx load balancing strategy and configuration

Author: 不言
Release: 2023-03-29 18:52:02

This article introduces nginx's load balancing strategies and configuration; it should be a useful reference for anyone who needs it.

Preface

Let's first briefly understand what load balancing is. Taken literally, it means that N servers share the load evenly, avoiding the situation where one server is overloaded and goes down while another sits idle. The premise of load balancing is that it is achieved with multiple servers, that is, two or more. Load balancing is an essential technique for high-concurrency, high-traffic websites.

Environment

A two-machine load balancing deployment is used.

Test domain name: a.com

Server A IP: 10.1.108.31 (main server)

Server B IP: 192.168.112.128

Deployment Idea

Server A acts as the main server. The domain name resolves directly to server A (10.1.108.31), and server A load-balances requests between itself (10.1.108.31) and server B (192.168.112.128).

Project that requires load balancing

A nodejs web project named social, running on port 6602.

Start the social project on server A and server B respectively (starting a nodejs project is not covered here), and install nginx on server A (installation is not covered here; server B does not need nginx).

Deployment

Domain name resolution

Since this is not a real environment, the domain name a.com is used only for testing, so a.com has to be resolved via the hosts file.

Open: C:\Windows\System32\drivers\etc\hosts

Add the following line at the end:

10.1.108.31 a.com

Save and exit, then open a command prompt and ping a.com to check whether the setting succeeded.

The ping output should show a.com resolving to 10.1.108.31.

Configuring nginx.conf

In the conf directory of the nginx installation directory, open the nginx.conf file.

Add the following code in the http section:


upstream a.com {
    server 127.0.0.1:6602;
    server 192.168.112.128:6602;
}

server {
    listen       80;
    location / {
        proxy_pass http://a.com;      # address of the reverse proxy
        proxy_connect_timeout 2;      # proxy server connect timeout, in seconds
    }
}

Note:

With 2 nodes, if one of them goes down, nginx will still send requests to it until they time out with no response, and only then forward them to the other node. By default, nginx then stops sending requests to the failed node for the fail_timeout period (10 seconds by default), after which the cycle repeats. The result is that the site feels alternately fast and slow. Setting proxy_connect_timeout to 2 seconds shortens the timeout so the slow requests are less painful.
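If you want to tune this failure handling explicitly rather than relying on the defaults, max_fails and fail_timeout are standard parameters of the upstream server directive (a minimal sketch; the values are illustrative):

upstream a.com {
    server 127.0.0.1:6602       max_fails=2 fail_timeout=30s;  # take out of rotation after 2 failures, retry after 30s
    server 192.168.112.128:6602 max_fails=2 fail_timeout=30s;
}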

Save the configuration and start nginx, then visit a.com. If requests reach both server A and server B (both return data), the nginx load balancing configuration is successful.
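You can verify the configuration syntax beforehand with nginx -t and apply changes without a full restart using nginx -s reload (both are standard nginx command-line options).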

nginx configuration details

user    www-data;                          # user to run as
worker_processes  4;                       # number of worker processes, usually equal to the number of CPU cores
worker_cpu_affinity 0001 0010 0100 1000;   # bind each process to a CPU core, see annotation 1 below

error_log  /var/log/nginx/error.log;       # global error log
pid        /var/run/nginx.pid;             # PID file

# the events block contains all of nginx's connection-handling settings
events {
    use   epoll;                 # connection processing method, see annotation 2 below
    worker_connections  1024;    # maximum concurrent connections per worker process; must not exceed the
                                 # maximum number of open files, adjustable via worker_rlimit_nofile
    multi_accept on;             # on: a worker accepts all new connections at once; off: one at a time
}

# the http server; its reverse-proxy feature provides the load balancing support
http {
    include       /etc/nginx/mime.types;     # MIME types, defined in the mime.types file
    include       /etc/nginx/conf.d/*.conf;
    include       /etc/nginx/sites-enabled/*;

    default_type  application/octet-stream;  # default file type
    #charset utf-8;                          # default character encoding

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;       # log format

    sendfile        on;   # efficient file transfer: tells nginx whether to call sendfile() (zero copy)
                          # to output files. For ordinary applications set it to on; for disk-I/O-heavy
                          # workloads such as downloads, set it to off to balance disk and network I/O
                          # and lower the system load. Note: if images display incorrectly, set it to off.

    # the next two directives prevent network congestion; for the difference, see annotation 3 below
    tcp_nopush      on;
    tcp_nodelay     on;

    autoindex on;            # directory listings, suitable for a download server; off by default
    keepalive_timeout  30;   # connection timeout, in seconds

    # gzip module settings
    gzip on;                  # enable gzip compression of responses
    gzip_min_length 1k;       # minimum size to compress; only files larger than 1K are compressed
    gzip_buffers 4 16k;       # compression buffers
    gzip_http_version 1.0;    # HTTP version for compression (default 1.1; use 1.0 behind squid 2.5)
    gzip_comp_level 2;        # compression level, 1-9; higher means more compression
    gzip_types text/plain application/x-javascript text/css application/xml;
                              # types to compress; text/html is included by default, so it need not be
                              # listed; listing it does no harm but produces a warning
    gzip_vary on;             # adds a Vary header for proxies: some browsers support compression and some
                              # do not, so the client's HTTP headers decide whether to compress, avoiding
                              # wasted work for clients that cannot decompress
    gzip_disable "MSIE [1-6]\.";  # disable gzip for IE6; some IE6 versions handle gzip so badly that
                                  # pages appear to hang

    client_header_buffer_size    1k;     # request header buffer size
    large_client_header_buffers  4 10k;  # maximum number and size of buffers for reading large client
                                         # request headers; exceeding them returns 414 or 400 to the
                                         # client; the buffers are released when the request finishes

    # the biggest benefit of nginx is load balancing
    # upstream defines a group of proxied servers usable in a proxy_pass directive;
    # the default balancing method is round-robin
    upstream mysvr {
        # the list of load-balanced servers
        # the server directive specifies a backend server's name and parameters
        server 192.168.8.1x:80  weight=5;
        server 192.168.8.2x:80  weight=1;
        server 192.168.8.3x:80  weight=6;
    }

    # inside a server block, proxy_pass can point at the upstream cluster defined above
    server {
        listen       80;               # listen on port 80 of 192.168.8.x
        server_name  192.168.8.x;
        # ... (the rest of the server block is omitted in the original article)
    }
}
nginx configuration annotations

1. worker_cpu_affinity

nginx does not take advantage of multi-core CPUs by default; we can add the worker_cpu_affinity parameter to make full use of them. The CPU is the most critical resource for task processing and computation, and the more CPU cores are used, the better the performance.

How to configure worker_cpu_affinity for multi-core CPUs, with examples:

2-core CPU, 2 processes:

worker_processes 2;
worker_cpu_affinity 01 10;

01 means using the first CPU core and 10 the second. worker_cpu_affinity 01 10; thus starts two processes: the first process is bound to the first CPU core and the second process to the second.

2-core CPU, 4 processes:

worker_processes 4;
worker_cpu_affinity 01 10 01 10;

Four processes are started, shared across the 2 CPU cores.

4-core CPU, 4 processes:

worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;

0001 means using the first CPU core, 0010 the second, and so on.

4-core CPU, 2 processes:

worker_processes 2;
worker_cpu_affinity 0101 1010;

0101 means using the first and third cores, 1010 the second and fourth; the 2 processes span all four cores.

worker_cpu_affinity is configured in /etc/nginx/nginx.conf. A 2-core mask has 2 digits (01), a 4-core mask 4 digits (0001), an 8-core mask 8 digits (00000001); the mask has as many digits as there are cores, with 1 meaning the core is used and 0 meaning it is not.

8-core CPU, 8 processes:

worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

00000001 means using the first CPU core, 00000010 the second, and so on. Going beyond 8 worker processes brings no further performance improvement and reduces stability, so 8 processes are enough.

2. Connection processing methods

nginx supports multiple connection processing methods; which one is used depends on the operating system. When the system supports several, nginx automatically selects the most efficient one. If necessary, the desired method can be specified via the use directive, as in the sketch below.
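For example, to pin the method explicitly (a minimal sketch; use epoll assumes a Linux 2.6+ kernel, as described below):

events {
    use   epoll;               # force epoll instead of letting nginx auto-select
    worker_connections 1024;   # illustrative value
}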

select:

1. Socket count limit: the number of sockets this method can handle is determined by FD_SETSIZE; the kernel default is 32*32 = 1024.
2. Operation overhead: scheduling is completed by traversing all FD_SETSIZE (1024) sockets; no matter which sockets are active, the whole set is traversed every time.

poll:

1. Socket count essentially unlimited: in this method the fd list for the sockets is kept in an array, with no size limit (4k by default).
2. Operation overhead: same as select.

epoll (Linux 2.6 or above):

1. Socket count unlimited: same as poll.
2. No traversal overhead: based on callbacks provided by the kernel; when a socket becomes active, the kernel invokes the socket's callback, so no traversal polling is needed.

kqueue:

Not much different from epoll; the principle is the same. It is used on the operating systems FreeBSD 4.1, OpenBSD 2.9, NetBSD 2.0, and Mac OS X.

Why the select model is inefficient: the inefficiency is built into the definition of select, independent of how the operating system implements it. Any kernel implementing select must do a round-robin traversal to learn the state of the sockets, which consumes CPU. Additionally, when you have a large set of sockets, even though only a small portion is "active" at any one time, every call must fill all the sockets into an FD_SET, which also consumes CPU; and after select returns, a "context mapping" may be needed to dispatch to the business logic, which has a further performance impact. select is therefore less efficient than epoll. epoll's ideal scenario is a large number of sockets with relatively low activity.

3. The difference between tcp_nodelay and tcp_nopush

tcp_nodelay

nginx's tcp_nodelay option causes the TCP_NODELAY option to be set when a new socket is opened.

The situation it addresses: a terminal application sends a packet for every operation, and in the typical case a packet carries one byte of data plus a 40-byte header, a 4000% overhead that can easily congest the network. To avoid this, the TCP stack waits up to 0.2 seconds before sending, so instead of emitting a packet after every operation it packs the data accumulated during that period into one larger packet. This mechanism is governed by the Nagle algorithm.

The Nagle algorithm later became a standard and was widely adopted across the Internet. It is now the default, but in some situations it is desirable to turn it off.

Now imagine an application that sends requests as small chunks of data. We can either send the data immediately or wait for more data to accumulate and then send it in one batch.

Interactive and client/server applications benefit greatly from sending immediately: if the request is sent at once, the response time is faster. This is achieved by setting the socket option TCP_NODELAY = on, which disables the Nagle algorithm.

The other situation calls for waiting until the amount of data reaches a maximum and then sending it all over the network at once. This transmission style benefits bulk data transfer; a typical application is a file server.

nginx applies TCP_NODELAY to keepalive connections. A keepalive connection stays open after the data is sent and allows more data to be sent through it, saving many socket connections and the three-way handshake each new connection would require, as sketched below.
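Related to this, nginx can also keep persistent connections to the upstream servers (a minimal sketch; keepalive is a standard upstream directive and the value 16 is illustrative; upstream keepalive also requires proxy_http_version 1.1 and a cleared Connection header in the proxy location):

upstream a.com {
    server 127.0.0.1:6602;
    server 192.168.112.128:6602;
    keepalive 16;                        # illustrative: cache up to 16 idle connections per worker
}

server {
    listen 80;
    location / {
        proxy_pass http://a.com;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # clear the Connection header so connections stay open
    }
}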

tcp_nopush

In nginx, tcp_nopush and tcp_nodelay are mutually exclusive. tcp_nopush controls the batch size of data sent at one time: rather than sending after 0.2 seconds of accumulation, packets are sent once they accumulate to a certain size. In nginx, tcp_nopush must be used together with sendfile.
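Putting annotation 3 together, a minimal http-level sketch of the related directives (the values are illustrative):

http {
    sendfile           on;   # required for tcp_nopush to take effect
    tcp_nopush         on;   # accumulate data and send it in full packets
    tcp_nodelay        on;   # applied by nginx to keepalive connections
    keepalive_timeout  30;   # seconds
}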

4. Load balancing

nginx supports the following load balancing mechanisms:

1) round-robin: polling. Requests are distributed across the servers in turn. This is the default.

2) least-connected: fewest connections. The next request is assigned to the server with the fewest active connections.

3) ip-hash: based on the client's IP address. A hash function determines which server receives the next request.

For applications whose requests take longer to complete, least-connected balancing controls the load on the application instances more fairly: nginx does not send requests to a busy server, but distributes them to servers with lighter load. The configuration is as follows:


upstream mysvr {
    least_conn;               # the nginx directive for least-connected balancing
    server 192.168.8.1:3128;
    server 192.168.8.2:80;
    server 192.168.8.3:80;
}
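The ip-hash method described above is configured the same way (a minimal sketch; the directive is ip_hash and the addresses are illustrative):

upstream mysvr {
    ip_hash;                  # route each client IP to the same backend
    server 192.168.8.2:80;
    server 192.168.8.3:80;
}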

The above is the detailed content of nginx load balancing strategy and configuration. For more information, please follow other related articles on the PHP Chinese website!
