There are many ways to configure load balancing in nginx. This article introduces two common configurations and then takes a closer look at the other allocation methods nginx supports.
nginx acts as a reverse proxy in front of back-end web servers such as Apache, nginx, Tomcat, WebLogic, and so on.
When running several back-end web servers, you need to consider file sharing, database sharing, and session sharing. For file sharing you can use NFS, shared storage (FC or IP storage), the Red Hat GFS cluster file system, rsync + inotify file synchronization, and so on; small clusters mostly use NFS. For a content management system where content is published from a single node, using rsync + inotify to synchronize the files to multiple machines is a good choice.
For a small cluster, a single high-performance database server (for example a dual quad-core Xeon with 32/64/128 GB of RAM) is enough; for a large cluster you may want to consider a database cluster. You can use the official MySQL cluster software, or build a MySQL cluster with keepalived + LVS and read/write splitting.
Session sharing is a bigger problem. If nginx uses the ip_hash method, each client IP is pinned to the same back-end server for a period of time, so there is no session-sharing problem to solve. If, on the other hand, requests from one IP are round-robined across multiple servers, sessions do need to be shared: you can share them over NFS, or write them to MySQL or memcached. When the cluster is relatively large, writing sessions to memcached is the usual approach.
How to configure the back-end web servers is not discussed here. The back end may be Apache, nginx, Tomcat, lighttpd, and so on; the front end does not care what the back end is.
First create a proxy.conf file so we can reuse it later. If you configure multiple clusters, writing the common parameters to one file and then including it everywhere is a good approach.
vi /usr/local/nginx/conf/proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
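Depending on the back-end applications, it is common to also put upload-size limits, timeouts, and buffer sizes in this shared file. A minimal sketch of such additional directives (the specific values here are illustrative, not from the original article):

client_max_body_size 10m;        # largest request body (e.g. file upload) accepted from clients
client_body_buffer_size 128k;    # buffer small request bodies in memory
proxy_connect_timeout 90;        # seconds to wait while connecting to a back-end server
proxy_send_timeout 90;           # timeout between two successive writes to the back end
proxy_read_timeout 90;           # timeout between two successive reads from the back end
proxy_buffer_size 4k;            # buffer for the first part (headers) of the back-end response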
Here we discuss two nginx load-balancing methods: weighted round robin (it can also be unweighted, i.e. a 1:1 split) and ip_hash (requests from the same IP are always assigned to a fixed back-end server, which avoids the session problem).
This configuration can be written directly into nginx.conf (if there is only one web cluster). If there are multiple web clusters, it is better to split them into vhost files, one virtual host per cluster (see the sketch below); here I write it in nginx.conf.
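For the multi-cluster case, a minimal sketch of the vhost layout, assuming a conf/vhosts/ directory (the path is illustrative, not from the original):

# nginx.conf (only the relevant part)
http {
    include       mime.types;
    default_type  application/octet-stream;

    # one file per web cluster, each containing its own upstream and server blocks
    include vhosts/*.conf;
}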
The first configuration: weighted round robin. Weights are assigned according to server performance; this example uses a 1:2 split.
upstream lb {
    server 192.168.196.130 weight=1 fail_timeout=20s;
    server 192.168.196.132 weight=2 fail_timeout=20s;
}

server {
    listen 80;
    server_name safexjt.com www.safexjt.com;
    index index.html index.htm index.php;

    location / {
        proxy_pass http://lb;
        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
        include proxy.conf;
    }
}
The second configuration: ip_hash, so that requests from the same IP always go to the same back-end server.

upstream lb {
    ip_hash;
    server 192.168.196.130 fail_timeout=20s;
    server 192.168.196.132 fail_timeout=20s;
}

server {
    listen 80;
    server_name safexjt.com www.safexjt.com;
    index index.html index.htm index.php;

    location / {
        proxy_pass http://lb;
        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
        include proxy.conf;
    }
}
Beyond these two configurations, nginx's upstream module supports several allocation methods. Round robin is the default: each request is assigned to a different back-end server in turn, and a server that goes down is skipped automatically.

upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
Weight: specify the polling probability. The weight is proportional to the share of requests a server receives; use it when the back-end servers have uneven performance.
upstream backserver {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}
ip_hash: each request is assigned according to a hash of the client IP, so each visitor always reaches the same back-end server, which solves the session problem.
upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
fair (third-party module): requests are allocated according to the back-end servers' response times, with shorter response times served first.
upstream backserver {
    server server1;
    server server2;
    fair;
}
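Since fair comes from the third-party nginx-upstream-fair module and must be compiled in, stock nginx (1.3.1 and later) offers the built-in least_conn method as a similar idea: each request goes to the server with the fewest active connections. A minimal sketch:

upstream backserver {
    least_conn;
    server 192.168.0.14;
    server 192.168.0.15;
}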
url_hash (third-party module): requests are distributed according to a hash of the requested URL, so each URL always goes to the same back-end server. This is most effective when the back-end servers are caches.
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
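Note that the hash/hash_method syntax above comes from the third-party upstream hash module. Recent nginx versions (1.7.2 and later) ship a built-in hash directive that covers the same use case; a sketch:

upstream backserver {
    # consistent (ketama-style) hashing keeps most keys on the same server
    # when servers are added or removed
    hash $request_uri consistent;
    server squid1:3128;
    server squid2:3128;
}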
In the server block that should be load balanced, point proxy_pass at the upstream:

proxy_pass http://backserver/;

upstream backserver {
    ip_hash;
    server 127.0.0.1:9090 down;      # down: this server temporarily does not take part in load balancing
    server 127.0.0.1:8080 weight=2;  # weight defaults to 1; the larger the weight, the larger the share of load
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;    # requests go to the backup server only when all non-backup servers are down or busy
}
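One more note on the fail_timeout parameter used in the earlier examples: it works together with max_fails. If a server fails max_fails times within fail_timeout, nginx considers it unavailable for the next fail_timeout before trying it again (the defaults are max_fails=1 and fail_timeout=10s). A sketch with illustrative values:

upstream backserver {
    # mark a server as unavailable for 30s after 3 failed attempts within 30s
    server 192.168.0.14 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.0.15 max_fails=3 fail_timeout=30s;   # weight defaults to 1
}

After editing the configuration, test and reload nginx (for example with nginx -t followed by nginx -s reload) so the new upstream definition takes effect.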