Nginx's biggest highlights are reverse proxying and load balancing. This article explains Nginx load-balancing configuration in detail.
Load balancing
First, let's briefly look at what load balancing is. Taken literally, it means that N servers share the load evenly, so that no single server goes down from excessive load while another sits idle. The premise of load balancing, then, is that the work is shared by multiple servers: two or more are enough.
Test environment
Since no real servers are available, this test resolves the test domain name locally and runs three CentOS installations in VMware.
Test domain name: a.com
A server IP: 192.168.5.149 (main)
B server IP: 192.168.5.27
C server IP: 192.168.5.126
Deployment ideas
Server A serves as the main server, and the domain name is directly resolved to server A (192.168.5.149). Server A is load balanced to server B (192.168.5.27) and server C (192.168.5.126).
As shown in the picture:
Domain name resolution
Since this is not a real environment and a.com is just a test domain, the resolution of a.com can only be set in the hosts file.
Open: C:\Windows\System32\drivers\etc\hosts
Add the following at the end:
192.168.5.149 a.com
Save and exit, then open a command prompt and ping a.com to see whether the setting took effect.
As the screenshot shows, a.com now resolves successfully to 192.168.5.149.
A server nginx.conf settings
Open nginx.conf. The file location is in the conf directory of the nginx installation directory.
Add the following code to the http section
<code>
upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
}

server {
    listen 80;
    server_name a.com;
    location / {
        proxy_pass http://a.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
</code>
Save and restart nginx
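The upstream block above relies on nginx's default round-robin scheduling, which distributes requests evenly. As a side note (not part of this tutorial's setup), the standard `weight` upstream parameter can skew the distribution toward a stronger machine:

```nginx
# Sketch only: weighted round-robin, assuming server B can take twice the
# traffic of server C. The weight parameter is a standard nginx upstream option.
upstream a.com {
    server 192.168.5.27:80  weight=2;  # receives roughly two of every three requests
    server 192.168.5.126:80 weight=1;
}
```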
B and C server nginx.conf settings
Open nginx.conf and add the following code to the http section
<code>
server {
    listen 80;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}
</code>
Save and restart nginx
Test
To tell which server handles a given request to a.com, I placed an index.html file with different content on servers B and C.
Open a browser and visit a.com. Refreshing repeatedly, you will find that the requests are distributed by the main server (192.168.5.149) to server B (192.168.5.27) and server C (192.168.5.126), achieving load balancing.
B server processes the page
C server processes the page
What if one of the servers goes down?
When a server goes down, will access be affected?
Let's look at an example first. Building on the setup above, suppose server C (192.168.5.126) goes down (since real downtime can't be simulated here, I simply shut down server C), then visit the site again.
Visit results:
We find that although server C (192.168.5.126) is down, site access is unaffected. With load balancing, you therefore don't have to worry about one failed machine dragging down the entire site.
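This failover behavior comes from nginx's passive health checks: when a connection to a backend fails, nginx passes the request to the next server in the upstream. The thresholds can be tuned with the standard `max_fails` and `fail_timeout` upstream parameters (shown here only as a sketch, not part of the original setup):

```nginx
# Sketch: after 3 failed attempts within 30s, consider the server
# unavailable for the next 30s (nginx defaults are max_fails=1, fail_timeout=10s).
upstream a.com {
    server 192.168.5.126:80 max_fails=3 fail_timeout=30s;
    server 192.168.5.27:80  max_fails=3 fail_timeout=30s;
}
```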
What if b.com also needs to set up load balancing?
It's very simple, just like the a.com settings. As follows:
Assume that the main server IP of b.com is 192.168.5.149, and the load is balanced to 192.168.5.150 and 192.168.5.151 machines
Now resolve the domain b.com to 192.168.5.149.
Add the following code to nginx.conf of the main server (192.168.5.149):
<code>
upstream b.com {
    server 192.168.5.150:80;
    server 192.168.5.151:80;
}

server {
    listen 80;
    server_name b.com;
    location / {
        proxy_pass http://b.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
</code>
Save and restart nginx
Set up nginx on the 192.168.5.150 and 192.168.5.151 machines, open nginx.conf and add the following code at the end:
<code>
server {
    listen 80;
    server_name b.com;
    index index.html;
    root /data0/htdocs/www;
}
</code>
Save and restart nginx
After completing the above steps, load balancing for b.com is configured.
Can't the main server provide service too?
In the examples above, the main server only load-balances to the other servers. Can the main server itself also be added to the server list, so that a machine isn't wasted purely on forwarding but also participates in serving requests?
Take the three servers in the case above:
A server IP: 192.168.5.149 (main)
B server IP: 192.168.5.27
C server IP: 192.168.5.126
We resolve the domain to server A, and server A forwards to servers B and C, so server A only does forwarding. Now let's have server A provide the site service as well.
Let's analyze first. If we add the main server to the upstream, two situations can arise:
1. The main server forwards to one of the other IPs, and that server handles the request normally;
2. The main server forwards to its own IP, which lands back in the main server's distribution logic; if requests keep being assigned to the local machine, this creates an infinite loop.
How do we solve this? Since port 80 is already used to listen for the load-balanced traffic, this server can no longer use port 80 to handle a.com requests; a new port is needed. So we add the following code to the main server's nginx.conf:
<code>
server {
    listen 8080;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}
</code>
Restart nginx, then enter a.com:8080 in the browser to see whether it is accessible. It works.
Since it is accessible, we can add the main server to the upstream, but with the port changed, as in the following code:
<code>
upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
    server 127.0.0.1:8080;
}
</code>
Here either the main server's IP, 192.168.5.149, or 127.0.0.1 can be added; both refer to the machine itself.
Restart Nginx, then visit a.com again to see whether requests get assigned to the main server.
The main server can now join in serving requests as well.
Finally
1. Load balancing is not unique to nginx; the famous Apache has it too, though its performance may not match nginx's.
2. Multiple servers provide the service, but the domain resolves only to the main server, so the real servers' IPs cannot be obtained simply by pinging the domain, which adds some security.
3. The IPs in the upstream need not be on an internal network; public IPs work too. The classic case, however, is to expose one machine's IP from the LAN to the public network, resolve the domain directly to that IP, and have that main server forward to internal-network server IPs.
4. If one server goes down, the site keeps running normally; Nginx will not forward requests to an IP that is down.
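One more pointer beyond the default round-robin used throughout this article: nginx's built-in `ip_hash` directive pins each client to one backend, which matters when sessions are stored per-server rather than shared. A sketch (a standard nginx option, not part of the setup above):

```nginx
# Sketch: ip_hash keeps requests from the same client IP on the same backend,
# useful when session state is not shared between servers.
upstream a.com {
    ip_hash;
    server 192.168.5.126:80;
    server 192.168.5.27:80;
}
```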
http://www.qttc.net/201208181.html