
Detailed explanation of Nginx load balancing configuration example

WBOY
Release: 2016-08-08 09:21:03


Load balancing is something every high-traffic website needs. This article walks through how to configure load balancing on an Nginx server; I hope it is helpful to anyone who needs it.

Load balancing

First, let's briefly understand what load balancing is. Taken literally, it means that N servers share the load evenly, so that no server goes down from carrying too much load while another sits idle. The prerequisite for load balancing, then, is having multiple servers; two or more are enough.

Test environment
Since no real servers are available, this test maps the test domain name in the hosts file and uses three CentOS systems installed in VMware.

Test domain name: a.com

A server IP: 192.168.5.149 (main)

B server IP: 192.168.5.27

C server IP: 192.168.5.126

Deployment idea
Server A is used as the main server: the domain name is resolved directly to server A (192.168.5.149), and server A load-balances requests to server B (192.168.5.27) and server C (192.168.5.126).


Domain name resolution

Since this is not a real environment and a.com is only a test domain, its resolution has to be set in the hosts file.

Open: C:\Windows\System32\drivers\etc\hosts

Add the following line at the end:

192.168.5.149 a.com

Save and exit, then open a command prompt and ping the domain to see whether the setting works.
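For example, a minimal check from the Windows command prompt, assuming the hosts entry above has been saved:

ping a.com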

The ping result shows that a.com now resolves to the IP 192.168.5.149.

A server nginx.conf settings
Open nginx.conf (the file is in the conf directory of the Nginx installation directory) and add the following code to the http section.

upstream a.com {
  server 192.168.5.126:80;
  server 192.168.5.27:80;
}

server{
  listen 80;
  server_name a.com;
  location / {
    proxy_pass http://a.com;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

Save and restart nginx
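The exact restart command depends on how Nginx was installed; a typical sequence, assuming a source install under /usr/local/nginx, might be:

/usr/local/nginx/sbin/nginx -t          # test the configuration for syntax errors
/usr/local/nginx/sbin/nginx -s reload   # reload the configuration without stopping the server

nginx -t and nginx -s reload are standard Nginx command-line options; a full restart via your init script or service manager works just as well.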

B and C server nginx.conf settings
Open nginx.conf and add the following code to the http section

server{
  listen 80;
  server_name a.com;
  index index.html;
  root /data0/htdocs/www;
}

Save and restart nginx

Test

When accessing a.com, in order to tell which server handled the request, I put an index.html file with different content on server B and server C.
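For example, since both backends use root /data0/htdocs/www, the two test pages could be created like this (the page wording is just an illustration):

# on server B (192.168.5.27)
echo "B server 192.168.5.27" > /data0/htdocs/www/index.html

# on server C (192.168.5.126)
echo "C server 192.168.5.126" > /data0/htdocs/www/index.html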

Open a browser and visit a.com. After a few refreshes you will see that the requests are distributed by the main server (192.168.5.149) to server B (192.168.5.27) and server C (192.168.5.126), achieving the load-balancing effect.
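The distribution can also be observed from the command line, assuming the machine running the test maps a.com to 192.168.5.149 in its own hosts file:

for i in 1 2 3 4 5 6; do curl -s http://a.com/; done

With the default round-robin policy, the output alternates between the B and C test pages.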


B server processing page

C server processing page

What if one of the servers goes down?

When a server goes down, will access be affected?

Let's look at an example first. Based on the setup above, assume that server C (192.168.5.126) goes down (since real downtime is hard to simulate, I simply shut down server C), and then visit a.com again.


Access results:

We found that although server C (192.168.5.126) was down, website access was not affected. This way, with load balancing in place, you don't have to worry about one downed machine dragging the whole site down.
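By default Nginx retries a failed request on the next server in the upstream. How quickly a backend is marked unavailable can be tuned with the standard max_fails and fail_timeout parameters; the values below are illustrative, not part of the original configuration:

upstream a.com {
  server 192.168.5.126:80 max_fails=3 fail_timeout=30s;
  server 192.168.5.27:80 max_fails=3 fail_timeout=30s;
}

After max_fails failed attempts within fail_timeout, Nginx stops sending requests to that server for the fail_timeout period and then tries it again.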

What if b.com also needs to set up load balancing?

It’s very simple, just like a.com settings. As follows:

Assume that the main server for b.com is 192.168.5.149, and the load is balanced to the 192.168.5.150 and 192.168.5.151 machines.


Now resolve the domain name b.com to 192.168.5.149 IP.

Add the following code to nginx.conf of the main server (192.168.5.149):

upstream b.com {
  server 192.168.5.150:80;
  server 192.168.5.151:80;
}

server{
  listen 80;
  server_name b.com;
  location / {
    proxy_pass http://b.com;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
Save and restart nginx

Set up Nginx on the 192.168.5.150 and 192.168.5.151 machines: open nginx.conf and add the following code at the end:

server{
listen 80;
server_name b.com;
index index.html;
root /data0/htdocs/www;
}

Save and restart nginx

Done. With these steps, the load balancing configuration for b.com is complete.


Can't the main server provide services too?
In the examples above we load-balanced from the main server to the other servers. Can the main server itself be added to the server list, so that it is not wasted as a pure forwarder but also takes part in serving requests?

Three servers in the above case:

A server IP: 192.168.5.149 (main)

B server IP: 192.168.5.27

C server IP: 192.168.5.126


We resolve the domain name to server A, which then forwards requests to server B and server C, so server A only performs a forwarding role. Now let's have server A provide the site as well.

Let’s analyze it first. If you add the main server to upstream, the following two situations may occur:

1. The main server forwards the request to another IP, and that server handles it normally;

2. The main server forwards the request to its own IP, which comes back to the main server to be distributed again; if it keeps getting assigned to the local machine, this causes an infinite loop.

How do we solve this problem? Because port 80 is already used to listen for the load-balanced traffic, it can no longer be used on this server to handle a.com requests directly, so a new port is needed. We therefore add the following code to the nginx.conf of the main server:

server{
  listen 8080;
  server_name a.com;
  index index.html;
  root /data0/htdocs/www;
}

Restart Nginx, enter a.com:8080 in the browser and check whether it can be accessed. The result: it can be accessed normally.
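The same check can be done from the command line, assuming the testing machine maps a.com to 192.168.5.149 in its hosts file:

curl http://a.com:8080/

The response should be the index.html served directly by server A.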

Since it can be accessed normally, we can add the main server to the upstream, but with the port changed, as follows:


upstream a.com {
  server 192.168.5.126:80;
  server 192.168.5.27:80;
  server 127.0.0.1:8080;
}

Here the main server can be added as either 192.168.5.149 or 127.0.0.1; both refer to the machine itself.
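If you would rather have server A, which is also doing the forwarding, receive less traffic than B and C, the standard weight parameter can be added to the upstream servers; the weights below are only an illustration, not part of the original configuration:

upstream a.com {
  server 192.168.5.126:80 weight=2;
  server 192.168.5.27:80 weight=2;
  server 127.0.0.1:8080 weight=1;
}

With these weights, roughly two out of every five requests go to each of B and C and one to the local 8080 server.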

Restart Nginx, and then visit a.com to see if it will be assigned to the main server.

The main server can also join the service normally.


Finally
1. Load balancing is not something unique to Nginx; the famous Apache has it too, but its performance may not be as good as Nginx's.

2. Multiple servers provide the service, but the domain name only resolves to the main server, so the real backend server IPs cannot be discovered by pinging the domain, which adds a degree of security.

3. The IPs in upstream do not have to be on the internal network; external IPs can also be used. The classic setup, however, is to expose only one LAN IP to the outside, resolve the domain name directly to that IP, and have the main server forward requests to the intranet server IPs.

4. If a certain server is down, it will not affect the normal operation of the website. Nginx will not forward the request to the down IP.

Original address:

http://www.php100.com/html/program/nginx/2013/0905/5525.html

The above is a detailed explanation of an Nginx load balancing configuration example. I hope it is helpful to friends who are interested in PHP tutorials.
