
PHP interview question 7: How to configure nginx load balancing

不言
Released: 2023-03-24 09:26:02

This article explains how to configure nginx load balancing, the topic of PHP interview question 7. It has some reference value; friends in need can refer to it.

Load balancing


nginx supports five common load balancing strategies:

1) Round robin (default)
Each request is assigned to a different back-end server in turn, in order of arrival. If a back-end server goes down, it is removed automatically.
2) weight
Specifies the polling probability; a server's weight is proportional to its share of requests. Used when back-end server performance is uneven.
3) ip_hash
Each request is assigned according to the hash of the client IP, so each visitor always reaches the same back-end server, which solves the session problem.
4) fair (third party)
Requests are assigned according to the back-end servers' response times; servers with shorter response times are preferred.
5) url_hash (third party)
Requests are assigned according to the hash of the requested URL, so the same URL always reaches the same back-end server. This is especially effective when the back-end servers are caches.
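As a side note, in modern nginx the url_hash behaviour is available without a third-party module via the built-in hash directive. A sketch (the optional consistent parameter enables consistent hashing, which reduces remapping when servers are added or removed):

```nginx
upstream webname {
    hash $request_uri consistent;
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
}
```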

Configuration method:

Open the nginx.conf file.

Add an upstream node under the http node:

upstream webname {  
  server 192.168.0.1:8080;  
  server 192.168.0.2:8080;  
}

Here webname is a name of your choosing; it is what the proxy URL will reference later. With nothing else added, as in the example above, you get the default round robin: the first request goes to the first server, the second request to the second server, and so on in turn.

upstream webname {  
  server 192.168.0.1:8080 weight=2;  
  server 192.168.0.2:8080 weight=1;  
}

The weight is also easy to understand: the greater the weight, the higher the probability of being accessed. In the example above, server 1 is visited twice for every visit to server 2.
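The 2:1 behaviour can be illustrated with a short Python sketch of smooth weighted round robin, the scheduling approach nginx's round-robin balancing is based on (the server addresses are just the placeholders from the example above):

```python
from collections import Counter

def smooth_wrr(servers, n):
    """Simulate smooth weighted round robin for n picks.
    servers: dict mapping server name -> weight."""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        # Each round, every server gains its weight ...
        for name, weight in servers.items():
            current[name] += weight
        # ... the largest current value wins and pays back the total.
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

picks = smooth_wrr({"192.168.0.1:8080": 2, "192.168.0.2:8080": 1}, 300)
print(Counter(picks))  # exactly 200 vs 100 picks, a 2:1 split
```

Unlike naive weighted polling, the smooth variant interleaves servers (a, b, a, a, b, a, ...) instead of sending bursts to the heavier server.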

upstream webname {  
  ip_hash;  
  server 192.168.0.1:8080;  
  server 192.168.0.2:8080;  
}

The ip_hash configuration is also very simple: just add one line, and all requests from the same IP will go to the same server.
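The idea behind ip_hash can be sketched in a few lines of Python: hash the client address and take it modulo the number of back ends, so the same client always maps to the same server. This is a simplified illustration (real nginx hashes only the first three octets of an IPv4 address, so whole subnets stick to one server):

```python
import hashlib

servers = ["192.168.0.1:8080", "192.168.0.2:8080"]

def pick_server(client_ip):
    """Map a client IP to a fixed backend, ip_hash style (simplified)."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client always lands on the same backend, so session
# data stored on that server keeps working across requests.
assert pick_server("10.0.0.7") == pick_server("10.0.0.7")
```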

Then configure it under the server node:

location /name {  
    proxy_pass http://webname/name/;  
    proxy_http_version 1.1;  
    proxy_set_header Upgrade $http_upgrade;  
    proxy_set_header Connection "upgrade";  
}

In proxy_pass, the webname configured above replaces the original IP address. (The proxy_http_version and Upgrade/Connection headers shown here are only needed when the back end uses WebSocket connections.)

This basically completes the load balancing configuration.
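Putting the pieces together, a minimal sketch of the relevant part of nginx.conf might look like this (the IPs and location name are illustrative):

```nginx
http {
    upstream webname {
        server 192.168.0.1:8080 weight=2;
        server 192.168.0.2:8080;
    }

    server {
        listen 80;

        location /name {
            proxy_pass http://webname/name/;
        }
    }
}
```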

The following is the active/standby configuration, still inside upstream:

upstream webname {  
  server 192.168.0.1:8080;  
  server 192.168.0.2:8080 backup;  
}

Mark a node as backup, and under normal circumstances all requests go to server 1; server 2 is accessed only when server 1 is down or busy. Mark a node as down, and that server no longer participates in the load at all.
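Besides backup and down, the per-server parameters max_fails and fail_timeout control when nginx treats a node as failed. A sketch with illustrative values:

```nginx
upstream webname {
    # After 3 failed attempts within 30s, skip this server for 30s.
    server 192.168.0.1:8080 max_fails=3 fail_timeout=30s;
    server 192.168.0.2:8080 backup;
    server 192.168.0.3:8080 down;
}
```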

Implementation Example

Load balancing is something every high-traffic website needs. Below is a load balancing configuration walkthrough for the nginx server; I hope it helps those who need it.

Load Balancing

First, let's briefly understand what load balancing is. Taken literally, it means that N servers share the load evenly, so there is no situation where one server is down under heavy load while another sits idle. The premise of load balancing is therefore that multiple servers are available, that is, at least two.

Test environment

Since there are no spare servers, this test resolves the test domain name via the hosts file and installs three CentOS systems in VMware.


Test domain name: a.com

A server IP: 192.168.5.149 (main)

B server IP: 192.168.5.27

C server IP: 192.168.5.126

Deployment idea

Server A acts as the main server: the domain name resolves directly to server A (192.168.5.149), and server A load balances to server B (192.168.5.27) and server C (192.168.5.126).


Domain name resolution

Since this is not a real environment and the domain a.com is only for testing, its resolution has to be set in the hosts file.

Open:

C:\Windows\System32\drivers\etc\hosts

Add

192.168.5.149    a.com

at the end, save and exit, then open a command prompt and ping a.com to check whether the setting succeeded.

The ping output shows that a.com now successfully resolves to 192.168.5.149.

A server nginx.conf settings

Open nginx.conf; the file is located in the conf directory of the nginx installation directory.


Add the following code to the http section

upstream a.com { 
      server  192.168.5.126:80; 
      server  192.168.5.27:80; 
} 

server{ 
    listen 80; 
    server_name a.com; 
    location / { 
        proxy_pass        http://a.com; 
        proxy_set_header  Host            $host; 
        proxy_set_header  X-Real-IP        $remote_addr; 
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for; 
    } 
}

Save and restart nginx

B and C server nginx.conf settings

Open nginx.conf and add the following code to the http section:

server{ 
    listen 80; 
    server_name a.com; 
    index index.html; 
    root /data0/htdocs/www; 
}

Save and restart nginx

Test

When accessing a.com, to tell which server handles each request, I put an index.html file with different content on servers B and C.


Open a browser and visit a.com. Refreshing shows that the main server (192.168.5.149) distributes the requests between server B (192.168.5.27) and server C (192.168.5.126), achieving the load balancing effect.

B server processing page

C server processing page

What if one of the servers goes down?

When a server goes down, will access be affected?

Let's look at an example. Based on the setup above, suppose server C (192.168.5.126) goes down (since real downtime cannot easily be simulated, I simply shut down server C), then visit again.

Access results:

We find that although server C (192.168.5.126) is down, website access is not affected. With load balancing, there is no need to worry about one dead machine dragging down the whole site.
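The failover behaviour can be illustrated with a tiny Python sketch: requests round-robin across the servers currently marked alive, and removing one server simply shrinks the rotation (server names are placeholders):

```python
from itertools import islice

def round_robin(servers):
    """Yield servers forever, cycling through whichever are alive.
    servers: dict mapping name -> alive flag (may be mutated by caller)."""
    i = 0
    while True:
        alive = [s for s, up in servers.items() if up]
        yield alive[i % len(alive)]
        i += 1

servers = {"B": True, "C": True}
gen = round_robin(servers)
print(list(islice(gen, 4)))   # B and C alternate: ['B', 'C', 'B', 'C']

servers["C"] = False          # C goes down
print(list(islice(gen, 4)))   # all requests now go to B
```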

What if b.com also needs load balancing?
It's simple: configure it just like a.com, as follows.

Suppose b.com's main server IP is 192.168.5.149, load balanced to the machines 192.168.5.150 and 192.168.5.151.

First, resolve the domain b.com to the IP 192.168.5.149.

Add the following code to nginx.conf on the main server (192.168.5.149):

upstream b.com { 
      server  192.168.5.150:80; 
      server  192.168.5.151:80; 
} 

server{ 
    listen 80; 
    server_name b.com; 
    location / { 
        proxy_pass        http://b.com; 
        proxy_set_header  Host            $host; 
        proxy_set_header  X-Real-IP        $remote_addr; 
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for; 
    } 
}

Save and restart nginx.

Set up nginx on the machines 192.168.5.150 and 192.168.5.151: open nginx.conf and append the following code at the end:

server{ 
    listen 80; 
    server_name b.com; 
    index index.html; 
    root /data0/htdocs/www; 
}

Save and restart nginx.

After these steps, b.com's load balancing configuration is complete.

Can't the main server provide service too?
In the examples above, the main server only balances the load to other servers. Can the main server itself also be added to the server list, so that one machine is not wasted purely on forwarding but participates in serving as well?

Take the three servers from the case above:

A server IP: 192.168.5.149 (main)

B server IP: 192.168.5.27

C server IP: 192.168.5.126

We resolve the domain name to server A, which then forwards to servers B and C, so server A acts only as a forwarder. Now let server A provide the site service as well.

Let's analyze first. If the main server is added to the upstream, two situations may occur:

1. The main server forwards to another IP, and that server handles the request normally;

2. The main server forwards to its own IP, and the request re-enters the main server's distribution logic. If it keeps being assigned to the local machine, an infinite loop results.
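The second case can be sketched as a toy forwarding loop in Python: if the balancer can pick itself on the same port, the request keeps re-entering distribution (a simplified illustration, not real nginx behaviour; names and ports are placeholders):

```python
import random

def forward(balancer, upstream, depth=0, max_depth=10):
    """Simulate the balancer choosing an upstream server. If it chooses
    itself (same address and port), the request re-enters distribution;
    give up after max_depth hops and report a loop with None."""
    if depth >= max_depth:
        return None
    target = random.choice(upstream)
    if target == balancer:
        return forward(balancer, upstream, depth + 1, max_depth)
    return target

# If the balancer itself is the only entry, every request loops forever:
assert forward("A:80", ["A:80"]) is None
# With distinct back ends, requests always land somewhere:
assert forward("A:80", ["B:80", "C:80"]) in {"B:80", "C:80"}
```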

How do we solve this? Since port 80 is already used to listen for load-balanced requests, the server cannot use port 80 again to handle a.com requests; a new port is needed. So we add the following code to the main server's nginx.conf:

server{ 
    listen 8080; 
    server_name a.com; 
    index index.html; 
    root /data0/htdocs/www; 
}

Restart nginx and enter a.com:8080 in the browser to see whether it is accessible. It works normally.

Since it is accessible, we can add the main server to the upstream, with the port changed, as in the following code:

upstream a.com { 
      server  192.168.5.126:80; 
      server  192.168.5.27:80; 
      server  127.0.0.1:8080; 
}

Either the main server's IP 192.168.5.149 or 127.0.0.1 can be used here; both mean accessing the local machine.

Restart nginx, then visit a.com again to see whether requests get assigned to the main server.

The main server now joins in serving requests normally as well.

Finally
1. Load balancing is not unique to nginx; the famous Apache has it too, though its performance may not match nginx's.

2. Multiple servers provide the service, but the domain name resolves only to the main server, so the real servers' IPs cannot be obtained simply by pinging the domain, which adds some security.

3. The IPs in upstream need not be internal; public IPs work too. The classic setup, however, exposes one machine on the LAN to the internet, resolves the domain directly to that IP, and has that main server forward to internal server IPs.

4. If one server goes down, the website keeps running normally, because nginx will not forward requests to an IP that is down.

Related recommendations:

PHP interview question 6: the differences between memcache and redis

PHP interview question 5: how nginx invokes PHP, and the role and working principle of php-fpm

PHP interview question 4: implementing autoload

