Several ways for Nginx to achieve load balancing

小云云
Release: 2023-03-20 18:32:01

What is load balancing

The more requests a server receives per unit of time, the greater the load on it. When the load exceeds the server's capacity, the server will crash. To avoid crashes and give users a better experience, we use load balancing to spread the load across servers.

We can build many servers into a cluster. When a user visits the website, the request first reaches an intermediate server, which picks a less loaded server in the cluster and forwards the request to it. Handling every request this way keeps the load on each server in the cluster roughly balanced, sharing the pressure and avoiding crashes.

Load balancing is implemented using the principle of reverse proxy.
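As a minimal sketch of that principle (the upstream name and address here are illustrative, not from a real deployment): Nginx listens on a public port and reverse-proxies each request to a server chosen from an upstream group.

http {
    upstream backserver {
        server 192.168.0.14;
    }
    server {
        listen 80;
        location / {
            # forward every request to the upstream group
            proxy_pass http://backserver;
        }
    }
}

All the balancing methods below only change how Nginx chooses a server inside the upstream block; the proxy_pass side stays the same.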

Several common methods of load balancing

1. Round robin (default)
Each request is assigned to a different backend server in turn, in chronological order. If a backend server goes down, it is automatically removed from the rotation.

upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}

2. Weight
Specifies the polling probability; the weight is proportional to the share of requests a server receives. Useful when backend server performance is uneven.

upstream backserver {
    server 192.168.0.14 weight=3;
    server 192.168.0.15 weight=7;
}

The higher the weight, the greater the probability of being selected. In the example above, the two servers receive 30% and 70% of the requests respectively.

3. ip_hash
The methods above share a problem: in a load balancing system, each request may be directed to any server in the cluster. If a user logs in on one server and a later request is redirected to a different server, the login session is lost, which is clearly unacceptable.

We can solve this with the ip_hash directive: each request is assigned according to a hash of the client's IP address, so every visitor consistently reaches the same backend server, which solves the session problem.

upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}

4. fair (third party)
Requests are allocated according to the response time of each backend server; servers with shorter response times are preferred. This method requires a third-party module.

upstream backserver {
    server server1;
    server server2;
    fair;
}

5. url_hash (third party)
Requests are distributed according to a hash of the requested URL, so each URL is always directed to the same backend server. This is most effective when the backend servers are caches.

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

Each server in an upstream block can carry status parameters:

1. down: the server temporarily does not participate in load balancing.
2. weight: defaults to 1; the greater the weight, the larger the share of the load.
3. max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: how long the server is paused after max_fails failures.
5. backup: receives requests only when all non-backup servers are down or busy, so this server carries the least load.
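As a sketch, these parameters might be combined like this (the addresses and values are illustrative, not from a real deployment):

upstream backserver {
    # taken out of rotation after 2 failures, for 30s
    server 192.168.0.14 weight=3 max_fails=2 fail_timeout=30s;
    server 192.168.0.15 weight=7;
    # temporarily excluded from balancing
    server 192.168.0.16 down;
    # used only when all other servers are down or busy
    server 192.168.0.17 backup;
}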

Configuration example:

#user  nobody;
worker_processes  4;
events {
    # maximum number of concurrent connections
    worker_connections  1024;
}
http {
    # list of candidate servers
    upstream myproject {
        # the ip_hash directive routes the same user to the same server
        ip_hash;
        server 125.219.42.4 fail_timeout=60s;
        server 172.31.2.183;
    }
    server {
        # listening port
        listen 80;
        # site root
        location / {
            # which upstream group to use
            proxy_pass http://myproject;
        }
    }
}

