What are the main application scenarios in Nginx?


    The main application scenarios of Nginx

    Static website deployment

    Nginx is an HTTP web server: it can return static files stored on the server (HTML, CSS, images) to the browser client over the HTTP protocol.

    Example: We deploy a static resource index.html on the server


    Upload index.html to /opt/www/test on the Linux server.


    Modify nginx.conf and add a location block to match requests for /test. The root directive points to /opt/www, which acts as the root path: the request URI beginning with /test is resolved against it, so a request for /test/index.html is served from /opt/www/test/index.html.

    location /test {
                root   /opt/www;
                index  index.html;
            }

    Start Nginx, or reload the configuration if it is already running (for example with nginx -s reload).


    Let’s visit: http://192.168.253.130/test/


    Load balancing

    Load balancing can be divided into hardware load balancing and software load balancing

    Hardware load balancers, such as F5, Sangfor and Array, have the advantage of professional vendor support and stable performance; the disadvantage is that they are expensive.

    Software load balancers, such as Nginx, LVS and HAProxy, have the advantage of being free, open source and low cost.

    Round robin: requests are distributed to the backend servers in turn. Every backend server is treated equally, regardless of its actual number of connections or current system load.

    http {
        upstream test{
        ## Actual backend servers; Nginx round-robins across the servers below
            server 10.100.30.1:8080;
            server 10.100.30.2:8080;
            server 10.100.30.3:8080;
            server 10.100.30.4:8080;
        }
        server {
        ## Front-end entry point that intercepts client requests
            listen 80;
            server_name www.test.com;
            location / { 
                proxy_pass http://test;
            }
        }
    }

    Weighted round robin: backend servers may differ in hardware configuration and current system load, so their capacity to handle requests also differs.

    A machine with a strong configuration and low load is given a higher weight so that it handles more requests, while a machine with a weak configuration and high load is given a lower weight to reduce its load. Weighted round robin handles this well, distributing requests to the backends in order and in proportion to their weights.

    http {
        upstream test{
        ## Actual backend servers; Nginx distributes requests across them by weight
            server 10.100.30.1:8080 weight=1;
            server 10.100.30.2:8080 weight=3;
            server 10.100.30.3:8080 weight=1;
            server 10.100.30.4:8080 weight=1;
        }
        server {
        ## Front-end entry point that intercepts client requests
            listen 80;
            server_name www.test.com;
            location / { 
                proxy_pass http://test;
            }
        }
    }

    Source address hashing (ip_hash): a hash value is computed from the client's IP address and taken modulo the size of the server list; the result is the index of the backend server that handles the request.

    With source address hashing, as long as the backend server list does not change, a client with a given IP address is always mapped to the same backend server.

    upstream test{
            ip_hash;
            server 10.100.30.1:8080 weight=1;
            server 10.100.30.2:8080 weight=3;
            server 10.100.30.3:8080 weight=1;
            server 10.100.30.4:8080 weight=1;
        }

    Least connections: because backend servers differ in configuration, they process requests at different speeds. The least connections method dynamically selects the backend server with the fewest active connections to handle the current request, making the most of the backend capacity and distributing the work across servers sensibly.

    upstream test{
            least_conn;
            server 10.100.30.1:8080;
            server 10.100.30.2:8080;
            server 10.100.30.3:8080;
            server 10.100.30.4:8080;
        }

    down: marks a server as unavailable so that it no longer receives requests.

    upstream test{
            server 10.100.30.1:8080 down;
            server 10.100.30.2:8080;
            server 10.100.30.3:8080;
            server 10.100.30.4:8080;
        }

    backup: marks a server as a backup. Under normal circumstances, as long as the other servers are reachable, the backup server receives no traffic; it is used only when all of the other servers are down. For this reason it is often used for hot deployment: the new code is deployed to the backup servers first, the regular servers are then stopped so that traffic fails over to the backups, and once the regular servers have been redeployed they take over again and the backups return to standby. The whole process gives users a deployment with no perceived downtime.

    upstream test{
            server 10.100.30.1:8080 backup;
            server 10.100.30.2:8080 backup;
            server 10.100.30.3:8080;
            server 10.100.30.4:8080;
        }
    Other common application scenarios include:

    • Static proxy

    • Separation of dynamic and static content

    • Virtual hosts

    A minimal sketch combining the last two scenarios is shown below.
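
    The sketch is only illustrative: the host names, document roots and the app_server upstream are placeholder assumptions, not values taken from this article. It shows two server blocks acting as name-based virtual hosts, the first of which also separates static content (served straight from disk) from dynamic requests (proxied to a backend).

    upstream app_server {
        server 10.100.30.1:8080;   ## hypothetical application backend
    }
    ## Virtual host 1: dynamic/static separation for www.site-a.com
    server {
        listen 80;
        server_name www.site-a.com;
        ## Static assets are returned directly from disk
        location ~* \.(html|css|js|png|jpg|gif)$ {
            root /opt/www/static;
        }
        ## Everything else is forwarded to the application backend
        location / {
            proxy_pass http://app_server;
        }
    }
    ## Virtual host 2: a separate site served from its own document root
    server {
        listen 80;
        server_name www.site-b.com;
        location / {
            root /opt/www/site-b;
            index index.html;
        }
    }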

    Nginx usage scenarios and examples

    Nginx is a high-performance, high-concurrency HTTP server and reverse proxy. It can be used as a static resource server, load balancer, reverse proxy, cache server, web server and in various other scenarios.

    The following are several usage scenarios and examples:

    1. Static resource server

    When using Nginx as a static resource server, Nginx returns the requested files directly from disk, which reduces the load on the application servers. This scenario is typically used to serve static file downloads or large files such as videos.

    The following is an example Nginx configuration:

    server {
        listen       80;
        server_name  example.com;
        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        location /images/ {
            alias /var/www/images/;
        }
        location /downloads/ {
            alias /var/www/downloads/;
        }
    }

    In the above configuration, Nginx maps requests for the root path (/) to the /usr/share/nginx/html directory. Requests for files under /images/ are mapped to /var/www/images/, and requests for files under /downloads/ are mapped to /var/www/downloads/. (Note that root appends the full request URI to the configured path, while alias replaces the matched location prefix with it.)

    2. Reverse proxy

    When using Nginx as a reverse proxy server, Nginx will forward the request to the web server for processing, and then return the processing result to the client.

    This scenario is usually used to achieve load balancing, improve the security of the Web server, hide the real IP of the Web server, etc.

    The following is a sample Nginx configuration:

    upstream backend {
        server backend1.example.com:8080 weight=3;
        server backend2.example.com:8080;
    }
    server {
        listen 80;
        server_name example.com;
        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    In the above configuration, Nginx forwards requests for the root path (/) to the backend web servers (backend1.example.com and backend2.example.com). backend1.example.com has a weight of 3 and backend2.example.com a weight of 1, indicating that backend1.example.com has more processing capacity and should receive more of the traffic.

    When forwarding requests, Nginx also sets the Host and X-Real-IP request headers, so the backend receives the original host name and client address while the real IPs of the backend servers stay hidden from clients.
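
    Deployments that sit behind several proxies often also append the client address to the X-Forwarded-For chain. This is an optional extension of the example above, not part of the original configuration:

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        ## Append the connecting address to any existing X-Forwarded-For header
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }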

    3. Load balancer

    When Nginx is used as a load balancer, it distributes requests evenly across multiple web servers, providing a highly concurrent and highly available service. This scenario is typically used for clustered web applications and distributed systems. Below is a sample Nginx configuration:

    upstream backend {
        server backend1.example.com:8080;
        server backend2.example.com:8080;
        server backend3.example.com:8080;
    }
    server {
        listen 80;
        server_name example.com;
        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    In the above configuration, Nginx distributes requests evenly across the three web servers (backend1.example.com, backend2.example.com and backend3.example.com), achieving load balancing.

    When forwarding requests, Nginx again sets the Host and X-Real-IP request headers, so the backend receives the original host name and client address while the real IPs of the backend servers stay hidden from clients.

    4. Cache server

    When Nginx is used as a cache server, it caches the responses returned by the web server, reducing the number of requests that reach it. This scenario is typically used to improve the performance of a web application and lower the load on the web server. Below is a sample Nginx configuration:

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m inactive=60m;
    server {
        listen 80;
        server_name example.com;
        location / {
            proxy_cache my_cache;
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    In the above configuration, Nginx caches responses from the web server under /var/cache/nginx using the my_cache keys zone, and entries that go unused for 60 minutes are removed (inactive=60m). On a cache hit, Nginx returns the cached response directly, reducing the number of requests sent to the web server.
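
    Note that proxy_cache_path only defines where cached items are stored and when idle entries are evicted; how long a cached response is treated as fresh is controlled by the upstream cache headers or by proxy_cache_valid. The snippet below is a hedged extension of the location above, with example durations chosen arbitrarily:

    location / {
        proxy_cache my_cache;
        ## Treat successful responses as fresh for 10 minutes, 404s for 1 minute (example values)
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        ## Optional: expose cache HIT/MISS status for debugging
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }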

    In short, Nginx is highly extensible and flexible and can be configured for different scenarios as needed. The above are only examples; many other use cases exist in practice.

    5. Reverse proxy server

    When Nginx is used as a reverse proxy server, it forwards client requests to the backend web servers and relays their responses back to the client. This scenario is typically used to hide the real IPs of the backend servers and to improve the availability of a web application. Below is a sample Nginx configuration:

    server {
        listen 80;
        server_name example.com;
        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    In the above configuration, Nginx forwards client requests to http://backend and sets the Host and X-Real-IP request headers, while the real IPs of the backend servers remain hidden from clients.

    6. WebSocket server

    When Nginx is used in front of a WebSocket server, it forwards client requests to the backend WebSocket server and handles the protocol upgrade for the connection. This scenario is typically used for real-time communication, games and similar applications.

    Below is a sample Nginx configuration:

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    server {
        listen 80;
        server_name example.com;
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    In the above configuration, Nginx forwards WebSocket requests to http://backend and sets the Upgrade, Connection, Host and X-Real-IP headers, which allows the WebSocket protocol upgrade to pass through the proxy and the connection to be managed correctly.
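
    Because WebSocket connections are long-lived and Nginx closes a proxied connection after 60 seconds without data by default, deployments usually raise the read timeout as well. This is an optional addition to the example above; the value is only illustrative:

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        ## Keep idle WebSocket connections open longer than the 60s default
        proxy_read_timeout 3600s;
    }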

    In short, Nginx supports many usage scenarios, and different server roles can be configured depending on the requirements. The above are only examples; many other use cases exist in practice.

