Introduction to the main functions of nginx:
1. Reverse proxy
Reverse proxying is probably the most common thing Nginx does. What is a reverse proxy? Here is how Baidu Encyclopedia defines it: in the reverse proxy (Reverse Proxy) mode, a proxy server accepts connection requests from the Internet, forwards them to a server on the internal network, and returns the result obtained from that server to the Internet client that made the request. To the outside world, the proxy server itself appears to be the server.
To put it simply, the real server cannot be reached directly from the external network, so a proxy server is needed: one that can be reached from the external network and sits in the same network environment as the real server. The proxy and the real server can even be the same machine, just listening on different ports.
Here is a simple configuration that implements a reverse proxy:
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host:$server_port;
    }
}
Save the configuration file and start Nginx; now accessing localhost is equivalent to accessing localhost:8080.
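In practice the backend usually also wants to know the client's original IP address, which is lost behind a proxy. A minimal sketch of the same location block with the conventional forwarding headers added (X-Real-IP and X-Forwarded-For are customary header names, not requirements; the backend must be written to read them):

```nginx
location / {
    proxy_pass http://localhost:8080;
    proxy_set_header Host            $host:$server_port;
    # Pass the real client address through to the backend
    proxy_set_header X-Real-IP       $remote_addr;
    # Append the client address to any existing X-Forwarded-For chain
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```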
2. Load balancing
Load balancing is another commonly used feature of Nginx. Load balancing means distributing work across multiple operating units, such as web servers, FTP servers, or key enterprise application servers, so that they complete the workload together.
To put it simply, when there are two or more servers, requests are distributed among them according to configured rules. Load balancing is generally configured together with a reverse proxy: requests arrive at the reverse proxy, which forwards them to the load-balanced upstream group. Nginx currently supports three built-in load balancing strategies, plus two commonly used third-party ones.
1. RR (default)
Each request is assigned to a different backend server one by one, in chronological order; if a backend server goes down, it is automatically removed from rotation. A simple configuration:

upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       81;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}
Two servers are configured here. In reality they are the same machine on different ports, and the server on 8081 does not exist, i.e. it cannot be reached. Yet accessing http://localhost works without problems: requests go to http://localhost:8080 by default, because Nginx automatically checks the status of each server. If a server is unreachable (down), Nginx will not route requests to it, which prevents a single dead server from affecting the whole service. Since RR is Nginx's default policy, no extra settings are needed.
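The passive health checking described above can also be tuned per server. A hedged sketch (max_fails and fail_timeout are real server parameters of the upstream block; the values here are only illustrative):

```nginx
upstream test {
    # After 3 failed attempts within 30s, take the server out of
    # rotation for 30s before trying it again
    server localhost:8080 max_fails=3 fail_timeout=30s;
    server localhost:8081 max_fails=3 fail_timeout=30s;
}
```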
2. Weight
Specifies the polling probability. A server's weight is proportional to the share of requests it receives; use this when backend server performance is uneven.
For example:
upstream test {
    server localhost:8080 weight=9;
    server localhost:8081 weight=1;
}
With this configuration, generally only 1 request in 10 will reach 8081, and the other 9 will reach 8080.
3. ip_hash
The two methods above share a problem: the next request may be distributed to a different server. When the application is not stateless (for example, it uses a session to store data), this becomes a serious issue: if login information is kept in the session, the user has to log in again whenever a request lands on another server. So in many cases we need a given client to always reach the same server, and that is what ip_hash is for.
With ip_hash, each request is assigned according to a hash of the client's IP address, so each visitor consistently reaches the same backend server, which solves the session problem.
upstream test {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}
4. fair (third party)
Requests are allocated according to the response time of the backend servers; servers with shorter response times are preferred.
upstream backend {
    fair;
    server localhost:8080;
    server localhost:8081;
}
5. url_hash (third party)
Requests are distributed according to a hash of the requested URL, so each URL is always directed to the same backend server. This is most effective when the backend servers cache content. Add the hash directive inside the upstream block; other parameters such as weight cannot be used on the server lines. hash_method specifies the hash algorithm to use (it is provided by the third-party module).
upstream backend {
    hash $request_uri;
    hash_method crc32;
    server localhost:8080;
    server localhost:8081;
}
Each of the five load balancing methods above suits a different situation, so choose a strategy according to your actual needs. Note that fair and url_hash require installing third-party modules before they can be used. Since this article focuses on what Nginx can do, installing third-party modules will not be covered here.
3. HTTP Server
Nginx itself is also a static resource server. When a site consists only of static resources, Nginx alone can serve it. Separating dynamic and static resources is also very popular now, and Nginx is commonly used to implement it. First, look at Nginx as a static resource server.
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        root  e:\wwwroot;
        index index.html;
    }
}
Now a request to http://localhost will by default serve index.html from the wwwroot directory on the E drive. If a website consists only of static pages, it can be deployed this way.
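Since static files rarely change, a static server usually also compresses responses and lets browsers cache them. A minimal sketch building on the configuration above (the expires value and gzip_types list are illustrative choices, not requirements):

```nginx
server {
    listen       80;
    server_name  localhost;

    # Compress text-based assets before sending them
    gzip on;
    gzip_types text/css application/javascript;

    location / {
        root  e:\wwwroot;
        index index.html;
        # Allow browsers to cache responses for 7 days
        expires 7d;
    }
}
```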
Separation of dynamic and static
Dynamic-static separation lets a dynamic website distinguish, by certain rules, resources that rarely change from resources that change frequently. Once the two are split, we can cache static resources according to their characteristics. This is the core idea behind static handling of websites.
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       80;
    server_name  localhost;

    location / {
        root  e:\wwwroot;
        index index.html;
    }

    # All static requests are handled by Nginx itself, served from wwwroot
    location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
        root e:\wwwroot;
    }

    # All dynamic requests are forwarded to Tomcat
    location ~ \.(jsp|do)$ {
        proxy_pass http://test;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root e:\wwwroot;
    }
}
This way we can put the HTML, images, CSS, and JS under the wwwroot directory, while Tomcat only handles JSP and dynamic requests. For example, when a request ends in .gif, Nginx will by default fetch the requested image file from wwwroot and return it. In this example the static files are on the same server as Nginx, but they could just as well live on another machine, reached via the reverse proxy and load balancing configurations shown earlier. Once the basic flow is clear, most configurations become simple. Note also that what follows location is actually a regular expression, which makes it very flexible.
4. Forward proxy
A forward proxy is a server that sits between the client and the origin server: to fetch content from the origin server, the client sends a request to the proxy, naming the target (the origin server); the proxy then relays the request to the origin server and returns the content it obtains to the client. The client must be explicitly configured to use a forward proxy.
When you need to use your own server as a proxy, Nginx can implement a forward proxy. One current limitation: it does not support HTTPS. Although I have found HTTPS forward-proxy configurations online, in the end they did not actually work for me; of course, my configuration may have been wrong.
resolver 114.114.114.114 8.8.8.8;

server {
    resolver_timeout 5s;
    listen 81;

    access_log e:\wwwroot\proxy.access.log;
    error_log  e:\wwwroot\proxy.error.log;

    location / {
        proxy_pass http://$host$request_uri;
    }
}
Here, resolver configures the DNS servers used by the forward proxy, and listen sets the proxy's port. Once this is in place, you can use the server's IP and port as a proxy in Internet Explorer or in any proxy plugin.
Note: Nginx supports hot reloading, meaning that after modifying the configuration file you can make the changes take effect without stopping Nginx. The command to make Nginx re-read its configuration is: nginx -s reload.