1. Reverse proxy
2. Load balancing
3. HTTP server (including dynamic and static content separation)
4. Forward proxy

The above are the things Nginx can handle without relying on third-party modules. Below is a detailed description of each function.
Reverse proxying is probably the most common thing Nginx does. What is a reverse proxy? Here is what Baidu Encyclopedia says: with a reverse proxy, a proxy server accepts connection requests from the Internet, forwards those requests to a server on the internal network, and returns the results obtained from that server to the client on the Internet that requested the connection. To the outside world, the proxy server behaves as the server itself. To put it simply, the real server cannot be reached directly from the external network, so a proxy server is needed: the proxy can be reached from the external network and sits in the same network environment as the real server (it may even be the same machine, just on a different port). Below is a simple configuration that implements a reverse proxy:
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host:$server_port;
    }
}
Save the configuration file and start Nginx. Now visiting localhost is equivalent to visiting localhost:8080.
Load balancing is another commonly used function of Nginx. Load balancing means distributing work across multiple operating units, such as web servers, FTP servers, enterprise key application servers, and other mission-critical servers, so that they complete the work together. To put it simply, when there are two or more servers, requests are distributed to designated servers according to rules. Load balancing generally requires a reverse proxy to be configured at the same time; requests reach the balanced servers through the reverse proxy. Nginx currently supports 3 built-in load-balancing strategies, plus 2 commonly used third-party strategies.
RR (round robin, the default): each request is assigned to a different backend server, one by one, in chronological order. If a backend server goes down, it is automatically removed from rotation.
Simple configuration
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       81;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}
The core code of load balancing is
upstream test {
    server localhost:8080;
    server localhost:8081;
}
Here I configured 2 servers, though in fact it is one machine with different ports; the server on 8081 does not exist, which means it cannot be accessed. Yet when we visit http://localhost there is no problem: requests fall through to http://localhost:8080 by default, because Nginx automatically determines the state of each server and will not forward to a server it cannot reach (one that is down). This also means a single server going down does not affect use. Since RR is Nginx's default policy, no further settings are required.
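As a rough illustration (not nginx's actual code, and with the health checking described above omitted), default round robin simply takes the upstream servers in turn. A minimal Python sketch:

```python
from itertools import cycle

# the two upstream servers from the config above
servers = ["localhost:8080", "localhost:8081"]

rr = cycle(servers)  # round robin: hand out servers in turn
picks = [next(rr) for _ in range(4)]
print(picks)  # alternates between the two servers
```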
Weight: specifies the polling probability. The weight is proportional to the share of requests a server receives, and is used when the backend servers' performance is uneven. For example:
upstream test {
    server localhost:8080 weight=9;
    server localhost:8081 weight=1;
}
With this configuration, generally only 1 request out of 10 goes to 8081, and the other 9 go to 8080.
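Nginx's weighted selection is, as I understand it, a "smooth" weighted round robin. The following Python sketch (an illustration under that assumption, not nginx source) reproduces the 9:1 split described above:

```python
from collections import Counter

def smooth_wrr(peers, n):
    """Pick n servers using smooth weighted round robin.

    peers: list of (name, weight). On each pick, every peer's current
    weight grows by its configured weight; the peer with the largest
    current weight is chosen and pays back the total weight.
    """
    state = [{"name": name, "weight": w, "cw": 0} for name, w in peers]
    total = sum(w for _, w in peers)
    picks = []
    for _ in range(n):
        for s in state:
            s["cw"] += s["weight"]
        best = max(state, key=lambda s: s["cw"])
        best["cw"] -= total
        picks.append(best["name"])
    return picks

picks = smooth_wrr([("localhost:8080", 9), ("localhost:8081", 1)], 10)
print(Counter(picks))  # 8080 is picked 9 times, 8081 once
```

The "smooth" variant spreads the low-weight server's single slot into the middle of the cycle instead of bunching all nine 8080 picks together.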
The two methods above share a problem: the next request may be distributed to a different server. When our program is not stateless (e.g. it uses the session to save data), this is a big problem. For example, if login information is saved in the session, the user has to log in again after being switched to another server. So we often need a given client to access only one server, and for that we use ip_hash. With ip_hash, each request is assigned according to the hash of the client's IP, so each visitor consistently reaches the same backend server, which solves the session problem.
upstream test {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}
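As an illustration (not nginx's actual hash), ip_hash-style selection can be sketched in Python. Note that nginx's ip_hash keys on the first three octets of an IPv4 address, so clients in the same /24 network land on the same backend:

```python
import zlib

def ip_hash_pick(client_ip, servers):
    # key on the first three octets, as nginx's ip_hash does for IPv4;
    # crc32 stands in here for nginx's internal hash (an assumption)
    key = ".".join(client_ip.split(".")[:3])
    return servers[zlib.crc32(key.encode()) % len(servers)]

servers = ["localhost:8080", "localhost:8081"]
a = ip_hash_pick("192.168.1.10", servers)
b = ip_hash_pick("192.168.1.99", servers)  # same /24 -> same backend
print(a, b)
```

Because the mapping is deterministic, repeated requests from the same visitor keep hitting the same server, which is what keeps the session intact.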
fair (third-party): requests are allocated according to the response time of the backend servers; servers with shorter response times are preferred.
upstream backend {
    fair;
    server localhost:8080;
    server localhost:8081;
}
url_hash (third-party): requests are distributed according to the hash of the requested URL, so each URL is always directed to the same backend server. This is effective when the backend servers cache content. Add a hash statement in the upstream block; other parameters such as weight cannot be written in the server statements. hash_method is the hash algorithm to use.
upstream backend {
    hash $request_uri;
    hash_method crc32;
    server localhost:8080;
    server localhost:8081;
}
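The effect of hashing on $request_uri with crc32 can be sketched in Python (an illustration, not nginx internals): the same URL always maps to the same backend, which keeps that backend's cache for the URL warm.

```python
import zlib

def url_hash_pick(request_uri, servers):
    # mirror "hash $request_uri" with "hash_method crc32":
    # a URL's crc32, modulo the server count, picks the backend
    return servers[zlib.crc32(request_uri.encode()) % len(servers)]

servers = ["localhost:8080", "localhost:8081"]
target = url_hash_pick("/img/logo.png", servers)
print(target)  # deterministic: the same URI always yields this server
```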
The five load-balancing methods above suit different situations, so you can choose a strategy according to your actual needs; note, however, that fair and url_hash require third-party modules to be installed before they can be used. Since this article mainly introduces what Nginx can do, installing third-party modules for Nginx is not covered here.
Nginx itself is also a static resource server. When there are only static resources, Nginx can serve them directly. Separating static resources from dynamic ones is also very popular now, and it can be achieved with Nginx. First, let's look at Nginx as a static resource server.
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        root   e:\wwwroot;
        index  index.html;
    }
}
With this configuration, visiting http://localhost serves index.html from the wwwroot directory on drive E by default. If a website consists only of static pages, it can be deployed this way.

Dynamic and static separation: this means splitting, according to certain rules, the resources of a dynamic website that do not change from those that change frequently. Once the split is done, we can cache the static resources according to their characteristics; this is the core idea of making a site static.
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       80;
    server_name  localhost;

    location / {
        root   e:\wwwroot;
        index  index.html;
    }

    # all static requests are handled by nginx itself
    location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
        root e:\wwwroot;
    }

    # all dynamic requests are forwarded to tomcat
    location ~ \.(jsp|do)$ {
        proxy_pass http://test;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root e:\wwwroot;
    }
}
This way we can put the HTML, images, CSS and JS under the wwwroot directory, while tomcat only handles JSP and other dynamic requests. For example, when a request ends in .gif, Nginx serves the requested image file from wwwroot directly. Here the static files live on the same server as Nginx, but they could also be on another server, reached through the reverse proxy and load-balancing configuration shown earlier. Once the basic flow is clear, many configurations become simple. Also note that what follows location is actually a regular expression, which makes it very flexible.
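To see how those location regexes route requests, here is a quick check in Python using the same extension patterns as the config above:

```python
import re

# the same patterns used in the two regex location blocks
static_re = re.compile(r"\.(gif|jpg|jpeg|png|bmp|swf|css|js)$")
dynamic_re = re.compile(r"\.(jsp|do)$")

is_static = bool(static_re.search("/img/banner.png"))   # True: served by nginx
is_dynamic = bool(dynamic_re.search("/user/login.do"))  # True: proxied to tomcat
print(is_static, is_dynamic)
```

One caveat this sketch makes visible: the patterns match on the path suffix only, so a query string after the extension (e.g. /a.png?v=2) would need $args handling or a different pattern in real configs.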
A forward proxy is a server that sits between the client and the origin server: to fetch content from the origin server, the client sends a request to the proxy and specifies the target (the origin server); the proxy then forwards the request to the origin server and returns the obtained content to the client. The client must be configured to use a forward proxy. When you need your server to act as a proxy server, Nginx can implement a forward proxy. However, Nginx currently has one problem here: it does not support proxying HTTPS this way. Although I have found forward-proxy HTTPS configurations by searching online, in the end they did not actually work; of course it may be that I configured them incorrectly, so if anyone knows the correct method, please leave a comment.
resolver 114.114.114.114 8.8.8.8;

server {
    resolver_timeout 5s;
    listen 81;

    access_log e:\wwwroot\proxy.access.log;
    error_log  e:\wwwroot\proxy.error.log;

    location / {
        proxy_pass http://$host$request_uri;
    }
}
resolver configures the DNS servers the forward proxy uses, and listen is the forward proxy's port. Once this is configured, you can use the server's IP plus that port as the proxy in IE or in other proxy plugins.
Nginx supports hot reloading: after modifying the configuration file, you can make the changes take effect without shutting Nginx down. I don't know how many people are aware of this; I wasn't at first, and kept killing the Nginx process and starting it again... It is also worth running nginx -t first to verify that the configuration file is valid. The command that makes Nginx re-read its configuration is
nginx -s reload
On Windows it is
nginx.exe -s reload
The above is the detailed content of What are the main application scenarios of Nginx?. For more information, please follow other related articles on the PHP Chinese website!