Have you never heard of Nginx? Then you must have heard of its "peer" Apache! Nginx, like Apache, is a web server. Built around the REST architectural style, it uses Uniform Resource Identifiers (URIs) or Uniform Resource Locators (URLs) as the basis for communication and provides a variety of network services over the HTTP protocol.
However, each of these servers was shaped by the environment in which it was originally designed: the user scale, network bandwidth, and product requirements of its day, as well as its own positioning and direction of development. This is what gives each web server its distinctive characteristics.
Apache has a long history and has long been the undisputed number one web server in the world. It has many advantages: it is stable, open source, and cross-platform. But it has also been around for a very long time. In the era when it emerged, the Internet industry was nothing like it is today, so Apache was designed as a heavyweight server and does not cope well with high concurrency. Running tens of thousands of concurrent connections on Apache consumes a great deal of memory, and the operating system's switching between processes or threads eats up large amounts of CPU, which drags down the average response time of HTTP requests.
All of this means Apache cannot be a high-performance web server, and the lightweight, high-concurrency Nginx came into being to fill that gap.
Russian engineer Igor Sysoev developed Nginx in C while working for Rambler Media, where it provided excellent, stable service as the company's web server.
Sysoev later open-sourced the Nginx code and released it under a free software license.
Because it is free, open source, and high-performance, Nginx quickly became popular.
Nginx is a free, open-source, high-performance HTTP server and reverse proxy server; it is also an IMAP, POP3, and SMTP proxy server. Nginx can be used as an HTTP server to publish a website, and it can also act as a reverse proxy to implement load balancing.
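As a rough illustration of the "publish a website" part, here is a minimal sketch of an Nginx server block (placed inside the http block of nginx.conf) serving static files; the domain name and paths are placeholder assumptions for the example, not values from this article.

```nginx
# Minimal sketch: Nginx as a plain HTTP server for a static site.
# example.com and /var/www/html are placeholder values.
server {
    listen 80;                 # accept plain HTTP on port 80
    server_name example.com;   # hypothetical domain
    root /var/www/html;        # directory holding the site's static files
    index index.html;          # default file served for "/"
}
```

After adding such a block and reloading Nginx (for example with `nginx -s reload`), the site becomes reachable at the configured address.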
Speaking of proxies, we first have to clarify a concept. A proxy is a representative, an intermediary channel;
two roles are involved here: the proxied role and the target role. The process in which the proxied role reaches the target role through the proxy to complete some task is called the proxying process. It is just like a brand store in everyday life: a customer walks into an adidas store and buys a pair of shoes. The store is the proxy, the proxied role is the adidas manufacturer, and the target role is the customer.
Before talking about reverse proxies, let's first look at forward proxies. The forward proxy is also the proxy mode people encounter most often. We will look at its processing model from two angles, explaining what a forward proxy is in terms of both software and everyday life.
In today's network environment, if we need to reach certain foreign websites for technical reasons, we may find that they cannot be accessed directly through a browser. In that case, people often use a proxy to get around the restriction. The main approach is to find a proxy server that can access the foreign website: we send our request to the proxy server, the proxy server accesses the foreign website, and it then passes the retrieved data back to us.
The proxy mode described above is called a forward proxy. The biggest characteristic of a forward proxy is that the client knows exactly which server address it wants to reach; the server only knows which proxy server the request came from, not which specific client is behind it. The forward proxy mode therefore shields, or hides, the real client's information. Let's look at a schematic diagram (I put the client and the forward proxy together because they belong to the same environment; I will come back to this later):
The client must be configured to use the forward proxy server, which of course requires knowing the forward proxy server's IP address and the port the proxy program listens on, as shown in the picture.
In summary: a forward proxy "acts on behalf of the client and makes requests in its place". It is a server that sits between the client and the origin server. To obtain content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the retrieved content to the client. The client must apply some special settings to use a forward proxy.
Uses of a forward proxy:
(1) Accessing resources that are otherwise unreachable, such as Google
(2) Caching, to speed up access to resources
(3) Authorizing and authenticating client access to the Internet
(4) Recording users' access (online behavior management) and hiding user information from the outside
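For a concrete picture of that "send the request to the proxy" step, here is a hedged sketch of Nginx itself acting as a forward proxy. Nginx is primarily a reverse proxy, so this configuration only forward-proxies plain HTTP (HTTPS CONNECT tunneling requires a third-party module); the listening port and DNS resolver below are arbitrary example choices, not values from the article.

```nginx
# Rough sketch: Nginx as a forward proxy for plain HTTP traffic only.
# Clients would point their HTTP proxy settings at this host on port 8080.
server {
    listen 8080;
    resolver 8.8.8.8;                          # DNS server used to resolve the target hosts
    location / {
        # Forward the request to whatever host the client originally asked for.
        proxy_pass http://$http_host$request_uri;
    }
}
```

A client then configures this host and port as its proxy, which is exactly the "know the proxy's IP address and port" step described above.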
Now that we understand what a forward proxy is, let's continue with how a reverse proxy works. For example, a certain large website in China sees the number of simultaneous visitors explode every day; a single server is far from enough to satisfy people's growing appetite for shopping. The familiar term here is distributed deployment: deploying multiple servers to solve the bottleneck in the number of visitors. Most of that website's functions are implemented with Nginx acting directly as a reverse proxy, and by wrapping Nginx together with other components it later got a fancier name: Tengine. Interested readers can visit Tengine's official website for details: http://tengine.taobao.org/. So how does a reverse proxy implement distributed cluster operation? Let's first look at a schematic diagram (I frame the server and the reverse proxy together because they belong to the same environment; I will come back to this later):
You can see clearly from the diagram above that after receiving the requests sent by multiple clients, the Nginx server distributes them to back-end business servers for processing according to certain rules. Here, the source of each request, the client, is known, but it is not clear which server actually handles the request: Nginx plays the role of a reverse proxy.
The client is unaware of the proxy's existence; the reverse proxy is transparent to the outside world. Visitors do not know they are talking to a proxy, because the client needs no configuration at all to access it.
A reverse proxy "acts on behalf of the server and receives requests in its place". It is mainly used when servers are deployed as a distributed cluster, and it hides the servers' information.
The roles of a reverse proxy:
(1) Ensuring the security of the internal network: the reverse proxy usually serves as the public-facing address, while the web servers stay on the internal network
(2) Load balancing: optimizing the website's load through the reverse proxy server
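To make the reverse-proxy role concrete, here is a minimal sketch of Nginx forwarding public requests to a single internal backend; the backend address and the headers set below are typical example choices, not values taken from the article.

```nginx
# Minimal sketch: Nginx as a reverse proxy in front of one internal backend.
server {
    listen 80;
    server_name example.com;                     # hypothetical public domain

    location / {
        proxy_pass http://127.0.0.1:8080;        # assumed internal business server
        proxy_set_header Host $host;             # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr; # tell the backend who the real client is
    }
}
```

The client only ever sees the proxy's address, which is exactly the "hides server information" behavior described above.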
Normally, in real projects, the forward proxy and the reverse proxy are likely to coexist in the same application scenario: the forward proxy proxies the clients' requests to reach the target server, and that target server is itself a reverse proxy server, which in turn reverse-proxies multiple real business servers. The specific topology is as follows:
I took a screenshot to illustrate the difference between a forward proxy and a reverse proxy, as shown in the figure.
Illustration:
In a forward proxy, the Proxy and the Client belong to the same LAN (inside the box in the picture), and the client's information is hidden;
In a reverse proxy, the Proxy and the Server belong to the same LAN (inside the box in the picture), and the server's information is hidden;
In fact, the Proxy does the same thing in both cases: it sends and receives requests and responses on someone else's behalf. But structurally, the left and right sides are swapped, so the proxy mode that appeared later is called the reverse proxy.
We have now clarified the concept of a proxy server, so the next questions are: when Nginx plays the role of a reverse proxy server, according to what rules does it distribute requests? And can those distribution rules be controlled for different project scenarios?
The requests sent by clients and received by the Nginx reverse proxy server are what we call the load.
The rule by which these requests are distributed to different servers for processing is the balancing rule.
So the process of distributing the requests a server receives according to certain rules is called load balancing.
In actual projects, load balancing comes in two forms: hardware load balancing and software load balancing. Hardware load balancing, also called hard load, such as F5 load balancers, is relatively expensive, but it offers very strong guarantees of data stability, security, and so on; only companies such as China Mobile and China Unicom tend to choose hard load. For cost reasons, more companies choose software load balancing, a message-distribution mechanism implemented with existing technology combined with the host's hardware.
The load-balancing scheduling methods supported by Nginx are as follows:
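Built-in methods include weighted round-robin (the default), ip_hash, and least_conn, and third-party modules add others such as fair and url_hash. Below is a minimal sketch of an upstream block using these directives; the backend addresses are placeholders, not values from the article.

```nginx
# Sketch of Nginx load balancing across assumed backend servers.
upstream backend_cluster {
    # Default method: weighted round-robin; "weight" skews traffic toward a server.
    server 192.168.1.11:8080 weight=3;
    server 192.168.1.12:8080;

    # Alternatives (enable at most one of these instead of the default):
    # ip_hash;      # pin each client IP to the same backend
    # least_conn;   # send each request to the backend with the fewest active connections
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_cluster;   # requests are balanced across the upstream group
    }
}
```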
Finally, the table below compares Apache, Nginx, and Lighttpd:

| Feature | Apache | Nginx | Lighttpd |
| --- | --- | --- | --- |
| Proxying | Very good | Very good | Average |
| URL rewriting | Good | Very good | Average |
| FastCGI | Bad | Good | Very good |
| Hot deployment | Not supported | Supported | Not supported |
| System resource usage | Very high | Very low | Relatively low |
| Stability | Good | Very good | Bad |
| Security | Good | Average | Average |
| Static file handling | Average | Very good | Good |
| Reverse proxying | Average | Very good | Average |