
Summary and sharing of nginx related knowledge points


Nginx itself does not parse PHP. When a client requests a PHP page, Nginx hands the request to the IP address and port that the FastCGI process listens on, and php-fpm, acting as the dynamic-content server, processes it and returns the result to Nginx. In effect, Nginx is a reverse proxy: it forwards dynamic requests to the backend php-fpm through its reverse proxy functionality, and that is how Nginx supports PHP parsing.

Nginx cannot call or parse external programs directly; every external program (including PHP) must be invoked through the FastCGI interface. Under Linux the FastCGI interface is a socket (either a file socket or an IP socket). To call a CGI program, a FastCGI wrapper is also needed (a wrapper here is a program used to start another program). The wrapper is bound to a fixed socket, such as a port or a file socket. When Nginx sends a CGI request to that socket, the wrapper receives it through the FastCGI interface and spawns a new thread, which invokes the interpreter or external program to run the script and collect the result; the wrapper then passes the returned data back to Nginx over the same fixed socket through the FastCGI interface; finally, Nginx sends the data to the client.
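As a rough illustration (not taken from this article), a server block that hands .php requests to a php-fpm listener might look like the following; the address 127.0.0.1:9000 and the paths are assumptions:

server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;   # the address and port the FastCGI process (php-fpm) listens on
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}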

The classic model here is Nginx's master-worker multi-process, asynchronous, event-driven model.

The parent (master) process creates a socket, binds it, and listens, then forks multiple child (worker) processes. Each child inherits the parent's socket and calls accept to wait for network connections. At this point multiple processes are waiting on the same connection event; when the event arrives they are all woken up at once, which is the "thundering herd". After the wake-up the kernel has to reschedule all of these processes, yet only one of them can actually handle the event; the others are woken for nothing and go back to sleep.

In fact, since Linux kernel 2.6 the thundering herd problem of accept() has been solved: when a client connection arrives, the kernel wakes up only the first process or thread on the wait queue.

Nginx uses the accept_mutex lock to solve this. Concretely, it uses a global mutex: each worker tries to acquire the lock before calling epoll_wait(); if it gets the lock it continues, otherwise it waits. On top of this a simple load-balancing rule is applied (once a worker's connection count reaches 7/8 of its configured maximum, it stops trying to acquire the lock), which balances the load across workers.
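For reference, the lock is controlled from the events block; a minimal sketch with illustrative values:

events {
    worker_connections 1024;    # per-worker connection limit; the 7/8 threshold is derived from this
    accept_mutex on;            # serialize accept() attempts across workers
    accept_mutex_delay 500ms;   # how long a worker waits before trying for the lock again
}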

To summarize the thundering herd problem and how Nginx handles it:

  • accept itself no longer causes a thundering herd, but epoll_wait does.

  • Nginx's accept_mutex is not there to solve the accept thundering herd; it addresses the epoll_wait thundering herd.

  • Strictly speaking, it is not even accurate to say that Nginx "solves" the epoll_wait thundering herd. It only controls whether the listening socket is added to a worker's epoll set: the listening socket sits in the epoll of at most one worker at a time, so when a new connection arrives the other workers are simply never woken up.

Put simply, only one nginx worker at a time is allowed to hold the listening handle in its own epoll set and handle it. Its load balancing is equally simple: once a worker reaches 7/8 of its maximum connections it stops trying to take the accept lock and stops accepting new connections, so the other nginx workers get more opportunities to pick up new connections. In addition, because of the timeout setting, a worker that failed to get the lock will try for it again fairly soon.

Is the nginx multi-process model really lock-free? In fact, there is still one: ngx_accept_mutex.

nginx is a multi-process program, and port 80 is shared by all worker processes. Whenever a connection arrives, the workers would all compete to respond to it; this is the so-called thundering herd phenomenon.

When the kernel accepts a connection it wakes up all waiting processes, but only one of them can actually take the connection; the others are woken up for nothing, and these spurious wakeups inevitably add overhead to the application. To avoid this, nginx provides an accept lock so that the workers do not all fight over the same connection.

The purpose of ngx_accept_mutex is to let workers that are already heavily loaded voluntarily give up accepting new incoming requests, which improves the application's overall wake-up efficiency and therefore its overall performance.

Key directives that come up repeatedly below:

  • proxy_cache

  • upstream

  • fastcgi_pass

  • location
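A rough sketch of how these directives fit together inside the http block (the upstream address, cache path, and zone name are illustrative):

upstream backend {
    server 127.0.0.1:8080;
}

proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    listen 80;

    location / {
        proxy_cache app_cache;       # cache responses from the upstream
        proxy_pass http://backend;   # forward to the upstream group defined above
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000; # dynamic requests go to php-fpm, as described earlier
        include fastcgi_params;
    }
}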

The non-standard status code 444 tells nginx to close the connection without sending any response header to the client.
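For example, a block of the following form (the server_name is illustrative) simply drops matching requests:

server {
    listen 80;
    server_name blocked.example.com;
    return 444;   # close the connection without sending any response to the client
}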

The nginx -s reload command loads a modified configuration file. After the command is issued, the following happens:

1. The Nginx master process checks the new configuration file. If it is invalid, an error is reported and nginx keeps working with the original configuration (the workers are not affected).

2. Nginx starts new worker processes that use the new configuration file.

3. Nginx routes new requests to the new worker processes.

4. Nginx waits for the old worker processes to finish their in-flight requests, then shuts them down.

5. This repeats until all old worker processes have been shut down.

The process above is based on the official nginx documentation.

Taking proxy_next_upstream as an example, the general configuration is as follows:

proxy_next_upstream http_504 timeout;

This directive does two things:

  1. It tells nginx to retry the request on the next upstream server when the upstream returns HTTP 504 or the connection times out.

  2. It tells nginx that an HTTP 504 response or a connection timeout counts as a failed attempt.

By default, nginx treats error, timeout, and invalid_header as failures. If you want other responses to count as failures as well, you have to list them in a directive such as proxy_next_upstream. Note that http_403 and http_404 are never counted as unsuccessful attempts, even if they are listed.

Two points are worth discussing: how a server is deemed to have failed, and what happens after it fails.

  • Definition of server failure: the failure definition above explains when a single request counts as failed; but when is a whole server considered failed? nginx controls this with two parameters: max_fails and fail_timeout. Simply put, if requests to a server fail max_fails times within fail_timeout, the server is considered down.

  • Behavior after failure, mainly:

  • During the fail_timeout window, the server will not be selected;

  • After fail_timeout has elapsed, the server is marked as available again and the logic above repeats.
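Putting max_fails and fail_timeout together with proxy_next_upstream, a minimal sketch (the addresses and values are illustrative):

upstream backend {
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;   # 3 failures within 30s marks the server down for 30s
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_next_upstream http_504 timeout;   # retry the next server on 504 or timeout, and count the attempt as failed
    }
}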

Nginx divides the processing of a client request into 11 phases:

#1 NGX_HTTP_POST_READ_PHASE: request-content reading phase

#2 NGX_HTTP_SERVER_REWRITE_PHASE: server-level request URI rewrite phase

#3 NGX_HTTP_FIND_CONFIG_PHASE: configuration lookup phase

#4 NGX_HTTP_REWRITE_PHASE: location-level request URI rewrite phase

#5 NGX_HTTP_POST_REWRITE_PHASE: request URI rewrite submission phase

#6 NGX_HTTP_PREACCESS_PHASE: access-permission check preparation phase

#7 NGX_HTTP_ACCESS_PHASE: access-permission check phase

#8 NGX_HTTP_POST_ACCESS_PHASE: access-permission check submission phase

#9 NGX_HTTP_TRY_FILES_PHASE: try_files configuration item processing phase

#10 NGX_HTTP_CONTENT_PHASE: content generation phase

#11 NGX_HTTP_LOG_PHASE: log module processing phase

Nginx's request processing process

#1 How does Nginx determine which server handles a request?

1. By IP and port: find the server blocks that listen on the request's IP address and port.

2. By the Host header: among those, pick the server whose server_name matches the request's Host header.

3. If no server matches, the request is handed to the default server. Without any explicit setting, the first server block that appears in the configuration file acts as the default server.

4. You can mark a server as the default server explicitly with the default_server flag on the listen directive.
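A minimal sketch of an explicit default server (the names and paths are illustrative):

server {
    listen 80 default_server;   # receives requests whose Host header matches no other server
    server_name _;
    return 404;
}

server {
    listen 80;
    server_name www.example.com;
    root /var/www/example;
}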

#2 How does Nginx match the server based on the host header?

Nginx matches a server mainly by comparing each server block's server_name against the request's Host header.

The comparison order is as follows:

1. Exact name;

2. The longest wildcard name starting with an asterisk (e.g. *.zhidao.baidu.com);

3. The longest wildcard name ending with an asterisk (e.g. zhidao.baidu.*);

4. The first matching regular expression, in the order they appear in the configuration file.
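A sketch of these name forms in matching order, using the domains from the examples above plus an illustrative regular expression:

server { listen 80; server_name zhidao.baidu.com; }   # 1. exact name
server { listen 80; server_name *.baidu.com; }        # 2. longest wildcard starting with an asterisk
server { listen 80; server_name zhidao.baidu.*; }     # 3. longest wildcard ending with an asterisk
server { listen 80; server_name ~^zhidao\.; }         # 4. first matching regular expression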

#3 Initializing the HTTP request and the 11 phases of HTTP request processing

Location matching modifiers

~ The tilde means a regular expression match, case-sensitive.

~* means a regular expression match, case-insensitive.

^~ means plain prefix (string) matching; if this location matches, no other (regular expression) locations are checked. It is typically used to match directories.

= means exact string matching.

@ defines a named location, used for internal redirection, for example with error_page or try_files.

Example:
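The original configuration block is missing from this copy of the article; the following is a minimal reconstruction consistent with the URI matches listed below:

location = / {
    # configuration A: exact match for the root URI only
}

location / {
    # configuration B: prefix match that catches any request not matched by a more specific rule
}

location ^~ /images/ {
    # configuration C: prefix match; when it matches, regular-expression locations are not checked
}

location ~* \.(gif|jpg|jpeg)$ {
    # configuration D: case-insensitive regular expression on the file extension
}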


Request URI examples:

  • / -> matches configuration A

  • /documents/document.html -> matches configuration B

  • /images/1.gif -> matches configuration C

  • /documents/1.jpg -> matches configuration D
