Why Nginx is faster than Apache

藏色散人
Release: 2019-06-10 10:03:37

Why is Nginx faster than Apache?

Let's start with a few general points:

1: Nginx is faster than Apache under high concurrency; at low concurrency the difference is not obvious.

2: The speed advantage comes from Nginx's epoll-based event model.

Apache is multi-process or multi-threaded. When an HTTP request arrives, one process handles the whole lifecycle: accept the connection (listen), parse and process the request, and return the response. Apache's socket I/O, whether reading or writing, is blocking, and blocking means the process is suspended and put to sleep while it waits. Once there are many connections, Apache has to spawn more processes to serve them, and with many processes the CPU switches between them frequently, which wastes both time and resources. That is why Apache's performance degrades: put bluntly, it cannot manage that many processes.
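
To make the blocking pattern concrete, here is a minimal sketch of a process-per-connection server in C, in the spirit of Apache's prefork model (this is not Apache's actual code; the port, buffer size, and canned response are illustrative assumptions). Every accept(), read(), and write() call suspends the handling process until the kernel has something for it:

```c
/* Minimal sketch of a blocking, process-per-connection server
 * (prefork-style). Port and response are arbitrary placeholders. */
#include <netinet/in.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    signal(SIGCHLD, SIG_IGN);                /* let the kernel reap children */

    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);   /* blocks until a client connects */
        if (fork() == 0) {                   /* one whole process per connection */
            char buf[4096];
            read(cfd, buf, sizeof(buf));     /* blocks waiting for the request */
            const char *resp = "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
            write(cfd, resp, strlen(resp));  /* blocks until the kernel takes it */
            close(cfd);
            _exit(0);
        }
        close(cfd);                          /* parent drops its copy of the fd */
    }
}
```

A thousand concurrent clients here means a thousand sleeping processes for the scheduler to juggle, which is exactly the context-switching cost described above.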

Nginx adopts the epoll model, which is asynchronous and non-blocking. Nginx breaks the handling of a complete connection into events: accept(), receive(), disk I/O, send(), and so on. Each part has a corresponding module to process it, and a full request may pass through hundreds of modules. The real core is the event collection and distribution module, which coordinates all the others.

Only when the core module schedules it does a given module get CPU time to work on a request. Take an HTTP request as an example: first, the listening event of interest is registered with the event collection and distribution module. Registration returns immediately, without blocking, and the process does not need to watch the socket afterwards; when a connection arrives, the kernel (via epoll) notifies the process. In the meantime, the CPU is free to do other work.
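
Here is a minimal sketch of such an epoll event loop in C (epoll is Linux-specific). The function name, buffer size, and canned response are illustrative assumptions, not Nginx's actual code; the point is that registration returns immediately and epoll_wait() is the single place the process sleeps, on behalf of all connections at once:

```c
/* Minimal epoll event loop sketch. lfd is a listening socket that
 * has already been created, bound, and set listening. */
#include <fcntl.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

static void run_event_loop(int lfd) {
    int epfd = epoll_create1(0);

    /* Register interest in "connection ready" on the listen socket,
     * then return to the loop at once -- no per-connection blocking. */
    struct epoll_event ev = { .events = EPOLLIN };
    ev.data.fd = lfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, lfd, &ev);

    struct epoll_event ready[64];
    for (;;) {
        /* The only blocking point, shared by ALL connections: the kernel
         * wakes the process only when some registered fd has work. */
        int n = epoll_wait(epfd, ready, 64, -1);
        for (int i = 0; i < n; i++) {
            int fd = ready[i].data.fd;
            if (fd == lfd) {
                int cfd = accept(lfd, NULL, NULL);   /* new connection */
                fcntl(cfd, F_SETFL, O_NONBLOCK);
                struct epoll_event cev = { .events = EPOLLIN };
                cev.data.fd = cfd;
                epoll_ctl(epfd, EPOLL_CTL_ADD, cfd, &cev);
            } else {
                /* Client data readable: read what is there, answer, close.
                 * A real server would parse and respond in stages. */
                char buf[4096];
                if (read(fd, buf, sizeof(buf)) > 0) {
                    const char *resp =
                        "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
                    write(fd, resp, strlen(resp));
                }
                epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                close(fd);
            }
        }
    }
}
```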

Once a request arrives, a context is assigned to it (in practice, it has been pre-allocated), and a new event of interest, readability, is registered. Likewise, when client data arrives, the kernel automatically notifies the process that the data can be read. After reading comes parsing, then fetching the resource from disk (I/O). Once the I/O completes, the process is notified and starts sending data back to the client with send(), which again does not block: after the call, the process simply waits for the kernel to report the result.
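
This staged, re-registering style can be sketched as a small state machine. The conn struct, stage names, and on_event() function below are hypothetical, not Nginx internals; they only show how a handler switches its registered interest from readable (EPOLLIN) to writable (EPOLLOUT) between stages instead of blocking:

```c
/* Hypothetical staged handler: illustrates the pattern, not Nginx code. */
#include <stdint.h>
#include <sys/epoll.h>
#include <unistd.h>

enum stage { READING_REQUEST, SENDING_RESPONSE };

struct conn {
    int        fd;
    enum stage stage;
    char       buf[4096];
    ssize_t    len;      /* bytes of response to send */
    ssize_t    sent;     /* bytes already written     */
};

/* Called by the event loop whenever epoll reports activity on c->fd. */
static void on_event(int epfd, struct conn *c, uint32_t events) {
    if (c->stage == READING_REQUEST && (events & EPOLLIN)) {
        ssize_t n = read(c->fd, c->buf, sizeof(c->buf));  /* returns at once */
        if (n <= 0) return;
        /* ...parse the request and prepare the response in c->buf; real
         * disk I/O would be its own event-driven stage, not shown here... */
        c->len   = n;
        c->sent  = 0;
        c->stage = SENDING_RESPONSE;

        /* Swap registered interest from "readable" to "writable"; the
         * kernel will call back when the socket can accept data. */
        struct epoll_event ev = { .events = EPOLLOUT };
        ev.data.ptr = c;
        epoll_ctl(epfd, EPOLL_CTL_MOD, c->fd, &ev);
    } else if (c->stage == SENDING_RESPONSE && (events & EPOLLOUT)) {
        ssize_t n = write(c->fd, c->buf + c->sent, c->len - c->sent);
        if (n > 0)
            c->sent += n;
        if (c->sent >= c->len) {                          /* response done */
            epoll_ctl(epfd, EPOLL_CTL_DEL, c->fd, NULL);
            close(c->fd);
        }
    }
}
```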

The entire request is thus divided into many stages, with many modules registered at each stage, all running asynchronously and without blocking. "Asynchronous" here means starting an operation without waiting for its result; you are notified automatically when it finishes.

Here is an analogy I found online:

A simple example illustrates Apache's workflow. Think of a restaurant where one waiter serves a customer from start to finish. The waiter waits for a guest at the door (listen), greets the guest and shows them to a table (accept), waits for them to order (request URI), takes the order to the kitchen (disk I/O), waits for the kitchen to finish cooking (read), and then serves the dishes to the guest (send). The waiter (process) is blocked at many points along the way.

So when more guests arrive (more HTTP requests), the restaurant can only hire more waiters to serve them (fork more processes). But the restaurant's resources (CPU) are limited, and once there are too many waiters, the cost of managing them (CPU context switching) becomes very high, and the restaurant hits a bottleneck.

Now let's see how Nginx handles it. Hang a doorbell at the restaurant door (register the listen socket with epoll). When a guest (HTTP request) arrives, a waiter is sent to seat them (accept); after that, the waiter goes off to do other things (such as greeting the next guest) rather than waiting on this one. When the guest has decided on an order, they call the waiter over (data arrives, read()). The waiter takes the order to the kitchen (disk I/O) and again goes off to do other things. When the kitchen is ready, it calls the waiter back (disk I/O complete), and the waiter serves the dishes (send()), bringing each dish out as the kitchen finishes it, free to do other work in between.

The whole process is divided into many stages, each with its own service module. Thought of this way, the restaurant can serve far more guests with the same staff once the crowd grows.


