Why nginx is faster than Apache
Why is Nginx faster than Apache? Two general points first:
1. Nginx is faster than Apache mainly under high concurrency; at low concurrency the difference is not obvious.
2. Its speed comes from Nginx's epoll event model.
Apache, in its classic model, is multi-process (or multi-threaded). When an HTTP request arrives, a single process handles the whole lifecycle: it accepts the connection (listen), parses the request, and returns the response. Apache's socket reads and writes are blocking, and blocking means the process is suspended and put to sleep until the I/O completes. Once there are many connections, Apache must spawn more processes to serve them; with many processes, the CPU switches between them frequently, which wastes time and resources, and performance declines. Put bluntly, it cannot juggle that many processes.
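As a rough sketch of this model (not Apache's actual code), a process-per-connection server looks like the following; each worker sits blocked inside recv() until the client's bytes arrive:

```python
import os
import socket

def handle_connection(conn):
    # Blocking model: the process sleeps inside recv() until data arrives,
    # and may sleep again inside sendall() if the socket buffer is full.
    request = conn.recv(4096)
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()

def serve_forever(server):
    while True:
        conn, _ = server.accept()      # blocks until a client connects
        if os.fork() == 0:             # one whole process per connection
            server.close()
            handle_connection(conn)
            os._exit(0)
        conn.close()                   # parent loops back to accept()
```

Every concurrent client costs a full process here, which is exactly the context-switching overhead described above.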
Nginx adopts the epoll model, which is asynchronous and non-blocking. Nginx splits the handling of a complete connection into events, one event at a time: accept(), recv(), disk I/O, send(), and so on, each handled by a corresponding module. A complete request may pass through hundreds of modules. The real core is the event collection and distribution module, which manages and schedules all the others.
Only when the core module schedules it does a given module get CPU time to process its part of a request. Take an HTTP request as an example. First, the listening event of interest is registered with the event collection and distribution module. Registration returns immediately without blocking, and the process need not worry about it again: when a connection arrives, the kernel (via epoll) notifies the process, and in the meantime the CPU is free to do other work.
Once a request comes in, a context is assigned to it (in practice, allocated in advance), and a new event of interest, the read event, is registered. Likewise, when client data arrives, the kernel automatically notifies the process that the data is readable. After reading and parsing the data, the server fetches the resource from disk (I/O); once the I/O completes, the process is notified and begins sending the data back to the client with send(). This call does not block either: after issuing it, the process simply waits for the kernel to report the result.
The entire request is thus divided into many stages, each with modules registered to handle it, all asynchronously and without blocking. "Asynchronous" here means starting an operation without waiting for its result; you are notified automatically when it is done.
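The register-and-return flow above can be sketched with Python's selectors module (backed by epoll on Linux); the handler names accept_conn and read_data are illustrative, not Nginx internals:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # uses epoll on Linux

def accept_conn(server):
    # The kernel told us a connection is waiting, so accept() will not block.
    conn, _ = server.accept()
    conn.setblocking(False)
    # Register interest in the read event and return immediately.
    sel.register(conn, selectors.EVENT_READ, read_data)

def read_data(conn):
    data = conn.recv(4096)  # the socket is ready, so this does not block
    if data:
        conn.sendall(b"echo: " + data)
    else:
        sel.unregister(conn)
        conn.close()

def run_once():
    # One pass of the event loop: the kernel reports which sockets are
    # ready, and we dispatch to the handler registered for each one.
    for key, _ in sel.select(timeout=1):
        key.data(key.fileobj)
```

A single process can interleave many connections this way, because no handler ever sleeps waiting for I/O.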
Here is an analogy found online:
A simple restaurant analogy illustrates Apache's workflow. In this restaurant, one waiter serves a customer from start to finish: the waiter waits for a guest at the door (listen), greets the guest and seats them (accept), waits for them to order (read the request URI), takes the order to the kitchen (disk I/O), waits for the kitchen to finish cooking (read), and then serves the dishes (send). The waiter (process) blocks at many points along the way.
So when more guests arrive (more HTTP requests), the restaurant can only hire more waiters (fork more processes). But since the restaurant's resources (CPU) are limited, once there are too many waiters the cost of managing them (CPU context switching) becomes very high, and the restaurant hits a bottleneck.
How does Nginx handle it? Hang a doorbell at the door (register the listening socket with epoll). When a guest (HTTP request) arrives, a waiter goes to receive them (accept), then immediately moves on to other things (such as greeting the next guest) instead of standing by. When the guest has decided on their order, they ring for the waiter (data is ready for read()); the waiter takes the order to the kitchen (disk I/O) and again goes off to do other work. When the kitchen is done, it calls the waiter (disk I/O complete), and the waiter serves the dishes (send()), bringing each dish out as soon as it is ready and doing other things in between.
The whole process is divided into many stages, each with its own service module. Organized this way, the restaurant can serve far more guests with the same staff.
The above is the detailed content of "Why nginx is faster than Apache", originally published on the PHP Chinese website.
