I have several servers: A, B, C, D, E... Among them, only A can reach the external network; the others cannot, so they go out through A using the http_proxy method. The main purpose is to call a partner's interface, for example http://api.xxx.com/get/user, at a rate of roughly a few hundred requests per second.
Over the past few days I have been trying to see whether I can keep a long-lived connection between A and the partner's server (call it Z). After modifying and testing, the header returned by Z already shows HTTP/1.1 with Connection: keep-alive, but the number of sockets in TIME_WAIT is still very high. In addition, in A's log, nginx's $connection variable keeps increasing. Why is this?
The following is the proxy_pass configuration on A:
server {
    resolver 10.10.2.118;
    listen 1080;
    error_log /var/log/nginx/proxy.error.log error;
    access_log /var/log/nginx/proxy.access.log proxy_access;

    location / {
        proxy_pass http://$host$request_uri;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection "keep-alive";
        proxy_http_version 1.1;
        proxy_ignore_client_abort on;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
        proxy_send_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        allow 10.0.0.0/8;
        deny all;
    }
}
Is it a problem with my configuration? I don't really understand networking and I'm mostly fumbling around, so I'd appreciate advice from anyone who knows this area. Thank you.
I saw this question forwarded from the official blog, so I came over to take a look. I don't know whether you've already solved it yourself.
First, about your configuration: I don't know where it came from, but it's loaded with a pile of proxy_* parameters. Take your proxy_pass as an example.
$host$request_uri

This points back at your original address. Suppose a client accesses your machine A (assume it is 10.0.0.100) at 10.0.0.100/api/user: then $host is 10.0.0.100 and $request_uri is /api/user, so something weird happens here, namely a request to A gets proxied right back to A. It's late and I'm a bit sleepy, so I won't keep explaining what happens in that situation; I'll just tell you the simple solution.

---------------- The deciding factor is upstream ----------------
Here, substitute the external IP you actually need to reach for 123.123.123.123 in the sketch below.
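The configuration from the original reply isn't reproduced here, so the following is only a minimal sketch of the upstream approach. The upstream name partner_api is made up, the partner is assumed to listen on port 80, and Host is set to the api.xxx.com domain from the question:

upstream partner_api {
    server 123.123.123.123:80;
    # number of idle long-lived connections cached per worker; explained below
    keepalive 10;
}

server {
    listen 1080;

    location / {
        proxy_pass http://partner_api;
        # upstream keepalive needs HTTP/1.1 and an empty Connection header,
        # otherwise nginx closes the upstream connection after every request
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host api.xxx.com;
    }
}

Compared with proxy_pass http://$host$request_uri, the named upstream is what lets nginx reuse connections, because the keepalive cache is tied to the upstream block.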
Also, if you don't know what a parameter means, don't add it; it may not suit your business scenario.
Here I'll focus on explaining the keepalive parameter in upstream. We know that in the HTTP/1.1 specification there is no request identifier like in HTTP/2, so a keep-alive TCP connection can only carry one HTTP request at a time; the next request cannot be sent until the previous one has returned. This is also why high-performance web servers (such as Tengine) merge static files such as JS into a single file: it saves round trips and connections.
Why did I use the number 10 there? Assume a request to the external host takes 100 ms to go out and come back; a single TCP connection can then complete 10 requests per second, so to meet your requirement of about 100 requests per second you need roughly 10 long connections. That's the rough arithmetic. In practice you can set this number somewhat larger than the calculated value, and stability may be better.
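Written as comments on the hypothetical upstream from the sketch above (16 is just an arbitrary example of "somewhat larger than the calculated 10"):

upstream partner_api {
    server 123.123.123.123:80;
    # per connection: 1000 ms / 100 ms round trip = ~10 requests per second
    # required:       100 req/s / 10 req/s per connection = ~10 connections
    keepalive 16;   # a little headroom above the calculated 10
}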
If you have any questions, leave a message
A question: if the backend servers are dynamic in this case, how can upstream's keepalive be used to keep long connections between nginx and the backend? Thanks!