1. Install nginx
1. Download nginx
# wget http://nginx.org/download/nginx-1.2.4.tar.gz
2. Download tcp module patch
# wget https://github.com/yaoweibin/nginx_tcp_proxy_module/tarball/master
Source code homepage: https://github.com/yaoweibin/nginx_tcp_proxy_module
3. Install nginx
# tar xvf nginx-1.2.4.tar.gz
# tar xvf yaoweibin-nginx_tcp_proxy_module-v0.4-45-ga40c99a.tar.gz
# cd nginx-1.2.4
# patch -p1 < ../yaoweibin-nginx_tcp_proxy_module-a40c99a/tcp.patch
# ./configure --prefix=/usr/local/nginx --with-pcre=../pcre-8.30 --add-module=../yaoweibin-nginx_tcp_proxy_module-a40c99a/
# make
# make install
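Note that the configure line points --with-pcre at ../pcre-8.30, which the steps above do not download. Assuming a pcre-8.30.tar.gz obtained from the PCRE project's download area has been placed alongside the other tarballs, the missing step is simply:
# tar xvf pcre-8.30.tar.gz    # unpack next to nginx-1.2.4 so that ../pcre-8.30 resolves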
2. Modify the configuration file
Modify the nginx.conf configuration file
# cd /usr/local/nginx/conf
# vim nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

tcp {
    upstream mssql {
        server 10.0.1.201:1433;
        server 10.0.1.202:1433;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 1433;
        server_name 10.0.1.212;
        proxy_pass mssql;
    }
}
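Before starting nginx it is worth validating the new configuration; a minimal check, assuming the install prefix used above:
# /usr/local/nginx/sbin/nginx -t
nginx -t parses nginx.conf and reports whether the syntax is valid without actually starting the server, so a typo in the tcp block shows up here rather than at start-up.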
3. Start nginx
# cd /usr/local/nginx/sbin/
# ./nginx
Check that port 1433 is listening:
# lsof -i :1433
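If the configuration is changed later, the running instance can be reloaded or stopped with nginx's standard signal helper (same binary, same prefix):
# /usr/local/nginx/sbin/nginx -s reload
# /usr/local/nginx/sbin/nginx -s stop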
4. Test
# telnet 10.0.1.201 1433
5. Test with a SQL Server client tool
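For example, with Microsoft's sqlcmd command-line client the connection can be pointed at the nginx listener (10.0.1.212:1433 from the configuration above) rather than at a database host directly; the login and query below are placeholders, not values from this setup:
# sqlcmd -S 10.0.1.212,1433 -U sa -P 'YourPassword' -Q "SELECT @@SERVERNAME"
Running this a few times and watching @@SERVERNAME change is a quick way to see connections being spread across 10.0.1.201 and 10.0.1.202.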
6. How TCP load balancing works
When nginx accepts a new client connection on the listening port, it immediately runs the scheduling algorithm, obtains the IP of the server that should handle the connection, and then opens a new upstream connection to that server.
PS: Upstream health monitoring
The TCP load balancing module has built-in health checking. If an upstream server refuses a TCP connection for longer than the configured proxy_connect_timeout, it is considered failed. In that case nginx immediately tries to connect to another healthy server in the upstream group, and the connection failure is recorded in the nginx error log.
If a server fails repeatedly (exceeding the limits configured by max_fails or fail_timeout), nginx also takes it out of rotation. Once the server has been out of rotation for 60 seconds, nginx will occasionally try to reconnect to it to check whether it has recovered. If it has, nginx adds it back to the upstream group and slowly increases the proportion of connections sent to it.
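As a rough illustration of where these knobs live, the tcp block might be extended as below. The exact parameter syntax accepted by this third-party module varies between versions, so treat the directive names and values here as assumptions to verify against the module's README rather than a known-good configuration:
tcp {
    upstream mssql {
        server 10.0.1.201:1433 max_fails=3 fail_timeout=30s;   # assumed parameter syntax
        server 10.0.1.202:1433 max_fails=3 fail_timeout=30s;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }
    server {
        listen 1433;
        server_name 10.0.1.212;
        proxy_connect_timeout 1s;    # give up on an unresponsive backend after one second
        proxy_pass mssql;
    }
}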
The reason for ramping up slowly is that a service usually has "hot data": more than 80% (often far more) of requests are actually answered from the hot-data cache, and only a small fraction are genuinely processed. When a machine has just started, that cache has not been built yet, so suddenly forwarding a flood of requests to it can easily overwhelm the machine and bring it down again. Taking MySQL as an example, usually more than 95% of our MySQL queries hit the in-memory cache, and only a few are actually executed against storage.
In fact, whether it is a single machine or a cluster, restarting or failing over under highly concurrent traffic carries this risk. There are two main ways to deal with it:
(1) Increase the traffic gradually, from few requests to many, so that hot data accumulates until the service reaches its normal state.
(2) Prepare the "commonly used" data in advance to actively "warm up" the service, and only open the server to traffic after the warm-up completes, as sketched below.
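A warm-up can be as simple as replaying the most common queries against a freshly started backend before putting it back into the upstream group. A rough sketch, using the MySQL example from the previous paragraph; the host, credentials, database, and query list are all hypothetical:
#!/bin/sh
# warm_up.sh - hypothetical cache warm-up run before re-enabling a backend
HOST=10.0.1.201
DB=appdb
for q in "SELECT * FROM hot_table_1 LIMIT 1000" \
         "SELECT * FROM hot_table_2 LIMIT 1000"
do
    # each query pulls its working set into the database's memory cache
    mysql -h "$HOST" -u app -p"$APP_PASSWORD" "$DB" -e "$q" > /dev/null
done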
TCP load balancing works on the same principle as LVS: it operates at a lower layer than HTTP load balancing, so its performance is much higher. However, it is not better than LVS, because LVS runs as a kernel module while nginx works in user space and is comparatively heavyweight. Another regrettable point is that this module is a paid feature.