This article focuses on how to communicate non-blockingly with backend memcached and redis through ngx_lua.
1. Memcached
Accessing memcached from Nginx requires module support. Here HttpMemcModule is chosen; this module communicates with the backend memcached non-blockingly. Note that the memcached module that ships with Nginx only supports the get operation, whereas Memc supports most memcached commands.
Memc takes its parameters from variables: everything prefixed with $memc_ is an input variable of Memc, and memc_pass points to the backend memcached server.
Configuration:
#Use HttpMemcModule
location = /memc {
    set $memc_cmd $arg_cmd;
    set $memc_key $arg_key;
    set $memc_value $arg_val;
    set $memc_exptime $arg_exptime;
    memc_pass '127.0.0.1:11211';
}
Output:
$ curl 'http://localhost/memc?cmd=set&key=foo&val=Hello'
STORED
$ curl 'http://localhost/memc?cmd=get&key=foo'
Hello
This gives us access to memcached. Now let's look at how to access memcached from Lua.
Configuration:
#Access memcached from Lua
location = /memc {
    internal;  #only internal access
    set $memc_cmd get;
    set $memc_key $arg_key;
    memc_pass '127.0.0.1:11211';
}

location = /lua_memc {
    content_by_lua '
        local res = ngx.location.capture("/memc", {
            args = { key = ngx.var.arg_key }
        })
        if res.status == 200 then
            ngx.say(res.body)
        end
    ';
}
Output:
$ curl 'http://localhost/lua_memc?key=foo'
Hello
Accessing memcached through Lua is implemented with sub-requests, in a style similar to function calls. First, a /memc location is defined for communicating with the backend memcached; it acts as the memcached storage interface. Since the Memc module is entirely non-blocking and ngx.location.capture is also non-blocking, the whole operation is non-blocking.
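The same sub-request pattern can issue writes as well as reads. Below is a sketch of a set operation through a companion internal location; the /memc_set and /lua_memc_set location names are assumptions for illustration, not part of the configuration above, and the 201 status check reflects Memc's usual "201 Created" response to a successful storage command:

```nginx
#Hypothetical companion location: wires the full command set into Memc
location = /memc_set {
    internal;
    set $memc_cmd set;
    set $memc_key $arg_key;
    set $memc_value $arg_val;
    memc_pass '127.0.0.1:11211';
}

location = /lua_memc_set {
    content_by_lua '
        -- store the value via a non-blocking sub-request
        local res = ngx.location.capture("/memc_set", {
            args = { key = ngx.var.arg_key, val = ngx.var.arg_val }
        })
        -- Memc typically answers a successful set with 201 Created
        if res.status == 201 then
            ngx.say("STORED")
        end
    ';
}
```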
2. Redis
Accessing redis requires the support of HttpRedis2Module, which can also communicate with redis in a non-blocking manner. However, the response of redis2 is redis's raw response, so when used from Lua this response needs to be parsed. LuaRedisModule can be used for this: it constructs redis's raw requests and parses redis's raw responses.
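To make the parsing step concrete, here is a sketch of the parser API in isolation, based on the lua-redis-parser interface used later in this article; the exact byte strings shown are the redis unified protocol and should be treated as illustrative:

```lua
local parser = require("redis.parser")

-- build_query turns a Lua table into a raw redis request,
-- e.g. {"get", "foo"} becomes "*2\r\n$3\r\nget\r\n$3\r\nfoo\r\n"
local raw = parser.build_query({"get", "foo"})

-- parse_reply turns a raw redis response back into a Lua value
-- plus a type constant (e.g. parser.BULK_REPLY, parser.ERROR_REPLY)
local reply, typ = parser.parse_reply("$5\r\nHello\r\n")
-- for a bulk reply, reply is the plain string, here "Hello"
```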
Configuration:
#Access redis from Lua
location = /redis {
    internal;  #only internal access
    redis2_query get $arg_key;
    redis2_pass '127.0.0.1:6379';
}

location = /lua_redis {  #requires LuaRedisParser
    content_by_lua '
        local parser = require("redis.parser")
        local res = ngx.location.capture("/redis", {
            args = { key = ngx.var.arg_key }
        })
        if res.status == 200 then
            local reply = parser.parse_reply(res.body)
            ngx.say(reply)
        end
    ';
}
Output:
$ curl 'http://localhost/lua_redis?key=foo'
Hello
Similar to accessing memcached, a dedicated /redis location serves as the redis storage interface, and redis is then queried through sub-requests.
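Several keys can also be fetched in parallel by issuing the sub-requests together with ngx.location.capture_multi; a sketch against the /redis location above, with the keys "one" and "two" as placeholders:

```lua
local parser = require("redis.parser")

-- issue both sub-requests concurrently; capture_multi returns
-- once all of them have completed
local res1, res2 = ngx.location.capture_multi({
    { "/redis", { args = { key = "one" } } },
    { "/redis", { args = { key = "two" } } },
})
if res1.status == 200 then
    ngx.say(parser.parse_reply(res1.body))
end
if res2.status == 200 then
    ngx.say(parser.parse_reply(res2.body))
end
```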
3. Redis Pipeline
When actually accessing redis, it is sometimes necessary to query multiple keys at once. We can use ngx.location.capture_multi to send multiple sub-requests to the redis storage interface and then parse the responses. There is a limit, however: the Nginx core stipulates that no more than 50 sub-requests may be issued at a time, so this approach no longer works once the number of keys exceeds 50. Fortunately, redis provides a pipeline mechanism that executes multiple commands over a single connection, which reduces the round-trip delay of issuing the commands one by one. After the client sends multiple commands through the pipeline, redis receives and executes them in order, then outputs their results in the same order. To use a pipeline from Lua, the redis2 module's redis2_raw_queries directive is needed to issue raw redis queries.
Configuration:
#Access redis via pipeline in Lua
location = /redis {
    internal;  #only internal access
    redis2_raw_queries $args $echo_request_body;
    redis2_pass '127.0.0.1:6379';
}

location = /pipeline {
    content_by_lua_file conf/pipeline.lua;
}
pipeline.lua:
-- conf/pipeline.lua file
local parser = require('redis.parser')
local reqs = {
    {'get', 'one'}, {'get', 'two'}
}
-- construct the raw redis queries: "get one\r\nget two\r\n"
local raw_reqs = {}
for i, req in ipairs(reqs) do
    table.insert(raw_reqs, parser.build_query(req))
end
local res = ngx.location.capture('/redis?' .. #reqs, { body = table.concat(raw_reqs, '') })

if res.status and res.body then
    -- parse the raw redis responses
    local replies = parser.parse_replies(res.body, #reqs)
    for i, reply in ipairs(replies) do
        ngx.say(reply[1])
    end
end
Output:
$ curl 'http://localhost/pipeline'
first
second
4. Connection Pool
In the previous examples of accessing redis and memcached, a connection to the backend server is established every time a request is processed, and the connection is released once the request completes. This process incurs overhead such as the TCP three-way handshake and TIME_WAIT, which is intolerable for high-concurrency applications. A connection pool is introduced here to eliminate this overhead.
The connection pool requires the support of the HttpUpstreamKeepaliveModule module.
Configuration:
http {  #requires HttpUpstreamKeepaliveModule
    upstream redis_pool {
        server 127.0.0.1:6379;
        #a pool that can hold up to 1024 connections
        keepalive 1024 single;
    }

    server {
        location = /redis {
            ...
            redis2_pass redis_pool;
        }
    }
}
This module provides the keepalive directive, whose context is upstream. We know that upstream is used when Nginx acts as a reverse proxy; "upstream" here simply refers to the upstream servers, which can be redis, memcached, mysql, and so on. An upstream defines a virtual server cluster, and the backend servers in it can be load balanced. keepalive 1024 defines the size of the connection pool; when the number of connections exceeds this size, subsequent connections degrade to short-lived connections. Using the connection pool is very simple: just replace the original IP and port number with the pool name.
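For completeness, the same pattern applied to the memcached setup from section 1 might look like the sketch below; the pool name memc_pool is an assumption for illustration:

```nginx
http {
    upstream memc_pool {
        server 127.0.0.1:11211;
        #keep up to 1024 idle connections to memcached alive
        keepalive 1024 single;
    }

    server {
        location = /memc {
            set $memc_cmd $arg_cmd;
            set $memc_key $arg_key;
            set $memc_value $arg_val;
            #point Memc at the pool instead of an ip:port
            memc_pass memc_pool;
        }
    }
}
```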
Someone once measured that, without a connection pool, accessing memcached (using the Memc module above) reached about 20,000 rps; after introducing the connection pool, rps soared to 140,000. In practice such a large improvement may not be achievable, but a 100-200% improvement is still realistic.
5. Summary
Here is a summary of accessing memcached and redis:
1. Nginx offers a powerful programming model: a location is equivalent to a function, a sub-request is equivalent to a function call, and a location can even send sub-requests to itself, forming a recursive model. This model can therefore be used to implement complex business logic.
2. Nginx's IO operations must be non-blocking; if Nginx blocks, its performance drops sharply. Therefore, in Lua, these IO operations must be delegated to Nginx's event model by issuing sub-requests through ngx.location.capture.
3. When TCP connections are needed, use a connection pool whenever possible. This eliminates much of the overhead of establishing and releasing connections.
The above introduced the use of ngx_lua to build high-concurrency applications. I hope it is helpful to interested readers.