
Introduction to redis performance analysis and monitoring solutions


1. redis slowlog analysis

SLOWLOG subcommand [argument]

The following are the slow query configuration parameters in redis.conf:

slowlog-log-slower-than 10000     # queries taking longer than 10000 microseconds (10ms) are logged
slowlog-max-len 128     # keep at most 128 slow query entries

The above parameters can be queried or set dynamically at runtime, using the CONFIG GET and CONFIG SET commands.
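
As a quick sketch with redis-cli (port 6319 as in the examples below; adjust to your instance):

redis-cli -p 6319 config get slowlog-log-slower-than     # read the current threshold
redis-cli -p 6319 config set slowlog-log-slower-than 5000     # lower the threshold to 5ms (5000 microseconds)
redis-cli -p 6319 config get slowlog-max-len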

Reading slow queries: you can fetch a specified number of slow log entries:

127.0.0.1:6320> slowlog get 1
 1) 1) (integer) 394689     # unique slowlog entry id
    2) (integer) 1480851711     # unix timestamp at which the event occurred
    3) (integer) 10639     # execution time, in microseconds
    4) 1) "HGET"     # the redis command this slowlog event corresponds to
       2) "hash:submit_sent_150004"
       3) "15000429648122734363745165312"

Reset slowlog statistics:

SLOWLOG RESET

Analysis of slow redis commands:

1. Redis uses a single thread to process client requests. The consequence is that when one request is served slowly, all other clients must wait for it to finish. If you need to execute many slow commands, it is recommended to run the slow queries on a redis slave.

2. About the KEYS command: a very common source of latency is the use of the KEYS command in a production environment. As stated in the Redis documentation, KEYS should only be used for debugging purposes.

Redis 2.8 introduced new commands for incrementally iterating over large collections. See the SCAN, SSCAN, HSCAN and ZSCAN commands for more information.

  • SCAN iterates over the set of keys in the currently selected Redis database.

  • SSCAN iterates over the elements of set types.

  • HSCAN iterates over fields of hash types and their associated values.

  • ZSCAN iterates over elements of a sorted set type and their associated scores.

Since these commands allow incremental iteration, returning only a small number of elements per call, they can be used in production without the drawbacks of commands like KEYS or SMEMBERS, which may block the server for a long time (even seconds) when called against large collections of keys or elements.

2. How to use the SCAN, SSCAN, HSCAN and ZSCAN commands

SCAN is a cursor-based iterator. This means that on each call to the command, the server returns an updated cursor that the user needs to use as the cursor parameter on the next call.

Iteration begins when the cursor is set to 0, and terminates when the cursor returned by the server is 0. The following is an example of SCAN iteration:

127.0.0.1:6319> scan 0
1) "65536"
2)  1) "list:submit_sent_2016-12-02-13:50_130806"
    2) "list:submit_sent_2016-12-01-15:01_130479"
    3) "list:submit_sent_2016-12-01-13:21_178888"
    4) "list:submit_sent_2016-12-02-10:46_186064"
    5) "list:submit_sent_2016-12-01-16:48_135546"
    6) "list:submit_sent_2016-12-02-14:27_185158"
    7) "list:submit_sent_2016-12-02-09:57_186532"
    8) "list:submit_sent_2016-12-01-13:24_183217"
    9) "list:submit_sent_2016-12-02-08:29_189316"
   10) "list:submit_sent_2016-12-01-13:46_177645"
127.0.0.1:6319> scan 65536
1) "884736"
2)  1) "list:submit_sent_2016-12-01-17:55_175010"
    2) "list:submit_sent_2016-12-02-18:28_138052"
    3) "list:submit_sent_2016-12-01-18:17_150243"
    4) "list:submit_sent_2016-12-01-11:22_137606"
    5) "list:submit_sent_2016-12-01-21:15_183748"
    6) "list:submit_sent_2016-12-02-11:47_155212"
    7) "list:submit_sent_2016-12-01-11:01_137065"
    8) "list:submit_sent_2016-12-02-08:03_181202"
    9) "list:submit_sent_2016-12-02-12:16_136096"
   10) "list:submit_sent_2016-12-01-18:12_159893"

Starting an iteration with the cursor value 0 and calling SCAN until the returned cursor is 0 again is called a full iteration.
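
For illustration, a full iteration can be scripted with a shell loop around redis-cli (a sketch; it relies on redis-cli's raw output when piped, which puts the updated cursor on the first line and the keys on the following lines):

#!/bin/bash
# Perform a full SCAN iteration: start at cursor 0 and stop when the cursor returns to 0.
cursor=0
while : ; do
    reply=$(redis-cli -p 6319 scan "$cursor" count 100)
    cursor=$(echo "$reply" | head -n 1)     # first line of the reply is the updated cursor
    echo "$reply" | tail -n +2              # the remaining lines are this batch of keys
    [ "$cursor" = "0" ] && break
done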

The COUNT option of SCAN: a hint for how many elements to return per call (the server may return slightly more or fewer):

127.0.0.1:6319> scan 0 count 20
1) "884736"
2)  1) "list:submit_sent_2016-12-02-13:50_130806"
    2) "list:submit_sent_2016-12-01-15:01_130479"
    3) "list:submit_sent_2016-12-01-13:21_178888"
    4) "list:submit_sent_2016-12-02-10:46_186064"
    5) "list:submit_sent_2016-12-01-16:48_135546"
    6) "list:submit_sent_2016-12-02-14:27_185158"
    7) "list:submit_sent_2016-12-02-09:57_186532"
    8) "list:submit_sent_2016-12-01-13:24_183217"
    9) "list:submit_sent_2016-12-02-08:29_189316"
   10) "list:submit_sent_2016-12-01-13:46_177645"
   11) "list:submit_sent_2016-12-01-17:55_175010"
   12) "list:submit_sent_2016-12-02-18:28_138052"
   13) "list:submit_sent_2016-12-01-18:17_150243"
   14) "list:submit_sent_2016-12-01-11:22_137606"
   15) "list:submit_sent_2016-12-01-21:15_183748"
   16) "list:submit_sent_2016-12-02-11:47_155212"
   17) "list:submit_sent_2016-12-01-11:01_137065"
   18) "list:submit_sent_2016-12-02-08:03_181202"
   19) "list:submit_sent_2016-12-02-12:16_136096"
   20) "list:submit_sent_2016-12-01-18:12_159893"

The MATCH option of SCAN: pattern matching similar to the KEYS command; just append the MATCH <pattern> arguments at the end of the SCAN command:

127.0.0.1:6319> scan 0 match *submit_sent*
1) "65536"
2)  1) "list:submit_sent_2016-12-02-13:50_130806"
    2) "list:submit_sent_2016-12-01-15:01_130479"
    3) "list:submit_sent_2016-12-01-13:21_178888"
    4) "list:submit_sent_2016-12-02-10:46_186064"
    5) "list:submit_sent_2016-12-01-16:48_135546"
    6) "list:submit_sent_2016-12-02-14:27_185158"
    7) "list:submit_sent_2016-12-02-09:57_186532"
    8) "list:submit_sent_2016-12-01-13:24_183217"
    9) "list:submit_sent_2016-12-02-08:29_189316"
   10) "list:submit_sent_2016-12-01-13:46_177645"
127.0.0.1:6319> scan 0 count 15 match *submit_sent*     # find the matching keys, returning 15 per call
1) "2031616"
2)  1) "list:submit_sent_2016-12-02-13:50_130806"
    2) "list:submit_sent_2016-12-01-15:01_130479"
    3) "list:submit_sent_2016-12-01-13:21_178888"
    4) "list:submit_sent_2016-12-02-10:46_186064"
    5) "list:submit_sent_2016-12-01-16:48_135546"
    6) "list:submit_sent_2016-12-02-14:27_185158"
    7) "list:submit_sent_2016-12-02-09:57_186532"
    8) "list:submit_sent_2016-12-01-13:24_183217"
    9) "list:submit_sent_2016-12-02-08:29_189316"
   10) "list:submit_sent_2016-12-01-13:46_177645"
   11) "list:submit_sent_2016-12-01-17:55_175010"
   12) "list:submit_sent_2016-12-02-18:28_138052"
   13) "list:submit_sent_2016-12-01-18:17_150243"
   14) "list:submit_sent_2016-12-01-11:22_137606"
   15) "list:submit_sent_2016-12-01-21:15_183748"
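
Note that redis-cli can also drive this iteration for you: its --scan/--pattern options use SCAN under the hood, making them a production-safe substitute for KEYS. A sketch (the batch size passed to xargs is arbitrary):

# List matching keys incrementally, without blocking the server the way KEYS would:
redis-cli -p 6319 --scan --pattern '*submit_sent*'
# Act on the matches, e.g. delete them in batches of 100 keys per DEL call:
redis-cli -p 6319 --scan --pattern '*submit_sent*' | xargs -L 100 redis-cli -p 6319 del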

Querying a set with SSCAN:

redis 127.0.0.1:6379> sadd myset 1 2 3 foo foobar feelsgood
(integer) 6
redis 127.0.0.1:6379> sscan myset 0 match f*
1) "0"
2) 1) "foo"
   2) "feelsgood"
   3) "foobar"
redis 127.0.0.1:6379>

Querying a hash with HSCAN:

redis 127.0.0.1:6379> hmset hash name Jack age 33
OK
redis 127.0.0.1:6379> hscan hash 0
1) "0"
2) 1) "name"
   2) "Jack"
   3) "age"
   4) "33"
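
ZSCAN, which is listed above but not shown yet, works the same way on sorted sets; the reply interleaves each member with its score. An illustrative session:

redis 127.0.0.1:6379> zadd myzset 1 one 2 two 3 three
(integer) 3
redis 127.0.0.1:6379> zscan myzset 0
1) "0"
2) 1) "one"
   2) "1"
   3) "two"
   4) "2"
   5) "three"
   6) "3"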

When the Linux kernel has the transparent huge pages (THP) feature enabled, Redis incurs a large latency penalty when it forks to persist data to disk.

You can disable this kernel feature as follows:

echo never > /sys/kernel/mm/transparent_hugepage/enabled

Note: redis needs to be restarted for this to take effect.
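
To check the current state and make the setting survive a reboot (the latency doctor output later in this article suggests /etc/rc.local; this assumes your distribution executes rc.local at boot):

cat /sys/kernel/mm/transparent_hugepage/enabled     # shows e.g. [always] madvise never
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local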

3. Check whether redis is affected by the system's use of swap:

Find the redis process id:
redis-cli -p 6319 info | grep process_id
process_id:32139
Inspect the memory usage of the redis process:
cd /proc/32139
Check the process's swap usage statistics; ideally no swap is used, or only a few 4 kB entries:
cat smaps | grep 'Swap:'
Print mapping sizes together with swap usage, to check whether the larger memory mappings are causing heavy swap usage:
cat smaps | egrep '^(Swap|Size)'
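
Instead of eyeballing the per-mapping entries, the Swap fields can be summed with awk; a sketch using pid 32139 from the example above:

awk '/^Swap:/ {total += $2} END {print total+0 " kB of swap used"}' /proc/32139/smaps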

4. Use the redis software watchdog to locate latency: this is an experimental feature, so make sure your redis data is backed up first.

Redis software watchdog
The feature can only be enabled at runtime, using the following command:
CONFIG SET watchdog-period 500
Note: redis will then start monitoring its own latency frequently and write the problems it finds to the log file.
 
Disable the watchdog:
CONFIG SET watchdog-period 0
 
Note: enabling the watchdog has an impact on redis performance.
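
A quick check of the current watchdog setting (a sketch, assuming the parameter is exposed through CONFIG GET; 0 means disabled):

redis-cli -p 6319 config get watchdog-period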

5. About redis latency monitoring framework


How to enable the latency monitoring framework:

CONFIG SET latency-monitor-threshold 100

By default the threshold is set to 0, which disables latency monitoring. Enabling the monitor adds very little overhead to redis; however, there is no need to turn it on for an instance that is running well.
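
To verify whether monitoring is currently enabled on a running instance (a quick sketch):

redis-cli -p 6319 config get latency-monitor-threshold     # a value of 0 means monitoring is disabled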

How to use the LATENCY command

View the latest latency events:

127.0.0.1:6319> latency latest
1) 1) "command"     #event name
   2) (integer) 1480865648     #发生时间
   3) (integer) 207     #耗时,毫秒
   4) (integer) 239     #从redis启动或上次latency reset以来,这种事件的最大延时记录

View the history of a latency event:

LATENCY HISTORY event-name

For a given event, the command returns up to 160 elements. Applications may want the raw data in order to perform their own monitoring and display graphs.

127.0.0.1:6319> latency history command
  1) 1) (integer) 1480865710
     2) (integer) 207
  2) 1) (integer) 1480865711
     2) (integer) 217
  3) 1) (integer) 1480865712
     2) (integer) 198
  4) 1) (integer) 1480865713
     2) (integer) 226
  5) 1) (integer) 1480865714
     2) (integer) 224
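
Since the raw reply is just (timestamp, milliseconds) pairs, it can be flattened into CSV lines for external graphing; a sketch that relies on redis-cli printing one integer per line when its output is piped:

redis-cli -p 6319 latency history command | paste - - | awk '{print $1 "," $2}'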

Reset statistical data to zero:

LATENCY RESET [event-name ... event-name]

Display latency events as a text chart:

LATENCY GRAPH event-name
127.0.0.1:6379> latency graph command
command - high 500 ms, low 101 ms (all time high 500 ms)
--------------------------------------------------------------------------------
   #_
  _||
 _|||
_||||
 
11186
542ss
sss

Note: each column represents a latency event; the vertical labels underneath show how long before the current time the event occurred, such as 2m or 38s. Among the recorded samples, the minimum latency is drawn as a short underscore and the maximum as a # at the top of its column, so the graph reflects the trend of the latency events over time.

LATENCY DOCTOR: intelligent analysis of, and suggestions based on, the latency event statistics:

127.0.0.1:6379> latency doctor
Dave, I have observed latency spikes in this Redis instance.
You don't mind talking about it, do you Dave?
1. command: 5 latency spikes (average 300ms, mean deviation 120ms,
  period 73.40 sec). Worst all time event 500ms.
I have a few advices for you:
- Your current Slow Log configuration only logs events that are
  slower than your configured latency monitor threshold. Please
  use 'CONFIG SET slowlog-log-slower-than 1000'.
- Check your Slow Log to understand what are the commands you are
  running which are too slow to execute. Please check
  http://redis.io/commands/slowlog for more information.
- Deleting, expiring or evicting (becaus
127.0.0.1:6320> latency doctor
I have a few advices for you:
- I detected a non zero amount of anonymous huge pages used by your process. This creates very serious latency events in different conditions, especially 
when Redis is persisting on disk. To disable THP support use the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled', make sure to also 
add it into /etc/rc.local so that the command will be executed again after a reboot. Note that even if you have already disabled THP, you still need to
 restart the Redis process to get rid of the huge pages already created.
