
A Brief Guide to pt-query-digest (Percona Toolkit)

pt-query-digest can analyze MySQL queries from logs, the processlist, and tcpdump captures. The basic syntax is as follows:
pt-query-digest [OPTIONS] [FILES] [DSN]

pt-query-digest is a simple, easy-to-use tool for analyzing MySQL queries. It can analyze queries from the slow log, the general log, and the binary log (binary logs must first be converted to text with the mysqlbinlog tool). It also works with SHOW PROCESSLIST output and with MySQL protocol data captured by tcpdump. By default, the tool reports which queries are the slowest and therefore the most important to optimize. More customized reports can be produced with options such as --group-by, --filter, and --embedded-attributes.
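For example, the report can be grouped by the tables a query touches instead of by the default query fingerprint (a sketch; tables and distill are alternative grouping attributes accepted by --group-by):
pt-query-digest --group-by tables slow.log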
The main ways to use pt-query-digest are the following:
(1) Use slow.log to generate statistical information:
pt-query-digest slow.log

(2) Analyze and generate reports from processlist:
pt-query-digest --processlist h=host1

(3) Analyze queries captured with tcpdump:
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt

(4) Save slow-log analysis results to a table on another host, without printing a report:
pt-query-digest --review h=host2 --no-report slow.log

Let’s take a look at the main parameters:
--type The type of input to parse; defaults to slowlog. The value can also be binlog, genlog, tcpdump, rawlog, and so on.
--processlist Poll SHOW FULL PROCESSLIST on the given host instead of reading a log file.
--create-review-table When writing results to a table with --review, create the table automatically if it does not exist.
--create-history-table When writing results to a table with --history, create the table automatically if it does not exist.
--filter Only analyze events that match the given filter expression; the input is filtered before the analysis runs.
--limit Limit the output to a percentage or a count (the default is 95%:20). With a count such as 20, the 20 slowest statements are printed; with a percentage such as 50%, queries are sorted by total response time in descending order and output stops once the running total reaches 50%.
--host MySQL server address.
--user MySQL user name.
--password MySQL user password.
--history Save the analysis results to a table; the results are more detailed than with --review. On a later run with --history, if the same statement appears but its time range differs from what is already in the history table, a new row is written, so you can compare how a query class (identified by its checksum) changes over time.
--review Save the analysis results to a table. The query is reduced to a parameterized fingerprint, one row per query class, which keeps the table simple. On a later run with --review, a statement that has already been analyzed is not recorded again.
--output Output format of the analysis; values include report (the standard analysis report), slowlog (MySQL slow-log format), json, and json-anon. report is usually the easiest to read.
--since Where to start the analysis. The value is either a point in time in the format "yyyy-mm-dd [hh:mm:ss]" or a relative value with a unit suffix: s (seconds), m (minutes), h (hours), d (days). For example, 12h means start from 12 hours ago.
--until End time; combined with --since it restricts the analysis to queries within a time window (see the example after this list).
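As a combined example, the following sketch reports only the 10 most expensive query classes from the last 24 hours of a slow log (the log path is the one used later in this article):
pt-query-digest --type=slowlog --since=24h --limit=10 /home/mysql/db3306/log/slowlog_343306.log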
Now let's walk through the sections of the default report:
(1) Overall statistics
# 2291.9s user time, 6.4s system time, 41.68M rss, 193.36M vsz
# Current date: Mon Jun 19 11:19:51 2017
# Hostname: mxqmongodb2
# Files: /home/mysql/db3306/log/slowlog_343306.log
# Overall: 6.72M total, 140 unique, 16.12 QPS, 0.69x concurrency _________
# Time range: 2017-06-13T14:34:41 to 2017-06-18T10:22:04
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time        287519s     1us     20s    43ms   148ms   339ms   214us
# Lock time        151259s       0     20s    23ms   144us   319ms    47us
# Rows sent          5.40M       0    1000    0.84    0.99    6.58    0.99
# Rows examine     388.33M       0   3.72k   60.59    5.75  388.16    0.99
# Query size       692.26M       6     799  108.02  202.40   69.96   80.10

This header shows the Hostname, the Overall counts (total and unique queries) and QPS, and the Time range covered by the analysis. The Attribute table has the same columns as the per-query detail in part (3), and is best read together with it.
(2) Query profile: statements ranked by cost
# Profile
# Rank Query ID Response time Calls R/Call V/M Item
# ==== ================== ================= ======= ====== ===== =========
#    1 0x255C57D761A899A9 146053.6926 50.8%   75972 1.9225  2.93 UPDATE warehouse
# 2 0x813031B8BBC3B329 94038.9621 32.7% 242741 0.3874 0.23 COMMIT
# 3 0xA0352AA54FDD5DF2 10125.5055 3.5% 75892 0.1334 0.43 UPDATE order_line
# 4 0xE5E8C12332AD11C5 5660.5113 2.0% 75977 0.0745 0.83 SELECT district
# 5 0xBD195A4F9D50914F 3634.6219 1.3% 757760 0.0048 1.01 SELECT stock
# 6 0xF078A9E73D7A8520 3431.3527 1.2% 75874 0.0452 0.81 UPDATE district
# 7 0x9577D48F480A1260 2307.4342 0.8% 50255 0.0459 1.25 SELECT customer
# 8 0xFFDA79BA14F0A223 2158.4731 0.8% 75977 0.0284 0.54 SELECT customer warehouse
# 9 0x5E61FF668A8E8456 1838.4440 0.6% 1507614 0.0012 0.74 SELECT stock
# 10 0x10BEBFE721A275F6 1671.8274 0.6% 757751 0.0022 0.52 INSERT order_line
# 11 0x8B2716B5B486F6AA 1658.5984 0.6% 75871 0.0219 0.75 INSERT history
# 12 0xBF40A4C7016F2BAE 1504.7939 0.5% 758569 0.0020 0.77 SELECT item
# 13 0x37AEB73B59EFC119 1470.5951 0.5% 2838 0.5182 0.27 INSERT SELECT tpcc._stock_new tpcc.stock
# 15 0x26C4F579BF19956D 1030.4416 0.4% 1982 0.5199 0.28 INSERT SELECT tpcc.__stock_new tpcc.stock
# 22 0xD80B7970DBF2419C 493.0831 0.2% 947 0.5207 0.28 INSERT SELECT tpcc.__stock_new tpcc.stock
# 23 0xDE7EA4E363CAD006 488.2134 0.2% 943 0.5177 0.25 INSERT SELECT tpcc.__stock_new tpcc.stock
# 25 0x985B012461683472 470.6418 0.2% 907 0.5189 0.25 INSERT SELECT tpcc.__stock_new tpcc.stock
# MISC 0xMISC 9482.0467 3.3% 2182254 0.0043 0.0 <123 ITEMS>

The columns are: Response time, the total response time of this query class and its percentage of the whole analysis; Calls, the number of executions, i.e. how many queries of this class were seen in the analysis; R/Call, the average response time per execution; V/M, the variance-to-mean ratio of the response time; and Item, the distilled query (the operation and the table it touches).
(3) Detailed statistics for each query
# Query 1: 1.14 QPS, 2.19x concurrency, ID 0x255C57D761A899A9 at byte 1782619576
# This item is included in the report because it matches --limit.
# Scores: V/M = 2.93
# Time range: 2017-06-13T14:34:42 to 2017-06-14T09:05:56
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count          1   75972
# Exec time     50 146054s   160us     20s      2s      7s      2s      1s
# Lock time     94 142872s    39us     20s      2s      7s      2s   992ms
# Rows sent      0       0       0       0       0       0       0       0
# Rows examine   0  74.19k       1       1       1       1       0       1
# Query size     0   4.05M      53      57   55.88   56.92    0.82   54.21
# String:
# Hosts        127.0.0.1
# Users        root
# Query_time distribution
# 1us
# 10us
# 100us ######################
# 1ms ##
# 10ms ###
# 100ms ##################################
# 1s ################################################################
# 10s+ ##
# Tables
# SHOW TABLE STATUS LIKE 'warehouse'\G
# SHOW CREATE TABLE `warehouse`\G
UPDATE warehouse SET w_ytd = w_ytd + 3651 WHERE w_id = 4\G
# Converted for EXPLAIN
# EXPLAIN /*!50100 PARTITIONS*/select w_ytd = w_ytd + 3651 from warehouse where w_id = 4\G

Query 1 is the top-ranked query by cost. The first row gives the column headings: pct is the percentage of the whole analysis run and total is the actual value of the metric. For example, here we can see that the query was executed 75972 times and accounts for roughly 50% of the total execution time in the log. The min, max and avg columns are self-explanatory. The 95% column shows the 95th percentile: 95% of the values are less than or equal to it. The standard deviation shows how tightly the values are grouped. The standard deviation and median are both calculated from the 95th percentile, discarding the extremely large values.
Now let's look at some common usage patterns:
1: Analyze slow logs
Default report
[root@mxqmongodb2 bin]# ./pt-query-digest /home/mysql/db3306/log/slowlog_343306.log >/home/sa/slowlog_343306.log

We can also split the analysis by time; typically we analyze one day's slow log at a time:
[root@mxqmongodb2 bin]# ./pt-query-digest --since=24h /home/mysql/db3306/log/slowlog_343306.log >/home/sa/slowlog_343306_24.log

We can also set filter conditions with the --filter option to generate exactly the report we want.
For example, to analyze only SELECT statements: --filter '$event->{arg} =~ m/^select/i'; only queries from a given user: --filter '($event->{user} || "") =~ m/^dba/i'; or only full table scans and full joins: --filter '(($event->{Full_scan} || "") eq "yes") || (($event->{Full_join} || "") eq "yes")'
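Putting one of these together, a complete invocation might look like the following (a sketch that reuses the slow-log path from the earlier examples; the output file name is arbitrary):
[root@mxqmongodb2 bin]# ./pt-query-digest --filter '$event->{arg} =~ m/^select/i' /home/mysql/db3306/log/slowlog_343306.log >/home/sa/slowlog_343306_select.log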
2: Save the analysis results to a table:
[root@mxqmongodb2 bin]# ./pt-query-digest --user=root --password=123456 --port=3306 --review h=172.16.16.35,D=test,t=query_report /home/mysql/db3306/log/slowlog_343306.log

 

Let's look at what a result row looks like:
mysql> select * from query_report limit 1\G
*************************** 1. row ***************************
   checksum: 1206612749604517366
fingerprint: insert into order_line (ol_o_id, ol_d_id, ol_w_id, ol_number, ol_i_id, ol_supply_w_id, ol_quantity, ol_amount, ol_dist_info) values(?+)
     sample: INSERT INTO order_line (ol_o_id, ol_d_id, ol_w_id, ol_number, ol_i_id, ol_supply_w_id, ol_quantity, ol_amount, ol_dist_info) VALUES (3730, 6, 10, 1, 6657, 10, 8, 62.41910171508789, 'N3F5fAhga7U51tlXr8AEgZdi')
 first_seen: 2017-06-13 14:34:42
  last_seen: 2017-06-14 09:05:54
reviewed_by: NULL
reviewed_on: NULL
   comments: NULL
1 row in set (0.00 sec)
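--history works in a similar way but records per-run metrics, so you can track how a query class changes over time. A sketch, assuming a table named query_history that we let the tool create with --create-history-table:
[root@mxqmongodb2 bin]# ./pt-query-digest --user=root --password=123456 --port=3306 --create-history-table --history h=172.16.16.35,D=test,t=query_history /home/mysql/db3306/log/slowlog_343306.log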

 

3: Analyze a binlog (convert it to text with mysqlbinlog first):
[root@mxqmongodb2 log]# mysqlbinlog mysql-bin.000012 >/home/sa/mysql-bin_000012.log
[root@mxqmongodb2 bin]# ./pt-query-digest --type=binlog /home/sa/mysql-bin_000012.log >/home/sa/mysql-bin_000012_report.log

 

I was a bit puzzled during this test, because the report was not what I expected. Could it be because my binlog format is ROW? I'll leave it here and test again later.
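A quick way to check that suspicion is to look at the server's binlog format, since only the SQL statements present in the decoded binlog text can be digested (a minimal check, assuming local root access):
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'binlog_format'"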
4: Analyze the general log
Just add --type=genlog; I have not verified this myself.
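Here is a sketch of what that would look like, assuming the general log is first enabled and written to a file (the log path and output file name below are made up for illustration):
[root@mxqmongodb2 bin]# mysql -uroot -p -e "SET GLOBAL general_log_file='/home/mysql/db3306/log/genlog_343306.log'; SET GLOBAL general_log=ON;"
[root@mxqmongodb2 bin]# ./pt-query-digest --type=genlog /home/mysql/db3306/log/genlog_343306.log >/home/sa/genlog_343306_report.log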
5: Analyze a tcpdump capture
First, start a load test to generate some traffic:
[root@mxqmongodb2 tpcc-mysql]# ./tpcc_start -h127.0.0.1 -P3306 -d tpcc -u root -p123456 -w 10 -c 10 -r 10 -l 3000

 

Let the test run for a while, then capture a sample of the traffic:
[root@mxqmongodb2 log]# tcpdump -s 65535 -x -nn -q -tttt -i any -c 10000 port 3306 >/home/sa/mysql.tcp.txt
[root@mxqmongodb2 bin]# ./pt-query-digest --type=tcpdump /home/sa/mysql.tcp.txt >/home/sa/mysql.tcp_repot.txt

 

Let's look at the result:
[root@mxqmongodb2 sa]# cat mysql.tcp_repot.txt
 
# 4.2s user time, 50ms system time, 27.65M rss, 179.15M vsz
# Current date: Tue Jun 20 17:08:40 2017
# Hostname: mxqmongodb2
# Files: /home/sa/mysql.tcp.txt
# Overall: 155 total, 3 unique, 9.76 QPS, 4.52x concurrency ______________
# Time range: 2017-06-20 17:06:19.850032 to 17:06:35.731291
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time            72s    63us      2s   463ms      1s   352ms   393ms
# Rows affecte          25       0      15    0.16    0.99    1.18       0
# Query size           956       6      30    6.17    5.75    1.85    5.75
# Warning coun           1       0       1    0.01       0    0.08       0

# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== =========
#    1 0x813031B8BBC3B329 69.9077 97.4%   153 0.4569  0.25 COMMIT
# MISC 0xMISC              1.8904  2.6%     2 0.9452   0.0 <2 ITEMS>

# Query 1: 9.63 QPS, 4.40x concurrency, ID 0x813031B8BBC3B329 at byte 10100332
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.25
# Time range: 2017-06-20 17:06:19.850032 to 17:06:35.731291
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         98     153
# Exec time     97     70s    63us      2s   457ms      1s   336ms   393ms
# Rows affecte 100      25       0      15    0.16    0.99    1.19       0
# Query size    96     918       6       6       6       6       0       6
# Warning coun 100       1       0       1    0.01       0    0.08       0
# String:
# Hosts        127.0.0.1
# Query_time distribution
#   1us
#  10us  #
# 100us  ####
#   1ms  #
#  10ms  #
# 100ms  ################################################################
#    1s  ##########
#  10s+
commit\G

 
