MySQL performance inspection and optimization methods


MySQL performance problems usually come down to four causes:
1. The index is not built;
2. The SQL is written in an overly complicated way;
3. The configuration is wrong;
4. The machine really cannot bear the load.
1. The index is not built
If you see that MySQL is consuming a lot of CPU, you can check what it is doing with the mysql client tool.
On Linux, execute
/usr/local/mysql/bin/mysql -hlocalhost -uroot -p
and enter the password. If there is no password, omit the -p parameter to enter the client directly.
Then look at the current running state:
show full processlist;
Run it a few times in a row.
This command shows the SQL statements currently being executed, along with the database name, execution state, client IP, account used, running time and other information.
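The output looks roughly like this (an illustrative row with made-up values, not from the original article):

mysql> show full processlist;
+----+------+----------------+------+---------+------+--------------+-----------------------------------+
| Id | User | Host           | db   | Command | Time | State        | Info                              |
+----+------+----------------+------+---------+------+--------------+-----------------------------------+
| 12 | root | 10.0.0.5:41230 | imgs | Query   |    3 | Sending data | select * from imgs where clicks>0 |
+----+------+----------------+------+---------+------+--------------+-----------------------------------+
1 row in set (0.00 sec)

A large Time combined with a State such as Locked or Copying to tmp table is usually worth a closer look.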
On my cache backend, most of the time no SQL statement is displayed at all, which I consider normal. If you see a long list of SQL statements every time you run it, that MySQL instance almost certainly has performance problems.
If there are performance problems, analyze along these lines:
1. Is some SQL statement stuck?
There are many possible situations here. If the database uses MyISAM, a writing thread may be locking the data table; until that statement finishes, other statements cannot run.
Check the Time column in the processlist output for statements that have been executing for a long time, and pay attention to those.
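If you confirm that a statement is stuck and is blocking everything else, you can terminate it by the Id column from the processlist (the 1234 below is a made-up example id):

kill query 1234;

kill query aborts only that statement; a plain kill 1234 would close the whole connection.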
2. A large number of identical SQL statements are executing
If this happens, those statements probably execute inefficiently. Pay attention to these as well.
Then collect all the statements you suspect and examine them with desc (explain).
First, look at the desc output of a healthy statement:
mysql> desc select * from imgs where imgid=1651768337;
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | imgs  | const | PRIMARY       | PRIMARY | 8       | const |    1 |       |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)
Pay attention to three items: key, rows and Extra. This result says that the statement will use the PRIMARY key index, that the result set has 1 row, and that Extra is empty, so no sorting or other extra work is involved. From this you can infer that MySQL will locate the record with imgid=1651768337 in the index and then fetch all fields from the actual table, which is a very cheap operation.
key is the index the statement will use; MySQL can use only one index per table when executing a simple statement, so keep this restriction in mind. rows is the size of the result set matched through that index. Extra generally shows how the query and sorting are carried out.
If no key is used, or rows is very large and filesort appears, efficiency generally suffers. For example:
mysql> desc select * from imgs where userid="7mini" order by clicks desc limit 10;
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                       |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
|  1 | SIMPLE      | imgs  | ALL  | NULL          | NULL | NULL    | NULL | 12506 | Using where; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
1 row in set (0.00 sec)
This statement's result set is 12506 rows and filesort is used, so execution will be very inefficient: MySQL scans the whole table, picks out the records matching userid="7mini" one by one, and then sorts those records by clicks. You can imagine the efficiency. If it turns out to be fairly fast in practice, that is only because the server has enough memory to read all 12506 fairly short records into memory; once concurrency rises or the table grows, the efficiency problem becomes serious.
So I added an index on userid:
create index userid on imgs (userid);
Then check again:
mysql> desc select * from imgs where userid="7mini" order by clicks desc limit 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid        | userid | 51      | const |    8 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)
You can see that MySQL now searches through the userid index, and the result set after that index lookup is only 8 rows. filesort is still used to sort them one by one, but with a result set of just 8 rows the efficiency problem is alleviated.
However, querying with a different userid changes the picture:
mysql> desc select * from imgs where userid="admin" order by clicks desc limit 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid        | userid | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)
The plan is basically the same as for userid="7mini", but this time the userid index lookup yields a result set of 2944 rows, all of which have to be pulled into memory for filesort; the efficiency is much worse than for 7mini. There are two ways to solve this. The first is to add another index plus a judgment condition: since I only need the top 10 records by clicks, a large part of the data never needs to enter the sort at all, for example records with fewer than 10 clicks, which may well be the majority.
So I add an index on clicks, then add a where condition and query again:
create index clicks on imgs(clicks);
mysql> desc select * from imgs where userid="admin" order by clicks desc limit 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid,clicks | userid | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)
Now possible_keys has become userid,clicks. possible_keys lists all the indexes that could match; MySQL picks one of them on its own to execute the statement. Note that its choice is not necessarily optimal: this time it went through the userid index again, which is not what I wanted, so the result did not change. So change the SQL and add use index to force MySQL onto the clicks index:
mysql> desc select * from imgs use index (clicks) where userid='admin' and clicks>10 order by clicks desc limit 10;
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | imgs  | range | clicks        | clicks | 4       | NULL | 5455 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
1 row in set (0.00 sec)
Now MySQL queries through the clicks index, but the result set is even larger than with the userid index! So restrict it further:
mysql> desc select * from imgs use index (clicks) where userid='admin' and clicks>1000 order by clicks desc limit 10;
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | imgs  | range | clicks        | clicks | 4       | NULL |  312 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
1 row in set (0.00 sec)
Raising the threshold to 1000 shrinks the result set to 312 rows, and the sorting efficiency should be acceptable.
However, this optimization requires choosing a sampling threshold, like the number 1000 in this example. A separate threshold would have to be found for each value of userid, which is very hard for a program to handle. And if 1000 were used across the board, then in the userid='7mini' example the query would return 2 rows instead of 8, confusing the user.
Of course, there is another way: create a composite index on both columns:
create index userid_clicks on imgs (userid, clicks);
mysql> desc select * from imgs where userid="admin" order by clicks desc limit 10 ;
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys        | key           | key_len | ref   | rows | Extra       |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
|  1 | SIMPLE      | imgs  | ref  | userid,userid_clicks | userid_clicks | 51      | const | 2944 | Using where |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
1 row in set (0.00 sec)
You can see that the result set is still 2944 rows, but the filesort in Extra is gone. Using the userid_clicks index, MySQL not only finds all records with userid="admin" quickly, the results also come out already sorted by clicks! So there is no need to read the result set into memory and sort it row by row, and efficiency is much higher.
But multi-column indexes bring a problem of their own: if you run many different kinds of queries, the indexes have to be planned carefully, otherwise you end up with a pile of them, which not only slows down insert and update but also makes the data tables more prone to corruption.
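One planning rule worth knowing (a general MySQL property, not spelled out above): a composite index also serves queries that filter on its leftmost column(s), so once userid_clicks exists, the single-column userid index created earlier becomes redundant:

desc select * from imgs where userid='admin';  -- can use userid_clicks (leftmost prefix)
desc select * from imgs where clicks > 1000;   -- cannot: clicks is not a leftmost prefix
drop index userid on imgs;                     -- drop the now-redundant single-column index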
That is the general approach to index optimization. The underlying causes can be more complicated, so a full write-up would run longer, but after the indexes are optimized MySQL's efficiency usually improves by several levels, so there is no need to solve the problem by adding more machines.
However, MySQL, and perhaps every database, struggles with the limit problem. With a suitable index, limit 0,10 is no problem, but limit 100000,10 is very slow, because MySQL walks the sorted results to position 100000 and then returns 10 records; reaching position 100000 means scanning past 100000 records, and that loop is expensive. I don't know of a good algorithm to optimize this scan, and after much thought I haven't come up with one. For limit, from now until the fairly distant future, I think it can only be optimized through the business logic, the program, and data-table planning; none of the approaches I have thought of is a cure-all, so that discussion will have to wait.
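One common workaround, keyset (seek) pagination, is not covered above but fits the imgs examples: instead of a large offset, remember the last value of the previous page and filter on it, so MySQL can dive straight into the index (5000 below is an illustrative placeholder value):

-- slow: scans past 100000 sorted rows before returning 10
select * from imgs order by clicks desc limit 100000, 10;
-- faster: 5000 stands in for the clicks value of the last row of the previous page
select * from imgs where clicks < 5000 order by clicks desc limit 10;

Rows with equal clicks need an extra tiebreaker column (such as imgid) in both the where clause and the order by for this to page correctly.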
2. The SQL is written in an overly complicated way
If the SQL uses special features such as group by or multi-table joins, how does MySQL handle it? You can analyze that with desc as well. I rarely encounter complex SQL here, so I don't analyze it often and have no good suggestions for the time being.
3. The configuration is wrong
The main configuration parameters are key_buffer and sort_buffer_size/myisam_sort_buffer_size. They mean:
key_buffer=128M: all table indexes are kept in this memory area as far as possible. If the indexes are large, set it somewhat higher; I usually use 128M. A good practice is to move rarely used, relatively large tables elsewhere, which noticeably reduces MySQL's memory footprint.
sort_buffer_size=1M: the memory a single thread uses for sorting; the result set to be sorted is put into this memory. If it is too small, MySQL has to sort in several passes, so make it a bit larger. More important is to optimize your indexes and query statements so that they don't produce overly large result sets in the first place.
Some other configurations:
thread_concurrency=8: the usual setting is the number of CPUs x 2.
interactive_timeout=30
wait_timeout=30: 10-30 seconds is fine for these two, which releases connection resources as soon as possible. Note that connections in active use are not disconnected; these settings only drop connections that have been idle too long.
query_cache: skip this feature. Nowadays many people treat the word cache as treasure wherever they see it, which is not realistic. MySQL's query_cache clears all cached entries for a table every time that table's data changes, so with frequent updates query_cache not only fails to help but hurts efficiency badly. It is only suitable for databases that are mostly read-only. If you must use it, set query_cache_type=2 and mark the specific statements to cache with SQL_CACHE.
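For the record, that selective mode looks like this (illustrative; the query_cache_* settings existed through MySQL 5.x and were removed in MySQL 8.0):

query_cache_type = 2    (in my.cnf; 2 = cache only on demand)
mysql> select sql_cache * from imgs where imgid=1651768337;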
max_connections: the default is 100, which is generally enough, but in practice it often needs to be set higher; 400-600 is plenty. Above 600 you generally get efficiency problems and need a different countermeasure; simply raising this number is not the answer.
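Gathered into a my.cnf sketch, the suggestions above look like this (the values are the author's rules of thumb, not universal defaults; myisam_sort_buffer_size is shown with an assumed value since none is given above):

[mysqld]
key_buffer              = 128M
sort_buffer_size        = 1M
myisam_sort_buffer_size = 1M    # assumed value
thread_concurrency      = 8     # number of CPUs x 2
interactive_timeout     = 30
wait_timeout            = 30
query_cache_type        = 0     # off, per the advice above
max_connections         = 500   # somewhere in the 400-600 range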
Other settings can stay at their defaults; personally I don't think they matter much. Two reminders: 1. configuration matters, but in most cases it is not the culprit behind efficiency problems; 2. MySQL is a database, and the first things to demand of a database are stability and data correctness, not raw efficiency.
4. The machine really cannot bear the load
If the server still cannot bear the load after all the adjustments above, the only option left is optimization at the architecture level.
1. MySQL replication.
Use MySQL's replication feature to synchronize data to several slave databases: write to the master, read from the slaves.
Personally I am not keen on replication, because it increases program complexity and often leads to data errors. A high-load service can simply be restarted after a crash, but recovering from corrupted data is much more troublesome.
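For reference, the classic master/slave setup of that MySQL era looks roughly like this (hypothetical host, user, password and log coordinates):

# master my.cnf: server-id=1 and log-bin=mysql-bin; slave my.cnf: a unique server-id
# then, on each slave:
mysql> change master to master_host='10.0.0.1', master_user='repl', master_password='secret',
    ->     master_log_file='mysql-bin.000001', master_log_pos=4;
mysql> start slave;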
2. Add a cache
A cache solves the concurrency problem, and the effect is obvious. For a system that needs to stay close to real time, consider refreshing the cache actively to keep it fresh.
I recommend putting Squid at the front of the architecture; for applications with a high hit rate, this basically solves the problem.
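A minimal sketch of Squid as a reverse proxy (accelerator) in front of the web server, assuming Squid 2.6-style directives and made-up addresses:

http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=webapp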
Caching inside the program logic layer adds a lot of complexity, and its problems are harder to solve; I don't recommend making adjustments at that level.
3. Adjust the program architecture to connect to multiple databases at the same time
If the problem is still serious after the web tier has a cache, the only way out is to adjust the program architecture: split the application apart and serve it from several machines at once.
Splitting has some impact on the business. If certain functions must see all the data, you can use an architecture of one complete library plus n scattered libraries: perform every modification once against the complete library and once against the relevant scattered library (a crude sketch follows below), or rebuild the complete library from the pieces periodically.
Of course, there is also the crudest way: keep several complete copies of the database, have the program execute every write against all of the copies, and round-robin reads across them. I think this is safer than MySQL replication.
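A crude illustration of that dual-write idea in plain SQL, with hypothetical schema names full_db and shard3 standing in for the complete library and one scattered library (in practice these live on different servers and the program loops over its connection list):

-- every modification runs once against the complete library and once against the owning shard
insert into full_db.imgs (imgid, userid, clicks) values (1651768338, '7mini', 0);
insert into shard3.imgs  (imgid, userid, clicks) values (1651768338, '7mini', 0);
-- reads of the full data set go to full_db; per-user reads go to the shard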
4. Use mysql proxy
mysql proxy can spread the tables of a database across several servers, but it cannot solve the hot-table problem by itself; it only helps easily if the popular content is already scattered across multiple tables.
I have neither used this software nor looked at it closely, but I have one small doubt about it: how does it implement joins across tables that live on different servers, and if it can, how efficient is that?
5. Use memcachedb
Switching the database to memcachedb, which supports MySQL, is an idea worth trying. Judging from memcachedb's implementation approach and the level it works at, it should not affect the data and should not cause trouble for users.
I haven't tried it yet, because my databases don't give me many problems. But as long as it supports most of MySQL's main syntax and runs stably, its usability is beyond doubt.
