How to optimize MySQL for fast paging over tens of millions of rows

高洛峰
Release: 2023-03-02 21:46:01

This article analyzes how MySQL optimization can enable fast paging over tens of millions of rows. Let's take a look.
The data table collect (id, title, info, vtype) has these 4 fields: title is fixed length, info is text, id is auto-increment, vtype is tinyint, and vtype has an index. This is a simple model of a basic news system. Now fill it with data: 100,000 news items.
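For reference, a minimal sketch of what this schema might look like (the column lengths, the storage engine, and the index name are assumptions, since the text only describes the fields):

CREATE TABLE collect (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- auto-increment primary key
  title CHAR(100) NOT NULL,                 -- fixed-length title (length assumed)
  info TEXT,                                -- article body
  vtype TINYINT NOT NULL,                   -- category flag
  PRIMARY KEY (id),
  KEY idx_vtype (vtype)                     -- single-column index on vtype
) ENGINE=MyISAM;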
The final collect table holds 100,000 records and occupies 1.6G of disk. OK, look at the following SQL statement:
select id,title from collect limit 1000,10; Very fast, basically done in 0.01 seconds. Now look at the following:
select id,title from collect limit 90000,10; Paging starts from the 90,000th record. What's the result?
It completes in 8-9 seconds. My god, what's going on? In fact, if you want to optimize this, you can find answers online. Look at the following statement:
select id from collect order by id limit 90000,10; It's very fast, done in 0.04 seconds. Why? Because it uses the id primary key index, it is of course faster. The modification suggested online is:
select id,title from collect where id>=(select id from collect order by id limit 90000,1) limit 10;
This is the result of using the id index. But if the problem gets even slightly more complicated, this approach falls apart. Look at the following statement:
select id from collect where vtype=1 order by id limit 90000,10; It's very slow: 8-9 seconds!
At this point, I believe many people will feel, like me, on the verge of a breakdown! Isn't vtype indexed? Why is it so slow? It's good that vtype is indexed: select id from collect where vtype=1 limit 1000,10; is very fast, basically 0.05 seconds. But increase the offset 90 times, starting from 90,000, and you would expect roughly 0.05*90 = 4.5 seconds, yet the test result is 8-9 seconds, a whole order of magnitude worse. At this point some people proposed splitting the table, the same idea as the Discuz forum. The idea is as follows:
Build an index table t (id, title, vtype) with fixed-length rows, do the paging on it, and then go back to collect to fetch info only for the page of results. Is it feasible? You will know through experimentation. A sketch of such a table follows.
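A minimal sketch of this auxiliary paging table, under the same assumptions about column lengths and engine as above; it holds only the narrow columns needed for paging, while the wide info column stays in collect:

CREATE TABLE t (
  id INT UNSIGNED NOT NULL,     -- same id values as collect
  title CHAR(100) NOT NULL,     -- fixed-length copy of the title
  vtype TINYINT NOT NULL,
  PRIMARY KEY (id),
  KEY idx_vtype (vtype)
) ENGINE=MyISAM;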
With 100,000 records stored in t(id,title,vtype), the table size is about 20M. Using
select id from t where vtype=1 order by id limit 90000,10; is fast, basically 0.1-0.2 seconds. Why? I guess it's because collect has too much data, so paging there has a long way to go: the cost of limit is closely related to the size of the data table. In fact, this is still a full table scan; it is only fast because the data volume is small, just 100,000 rows. OK, let's do a crazy experiment: grow it to 1 million rows and test the performance.
After adding 10 times the data, the t table immediately exceeded 200M, still with fixed-length rows. The query just now still completes in 0.1-0.2 seconds! So is split-table performance fine? Wrong! It is only fast because our limit offset is still 90,000. Try a big one, starting at 900,000:
select id from t where vtype=1 order by id limit 900000,10; Look at the result: 1-2 seconds!
Why? Even after splitting the table, the time is still this long, which is very frustrating! Some people say fixed-length rows improve limit performance. At first I also thought that, since each record has a fixed length, MySQL should be able to calculate the position of row 900,000 directly, right? But we overestimated MySQL's intelligence; it is not a commercial-grade database, and the facts show that fixed-length versus variable-length rows make little difference to limit. No wonder some people say Discuz becomes very slow once it reaches 1 million records. I believe it, and it is related to the database design!
Can't MySQL break through the 1 million barrier? Does it really hit its limit when paging reaches 1 million rows?
The answer is: NO! The reason it cannot get past 1 million rows is only that the MySQL schema was not designed for it. Now let me introduce a method that does not split the table, with another crazy test: one table handling 1 million records and a 10G database, and how to page it quickly!
Okay, our test returns to the collect table. The conclusion so far: with 300,000 rows of data, the split-table approach is feasible; beyond 300,000 rows the speed becomes unbearable! Of course, combining table splitting with my method would be absolutely perfect. But with my method alone, the problem can be solved perfectly without splitting the table at all!
The answer is: a compound index! When designing a MySQL index once, I accidentally discovered that the index name can be chosen arbitrarily and that several columns can be included in one index. What use is that? The initial select id from collect order by id limit 90000,10; is fast because it walks the primary key index, but once a where clause is added, that index is no longer enough. With a try-it-and-see attitude, I added a compound index like search(vtype,id) and tested again.
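A minimal sketch of creating this compound index (the name search comes from the text; the exact statement used is an assumption):

ALTER TABLE collect ADD INDEX search (vtype, id);
-- equivalent form: CREATE INDEX search ON collect (vtype, id);

Then test: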
select id from collect where vtype=1 limit 90000,10; very fast! Completed in 0.04 seconds!
Test again: select id,title from collect where vtype=1 limit 90000,10; Very unfortunately, 8-9 seconds: the search index is no longer enough, because once title must be fetched the query cannot be answered from the (vtype,id) index alone!
Test again with the index built as search(id,vtype), still selecting only id: also regrettable, 0.5 seconds, because with id as the leading column the index cannot filter on vtype first.
To sum up: if there is a where condition and you want limit to use an index, you must design an index that puts the where column first and the primary key used by limit second, and you must select only the primary key!
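One way to check that a query really walks the compound index is MySQL's EXPLAIN; a quick sketch (the exact output columns vary by MySQL version):

EXPLAIN SELECT id FROM collect WHERE vtype = 1 LIMIT 90000, 10;
-- the key column should report the compound index (search),
-- and Extra should report "Using index", i.e. the query is covered by the index alone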
This perfectly solves the paging problem. If the ids can be returned quickly, there is hope of optimizing limit. By this logic, a million-row limit should complete in 0.0x seconds. It seems that optimizing SQL statements and indexes is very important!
Okay, back to the original question: how can we apply the above findings to development quickly and successfully? If I use a compound query, my lightweight framework becomes useless; I would have to write the paging string myself. How troublesome! Here is another example, and the idea emerges:
select * from collect where id in (9000,12,50,7000); It returns in effectively 0 seconds!
My god, MySQL's index also works for the in clause! It seems the claim online that in cannot use an index is wrong!
With this conclusion, it can be easily applied to lightweight frameworks:
The code is as follows:
$db=dblink();
$db->pagesize=20;
$sql="select id from collect where vtype=$vtype";
$db->execute($sql);
$strpage=$db->strpage(); // save the paging string in a temporary variable for later output
$strid='';
while($rs=$db->fetch_array()){
    $strid.=$rs['id'].',';
}
$strid=substr($strid,0,strlen($strid)-1); // build the comma-separated id string
$db->pagesize=0; // very important: clear the page size without destroying the class, so the same database connection is reused and does not need to be opened again
$db->execute("select id,title,url,sTime,gTime,vtype,tag from collect where id in ($strid)");
?>
<table>
<?php while($rs=$db->fetch_array()): ?>
<tr>
    <td><?php echo $rs['id']; ?></td>
    <td><?php echo $rs['title']; ?></td>
    <td><?php echo $rs['url']; ?></td>
    <td><?php echo $rs['sTime']; ?></td>
    <td><?php echo $rs['gTime']; ?></td>
    <td><?php echo $rs['vtype']; ?></td>
    <td><?php echo $rs['tag']; ?></td>
</tr>
<?php endwhile; ?>
</table>
<?php
echo $strpage;
Through this simple transformation, the idea is actually very simple: 1) Through the optimized index, find the ids and concatenate them into a string like "123,90000,12000". 2) A second query fetches the full rows for those ids.
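In plain SQL, a sketch of the same two-step pattern (the offset, page size, and id values are illustrative only):

-- step 1: covered by the compound index, returns only the ids for the requested page
SELECT id FROM collect WHERE vtype = 1 LIMIT 90000, 10;

-- step 2: fetch the full rows just for those ids
SELECT id, title, url, sTime, gTime, vtype, tag
FROM collect
WHERE id IN (90001, 90005, 90012);  -- the ids returned by step 1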
A small index plus a small code change lets MySQL support efficient paging over millions or even tens of millions of rows!
Through this example, I reflected on something: for large systems, PHP must not use frameworks, especially frameworks where you cannot even see the SQL statements! My lightweight framework almost collapsed at first; it is only suitable for rapid development of small applications. For ERP, OA, and large websites, the data layer, and even the logic layer, cannot rely on a framework. If programmers lose control of the SQL statements, the risk of the project increases exponentially! Especially with MySQL, a professional DBA is needed to get its best performance; a single index can make a performance difference of thousands of times!
PS: In further actual testing, with 1 million and then 1.6 million rows, a 15G table, and a 190M index, even with the index in use, limit takes 0.49 seconds. So it is best not to let users page beyond roughly the 100,000th row, otherwise it will be slow even with the index. After this optimization, MySQL is near its practical limit for million-level paging, but such a result is already very good. If you were using SQL Server, it would definitely grind to a halt! With 1.6 million rows, id in (str) is still very fast, basically 0 seconds. At this rate, MySQL should be able to handle tens of millions of rows with ease.

Source: php.cn