
How to optimize Mysql tens of millions of fast paging

Dec 02, 2016, 03:13 PM

Let's analyze how MySQL can be optimized to page quickly through tens of millions of rows.
The table collect (id, title, info, vtype) has these four fields: title is fixed-length, info is text, id is auto-increment, and vtype is a tinyint with an index on it. This is a simple model of a basic news system. Now fill it with data: 100,000 news records.
The final collect table holds 100,000 records, and the table occupies 1.6 GB on disk. OK, look at the following SQL statement:
select id,title from collect limit 1000,10; — very fast, basically done in 0.01 seconds. Now look at the following:
select id,title from collect limit 90000,10; — paging from the 90,000th row. The result?
It takes 8-9 seconds to complete. My god, what went wrong? Actually, the answer for optimizing this case is easy to find online. Look at the following statement:
select id from collect order by id limit 90000,10; — very fast, done in 0.04 seconds. Why? Because it pages over the id primary key index, which is of course faster. The fix suggested online is:
select id,title from collect where id>=(select id from collect order by id limit 90000,1) limit 10;
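As a quick sanity check, the deferred-lookup rewrite can be sketched with an in-memory SQLite database standing in for MySQL. The table and row counts here are illustrative, not the article's actual data, and SQLite will not reproduce MySQL's timings; the point is only that the naive and rewritten queries return the same page:

```python
import sqlite3

# Illustrative stand-in for the article's `collect` table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE collect (id INTEGER PRIMARY KEY, title TEXT, info TEXT, vtype INTEGER)"
)
conn.executemany(
    "INSERT INTO collect (title, info, vtype) VALUES (?, ?, ?)",
    [("title %d" % i, "body", i % 4) for i in range(1, 1001)],
)

# Naive offset paging: the engine walks and discards the first 900 rows.
naive = conn.execute(
    "SELECT id, title FROM collect ORDER BY id LIMIT 10 OFFSET 900"
).fetchall()

# Deferred lookup: find the starting id on the primary key alone, then fetch the rows.
deferred = conn.execute(
    "SELECT id, title FROM collect"
    " WHERE id >= (SELECT id FROM collect ORDER BY id LIMIT 1 OFFSET 900)"
    " ORDER BY id LIMIT 10"
).fetchall()

assert naive == deferred          # same page of results either way
print(deferred[0])                # → (901, 'title 901')
```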
This works because the lookup goes through the id index. But make the problem just a little more complicated and it falls apart. Look at the following statement:
select id from collect where vtype=1 order by id limit 90000,10; — very slow, it took 8-9 seconds!
At this point I believe many people will feel, like me, on the verge of a breakdown! Isn't vtype indexed? Why is it so slow? Having vtype indexed is fine: select id from collect where vtype=1 limit 1000,10; is very fast, basically 0.05 seconds. Scale that by 90 to start from row 90,000 and you would expect roughly 0.05*90 = 4.5 seconds, yet the measured result is 8-9 seconds, off by an order of magnitude. This is where some people propose splitting off a table, the same idea as the Discuz forum. The idea is as follows:
Build an index table t (id, title, vtype), make it fixed-length, do the paging on it, and then go to collect for info once the page of ids is known. Is it feasible? You will know through experiment.
100,000 records are stored in t(id,title,vtype), and the table is about 20 MB. Using
select id from t where vtype=1 order by id limit 90000,10; — it's fast, basically 0.1-0.2 seconds. Why? I guess it's because collect simply has too much data, so its paging has a long way to travel: limit depends entirely on the size of the table. In fact this is still a full table scan; it is only fast because the data volume is small, a mere 100,000 rows. OK, let's do a crazy experiment: grow it to 1 million rows and test the performance.
After adding 10 times the data, the t table immediately exceeded 200 MB, and it is fixed-length. The same query still completes in 0.1-0.2 seconds! So is split-table performance fine? Wrong! Because our limit is still 90,000, it is fast. Make it big: start at 900,000.
select id from t where vtype=1 order by id limit 900000,10; — look at the result: 1-2 seconds!
Why? Even with the split table the time is still this long, which is very frustrating. Some people say fixed length improves limit performance. At first I thought so too: since each record has a fixed length, MySQL should be able to compute the position of row 900,000, right? But we overestimated MySQL's intelligence; it is not a commercial database, and the facts prove that fixed versus variable length makes little difference to limit. No wonder people say Discuz gets very slow once it reaches 1 million records. I believe it now; this is a database-design issue!
Can't MySQL break through the 1-million barrier??? Is 1 million pages really its limit???
The answer is: NO!!! The reason it cannot pass 1 million is that people don't know how to design for MySQL. Now let's introduce the no-split-table method and run a crazy test: one table handling 1 million records, a 10 GB database, and fast paging!
OK, our test returns to the collect table. The conclusion so far: with 300,000 rows, the split-table approach is feasible; beyond 300,000, the speed becomes unbearable! Of course, combining split tables with my method would be absolutely perfect, but my method alone solves the problem perfectly without splitting any tables!
The answer is: a composite index! Once, while designing a MySQL index, I noticed that you can name an index whatever you like and include several fields in it. What is that good for? The initial select id from collect order by id limit 90000,10; is fast because it uses the index, but add a where clause and the index is no longer used. With a let's-try-it attitude I added an index search(vtype,id). Then I tested:
select id from collect where vtype=1 limit 90000,10; — very fast! Completed in 0.04 seconds!
Test again: select id,title from collect where vtype=1 limit 90000,10; — very sorry, 8-9 seconds; the search index was not used!
Test again with the order reversed, search(id,vtype): even the plain select id statement is, regrettably, 0.5 seconds.
To sum up: if there is a where condition and you want limit to use an index, you must design a composite index that puts the where column first and the primary key used by limit second, and you may select only the primary key!
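This rule can be seen in miniature with SQLite's query planner. This is only a sketch: SQLite's plan output differs from MySQL's EXPLAIN, and the index name search_idx simply reuses the article's choice. The key contrast is that selecting only the primary key lets the composite index answer the query by itself (a "covering" index), while selecting title forces a lookback into the table for every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE collect (id INTEGER PRIMARY KEY, title TEXT, vtype INTEGER)")
# Composite index: where-column first, primary key second.
conn.execute("CREATE INDEX search_idx ON collect (vtype, id)")

# Selecting only the primary key: the index alone answers the query,
# and rows for a given vtype already come out in id order (no extra sort).
plan_ok = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM collect WHERE vtype = 1 ORDER BY id LIMIT 10"
).fetchall()

# Selecting title as well: the index no longer covers the query.
plan_bad = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id, title FROM collect WHERE vtype = 1 ORDER BY id LIMIT 10"
).fetchall()

print(plan_ok[-1][-1])   # e.g. SEARCH collect USING COVERING INDEX search_idx (vtype=?)
print(plan_bad[-1][-1])  # e.g. SEARCH collect USING INDEX search_idx (vtype=?)
```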
This perfectly solves the paging problem. If you can return the ids quickly, there is hope for optimizing limit; by this logic, a million-row limit should finish in 0.0x seconds. It seems that MySQL statement optimization and indexing are very important!
OK, back to the original question: how do we apply this research to development quickly and successfully? If I have to use compound queries, my lightweight framework is useless; I would have to write the paging string myself. How troublesome is that? So here is one more test, and the idea emerges:
select * from collect where id in (9000,12,50,7000); — returns in 0 seconds!
My god, MySQL's index works for in statements too! It seems the claim online that in cannot use an index is wrong!
With this conclusion, it can be easily applied to lightweight frameworks:
The code is as follows:
$db=dblink();
$db->pagesize=20;
$sql="select id from collect where vtype=$vtype";
$db->execute($sql);
$strpage=$db->strpage(); // save the paging string in a temporary variable for output later
$strid='';
while($rs=$db->fetch_array()){
    $strid.=$rs['id'].',';
}
$strid=substr($strid,0,strlen($strid)-1); // build the id string
$db->pagesize=0; // important: reset paging without destroying the object, so the one database connection can be reused without reopening
$db->execute("select id,title,url,sTime,gTime,vtype,tag from collect where id in ($strid)");
<?php while($rs=$db->fetch_array()): ?>
<tr>
    <td><?php echo $rs['id']; ?></td>
    <td><?php echo $rs['title']; ?></td>
    <td><?php echo $rs['url']; ?></td>
</tr>
<?php endwhile; ?>
<?php echo $strpage; ?>
After this simple transformation, the idea is actually very simple: 1) through the optimized index, find the ids and join them into a string like "123,90000,12000"; 2) a second query fetches the actual rows by id.
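For completeness, the same two-step pattern as a self-contained Python/SQLite sketch. The function and table contents are illustrative, and unlike the PHP version it binds the id list through placeholders, which sidesteps SQL injection:

```python
import sqlite3

def fetch_page(conn, vtype, page, page_size=20):
    """Two-step paging: cheap id query on the composite index, then fetch rows by id."""
    ids = [row[0] for row in conn.execute(
        "SELECT id FROM collect WHERE vtype = ? ORDER BY id LIMIT ? OFFSET ?",
        (vtype, page_size, page * page_size),
    )]
    if not ids:
        return []
    marks = ",".join("?" * len(ids))  # one placeholder per id
    return conn.execute(
        "SELECT id, title FROM collect WHERE id IN (%s) ORDER BY id" % marks, ids
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE collect (id INTEGER PRIMARY KEY, title TEXT, vtype INTEGER)")
conn.execute("CREATE INDEX search_idx ON collect (vtype, id)")
conn.executemany(
    "INSERT INTO collect (title, vtype) VALUES (?, ?)",
    [("news %d" % i, i % 4) for i in range(1, 201)],
)

page = fetch_page(conn, vtype=1, page=1)  # second page of vtype=1 rows
print(len(page), page[0])                 # → 20 (81, 'news 81')
```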
A small index plus a small code change lets MySQL page efficiently through millions or even tens of millions of rows!
Through this example I reflected on something: for large systems, PHP must not use frameworks, especially ones where you cannot even see the SQL statements! My lightweight framework almost collapsed at first; it is only suitable for rapid development of small applications. For ERP, OA, and large websites, the data layer, and even the logic layer, cannot rely on a framework. If programmers lose control of the SQL statements, the project's risk grows exponentially! Especially with MySQL, a professional DBA is needed to get its best performance: a single index can make a performance difference of thousands of times!
PS: After further testing with 1.6 million rows, a 15 GB table, and a 190 MB index, even using the index, limit takes 0.49 seconds. So during paging it is best not to let anyone see past the first 100,000 rows, otherwise it will be very slow even with the index. With this optimization MySQL has reached its limit for million-page paging, but such a result is already very good; with SQL Server it would definitely get stuck! At 1.6 million rows, id in (str) is still very fast, basically 0 seconds. If so, MySQL should easily handle tens of millions of rows.
