The LIMIT clause is commonly used to retrieve a subset of rows from a MySQL table. However, when the offset value (the m in LIMIT m, n) becomes large, query performance can deteriorate significantly. This is because MySQL must read and discard the first m rows before it can return the rows you actually asked for.
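For illustration, a query of this shape exhibits the problem (mytable and its id column are placeholder names used throughout this article):

```sql
-- Slow for large offsets: MySQL reads and discards 1,000,000 rows
-- before returning the 1,000 rows requested.
SELECT *
FROM mytable
ORDER BY id
LIMIT 1000000, 1000;
```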
To alleviate this performance issue, consider creating an auxiliary "indexing table" that maps a dense sequential key to the primary key of your target table. This table acts as a bridge between an offset position and the corresponding row in the target table.
By joining the indexing table to your target table, you can use a WHERE clause on the sequence number to retrieve exactly the rows you need. Here's how to implement this approach:
CREATE TABLE seq (
  seq_no INT NOT NULL AUTO_INCREMENT,
  id INT NOT NULL,
  PRIMARY KEY (seq_no),
  UNIQUE (id)
);
TRUNCATE seq;
INSERT INTO seq (id) SELECT id FROM mytable ORDER BY id;
To retrieve 1,000 rows starting at offset 1,000,000 (the equivalent of LIMIT 1000000, 1000), query the range of sequence numbers directly. Because AUTO_INCREMENT starts at 1, an offset of 1,000,000 corresponds to seq_no values 1,000,001 through 1,001,000:

SELECT mytable.*
FROM mytable
INNER JOIN seq USING (id)
WHERE seq.seq_no BETWEEN 1000001 AND 1001000;
This optimized query locates the target range through the primary key on seq_no, then joins to the target table by id, bypassing the scan-and-discard work that a large OFFSET incurs. As a result, performance should be considerably better, even with very large offset values.
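One caveat worth noting: the seq table goes stale whenever rows are inserted into or deleted from the target table, because gaps in seq_no would break the offset arithmetic. Assuming a periodic batch rebuild is acceptable for your workload, the table can be refreshed with the same statements used to populate it:

```sql
-- Rebuild the sequence after mytable changes; this reassigns
-- seq_no values 1..N in id order, restoring dense numbering.
TRUNCATE seq;
INSERT INTO seq (id) SELECT id FROM mytable ORDER BY id;
```

For tables that change constantly, this rebuild cost should be weighed against the savings on paginated reads.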