Did you know that MySQL Limit has performance issues?
MySQL paging queries are usually implemented with LIMIT.

The basic usage of LIMIT is very simple: it takes one or two integer arguments. With two arguments, the first specifies the offset of the first row to return and the second specifies the maximum number of rows to return; the offset of the first row is 0.

For compatibility with PostgreSQL, MySQL also supports the LIMIT # OFFSET # syntax.
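As a quick illustration (a minimal sketch; tableName is the same placeholder table used in the benchmark below):

-- skip the first 10 rows and return the next 5
SELECT * FROM tableName LIMIT 10, 5;
-- equivalent PostgreSQL-compatible form
SELECT * FROM tableName LIMIT 5 OFFSET 10;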
Problem:

With a small offset, querying directly with LIMIT works fine. But as the data volume grows and the requested page moves further back, the offset in the LIMIT clause becomes larger and the query gets noticeably slower.
Optimization idea:

Avoid scanning too many rows when the data volume is large.

Solution:

Subquery paging or JOIN paging.

JOIN paging and subquery paging are basically at the same level of efficiency; the time they take is essentially the same.
Here's an example. The primary key in MySQL is usually an auto-incrementing numeric column, and in that case the following optimization can be used.

Take a table with 800,000 rows from a real production environment and compare the query time before and after optimization:
-- Traditional LIMIT, file scan
[SQL] SELECT * FROM tableName ORDER BY id LIMIT 500000, 2;
Affected rows: 0
Time: 5.371s

-- Subquery paging, index scan
[SQL] SELECT * FROM tableName
      WHERE id >= (SELECT id FROM tableName ORDER BY id LIMIT 500000, 1)
      LIMIT 2;
Affected rows: 0
Time: 0.274s

-- JOIN paging
[SQL] SELECT * FROM tableName AS t1
      JOIN (SELECT id FROM tableName ORDER BY id DESC LIMIT 500000, 1) AS t2
      WHERE t1.id <= t2.id
      ORDER BY t1.id DESC LIMIT 2;
Affected rows: 0
Time: 0.278s
You can see that the performance improves by nearly 20 times after optimization.
Optimization principle:

The subquery is resolved on the index, while the ordinary query has to work through the data file. Generally, the index file is much smaller than the data file, so operating on it is more efficient. To return full rows, the first method has to touch a large number of data blocks, whereas the second first locates the target rows via the indexed column and only then fetches the corresponding contents, so it is naturally far more efficient.
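If you want to confirm this on your own data, comparing the execution plans is a simple check (illustrative only; the actual EXPLAIN output depends on your table, indexes and MySQL version):

EXPLAIN SELECT * FROM tableName ORDER BY id LIMIT 500000, 2;

EXPLAIN SELECT * FROM tableName
WHERE id >= (SELECT id FROM tableName ORDER BY id LIMIT 500000, 1)
LIMIT 2;

The first plan typically reports scanning a very large number of rows, while the second can resolve the offset on the primary key index alone.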
Therefore, the way to optimize LIMIT is not to use LIMIT directly, but to first obtain the id at the offset and then use LIMIT size to fetch the data.
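The same idea can also be written as two separate statements from the application side, for example with a session variable (an illustrative sketch, not part of the benchmark above):

-- step 1: locate the id at the offset using only the index
SELECT id INTO @start_id FROM tableName ORDER BY id LIMIT 500000, 1;
-- step 2: fetch the page starting from that id
SELECT * FROM tableName WHERE id >= @start_id ORDER BY id LIMIT 2;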
In an actual project, paging can be handled with something like the strategy pattern. For example, with 100 rows per page: if the requested page is within the first 100 pages, use the basic paging method; if it is beyond page 100, use the subquery paging method (see the sketch below).
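A minimal sketch of that idea as a MySQL stored procedure (the procedure name, table name and the 100-page threshold are assumptions for illustration; the branching could just as well live in application code):

DELIMITER //
CREATE PROCEDURE page_tableName(IN page_no INT)
BEGIN
  DECLARE page_size INT DEFAULT 100;
  DECLARE off INT;
  SET off = (page_no - 1) * page_size;
  IF page_no <= 100 THEN
    -- small offset: plain LIMIT is fast enough
    SELECT * FROM tableName ORDER BY id LIMIT off, page_size;
  ELSE
    -- large offset: locate the starting id on the index first (subquery paging)
    SELECT * FROM tableName
    WHERE id >= (SELECT id FROM tableName ORDER BY id LIMIT off, 1)
    ORDER BY id LIMIT page_size;
  END IF;
END //
DELIMITER ;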
