Repost: Why does MySQL higher LIMIT offset slow the query down
Jun 07, 2016 03:40 PM
Source: http://stackoverflow.com/questions/4481388/why-does-mysql-higher-limit-offset-slow-the-query-down
Scenario in short: a table with more than 16 million records (2 GB in size). The higher the LIMIT offset in a SELECT, the slower the query becomes, when using ORDER BY *primary_key*.
So
<code>SELECT * FROM large ORDER BY `id` LIMIT 0, 30 </code>
takes far less than
<code>SELECT * FROM large ORDER BY `id` LIMIT 10000, 30 </code>
Both queries order only 30 records either way, so the slowdown is not overhead from ORDER BY.
Now when fetching the latest 30 rows it takes around 180 seconds. How can I optimize that simple query?
It's normal that higher offsets slow the query down, since the query needs to count off the first OFFSET + LIMIT records (and take only LIMIT of them). The higher this value is, the longer the query runs.
The query cannot jump straight to OFFSET because, first, the records can be of different lengths and, second, there can be gaps from deleted records. It needs to check and count each record on its way.
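You can see this cost reflected in the optimizer's own estimates. A sketch, assuming the `large` table from the question with primary key `id` (the exact numbers in EXPLAIN's `rows` column vary by MySQL version and storage engine, but they typically grow with the offset):

```sql
-- EXPLAIN shows how many rows MySQL expects to examine.
-- With a small offset, only a handful of index entries are walked:
EXPLAIN SELECT * FROM large ORDER BY id LIMIT 0, 30;

-- With a large offset, MySQL must walk past all the skipped entries
-- before it can return the 30 requested rows:
EXPLAIN SELECT * FROM large ORDER BY id LIMIT 10000, 30;
```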
Assuming that id is the PRIMARY KEY of a MyISAM table, you can speed it up by using this trick:
<code>SELECT t.*
FROM (
    SELECT id FROM mytable ORDER BY id LIMIT 10000, 30
) q
JOIN mytable t ON t.id = q.id</code>
See this article:
- MySQL ORDER BY / LIMIT performance: late row lookups
MySQL cannot go directly to the 10000th record (or to the 80000th byte, as you're suggesting) because it cannot assume that the table is packed/ordered like that (or that the ids are continuous from 1 to 10000). Although it might be that way in actuality, MySQL cannot assume that there are no holes/gaps/deleted ids.
So, as bobs noted, MySQL will have to fetch 10000 rows (or traverse the first 10000 entries of the index on id) before finding the 30 to return.
EDIT: To illustrate my point
Note that although
<code>SELECT * FROM large ORDER BY id LIMIT 10000, 30 </code>
would be slow(er),
<code>SELECT * FROM large WHERE id > 10000 ORDER BY id LIMIT 30 </code>
would be fast(er), and would return the same results provided that there are no missing ids (i.e. gaps).
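This WHERE id > … form generalizes into "keyset" (a.k.a. seek-based) pagination: instead of an offset, the application remembers the last id it displayed and asks only for rows after it. A sketch, where `:last_seen_id` is a hypothetical placeholder supplied by the application:

```sql
-- :last_seen_id is the id of the last row on the previous page,
-- remembered by the application between requests.
SELECT *
FROM large
WHERE id > :last_seen_id   -- seek directly via the primary key
ORDER BY id
LIMIT 30;                  -- next page; no skipped rows to count off
```

Unlike substituting a fixed `id > 10000` for `LIMIT 10000, 30`, this variant stays correct even when ids have gaps, because the cutoff is an id that was actually returned, not a computed row position.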
References:
1. Why pagination over long-tail data is complex to implement (a very good article)
http://timyang.net/data/key-list-pagination/
