When a MySQL LONGTEXT column holds a lot of data, why do queries become very slow?
PHPz 2017-04-17 16:00:16

A table with about 15,000 rows and the columns id, name, content, last_update_time:

id is a custom primary key
name is VARCHAR
content is LONGTEXT
last_update_time is DATETIME, NOT NULL

content holds text, code, and so on, averaging 20 KB+ per row.
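For reference, a plausible reconstruction of the schema from the description above (the VARCHAR length and storage engine are assumptions, not stated in the question):

```sql
-- Assumed schema, reconstructed from the description; column sizes are guesses.
CREATE TABLE t (
  id INT NOT NULL PRIMARY KEY,          -- custom primary key
  name VARCHAR(255),
  content LONGTEXT,                     -- text and code, ~20 KB+ per row
  last_update_time DATETIME NOT NULL,
  KEY idx_last_update_time (last_update_time)
) ENGINE=InnoDB;
```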

Case 1:
select id, name from t order by last_update_time limit 10000, 10

When content holds a lot of text, case 1 is extremely slow.

Even after adding a B-tree index on last_update_time, performance improves but the query is still slow.

After dropping the content column, the query is fast: millisecond level.
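That observation suggests the cost is in reading row data bloated by content. One way to get the fast behavior without dropping the column, assuming InnoDB, is a covering index that contains every column the paging query reads (idx_cover is a made-up name, not from the original post):

```sql
-- Assumed fix: a covering index so the paging query can be answered
-- from the index alone, never touching the wide clustered-index rows.
ALTER TABLE t ADD INDEX idx_cover (last_update_time, id, name);

-- EXPLAIN for the paging query should then report "Using index" in Extra:
SELECT id, name FROM t ORDER BY last_update_time LIMIT 10000, 10;
```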

Using EXPLAIN, with the content column present:

mysql> explain select id, name, last_update_time from t order by last_update_time desc limit 11120, 11;
+----+-------------+-----------+-------+---------------+----------------------+---------+------+-------+-------+
| id | select_type | table     | type  | possible_keys | key                  | key_len | ref  | rows  | Extra |
+----+-------------+-----------+-------+---------------+----------------------+---------+------+-------+-------+
|  1 | SIMPLE      | t | index | NULL          | idx_last_update_time | 8       | NULL | 11131 | NULL  |
+----+-------------+-----------+-------+---------------+----------------------+---------+------+-------+-------+

Without the content column:

+----+-------------+----------------+------+---------------+------+---------+------+-------+----------------+
| id | select_type | table          | type | possible_keys | key  | key_len | ref  | rows  | Extra          |
+----+-------------+----------------+------+---------------+------+---------+------+-------+----------------+
|  1 | SIMPLE      | t2 | ALL  | NULL          | NULL | NULL    | NULL | 15544 | Using filesort |
+----+-------------+----------------+------+---------------+------+---------+------+-------+----------------+
1 row in set (0.00 sec)

Could someone explain what is going on here, and how should it be optimized?


All replies (3)
左手右手慢动作

When content is excluded, the query uses idx_last_update_time; I guess the required columns can be satisfied from the index cheaply, so it is very fast.
When content is included, LIMIT 10000, 10 still forces the server to fetch 10,010 rows, and since content cannot be read from the index, each of those rows needs a lookup into the (very large) table data.
I recommend rewriting the SQL so the execution plan can make fuller use of the index. Assuming id is the primary key:

-- MySQL does not allow LIMIT inside an IN (...) subquery, so page the
-- ids in a derived table first, then join back for the wide columns:
select t.id, t.name, t.content
from t
join (
  select id
  from t
  order by last_update_time
  limit 10000, 10
) page on t.id = page.id
巴扎黑

content holds text, code, and so on, averaging 20 KB+.

You should build a full-text index (FULLTEXT INDEX) for that. Ordinary indexes are not suited to such extremely long text fields.

Ty80

I think it is mainly your paging method. LIMIT 10000, 10 means scanning 10,010 matching rows, discarding the first 10,000, and returning the last 10. With a very large column in the table, that inevitably inflates the I/O of the query. For query optimization you can refer to @Xing Aiming's approach (rewritten as a join, since MySQL rejects LIMIT inside an IN subquery):

SELECT i.id, i.title, i.content FROM items i JOIN (SELECT id FROM items ORDER BY last_update_time LIMIT 10000, 10) page ON i.id = page.id;

There is another optimization: record the last last_update_time value of each page, then the query can be written as:

SELECT * FROM items WHERE last_update_time > '<last recorded value>' ORDER BY last_update_time LIMIT 10;

Try both and see which is more efficient for you. I hope this helps.
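One caveat on the second method that the answers do not raise: if last_update_time is not unique, a plain `>` comparison can skip rows that share the boundary timestamp. A common refinement (a sketch, not from the original thread; the literal values stand in for the last row of the previous page) adds the primary key as a tie-breaker:

```sql
-- Assumed refinement: keyset pagination with id as tie-breaker.
-- The tuple comparison is standard MySQL syntax; the constants below
-- are placeholders for the last row returned on the previous page.
SELECT id, title, content
FROM items
WHERE (last_update_time, id) > ('2017-04-01 00:00:00', 123)
ORDER BY last_update_time, id
LIMIT 10;
```

This keeps each page request proportional to the page size instead of the page offset, which is why it scales better than LIMIT with a large offset.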
