With a large data volume in MongoDB, skip and limit are not recommended for paging. What other way is there to implement it?
迷茫 2017-04-24 09:10:31
As the title says: a couple of days ago I attended a MongoDB user group meetup, and the speaker said that when the data volume is large you should not use skip and limit, because the server has to count through the documents one by one until it reaches the target page before taking PageSize records. They did mention another approach, but only touched on it in a single sentence.

I'm asking here because I'd like to know concretely how that is done. Looking for ideas.

Replies (2)
大家讲道理

If you only need the "next page" or "previous page", you can do it with a sort + limit query bounded by a known _id (greater than the last _id of the current page, or less than the first); see the first sketch below.
If you need to jump to an arbitrary "page N" with complete accuracy, there is really no good way: paging is inherently a "count one by one" operation, and that cost cannot be avoided whether or not there is an index.
When the number of pages is very large, hardly anyone cares about exact positions tens of millions of records in. You can use Redis to cache an _id anchor per page number and refresh it every so often; see the second sketch below.
The root of the problem is that the B+ tree structure the index relies on cannot be used to compute ranks (i.e. answer "give me the Nth entry" directly).
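
A minimal sketch of the first point using pymongo; the connection string, database and collection names, and page size are assumptions for illustration, not something stated in the answer.

    # Keyset ("seek") pagination: bound the query by a known _id instead of skip.
    # Collection name, page size and connection details are illustrative only.
    from pymongo import MongoClient, ASCENDING, DESCENDING

    coll = MongoClient("mongodb://localhost:27017")["test"]["articles"]
    PAGE_SIZE = 20

    def next_page(last_id=None):
        """Return the PAGE_SIZE documents after last_id (None = first page)."""
        query = {"_id": {"$gt": last_id}} if last_id is not None else {}
        return list(coll.find(query).sort("_id", ASCENDING).limit(PAGE_SIZE))

    def prev_page(first_id):
        """Return the PAGE_SIZE documents immediately before first_id."""
        docs = list(coll.find({"_id": {"$lt": first_id}})
                        .sort("_id", DESCENDING)
                        .limit(PAGE_SIZE))
        return list(reversed(docs))  # restore ascending order for display

Because both the bound and the sort use the indexed _id field, each page read is a short index range scan rather than a walk over all the skipped documents.

And a hedged sketch of the Redis idea from the third point: a background job that scans the collection once and stores the first _id of each page keyed by page number (key name, client setup, and reuse of coll / PAGE_SIZE from the sketch above are assumptions).

    # Periodically rebuild a page-number -> first-_id anchor table in Redis,
    # so "jump to page N" becomes: look up the anchor, then $gte + sort + limit.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def rebuild_page_anchors():
        anchor = None
        page = 1
        while True:
            query = {} if anchor is None else {"_id": {"$gt": anchor}}
            docs = list(coll.find(query).sort("_id", ASCENDING).limit(PAGE_SIZE))
            if not docs:
                break
            r.hset("page_anchors", str(page), str(docs[0]["_id"]))  # cache anchor as hex string
            anchor = docs[-1]["_id"]
            page += 1

To serve "page N", read the anchor with r.hget("page_anchors", str(n)), convert it back with bson.ObjectId, and query {"_id": {"$gte": anchor}} with sort + limit. The full "count one by one" scan is still paid, but once in the background instead of on every request.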

Ty80

Record the last _id of the previous query; the next query is then {_id: {$gt: last_id}}.
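
For instance, a sketch of that loop with pymongo (collection name and batch size are illustrative assumptions) that walks an entire collection in batches without skip:

    # Walk the whole collection in batches, remembering the last _id of each
    # query and using it as the lower bound of the next one (no skip needed).
    from pymongo import MongoClient, ASCENDING

    coll = MongoClient("mongodb://localhost:27017")["test"]["articles"]
    BATCH = 100

    last_id = None
    while True:
        query = {} if last_id is None else {"_id": {"$gt": last_id}}
        batch = list(coll.find(query).sort("_id", ASCENDING).limit(BATCH))
        if not batch:
            break
        # ... process this batch ...
        last_id = batch[-1]["_id"]  # remember where this page ended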
