Let MySQL store paged results for complex queries.
It seems that few people discuss paging. Is everyone obsessed with limit m,n?
With an index, limit m,n is fast enough. But when searching with complex conditions, such as
where something order by somefield+somefield
MySQL has to scan the table, find "all" the records that match the conditions, sort them, and only then slice out the m,n range.
If the table holds hundreds of thousands of rows and users search for some very popular keyword,
then to dig up an old favorite they may page all the way toward the end of the results, and poor MySQL keeps grinding the hard disk the whole time.
So you can try to let MySQL store the paging as well; of course, the program has to cooperate.
(This is just an idea; everyone is welcome to discuss it.)
ASP paging: ASP has a Recordset object for paging, but it keeps a large amount of data in memory, and you never know when that data becomes invalid (guidance from ASP experts welcome).
SQL Server paging: paging with stored procedures + cursors; I am not clear on the exact implementation. But imagine that a single query produces the required result, or rather a set of IDs; when later pages are requested, the relevant records are simply read out by the IDs in that set. That way only a small amount of space is needed to keep all the IDs of the query. (I don't know how SQL Server cleans up the expired garbage from such query results?)
In this way, MySQL can be used to simulate this stored-paging mechanism:
1. select id from $table where $condition order by $field limit $max_pages*$count;
Query only the IDs of the records that meet the conditions.
The limit caps how many matching records are kept (at most $max_pages pages of $count rows each); you can also leave it off.
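As a rough sketch of step 1 (PDO, the articles table, and the title / hits / votes columns are stand-ins made up for illustration, not part of the original idea):

<?php
// Step 1 sketch: fetch ONLY the ids that match the complex condition,
// in display order, capped at $max_pages * $count rows.
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8mb4', 'user', 'pass');

$keyword   = $_GET['q'] ?? 'mysql';
$count     = 20;   // rows per page
$max_pages = 50;   // how many pages of ids to cache at most

// LIMIT only accepts a plain number, so do the multiplication in PHP.
$sql = "SELECT id FROM articles
        WHERE title LIKE :kw
        ORDER BY hits + votes DESC
        LIMIT " . (int)($max_pages * $count);
$stmt = $pdo->prepare($sql);
$stmt->execute([':kw' => '%' . $keyword . '%']);
$allIds = $stmt->fetchAll(PDO::FETCH_COLUMN);      // e.g. [37, 512, 8, ...]

// Split the id list into pages: $pages[1] holds page 1's ids, and so on.
$pages = [];
foreach (array_chunk($allIds, $count) as $i => $chunk) {
    $pages[$i + 1] = $chunk;
}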
2. Because all PHP variables are lost once the script finishes, you can consider one of the following (sketches of both options follow after this step):
Option a. Create a temporary cache table in MySQL and insert the query result under a unique identifier built from the time or a random number.
Give it fields page1 ~ pagen, each field holding the IDs needed for that page, so that one identifier corresponds to one row.
Option b. If sessions are enabled, you can also save the IDs in the session (which in practice means saving them to a file).
Create an $IDS array, $IDS[1] ~ $IDS[$max_pages]. Since users sometimes open several windows at once, give $IDS a unique key per query so that different query results do not overwrite each other; a two-dimensional array or $$var both work.
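A sketch of option a, assuming a normal MySQL table named page_cache as the scratch store (a true TEMPORARY table would vanish when the request's connection closes); the column names are invented, and $pages is the per-page id array built in the step 1 sketch:

<?php
// Option a sketch: cache the per-page id lists in MySQL under a random key.
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8mb4', 'user', 'pass');

$pdo->exec("CREATE TABLE IF NOT EXISTS page_cache (
    cache_key  CHAR(32)  NOT NULL PRIMARY KEY,   -- time/random unique identifier
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    page1 TEXT, page2 TEXT, page3 TEXT           -- ... up to pagen, ','-joined ids
)");

$pages    = [1 => [37, 512, 8], 2 => [91, 4, 77]];   // in practice: from step 1
$cacheKey = md5(uniqid('', true));                   // the unique identifier
$stmt = $pdo->prepare("INSERT INTO page_cache (cache_key, page1, page2, page3)
                       VALUES (:k, :p1, :p2, :p3)");
$stmt->execute([
    ':k'  => $cacheKey,
    ':p1' => implode(',', $pages[1] ?? []),
    ':p2' => implode(',', $pages[2] ?? []),
    ':p3' => implode(',', $pages[3] ?? []),
]);

And a sketch of option b, which is just a session write with a per-query key (the key scheme and variable names are again only an illustration):

<?php
// Option b sketch: keep the id lists in the session, keyed per query so that
// several browser windows do not overwrite each other's results.
session_start();

$keyword  = $_GET['q'] ?? 'mysql';
$pages    = [1 => [37, 512, 8], 2 => [91, 4, 77]];   // in practice: from step 1
$queryKey = md5($keyword . '|' . microtime(true));   // unique sign for this query

$_SESSION['IDS'][$queryKey] = $pages;   // two-dimensional: [queryKey][page] => ids
// Pass $queryKey along in the paging links, e.g. search.php?key=...&page=2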
3. On each page request, look up the IDs for that page directly and join them with ",":
select * from $table where id in ($ids); This is definitely fast, since the rows are fetched straight by their IDs.
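Finally, a sketch of step 3, continuing option b above (table name and request parameters are placeholders):

<?php
// Step 3 sketch: pull this page's ids from the session cache and fetch the rows.
session_start();
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8mb4', 'user', 'pass');

$queryKey = $_GET['key']  ?? '';
$page     = (int)($_GET['page'] ?? 1);
$ids      = $_SESSION['IDS'][$queryKey][$page] ?? [];

if ($ids) {
    // Build one "?" placeholder per id; the ids come from our own cache.
    $placeholders = implode(',', array_fill(0, count($ids), '?'));
    $stmt = $pdo->prepare("SELECT * FROM articles WHERE id IN ($placeholders)");
    $stmt->execute($ids);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    // IN() does not preserve order; if display order matters, re-sort $rows in
    // PHP using the order already stored in $ids.
} else {
    $rows = [];   // expired or unknown key: fall back to re-running the search
}

With option a you would instead SELECT the pageN column by cache_key and explode(',', ...) it back into IDs; in that hypothetical schema, rows older than some threshold on created_at can be deleted periodically, which is one way to handle the "expired garbage" question above.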