Which method is faster for PHP to extract data from MySQL?
巴扎黑 2017-05-18 10:44:34

1. A single row of data: the table has only one row, and the article field of that row contains 400,000 words separated by commas (,)

2. 400,000 rows of data: the table has 400,000 rows, and the article field of each row holds a single word

If option 1 is faster, how do I split the field into its 400,000 items in a loop and then recombine them (separated by commas), with each single word after splitting wrapped as a link like <a href="word">word</a>?
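For the single-field route, the split-and-link step the question describes might look like the following minimal sketch. Here $article stands in for the 400,000-word field fetched from MySQL (a short sample string is used so the sketch runs on its own), and the link target being the word itself is an assumption:

```php
<?php
// $article stands in for the comma-separated `article` field from MySQL;
// a short sample keeps the sketch self-contained.
$article = 'apple,banana,cherry';

$words = explode(',', $article);          // one array entry per word
$links = array_map(function ($word) {
    // a link whose href and text are both the single word after splitting
    return '<a href="' . rawurlencode($word) . '">' . htmlspecialchars($word) . '</a>';
}, $words);

echo implode(',', $links), "\n";          // recombine, separated by commas
```

With 400,000 real words the same explode()/implode() round trip applies unchanged; only the sample string differs.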


All replies (3)
洪涛

I think option 2 should be faster.

First fetch 1,000 rows:

SELECT `article` FROM `table` ORDER BY id DESC LIMIT 0,1000

Process those 1,000 rows one by one:

foreach ($list as $key => $value) {
    $link = '<a href="' . $value['article'] . '">' . $value['article'] . '</a>';
    // ...
}

Then fetch and process the next 1,000 rows, and so on.
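The batch loop described above can be sketched as runnable code; an in-memory array stands in for the MySQL table (swap the array_slice() call for the LIMIT query shown above), and all names are illustrative:

```php
<?php
// Synthetic stand-in for the table: 2,500 rows, one word per `article` field.
$rows = array_map(fn ($i) => ['article' => "word$i"], range(1, 2500));

$batchSize = 1000;
$offset = 0;
$linkCount = 0;
do {
    // In real code: SELECT `article` FROM `table` ORDER BY id DESC LIMIT $offset, $batchSize
    $list = array_slice($rows, $offset, $batchSize);
    foreach ($list as $value) {
        $link = '<a href="' . rawurlencode($value['article']) . '">'
              . htmlspecialchars($value['article']) . '</a>';
        $linkCount++;                     // here: just count; in real code: output $link
    }
    $offset += $batchSize;
} while (count($list) === $batchSize);    // a short batch signals the last page

echo $linkCount, "\n";                    // every row processed exactly once
```

Stopping on a short batch avoids an extra COUNT(*) query; with a real table, a prepared statement with bound integer LIMIT parameters would replace the array_slice() line.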

洪涛

Looking at the query alone, the former is definitely faster, but that one field stores 400,000 words (counting a word as 7 letters, including the comma), which is nearly 3 million characters, about 3 MB of data. Once it has been queried, though, splitting such a long field seems unreliable.

淡淡烟草味

If the business is simply listing 400,000 words on a page, I think the first method is faster.

Reasons:

1. Query: method 1 scans a single row to get the record, while method 2 has to scan a great many rows; the time difference is self-evident (and grows with the table). On this step method 1 is far superior to method 2.
2. Output: method 1 needs an extra splitting step, which is no problem at all for PHP; after that, both methods need to buffer the output.

Overall, method 1 has less overhead than method 2.
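The claim that the split is "no problem at all for PHP" is easy to check with a synthetic field of roughly the size the question describes (a rough illustration, not a proper benchmark; the 7-letter word matches the estimate in the second reply):

```php
<?php
// Build a ~3 MB comma-separated string of 400,000 seven-letter words,
// then time how long explode() takes to split it.
$article = implode(',', array_fill(0, 400000, 'abcdefg'));

$t0 = microtime(true);
$words = explode(',', $article);
$elapsed = microtime(true) - $t0;

echo count($words), " words split in ", round($elapsed * 1000, 1), " ms\n";
```

On any recent PHP build this completes in a small fraction of a second, so the split itself is not the bottleneck; the memory footprint of holding both the string and the array is the more relevant cost.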
