1. A single record: the table has only one row, and that row's article field holds all 400,000 words, separated by commas (,)
2. 400,000 records: the table has 400,000 rows, and the article field of each row holds a single word
If scheme 1 turns out to be the fast one, how do I loop over that field and pull out each single word after splitting it? And conversely, how would I loop over 400,000 single words and combine them (separated by commas) into one field?
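A minimal sketch of the two directions the question asks about; the words here are placeholders, not real data:

```php
<?php
// Scheme 1 -> individual words: split the single comma-separated field.
$field = 'apple,banana,cherry';             // imagine ~400,000 words here
$words = explode(',', $field);              // ['apple', 'banana', 'cherry']

foreach ($words as $word) {
    // handle one word at a time
    echo $word, "\n";
}

// Scheme 2 -> one field: join the single words back together with commas.
$rows  = ['apple', 'banana', 'cherry'];     // imagine one word per fetched row
$field = implode(',', $rows);               // 'apple,banana,cherry'
echo $field, "\n";
```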
I think scheme 2 should be faster. Process it in batches, roughly like this (sketch below):
First fetch 1,000 rows;
process those 1,000 rows one by one;
then fetch and process the next 1,000 rows, and repeat until they are all done.
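A minimal sketch of that batched loop for scheme 2, assuming a hypothetical table words with id and article columns (DSN and credentials are placeholders):

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$batchSize = 1000;
$lastId    = 0;

while (true) {
    // Keyset pagination on the primary key: stays cheap even late in the table,
    // unlike LIMIT/OFFSET once the offset grows large.
    $stmt = $pdo->prepare(
        'SELECT id, article FROM words WHERE id > ? ORDER BY id LIMIT ' . $batchSize
    );
    $stmt->execute([$lastId]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    if (!$rows) {
        break; // nothing left to fetch
    }

    foreach ($rows as $row) {
        echo $row['article'], "\n";   // process one word at a time
        $lastId = $row['id'];
    }
}
```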
Looking at the query alone, the former is definitely faster, but a single field storing 400,000 words (counting roughly 7 characters per word, comma included) is close to 3 million characters, about 3 MB of data. After fetching it, splitting such a long field doesn't seem reliable.
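A rough way to check what splitting a field of that size actually costs in PHP; the data here is synthetic, so real numbers will differ:

```php
<?php
// Build a fake field: 400,000 "words" of 6 letters each, comma-separated (~2.7 MB).
$field = implode(',', array_fill(0, 400000, 'abcdef'));

$before = memory_get_usage();
$words  = explode(',', $field);          // splitting a few MB is fast in PHP...
$after  = memory_get_usage();

printf("words: %d\n", count($words));    // 400000
printf("string size: ~%.1f MB\n", strlen($field) / 1048576);
printf("extra memory for the array: ~%.1f MB\n", ($after - $before) / 1048576);
// ...but the resulting 400,000-element array typically needs far more memory
// than the raw string, so memory_limit is the real concern, not explode() itself.
```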
If the business is as simple as listing the 400,000 words on a page, I think the first method is faster.
Reasons:
1. Query
Method 1 only has to scan one row to get the record, while method 2 has to scan a great many rows; the time difference is self-evident (and becomes more obvious the larger the table gets).
In this step method 1 is far ahead of method 2.
2. Output
Method 1 has the extra step of splitting the field itself, which is no problem at all for PHP. After that, both methods need to buffer the output.
In general, method 1 has less overhead than method 2.
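A minimal sketch of method 1 end to end: fetch the single row, split it in PHP, and write the words out through an output buffer. The table/column names (articles, article) and connection details are assumptions for illustration:

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One row, one scan: the whole comma-separated field comes back in a single query.
$field = $pdo->query('SELECT article FROM articles LIMIT 1')->fetchColumn();

ob_start();                         // buffer instead of doing 400,000 tiny writes
foreach (explode(',', $field) as $word) {
    echo $word, "<br>\n";
}
ob_end_flush();
```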