When working with large data sets, selecting random rows can be a computationally intensive task. This article explores various methods for retrieving random rows from a table containing approximately 500 million rows, and discusses their performance and accuracy.
The first method uses the RANDOM() function as a per-row filter in the WHERE clause, with the LIMIT clause capping the number of rows returned.
SELECT * FROM table WHERE RANDOM() < 0.000002 LIMIT 1000;
This approach has the advantage of being easy to implement, but it is inefficient for large tables: the WHERE clause evaluates RANDOM() for every row, so the database performs a full sequential scan of the table. The filter is also probabilistic, so the query may return fewer than the requested 1000 rows.
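The cost profile of this filter can be illustrated with a small simulation. The function below is a hypothetical Python stand-in for the table scan, not PostgreSQL's actual executor: every row is visited once, and each passes independently with probability p, so the result count varies from run to run.

```python
import random

def sample_where_random(rows, p):
    """Simulate WHERE RANDOM() < p: every row is visited once,
    and each row passes independently with probability p."""
    scanned = 0
    picked = []
    for row in rows:
        scanned += 1          # the scan touches every row regardless of p
        if random.random() < p:
            picked.append(row)
    return picked, scanned

rows = list(range(100_000))
picked, scanned = sample_where_random(rows, 1000 / len(rows))
print(scanned)       # always 100000: the whole table is read
print(len(picked))   # close to 1000, but varies between runs
```

Note that `scanned` always equals the table size, and `len(picked)` only approximates the target count, which mirrors both drawbacks of the SQL version.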
Another approach is to first sort the rows by the RANDOM() function and then use the LIMIT clause to get random rows.
SELECT * FROM table ORDER BY RANDOM() LIMIT 1000;
This method looks similar to the first, but unlike the probabilistic filter it guarantees exactly 1000 uniformly selected rows. It is not more efficient, however: the database must still scan the entire table and then sort every row by the random key, so it remains a poor choice for tables with an extremely large number of rows.
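The work ORDER BY RANDOM() implies can be sketched procedurally. This is a simplified Python model of the plan, not PostgreSQL's implementation: assign every row a random sort key, sort the whole table, and keep the first k rows.

```python
import random

def order_by_random_limit(rows, k):
    """Simulate ORDER BY RANDOM() LIMIT k: assign every row a random
    sort key, sort the entire table, then keep the first k rows."""
    keyed = [(random.random(), row) for row in rows]  # one key per row
    keyed.sort()                                      # sorts all N rows
    return [row for _, row in keyed[:k]]              # LIMIT k

rows = list(range(100_000))
picked = order_by_random_limit(rows, 1000)
print(len(picked))   # exactly 1000, unlike the WHERE RANDOM() filter
```

The sort over all N rows is what dominates the cost on a 500-million-row table, even though only 1000 rows are kept.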
For tables with a numeric ID column and few gaps, a more efficient approach is available: generate random numbers within the range of existing IDs and join them against the table.
WITH params AS (
    SELECT 1       AS min_id,  -- lowest ID (<= current minimum ID)
           5100000 AS id_span  -- rounded up: (max_id - min_id + buffer)
)
SELECT *
FROM (
    SELECT p.min_id + trunc(random() * p.id_span)::integer AS id
    FROM params p, generate_series(1, 1100) g  -- 1000 + buffer
    GROUP BY 1                                 -- trim duplicates
) r
JOIN table USING (id)
LIMIT 1000;
This approach leverages index access to significantly reduce the number of scans required. It is ideal for tables with a large number of rows and few gaps in the ID column.
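The logic of the query can be modeled in a few lines of Python. The dict below is a toy stand-in for the table's primary-key index, and the function names and buffer size are illustrative assumptions: draw random IDs in the known range, deduplicate them, probe the index, and keep up to the requested number of hits. IDs that fall into gaps simply miss, which the buffer absorbs.

```python
import random

def sample_by_id(index, min_id, id_span, want, buffer=100):
    """Simulate the ID-range technique: draw (want + buffer) random IDs,
    dedupe them, probe the index, and keep up to `want` hits."""
    candidates = {min_id + int(random.random() * id_span)
                  for _ in range(want + buffer)}         # set dedupes, like GROUP BY 1
    hits = [index[i] for i in candidates if i in index]  # JOIN ... USING (id)
    return hits[:want]                                   # LIMIT want

# Toy "table": IDs 1..200_000 with every 1000th ID missing (gaps)
index = {i: f"row-{i}" for i in range(1, 200_001) if i % 1000 != 0}
rows = sample_by_id(index, min_id=1, id_span=200_000, want=1000)
print(len(rows))
```

Each candidate ID costs one index probe instead of a table scan, which is why the SQL version scales to hundreds of millions of rows as long as the ID range has few gaps.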
The best way to select random rows depends on specific table characteristics and performance requirements. For small tables, the RANDOM() or ORDER BY RANDOM() methods may be sufficient. However, for large tables with numeric ID columns and few gaps, it is recommended to use the above optimization method for best performance.
It should be noted that due to the nature of pseudo-random number generation in computers, none of these methods can guarantee true randomness. However, they provide a practical way to obtain a random sample of rows from a large table with reasonable efficiency and accuracy.