Personally, I'd recommend excluding the tables one by one to see which one is slowing the query down; once you've identified it, do index optimization on that table.
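A minimal sketch of that elimination, assuming MySQL (Oracle would use EXPLAIN PLAN FOR instead); the t_user join here is just one hypothetical step: start from the driving table alone, add the joins back one at a time, and watch which one blows up the estimated row count.
sql:
-- assumes MySQL's EXPLAIN; add one join back at a time and compare plans
explain
select count(*)
from t_for_sale
left join t_user on t_user.id = t_for_sale.id;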
From a database-optimization perspective, the final result is just a single count(*); a large number of joins only wastes memory and time.
Here's an idea: shape the SQL result so that the final count(*) equals the sum, over the result rows, of the product of the per-table counts in each row, i.e. (count11 * count12 * ... * count16) + (count21 * count22 * ... * count26) + ... This avoids materializing the huge Cartesian product.
sql:
select t_for_sale.id, nvl(t1.count1, 0) count1, ...
from t_for_sale
left join (select id, count(*) count1 from t_user group by id) t1 on t1.id = t_for_sale.id
...
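Extending that fragment, a hedged sketch of the whole pattern with two of the six tables (t_order is a hypothetical stand-in for the remaining joins). Each subquery is pre-aggregated to at most one row per id, so the joins no longer multiply rows, and nvl(..., 0) makes a missing match contribute a zero product, matching inner-join counting.
sql:
-- t_order is hypothetical; repeat the pattern for each remaining table
select sum(nvl(t1.count1, 0) * nvl(t2.count2, 0)) as total
from t_for_sale
left join (select id, count(*) count1 from t_user group by id) t1 on t1.id = t_for_sale.id
left join (select id, count(*) count2 from t_order group by id) t2 on t2.id = t_for_sale.id;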
Alternatively, run separate queries and combine the results as key-value pairs in application code.
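A minimal sketch of that split, with t_order again a hypothetical table name; each statement returns an (id, count) map, and the application multiplies and sums the values instead of the database performing one giant join.
sql:
-- one small aggregate per table; combine the (id -> cnt) maps in application code
select id, count(*) as cnt from t_user group by id;
select id, count(*) as cnt from t_order group by id;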
A very simple and practical solution is to create an intermediate table, trading space for time.
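A hedged sketch of such an intermediate table; t_sale_stats and the join column are assumptions, and you'd refresh it on a schedule or from triggers whenever the base tables change.
sql:
-- precompute the per-id totals once (t_sale_stats is a hypothetical name)
create table t_sale_stats as
select t_for_sale.id, count(*) as total
from t_for_sale
left join t_user on t_user.id = t_for_sale.id
group by t_for_sale.id;

-- later reads become a cheap scan of the precomputed table
select sum(total) from t_sale_stats;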
In addition, once the data volume reaches a certain scale, consider splitting databases and tables (sharding); take a look at the mycat middleware.