Question: I have daily log tables (one table generated per day), each holding roughly 5 million rows. From each day's table I need to count the number of distinct IPs per appid. The SQL is roughly:

select appid, count(distinct(ip)) from log0812_tb where iptype = 4 group by appid;

The resulting appid and IP counts are then written into a separate statistics table. The problems:
1. Running the SQL directly is guaranteed to time out (the system's read timeout is only 400 ms).
2. Pulling all the rows into memory and processing them there won't fit either: I'm given only 50 MB of memory. (A requirement that doesn't torture programmers isn't a good company, right?)
Is there any way to optimize this? Thanks.
Let’s first go through the possible optimizations:
Create a composite index on (appid, ip) (ideally including the filter column too, i.e. (iptype, appid, ip), so the query becomes an index-only scan with no table lookups).
Store IPs as integers rather than strings (e.g. via MySQL's INET_ATON()); integer comparisons are faster and the index is much smaller.
If it still times out, you could try processing the data in memory, but with only 50 MB available, consider HyperLogLog instead. It consumes very little memory, at the cost of a small statistical error, roughly 2%.
Finally, it's best not to keep this kind of log data in a relational SQL database at all. NoSQL stores such as HBase or MongoDB can serve this workload very well.
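To make the HyperLogLog suggestion concrete, here is a minimal, self-contained sketch in Python (a hypothetical illustration, not code from this thread; the hash choice and register count are assumptions). With m = 2**14 registers the standard error is about 1.04/sqrt(m) ≈ 0.8%, consistent with the "roughly 2%" figure above, while using only a few kilobytes:

```python
import hashlib
import math

class HyperLogLog:
    def __init__(self, b=14):
        self.b = b                  # bits used to pick a register
        self.m = 1 << b             # number of registers (16384)
        self.registers = [0] * self.m
        # bias-correction constant, valid for m >= 128
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):
        # 64-bit hash of the item
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        idx = h >> (64 - self.b)                  # first b bits -> register index
        rest = h & ((1 << (64 - self.b)) - 1)     # remaining 64-b bits
        # position of the leftmost 1-bit in the remaining bits
        rank = (64 - self.b) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def count(self):
        est = self.alpha * self.m ** 2 / sum(2.0 ** -r for r in self.registers)
        # small-range correction: fall back to linear counting
        zeros = self.registers.count(0)
        if est <= 2.5 * self.m and zeros:
            est = self.m * math.log(self.m / zeros)
        return int(est)

hll = HyperLogLog()
for i in range(100_000):
    hll.add(f"10.0.{(i >> 8) & 255}.{i & 255}-{i}")
print(hll.count())   # close to 100000, within a few percent
```

In production you would more likely use an existing implementation (e.g. Redis's PFADD/PFCOUNT) rather than rolling your own, but the sketch shows why the memory footprint stays tiny regardless of cardinality.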
@manong
Thank you, both optimizations you suggested are good.
I built a composite index on (iptype, appid, ip), so the statement runs entirely through the index without going back to the table, and the time stays under 1.5 s. That works.
As for HyperLogLog, I've only skimmed it and haven't put it into practice yet, but thanks for the recommendation.
I ended up using another approach: a scheduled task processes the 5 million+ rows in batches. For each batch I deduplicate the IPs, use array_diff to find which IPs in the current batch haven't appeared in earlier batches, and sum those counts to get the total distinct count. This also keeps the run time under 1 s. One trick: instead of storing the previously seen IPs as a nested array, I join each batch's array into a string before storing it, and split the string back into an array for the next comparison. This saves a lot of memory; in my tests, an array of nested arrays consumes noticeably more memory than an array of long strings.
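The batch-and-diff approach above can be sketched as follows (a hypothetical Python rendering of the PHP array_diff idea; fetch_batch, BATCH_SIZE, and the demo data are stand-ins, not the poster's actual code):

```python
BATCH_SIZE = 50_000

def fetch_batch(rows, offset):
    # stand-in for a paged query like "SELECT appid, ip ... LIMIT offset, BATCH_SIZE"
    return rows[offset:offset + BATCH_SIZE]

def count_distinct_ips(rows):
    seen = {}      # appid -> set of IPs seen in earlier batches
    totals = {}    # appid -> running distinct count
    offset = 0
    while True:
        batch = fetch_batch(rows, offset)
        if not batch:
            break
        for appid, ip in batch:
            prev = seen.setdefault(appid, set())
            if ip not in prev:          # the "array_diff" step: new IPs only
                prev.add(ip)
                totals[appid] = totals.get(appid, 0) + 1
        offset += BATCH_SIZE
    return totals

# tiny demo with IPs duplicated across rows
rows = [("app1", f"10.0.0.{i % 100}") for i in range(250)] \
     + [("app2", f"10.0.1.{i % 30}") for i in range(90)]
print(count_distinct_ips(rows))   # {'app1': 100, 'app2': 30}
```

Python sets make the string-join trick unnecessary here; that trick is a memory optimization specific to PHP arrays, where each array element carries significant per-element overhead.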