The company's project involves a large number of high-precision calculations. I started with double arithmetic, but found that some results drifted outside the acceptable error range, so I switched to BigDecimal. The problem is that BigDecimal is dozens of times slower than double, and with large data volumes the computation becomes extremely slow. Is there a good solution? I need to solve this urgently.
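To make the precision problem concrete, here is a minimal demonstration of the kind of binary rounding error double introduces, and how BigDecimal (constructed from strings) avoids it:

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // double accumulates binary rounding error
        double d = 0.1 + 0.2;
        System.out.println(d); // 0.30000000000000004

        // BigDecimal with the String constructor represents 0.1 and 0.2 exactly
        BigDecimal b = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(b); // 0.3
    }
}
```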
// Pearson correlation coefficient
public BigDecimal getRelativityTool_bydim(RelativityTool u) {
    BigDecimal sim; // the final Pearson correlation coefficient
    BigDecimal common_items_len = new BigDecimal(this.rating_map_list.size()); // number of rated items
    BigDecimal this_sum = BigDecimal.ZERO;    // sum of the first series
    BigDecimal u_sum = BigDecimal.ZERO;       // sum of the second series
    BigDecimal this_sum_sq = BigDecimal.ZERO; // sum of squares of the first series
    BigDecimal u_sum_sq = BigDecimal.ZERO;    // sum of squares of the second series
    BigDecimal p_sum = BigDecimal.ZERO;       // sum of pairwise products
    for (int i = 0; i < this.rating_map_list.size(); i++) {
        BigDecimal this_grade = this.rating_map_list.get(i);
        BigDecimal u_grade = u.rating_map_list.get(i);
        // accumulate sums, sums of squares, and the sum of products
        this_sum = this_sum.add(this_grade);
        u_sum = u_sum.add(u_grade);
        this_sum_sq = this_sum_sq.add(this_grade.pow(2));
        u_sum_sq = u_sum_sq.add(u_grade.pow(2));
        p_sum = p_sum.add(this_grade.multiply(u_grade));
    }
    BigDecimal num = common_items_len.multiply(p_sum).subtract(this_sum.multiply(u_sum));
    BigDecimal den = sqrt(common_items_len.multiply(this_sum_sq).subtract(this_sum.pow(2))
            .multiply(common_items_len.multiply(u_sum_sq).subtract(u_sum.pow(2))));
    if (den.compareTo(BigDecimal.ZERO) == 0) {
        sim = BigDecimal.ONE;
    } else {
        sim = num.divide(den, 5, RoundingMode.HALF_UP);
    }
    return sim;
}
// square root for BigDecimal (Babylonian / Newton iteration)
public static BigDecimal sqrt(BigDecimal x) {
    BigDecimal n1 = BigDecimal.ONE;
    // iterate until n1^2 is within 0.001 of x
    while (n1.multiply(n1).subtract(x).abs().compareTo(BigDecimal.valueOf(0.001)) > 0) {
        // the scale of 2000 in these divisions is the main performance cost
        BigDecimal s1 = x.divide(n1, 2000, RoundingMode.HALF_UP);
        BigDecimal s2 = n1.add(s1);
        n1 = s2.divide(BigDecimal.valueOf(2), 2000, RoundingMode.HALF_UP);
    }
    // keep only the integer part (note: this discards the fractional digits)
    return new BigDecimal(n1.toBigInteger());
}
Short of doing the high-precision arithmetic in C or C++, there seems to be no way to get both performance and precision at once.
University computer science programs have a course called "Computational Methods" (numerical analysis) that deals specifically with minimizing error when computing with limited precision. If you are interested, look up the relevant textbooks.
"Later I found that the accuracy of some values after using double type operations exceeded the ideal range"
Do you mean the error exceeded your tolerance, or that double's precision was simply insufficient?
Here is a piece of square-root code I found on Stack Overflow. Tested on my own machine, it is about ten times faster than the version you posted above.
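The snippet itself did not survive in this post, so what follows is a sketch of the usual Stack Overflow approach rather than the exact code referenced: seed Newton's iteration with `Math.sqrt` on the double value (already accurate to ~15 digits) and iterate at a bounded `MathContext` precision, instead of dividing at scale 2000 from a cold start.

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class BigSqrt {
    // Newton iteration for sqrt, seeded with Math.sqrt on the double value.
    // Sketch only: assumes x is non-negative and fits in double's exponent range.
    public static BigDecimal sqrt(BigDecimal x, MathContext mc) {
        if (x.signum() == 0) {
            return BigDecimal.ZERO;
        }
        BigDecimal two = BigDecimal.valueOf(2);
        // the double seed is close to the root, so only a few iterations are
        // needed even for high target precision
        BigDecimal guess = BigDecimal.valueOf(Math.sqrt(x.doubleValue()));
        for (int i = 0; i < 50; i++) { // hard cap to guarantee termination
            // x_{n+1} = (x_n + x / x_n) / 2
            BigDecimal next = x.divide(guess, mc).add(guess).divide(two, mc);
            if (next.compareTo(guess) == 0) {
                break; // converged at the requested precision
            }
            guess = next;
        }
        return guess;
    }
}
```

On Java 9 and later, `BigDecimal.sqrt(MathContext)` is built into the JDK and should be preferred over any hand-rolled version.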
So: first, you can improve performance by improving the algorithm. Second, the best option is to find an existing library and use it directly, such as the one above.
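Concretely, the whole Pearson computation can usually stay in double, with BigDecimal used only to round the final result: the coefficient is a ratio in [-1, 1], so double's ~15-16 significant digits dwarf the 5-decimal output scale used in the code above. A stdlib-only sketch (class and method names are illustrative; Apache Commons Math's `PearsonsCorrelation` is a ready-made library alternative):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class FastPearson {
    // Pearson correlation computed in double, rounded to 5 decimals at the end
    public static BigDecimal pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double num = n * sxy - sx * sy;
        double den = Math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
        // same zero-denominator convention as the BigDecimal version above
        double r = den == 0 ? 1.0 : num / den;
        return BigDecimal.valueOf(r).setScale(5, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        double[] a = {1, 2, 3, 4, 5};
        double[] b = {2, 4, 6, 8, 10};
        System.out.println(pearson(a, b)); // perfectly linear data -> 1.00000
    }
}
```

Each loop iteration here is a handful of hardware multiply-adds instead of heap-allocating BigDecimal operations, which is where the dozens-of-times slowdown comes from.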