This article gives a detailed explanation of classification evaluation indicators and regression evaluation indicators, together with their Python code implementations, for readers who need a reference.
Performance measurement (evaluation) indicators are mainly divided into two categories:
1) Classification evaluation indicators (classification), which apply to discrete (categorical) outputs. The specific indicators include accuracy, precision, recall, the F value, the P-R curve, the ROC curve and AUC.
2) Regression evaluation indicators (regression), which apply to continuous (real-valued) outputs. The specific indicators include the explained variance score (explained_variance_score), mean absolute error MAE (mean_absolute_error), mean squared error MSE (mean_squared_error), root mean squared error RMSE, cross-entropy loss (log loss, cross-entropy loss), and the R-squared value (coefficient of determination, r2_score).
Assume there are only two categories: positive and negative. Usually the category of interest is taken as the positive category and all other categories as the negative category (so multi-class problems can also be reduced to two classes).
The confusion matrix is as follows
| Actual category \ Predicted category | Positive | Negative | Total |
| --- | --- | --- | --- |
| Positive | TP | FN | P (actual positives) |
| Negative | FP | TN | N (actual negatives) |
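As a quick sketch (not part of the original article), the same layout can be produced with scikit-learn's confusion_matrix; the y_true and y_pred values below are made-up example labels:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical labels: 1 = positive (e.g. spam), 0 = negative
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Rows are actual classes, columns are predicted classes.
# Passing labels=[1, 0] puts the positive class first, matching the table above:
# [[TP, FN],
#  [FP, TN]]
print(confusion_matrix(y_true, y_pred, labels=[1, 0]))
```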
2. Evaluation indicators (performance measurement)

2.1 Classification evaluation indicators

2.1.1 Value indicators: Accuracy, Precision, Recall, F value
Accuracy is the proportion of all examples, positive and negative, that are classified correctly: accuracy = (TP + TN) / (TP + TN + FP + FN). The remaining value indicators are defined as follows:

| Indicator | Definition | Formula |
| --- | --- | --- |
| Precision | The proportion of examples predicted as positive that are actually positive | precision = TP / (TP + FP) |
| Recall | The proportion of actual positive examples that are correctly predicted as positive (e.g. the share of all real spam messages that the classifier finds) | recall = TP / (TP + FN) |
| F value (F-score) | The harmonic mean of precision and recall | F-score = 2 × precision × recall / (precision + recall) |

Note: precision is also often called the precision rate, and recall the recall rate.
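A minimal sketch of computing these value indicators with scikit-learn, reusing the made-up labels from the confusion-matrix example above:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical ground-truth labels, 1 = positive
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # hypothetical predicted labels

print("accuracy :", accuracy_score(y_true, y_pred))   # (TP + TN) / (TP + TN + FP + FN)
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1       :", f1_score(y_true, y_pred))         # 2 * precision * recall / (precision + recall)
```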
2.1.2 Curve indicators
1) P-R curve
2) ROC curve

2.2 Regression evaluation indicators
1) Explained variance score (explained_variance_score)
2) Mean absolute error MAE (mean_absolute_error)

Cross-entropy loss (log loss):

```python
from sklearn.metrics import log_loss
log_loss(y_true, y_pred)
```

Pearson correlation coefficient:

```python
from scipy.stats import pearsonr
pearsonr(rater1, rater2)
```

Cohen's kappa coefficient:

```python
from sklearn.metrics import cohen_kappa_score
cohen_kappa_score(rater1, rater2)
```
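For completeness, here is a hedged sketch of the remaining indicators mentioned above: ROC/AUC for classification and the main regression errors. The values and variable names (y_score, y_reg_true, y_reg_pred) are made-up illustrations, not from the original article:

```python
import numpy as np
from sklearn.metrics import (roc_curve, roc_auc_score, explained_variance_score,
                             mean_absolute_error, mean_squared_error, r2_score)

# --- Classification: ROC curve and AUC (hypothetical scores) ---
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_score = [0.9, 0.2, 0.7, 0.4, 0.3, 0.8, 0.6, 0.1]  # predicted probability of the positive class
fpr, tpr, thresholds = roc_curve(y_true, y_score)    # points of the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))

# --- Regression: explained variance, MAE, MSE, RMSE, R^2 (hypothetical values) ---
y_reg_true = np.array([3.0, -0.5, 2.0, 7.0])
y_reg_pred = np.array([2.5, 0.0, 2.0, 8.0])
print("Explained variance:", explained_variance_score(y_reg_true, y_reg_pred))
print("MAE :", mean_absolute_error(y_reg_true, y_reg_pred))
mse = mean_squared_error(y_reg_true, y_reg_pred)
print("MSE :", mse)
print("RMSE:", np.sqrt(mse))
print("R^2 :", r2_score(y_reg_true, y_reg_pred))
```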