First, we need to figure out why cross-validation is needed.
Cross-validation is a technique commonly used in machine learning and statistics to evaluate the performance and generalization ability of a predictive model. It is especially valuable when data are limited, or when we need to measure how well a model generalizes to new, unseen data.
Under what circumstances is cross-validation used?
The general idea can be illustrated with 5-fold cross-validation. In each iteration, a new model is trained on four of the sub-datasets and tested on the fifth, held-out sub-dataset, ensuring that all of the data is eventually used. The average score and standard deviation across iterations then provide a true measure of model performance.
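As a sketch of this idea, scikit-learn's cross_val_score trains and scores the model once per fold and returns one score per fold; the toy regression data and the choice of LinearRegression here are illustrative, not part of the original example:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Toy noiseless data, so the linear model fits almost perfectly
x, y = make_regression(n_samples=100, n_features=10, random_state=0)

# The model is retrained 5 times, each time tested on the held-out fifth
scores = cross_val_score(LinearRegression(), x, y, cv=5)
print(scores.mean(), scores.std())
```

The mean summarizes performance and the standard deviation shows how stable it is across folds.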
Everything starts with K-fold cross-validation.
K-fold cross-validation is built into Sklearn. Here is a 7-fold example:
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold

x, y = make_regression(n_samples=100)

# Init the splitter
cross_validation = KFold(n_splits=7)
Another common operation is to shuffle the data before performing the split, which further reduces the risk of overfitting by breaking up the original order of the samples:
cross_validation = KFold(n_splits=7, shuffle=True)
In this way, a simple k-fold cross-validation can be done. Be sure to check out the source code!
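To make the mechanics concrete, the splitter's split method yields pairs of index arrays, one pair per fold; a minimal sketch on toy data (the 10-sample array is an assumption for illustration):

```python
import numpy as np
from sklearn.model_selection import KFold

x = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features each
kfold = KFold(n_splits=5)

sizes = []
for train_idx, test_idx in kfold.split(x):
    # Each iteration holds out a different fifth of the samples
    sizes.append((len(train_idx), len(test_idx)))
print(sizes)
```

Every sample appears in exactly one test fold across the five iterations.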
StratifiedKFold is specially designed for classification problems.
In some classification problems, the target distribution should remain unchanged even when the data is divided into multiple sets. For example, a binary target with a 30-to-70 class ratio should, in most cases, maintain that same ratio in both the training set and the test set. In ordinary KFold this rule is broken, because shuffling the data before splitting does not preserve the class proportions.
To solve this problem, Sklearn provides another splitter class specifically for classification - StratifiedKFold:
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold

x, y = make_classification(n_samples=100, n_classes=2)
cross_validation = StratifiedKFold(n_splits=7, shuffle=True, random_state=1121218)
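A small sketch can verify the stratification: with an imbalanced toy target (the 30/70 weights and 5-fold setup below are assumptions for illustration), the class ratio in every held-out fold stays close to the overall ratio:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold

# Imbalanced toy target: roughly a 30/70 class ratio
x, y = make_classification(n_samples=100, n_classes=2,
                           weights=[0.3, 0.7], random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Share of the majority class in each held-out fold
ratios = [round(y[test_idx].mean(), 2) for _, test_idx in cv.split(x, y)]
print(ratios)
```

With plain KFold and shuffling, these per-fold ratios would drift much further from 0.7.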
Although StratifiedKFold looks similar to KFold, the class proportions now remain consistent across all splits and iterations. Another technique that is very similar to cross-validation is to repeatedly perform random train/test splits, and the Scikit-learn library also provides a corresponding interface:
from sklearn.model_selection import ShuffleSplit

cross_validation = ShuffleSplit(n_splits=7, train_size=0.75, test_size=0.25)
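To see how this differs from KFold, a minimal sketch (the 100-sample toy array and fixed random_state are assumptions): every iteration draws a fresh random 75/25 split, so test sets from different iterations may overlap.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

x = np.arange(100).reshape(100, 1)  # 100 toy samples
cv = ShuffleSplit(n_splits=7, train_size=0.75, test_size=0.25,
                  random_state=0)

# Each iteration is an independent random draw, not a partition
sizes = [(len(tr), len(te)) for tr, te in cv.split(x)]
print(sizes)
```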
When the data set is a time series, traditional cross-validation cannot be used, because it would completely disrupt the temporal order. To solve this problem, Sklearn provides another splitter - TimeSeriesSplit:
from sklearn.model_selection import TimeSeriesSplit

cross_validation = TimeSeriesSplit(n_splits=7)
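A small sketch shows the key property (the 50-step toy series is an assumption): the training window always ends before the test window starts, so the model is never evaluated on data from its own past.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

x = np.arange(50).reshape(50, 1)  # 50 toy time steps, already in order
cv = TimeSeriesSplit(n_splits=7)

for train_idx, test_idx in cv.split(x):
    # Training indices always precede test indices: no peeking at the future
    assert train_idx.max() < test_idx.min()
    print(len(train_idx), len(test_idx))
```

Note that the training window grows with each iteration while the test window stays the same size.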
The methods above are designed for independent and identically distributed (IID) data sets, that is, data sets in which the process generating each sample is not affected by the other samples.
However, in some cases the data does not meet the IID condition: some samples depend on each other. This situation also occurs in Kaggle competitions, such as the Google Brain Ventilator Pressure competition. That data records the air pressure of an artificial lung during thousands of breaths (inhalation and exhalation), sampled at every moment of each breath. There are approximately 80 rows of data per breath, and these rows are related to each other. In this case, traditional cross-validation methods cannot be used, because a split might happen "right in the middle of a breath".
This can be understood as a need to "group" the data, because samples within a group are related. For example, when collecting medical data from multiple patients, each patient contributes multiple samples, and those samples are likely affected by individual patient differences, so they too need to be grouped.
Often we hope that a model trained on specific groups will generalize well to other, unseen groups. So when performing cross-validation, we "tag" each sample with its group and tell the splitter how to keep the groups apart.
Sklearn provides several interfaces to handle these situations, including GroupKFold, GroupShuffleSplit, and LeaveOneGroupOut:
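As a minimal sketch of the group-aware splitters, GroupKFold keeps all samples of a group together; the hypothetical patient data below (4 patients, 3 samples each) is invented for illustration:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical medical data: 4 patients, 3 samples per patient
x = np.arange(24).reshape(12, 2)
y = np.arange(12)
groups = np.repeat([0, 1, 2, 3], 3)

cv = GroupKFold(n_splits=4)
for train_idx, test_idx in cv.split(x, y, groups=groups):
    # A patient's samples land entirely in train or entirely in test,
    # never split across both
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
    print(sorted(set(groups[test_idx])))
```

Each fold therefore tests the model on patients it has never seen during training, which is exactly the generalization we want to measure.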
It is strongly recommended to understand the idea behind cross-validation and how it is implemented; reading the Sklearn source code is a good way to do this. In addition, you need a clear understanding of your own data set - data preprocessing really matters.