
ImageNet label error removed, model ranking changed significantly

WBOY
Release: 2023-04-12 17:46:11

ImageNet previously became a hot topic because of its label errors, and the scale may surprise you: at least 100,000 labels are problematic. Studies that relied on those incorrect labels may have to be redone.

From this point of view, managing the quality of a dataset is very important.

Many people use ImageNet as a benchmark, but for models pre-trained on ImageNet, the final results can vary with the quality of the data.

In this article, Kenichi Higuchi, an engineer at Adansons, revisits the ImageNet dataset following the paper "Are we done with ImageNet?": after removing the mislabeled data, he re-evaluates the pre-trained models published in torchvision.

Removing erroneous data from ImageNet and re-evaluating the models

This paper divides labeling errors in ImageNet into three categories, as follows.

(1) Data with incorrect labeling

(2) Data corresponding to multiple labels

(3) Data that does not belong to any label


In total, there are more than 14,000 erroneous samples. Given that the evaluation set contains 50,000 samples, the proportion of erroneous data is strikingly high. The figure below shows some representative examples.

[Figure: representative examples of erroneous data]

Method

Without retraining the models, this study re-checks their accuracy under two settings: first excluding only the incorrectly labeled data, i.e. type (1) errors, and then excluding all erroneous data, i.e. types (1)-(3), from the evaluation set.

To remove the erroneous data, a metadata file describing the label errors is required. In this file, any sample affected by an error of type (1)-(3) has that information recorded in its "correction" attribute.
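The exact metadata format is not reproduced in the article, so the following is only a minimal sketch of the filtering idea, assuming a hypothetical JSON file in which each validation image carries an optional "correction" entry recording its error type. The file name, schema, and helper function are illustrative and are not the format Adansons Base actually uses.

```python
import json

# Hypothetical metadata layout: one record per validation image; a "correction"
# entry is present only for samples affected by a type (1)-(3) error, e.g.
# {"file": "ILSVRC2012_val_00000001.JPEG", "correction": {"type": 2}}.

def load_clean_file_list(metadata_path, exclude_types=(1, 2, 3)):
    """Return the validation files whose error type is not in exclude_types."""
    with open(metadata_path) as f:
        entries = json.load(f)
    clean = []
    for entry in entries:
        correction = entry.get("correction")
        if correction is not None and correction.get("type") in exclude_types:
            continue  # drop this erroneous sample from the evaluation set
        clean.append(entry["file"])
    return clean

# Setting A: exclude only mislabeled data, i.e. type (1) errors.
clean_type1 = load_clean_file_list("imagenet_val_metadata.json", exclude_types=(1,))
# Setting B: exclude all erroneous data, i.e. types (1)-(3).
clean_all = load_clean_file_list("imagenet_val_metadata.json")
```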


The study used a tool called Adansons Base, which filters data by linking datasets to metadata. Ten models were tested, as shown below.

[Figure: the 10 image classification models used for testing]
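Adansons Base itself is not shown here, but the evaluation loop can be illustrated with a short, hypothetical sketch: it loads a few pretrained torchvision classifiers (stand-ins, since the actual list of 10 models appears only in the figure) and measures top-1 accuracy on a filtered subset of the validation set. The `imagenet/val` path, the chosen models, and the `keep_indices` placeholder are assumptions for illustration only.

```python
import torch
from torch.utils.data import DataLoader, Subset
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder

# A few pretrained torchvision classifiers as stand-ins for the article's 10 models.
model_zoo = {
    "resnet50": models.resnet50(weights="IMAGENET1K_V1"),      # torchvision >= 0.13
    "vgg16": models.vgg16(weights="IMAGENET1K_V1"),
    "densenet121": models.densenet121(weights="IMAGENET1K_V1"),
}

# Standard ImageNet evaluation preprocessing.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Assumed local layout: ImageNet validation images arranged in class folders.
val_set = ImageFolder("imagenet/val", transform=preprocess)

# Indices of the samples kept after the metadata filtering step sketched above;
# keeping every index here is just a placeholder.
keep_indices = list(range(len(val_set)))
loader = DataLoader(Subset(val_set, keep_indices), batch_size=64, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"

@torch.no_grad()
def top1_accuracy(model, loader):
    """Top-1 accuracy (%) of a classifier over a data loader."""
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return 100.0 * correct / total

for name, model in model_zoo.items():
    print(f"{name}: {top1_accuracy(model, loader):.2f}% top-1")
```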

Results

The results are shown in the table below (values are accuracy in %; the number in brackets is the rank).

[Table: results of the 10 classification models]

Evaluation on all of the data ("All Eval Data") is the baseline. Excluding the type (1) erroneous data, accuracy increases by an average of 3.122 points; excluding all erroneous data, types (1) through (3), accuracy increases by an average of 11.743 points.

As expected, excluding the erroneous data improves accuracy across the board; models are clearly more likely to be scored as wrong on erroneous samples than on clean ones.

The accuracy ranking of the models also changes between the evaluation that keeps the erroneous data and the evaluation that excludes all type (1)-(3) erroneous data.

There are 3,670 samples with type (1) errors, or 7.34% of the 50,000 evaluation samples. After removing them, accuracy increased by about 3.22 points on average. Note, however, that removing erroneous data changes the size of the evaluation set, so a naive comparison of accuracy values can be biased.

Conclusion

Although it is rarely emphasized, it is important to use accurately labeled data when evaluating trained models.

Previous studies that compared accuracy across models may therefore have drawn incorrect conclusions. The evaluation data itself should be assessed first; otherwise, can such numbers really be trusted to measure a model's performance?

Many deep learning projects pay little attention to the data and instead chase higher accuracy and other metrics through the model alone, even when the evaluation data contains errors that were never properly cleaned up.

When building your own datasets, for example when applying AI in business, creating high-quality data is directly tied to the accuracy and reliability of the resulting AI. The experiment in this article shows that simply improving data quality can raise accuracy by roughly 10 percentage points, underlining that AI development should improve not only the model but also the dataset.

However, ensuring dataset quality is not easy. Richer metadata is important for properly assessing the quality of both AI models and data, but it can be cumbersome to manage, especially for unstructured data.

