Over the past decade, ImageNet has effectively been the barometer of computer vision: if accuracy on it jumps, you know a new technique has arrived.
Leaderboard chasing has long been a driving force behind model innovation, pushing Top-1 accuracy past 90%, higher than human performance.
But is the ImageNet dataset really as useful as we think?
Many papers have raised doubts about ImageNet, covering issues such as data coverage, bias, and whether the labels are complete.
Most importantly: is that 90% model accuracy itself really accurate?
Recently, researchers from the Google Brain team and the University of California, Berkeley re-examined the predictions of several SOTA models and found that the models' true accuracy may have been underestimated!
Paper link: https://arxiv.org/pdf/2205.04596.pdf
The researchers manually reviewed and categorized every mistake made by several top models to gain insight into the long-tail errors on the benchmark dataset.
The focus is on ImageNet's multi-label evaluation subset, on which the best models already reach roughly 97% accuracy.
The analysis shows that nearly half of the supposed prediction errors were not errors at all: the reviewers instead found new valid multi-labels for those images. In other words, without manual review of the predictions, the performance of these models is "underestimated"!
Crowdsourced annotators without domain expertise often mislabel the data, which significantly distorts how faithfully the reported accuracy reflects the model.
To recalibrate the ImageNet dataset and support sound progress in the future, the researchers release an updated version of the multi-label evaluation set and gather 68 examples on which SOTA models make unambiguous errors into a new dataset, ImageNet-Major, so that future CV researchers can work on overcoming these bad cases.
Paying off the "technical debt"
The paper's title alone, "When does dough become a bagel?", makes it clear that the authors focus on ImageNet's label problem, a long-standing historical issue.
The image below is a very typical example of label ambiguity: its label is "dough", while the model predicts "bagel". Is the model wrong?
Arguably, the model made no error: the dough is baking and is about to become a bagel, so the image is both dough and bagel.
In other words, the model has effectively predicted that this dough will "become" a bagel, yet it receives no credit for that under the accuracy metric.
In fact, when the standard single-label ImageNet classification task is used as the evaluation criterion, problems such as missing multi-labels, label noise, and under-specified classes are unavoidable.
For the crowdsourced annotators tasked with identifying such objects, this is a semantic and even philosophical conundrum that can only be resolved with multi-labels, which is why ImageNet-derived datasets have focused primarily on improving the labels.
ImageNet was established 16 years ago. The annotators and model developers of that era naturally did not understand the data as deeply as we do today, and because ImageNet was an early large-scale and relatively well-annotated dataset, it naturally became the standard benchmark for CV leaderboards.
But the budget for labeling data is clearly far smaller than the budget for developing models, so improving the labels has accumulated as a kind of technical debt.
To find the remaining errors in ImageNet, the researchers used a standard ViT-3B model (3 billion parameters, reaching 89.5% Top-1 accuracy), pre-trained on JFT-3B and fine-tuned on ImageNet-1K.
Using the ImageNet2012_multilabel dataset as the test set, ViT-3B initially achieved 96.3% multi-label accuracy, with 676 images clearly mispredicted; the researchers then studied these examples in depth.
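For readers unfamiliar with this evaluation protocol, the sketch below (illustrative names only, not the paper's code) shows the key difference from standard Top-1 scoring: a prediction counts as correct if it falls anywhere in the image's set of acceptable labels, which is exactly why adding valid multi-labels can raise measured accuracy without changing the model.

```python
# Illustrative sketch of multi-label accuracy vs. standard Top-1 scoring.
# A top-1 prediction is counted correct if it appears anywhere in the set of
# labels accepted for that image, not only if it equals the single original label.

def multilabel_accuracy(top1_preds, acceptable_labels):
    """top1_preds: {image_id: predicted class}
    acceptable_labels: {image_id: set of classes accepted by reviewers}
    Returns (accuracy, list of image_ids counted as mistakes)."""
    mistakes = [img for img, pred in top1_preds.items()
                if pred not in acceptable_labels[img]]
    accuracy = 1.0 - len(mistakes) / len(top1_preds)
    return accuracy, mistakes

# Toy usage: the dough/bagel image is a "mistake" under the original single
# label, but becomes correct once "bagel" is accepted as an additional label.
preds = {"img_dough": "bagel"}
print(multilabel_accuracy(preds, {"img_dough": {"dough"}}))           # (0.0, ['img_dough'])
print(multilabel_accuracy(preds, {"img_dough": {"dough", "bagel"}}))  # (1.0, [])
```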
When re-labeling the data, the authors did not use crowdsourcing; instead they assembled a panel of five expert reviewers, because this type of labeling error is difficult for non-specialists to identify.
For example, in image (a), an ordinary annotator might simply write "table", even though the image contains many other objects, such as a screen, a monitor, and a mug.
The subjects of image (b) are two people, yet the label is "picket fence", which is clearly incomplete; plausible additional labels include "bow tie" and "uniform".
Image (c) is another obvious example: labeling it only as "African elephant" overlooks the tusks.
Image (d) is labeled "lakeshore", but there would be nothing wrong with labeling it "seashore".
To make annotation more efficient, the researchers also built a dedicated tool that simultaneously displays the image, its labels, and the classes and scores predicted by the model.
In some cases the expert panel still disagreed on a label; those images were then run through Google search to assist the labeling.
In one example, the model's predictions included "taxi", yet nothing in the picture marked the vehicle as a taxi apart from a hint of yellow.
To annotate this image, the reviewers first used Google image search to recognize an iconic bridge in the background, then located the city where the photo was taken; after retrieving images of that city's taxis, they confirmed that the picture does contain a taxi rather than an ordinary car, and a comparison of the license plate design further verified that the model's prediction was correct.
After a preliminary review of the errors uncovered at the various stages of the study, the authors first divided them into two categories by severity:
1. Major: a human can clearly understand the meaning of the label, and the model's prediction is unrelated to it;
2. Minor: the label itself may be wrong or incomplete, causing the prediction to be counted as an error; correcting these requires expert review of the data.
For the 155 major errors made by the ViT-3B model, the researchers brought in three other models to make predictions as well, in order to increase the diversity of the predictions.
68 major errors remained that all four models got wrong. The researchers then analyzed every model's predictions on these examples and verified that none of them qualified as a valid new multi-label, i.e., each model's prediction really was a major error.
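As a rough illustration of this filtering step (not the authors' actual pipeline; all names here are hypothetical), the shared-error subset is simply the intersection of each model's error set under the multi-label criterion:

```python
# Illustrative sketch (hypothetical names) of deriving a shared-error subset:
# keep only the images that every model in the panel mispredicts under the
# multi-label criterion.

def shared_major_errors(model_preds, acceptable_labels):
    """model_preds: {model_name: {image_id: predicted class}}
    acceptable_labels: {image_id: set of classes accepted by expert review}
    Returns the image_ids that all models get wrong."""
    per_model_errors = [
        {img for img, pred in preds.items() if pred not in acceptable_labels[img]}
        for preds in model_preds.values()
    ]
    return set.intersection(*per_model_errors)

# Toy usage with two hypothetical models and two images: only img2 fools both.
labels = {"img1": {"dough", "bagel"}, "img2": {"tabby cat"}}
preds = {"model_a": {"img1": "bagel",   "img2": "dog"},
         "model_b": {"img1": "pretzel", "img2": "dog"}}
print(shared_major_errors(preds, labels))  # {'img2'}
```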
These 68 examples share several characteristics. First, SOTA models trained in different ways all make mistakes on this subset, and the expert reviewers likewise judge the predictions to be entirely unrelated to the correct labels.
The 68-image dataset is also small enough for later researchers to evaluate by hand. If these 68 examples are eventually conquered, CV models may achieve a new breakthrough.
By analyzing the data, the researchers divided the prediction errors into four types (see the sketch after this list):
1. Fine-grained errors, in which the predicted class is similar to the ground-truth label but not exactly the same;
2. Fine-grained with out-of-vocabulary (OOV), in which the model identifies an object whose class is correct but does not exist in ImageNet's label vocabulary;
3. Spurious correlation, in which the predicted label is read off the context of the image rather than the object itself;
4. Non-prototypical, in which the labeled object resembles the predicted class but is not exactly the same.
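To make the two-level severity scale and the four error types concrete, here is a minimal sketch of how a reviewed mistake might be recorded; the classes and field names are assumptions for illustration, not the paper's actual schema.

```python
# Illustrative record structure for a reviewed mistake, combining the two
# severity levels and four error types described above (names are assumed).
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    MAJOR = "major"   # label is clear, prediction unrelated to it
    MINOR = "minor"   # label itself is wrong or incomplete

class ErrorType(Enum):
    FINE_GRAINED = "fine-grained"
    FINE_GRAINED_OOV = "fine-grained out-of-vocabulary"
    SPURIOUS_CORRELATION = "spurious correlation"
    NON_PROTOTYPICAL = "non-prototypical"

@dataclass
class ReviewedMistake:
    image_id: str
    predicted: str
    original_label: str
    severity: Severity
    error_type: ErrorType
    new_valid_label: bool  # True if review accepted the prediction as a new multi-label

# Example record for the dough/bagel case: the review concludes it was not
# really an error, because "bagel" is accepted as an additional label.
mistake = ReviewedMistake("img_dough", "bagel", "dough",
                          Severity.MINOR, ErrorType.FINE_GRAINED, True)
print(mistake)
```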
After reviewing the original 676 errors, the researchers found that 298 of them should actually be counted as correct, or that the original label was wrong or problematic.
Overall, four conclusions can be drawn from the paper's findings:
1. When a large, high-accuracy model makes a prediction that other models do not, roughly 50% of those predictions turn out to be correct new multi-labels;
2. Higher-accuracy models do not show an obvious trend relating error category to error severity;
3. On the human-evaluated multi-label subset, the performance of today's SOTA models largely matches or exceeds that of the best human experts;
4. Noisy training data and under-specified classes may be factors limiting the effective measurement of improvements in image classification.
Perhaps solving the image labeling problem will ultimately have to wait for natural language processing?