
Finally, someone investigated the overfitting of small models: two-thirds of them had data pollution, and Microsoft Phi-3 and Mixtral 8x22B were named

May 04, 2024, 01:05 PM

Do two-thirds of the most popular large models have overfitting problems?

A study just released has surprised many researchers in the field.


Improving the reasoning capabilities of large language models is one of the most important directions in current research, and many recently released small models, such as Microsoft's Phi-3 and Mistral AI's Mixtral 8x22B, appear to handle such tasks well.

The researchers point out a key problem in current large-model research: many studies fail to accurately benchmark the capabilities of existing LLMs, which suggests that more time needs to be spent evaluating and testing the true capability level of current LLMs.


This is because most current research uses test sets such as GSM8k, MATH, MBPP, HumanEval, and SWE-bench as benchmarks. Since models are trained on large datasets scraped from the Internet, the training data may contain samples that are highly similar to the benchmark questions.

This kind of contamination can cause a model's reasoning ability to be evaluated incorrectly: the model may simply have seen a question during training and happen to recite the memorized answer.

A paper just released by Scale AI conducts an in-depth survey of the most popular large models, covering models of different parameter sizes from the GPT-4, Gemini, Claude, Mistral, Llama, Phi, and other series.

The test results confirmed a widespread suspicion: many models were contaminated by benchmark data.


  • Paper title: A Careful Examination of Large Language Model Performance on Grade School Arithmetic

  • Paper link: https://arxiv.org/pdf/2405.00332

To avoid data contamination, the researchers at Scale AI used no LLM or other synthetic data source: the GSM1k dataset was created entirely through human annotation. Like GSM8k, GSM1k contains 1,250 elementary-level math problems, and the researchers took pains to keep the difficulty distribution of GSM1k similar to that of GSM8k. Benchmarking a series of leading open-source and closed-source large language models on GSM1k, they found that the worst-performing model scored 13% lower on GSM1k than on GSM8k.

This is especially true of the Mistral and Phi model series, which are known for being small but high-quality: according to the GSM1k results, almost all of their versions show consistent evidence of overfitting.

Further analysis found a positive correlation between the probability that a model generates GSM8k samples and its performance gap between GSM8k and GSM1k (correlation coefficient r^2 = 0.32). This strongly suggests that the main cause of overfitting is that models have partially memorized the GSM8k samples.
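As a rough illustration, this kind of analysis can be sketched in a few lines; the sketch below is not the paper's code, all per-model numbers are hypothetical, and the use of a rank correlation is an assumption about the paper's exact estimator.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements for five models (not real results).
gen_log_lik = np.array([-2.1, -1.8, -1.4, -1.1, -0.8])  # per-char log-likelihood on GSM8k items
gap = np.array([0.01, 0.02, 0.05, 0.09, 0.12])          # GSM8k accuracy minus GSM1k accuracy

# Models more likely to regurgitate GSM8k tend to show larger gaps.
r, p = stats.spearmanr(gen_log_lik, gap)
print(f"r^2 = {r**2:.2f}, p = {p:.3f}")
```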

By contrast, the Gemini, GPT, Claude, and Llama2 series showed very few signs of overfitting.

Furthermore, all models, including the most overfit ones, were still able to generalize to new elementary school math problems, although sometimes with lower success rates than their benchmark results would suggest.

Scale AI does not currently plan to release GSM1k publicly, in order to prevent similar data contamination issues in the future. The team plans to run regular, ongoing evaluations of all major open-source and closed-source LLMs and will open-source the evaluation code so that subsequent studies can replicate the results in the paper.

The GSM1k dataset

GSM1k contains 1,250 elementary school math problems that can be solved with only basic mathematical reasoning. To build the dataset, Scale AI showed each human annotator three sample questions from GSM8k and asked them to write new questions of similar difficulty. The annotators were instructed not to use any advanced mathematical concepts and to formulate questions using only basic arithmetic (addition, subtraction, multiplication, and division). As in GSM8k, the solution to every problem is a positive integer. No language model was used in constructing the GSM1k dataset.
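For concreteness, the public GSM8k dataset stores each solution as free-form reasoning ending in a "#### <answer>" line; the sketch below assumes GSM1k records follow the same convention, which the paper does not confirm.

```python
import re

# A hypothetical GSM1k-style record, mirroring the public GSM8k schema.
record = {
    "question": "A basket holds 12 apples. If 3 baskets are filled, "
                "how many apples are there in total?",
    "answer": "Each basket holds 12 apples, so 3 baskets hold 12 * 3 = 36 apples.\n#### 36",
}

# The gold answer is the integer after "####"; GSM1k answers are all positive.
gold = int(re.search(r"####\s*(-?\d+)", record["answer"]).group(1))
assert gold > 0
```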


To avoid contaminating GSM1k itself, Scale AI will not publicly release the dataset for now, but it will open-source the GSM1k evaluation framework, which is based on EleutherAI's LM Evaluation Harness.
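For readers unfamiliar with the harness, a GSM8k-style run looks roughly like the following; this is a minimal sketch using the lm-eval (v0.4+) Python API with a placeholder model, and the public gsm8k task stands in for the private GSM1k task.

```python
import lm_eval

# Evaluate a HuggingFace model 5-shot on the public gsm8k task.
results = lm_eval.simple_evaluate(
    model="hf",                               # HuggingFace backend
    model_args="pretrained=microsoft/phi-2",  # placeholder model
    tasks=["gsm8k"],
    num_fewshot=5,                            # the paper's standard 5-shot setup
)
print(results["results"]["gsm8k"])
```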

However, Scale AI promises to release the complete GSM1k dataset under the MIT license once one of two conditions is met: (1) three open-source models based on different pre-trained base model lineages reach 95% accuracy on GSM1k, or (2) the end of 2025 arrives. At that point, elementary school mathematics will likely no longer be a valid benchmark for assessing LLM performance.

To evaluate proprietary models, the researchers send the dataset through each vendor's API. They justify this approach on the grounds that LLM vendors generally do not use API data points to train models. Nonetheless, in case GSM1k data does leak through the API, the authors have held back data points that do not appear in the final GSM1k dataset; these backup points will be released along with GSM1k once the conditions above are met.

They hope that future benchmark releases will follow a similar pattern: not released publicly at first, but with a pre-commitment to release at a future date or when certain conditions are met, to prevent manipulation.

In addition, although Scale AI strove to keep GSM1k as consistent with GSM8k as possible, the GSM8k test set has already been publicly released and widely used for model testing, so GSM1k can only approximate it under ideal conditions. The evaluation results below were therefore obtained with distributions of GSM8k and GSM1k that are not exactly identical.

Evaluation results

To evaluate the models, the researchers used a fork of EleutherAI's LM Evaluation Harness with its default settings. GSM8k and GSM1k problems are run with the same prompt format: five examples randomly drawn from the GSM8k training set serve as few-shot demonstrations, the standard configuration in this field (see Appendix B of the paper for the complete prompts).

All open-source models are evaluated at temperature 0 to ensure reproducibility. The harness extracts the last numeric value in the response and compares it to the correct answer, so a response that contains the "correct" answer in a format that does not match the expected one is marked as incorrect.
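The scoring rule can be sketched as follows; the exact regular expression here is an assumption rather than the harness's actual implementation.

```python
import re

def extract_last_number(response: str) -> str | None:
    """Return the last number in a model response, or None if there is none."""
    numbers = re.findall(r"-?\d+(?:,\d{3})*(?:\.\d+)?", response)
    return numbers[-1].replace(",", "") if numbers else None

assert extract_last_number("First, 12 * 3 = 36. So the answer is 36.") == "36"
# A correct value written out in words does not match and would be scored wrong:
assert extract_last_number("The answer is thirty-six.") is None
```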

For open-source models, vLLM is used to accelerate inference whenever the model is compatible with the library; otherwise, inference falls back to the standard HuggingFace library. Closed-source models are queried through the LiteLLM library, which unifies the API call format for all evaluated proprietary models. All API model results come from queries made between April 16 and April 28, 2024, using default settings.
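A query in this style might look like the minimal LiteLLM sketch below; the model name and prompt are placeholders, not the paper's actual configuration.

```python
from litellm import completion

# LiteLLM exposes one OpenAI-style interface across vendors.
response = completion(
    model="gpt-4",  # placeholder; any supported proprietary model name works
    messages=[{"role": "user", "content": "What is 12 * 3 + 6?"}],
    temperature=0,  # greedy decoding for reproducibility
)
print(response.choices[0].message.content)
```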

The models evaluated were selected based on popularity; the researchers also evaluated several lesser-known models that ranked high on the OpenLLM Leaderboard.

Interestingly, the researchers found evidence of Goodhart's law in the process: many models performed much worse on GSM1k than on GSM8k, indicating that they had mainly been tuned to the GSM8k benchmark rather than genuinely improving their reasoning capabilities. The performance of all models is shown in Appendix D of the paper.


For a fair comparison, the researchers grouped the models by their GSM8k performance and compared each model against others with similar scores (Figures 5, 6, and 7 of the paper).


What conclusions were drawn?

Although the researchers provide objective evaluation results for many models, they also note that interpreting evaluation results, like interpreting dreams, is often a very subjective exercise. In the last part of the paper, they elaborate more subjectively on four implications of the evaluation:

Conclusion 1: Some model families are systematically overfitting

While it is often difficult to draw conclusions from a single data point or model version, examining a whole family of models and observing a pattern of overfitting allows more definitive statements. Some model families, including Phi and Mistral, perform systematically better on GSM8k than on GSM1k across almost every model version and size. Other families, such as Yi, Xwin, Gemma, and CodeLlama, display the same pattern to a lesser extent.

Conclusion 2: Other models, especially cutting-edge models, show no signs of overfitting

Many models show minimal signs of overfitting at every performance level; in particular, all frontier and near-frontier models, including the proprietary Mistral Large, appear to perform similarly on GSM8k and GSM1k. The researchers offer two possible hypotheses: (1) frontier models have sufficiently advanced reasoning capabilities that they can generalize to new problems even if GSM8k has appeared in their training set; (2) frontier model builders may be more careful about data contamination.

While it is impossible to inspect each model's training set to test these hypotheses, one piece of evidence for the former is that Mistral Large is the only model in the Mistral series showing no signs of overfitting. Since it seems unlikely that Mistral would ensure only its largest model is free from data contamination, the researchers favor the view that a sufficiently powerful LLM learns basic reasoning during training: if a model learns to reason well enough to solve problems of a given difficulty, it will be able to generalize to new problems even if GSM8k is present in its training set.

Conclusion 3: Overfit models still have the ability to reason

One worry many researchers have about overfit models is that they cannot reason at all and merely memorize answers seen in the training data, but the results of this paper do not support that hypothesis. That a model is overfit does not mean its reasoning is poor; it simply means the model is not as good as the benchmark indicates. In fact, many overfit models are still capable of reasoning about and solving new problems. For example, Phi-3's accuracy drops by almost 10% between GSM8k and GSM1k, yet it still correctly solves more than 68% of GSM1k problems, which certainly did not appear in its training distribution. This performance is similar to that of much larger models such as dbrx-instruct, which has almost 35 times as many parameters. Likewise, even accounting for overfitting, the Mistral models remain among the strongest open-source models. This lends further evidence to the paper's conclusion that a sufficiently powerful model learns elementary reasoning even if benchmark data accidentally leaks into the training distribution, as likely happened with most of the overfit models.

Conclusion 4: Data contamination may not fully explain overfitting

A priori, the natural hypothesis is that the main cause of overfitting is data contamination, for example the test set leaking during the pre-training or instruction fine-tuning stages of building a model. Previous research has shown that models assign higher log-likelihoods to data seen during training (Carlini et al. [2023]). The researchers tested the hypothesis that data contamination causes overfitting by measuring each model's probability of generating samples from the GSM8k test set and comparing it with the model's degree of overfitting between GSM8k and GSM1k.
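A minimal sketch of this measurement with the HuggingFace transformers library is shown below, using a placeholder model; normalizing per character (rather than per token) keeps the value comparable across tokenizers.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-2"  # placeholder, not the paper's choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# One GSM8k test question (public dataset).
text = ("Natalia sold clips to 48 of her friends in April, and then she sold "
        "half as many clips in May. How many clips did Natalia sell altogether "
        "in April and May?")
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, labels=inputs["input_ids"])

# out.loss is the mean negative log-likelihood per predicted token;
# multiplying by sequence length gives an approximate total NLL.
total_nll = out.loss.item() * inputs["input_ids"].shape[1]
print(f"log-likelihood per character: {-total_nll / len(text):.3f}")
```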


The researchers say data contamination may not be the whole explanation, and they point to several outliers. A closer look reveals that the model with the lowest log-likelihood per character (Mixtral-8x22b) and the model with the highest log-likelihood per character (Mixtral-8x22b-Instruct) are not only variants of the same model, but also show similar degrees of overfitting. More interestingly, the most overfit model, Math-Shepherd-Mistral-7B-RL (Yu et al. [2023]), has a relatively low log-likelihood per character (Math-Shepherd trains reward models on process-level data using synthetic data).

The researchers therefore hypothesize that the reward modeling process may have leaked information about the correct reasoning chains for GSM8k, even though the problems themselves never appeared in the dataset. Finally, they found that the Llemma models show high log-likelihood and minimal overfitting. Since these models are open-source and their training data is known, a few instances of GSM8k problems are known to appear in the training corpus, as described in the Llemma paper; yet the authors found that these few instances did not lead to severe overfitting. The existence of these outliers suggests that overfitting on GSM8k is not purely due to data contamination, but may also arise through other indirect routes, such as a model builder collecting training data with properties similar to the benchmark, or selecting the final model checkpoint based on benchmark performance, even if the model itself never saw the GSM8k dataset at any point during training. The converse also holds: a small amount of data contamination does not necessarily lead to overfitting.

For more research details, please refer to the original paper.
