
Does fine-tuning large models have to rely on human data? DeepMind: Self-training with feedback is better

Aug 05, 2024, 08:48 PM

Fine-tuning large models today relies mainly on human-generated data. Google DeepMind has explored a more efficient approach that reduces this dependence.


Large language models (LLMs) are reshaping the deep learning landscape, demonstrating superior capabilities in generating human-quality text and solving a wide variety of language tasks. While supervised fine-tuning on human-collected data further improves performance on specific tasks, obtaining high-quality human data is a significant bottleneck. This is especially true for tasks that involve solving complex problems, which require substantial resources and expertise.

How can this be addressed? Synthetic data generated by models is a promising alternative: it is scalable and cost-effective, provided the quality of the data is maintained.

While LLMs can self-evaluate the data they generate, in this paper Google DeepMind explores a simpler setup that uses an external scalar feedback signal as the quality indicator for each generated sample.


Paper address: https://arxiv.org/pdf/2312.06585.pdf

To study training on model-generated data, the researchers considered a simple but powerful language-model self-training setup that requires only two capabilities: generating samples from the model, and scoring those samples with an evaluation mechanism.

For clarity and consistency, the researchers adopted the reinforced self-training method ReST^EM and showed that it can be viewed as expectation-maximization (EM) applied to reinforcement learning. Specifically, ReST^EM alternates between an expectation step and a maximization step:

  1. Generation (E-step): The language model generates multiple output samples for each input context, and then filters these samples using binary rewards to collect a training dataset.
  2. Improvement (M-step): The original language model is supervised fine-tuned on the training data set from the previous E-step and then used in the next E-step.
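The two steps above can be sketched as a simple loop. This is a minimal illustration, not the paper's code: `generate`, `reward`, and `fine_tune` are hypothetical stand-ins for a real LLM sampler, a binary answer checker, and a supervised fine-tuning step.

```python
def generate(model, prompt, n_samples):
    # Placeholder: sample n candidate solutions from the current model.
    return [f"{model}:{prompt}:sample{i}" for i in range(n_samples)]

def reward(prompt, sample):
    # Placeholder binary reward, e.g. 1 if the final answer checks out.
    return 1 if sample.endswith("0") else 0

def fine_tune(base_model, dataset):
    # Placeholder: supervised fine-tuning of the *base* model on dataset.
    return f"{base_model}+ft({len(dataset)})"

def rest_em(base_model, prompts, iterations=3, n_samples=4):
    model = base_model
    for _ in range(iterations):
        # E-step: sample from the current policy, keep only reward-1 samples.
        dataset = [(x, y) for x in prompts
                   for y in generate(model, x, n_samples)
                   if reward(x, y) == 1]
        # M-step: always fine-tune the original base model (not the previous
        # checkpoint) to limit task-specific overfitting and drift.
        model = fine_tune(base_model, dataset)
    return model
```

Note that the M-step restarts from the base model each iteration; only the filtered dataset carries information forward between iterations.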

The researchers confirmed that ReST^EM and its variants have succeeded in enhancing language models across various fields, including machine translation, semantic parsing, preference alignment, and elementary reasoning.

In addition, previous work mainly applied ReST^EM to relatively small models (up to 7 billion parameters), leaving scalability to larger models unexplored. This paper therefore examines the effectiveness and scalability of model-generated synthetic data versus human-generated data in two challenging but less-studied areas: competition-level mathematical problem solving (MATH) and code generation (APPS).

Empirical results show that applying ReST^EM to PaLM 2 models of different sizes yields significant performance improvements on mathematical reasoning and code generation tasks. Models fine-tuned on model-generated synthetic data achieve larger gains than models trained on human-written data. Interestingly, performance degrades after a certain number of ReST^EM iterations, suggesting potential overfitting to the small number of training problems.

In addition, models fine-tuned with ReST^EM improve both the pass@k metric and majority-voting performance. These fine-tuned models also show gains on related but held-out benchmarks, including math (GSM8K and the Hungarian HS finals), coding (HumanEval), and Big-Bench Hard tasks.
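As a reminder of how majority voting (self-consistency) is scored, here is a minimal sketch: sample k candidate answers per problem and take the most frequent final answer. The sampling and answer extraction are left abstract.

```python
from collections import Counter

def majority_vote(answers):
    # Given the final answers extracted from k sampled solutions,
    # return the most common one (ties broken by first occurrence).
    return Counter(answers).most_common(1)[0][0]
```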

In summary, the results of this paper show that self-training with feedback is a promising method to reduce reliance on human data.

Expectation-maximization (EM) for reinforced self-training

First, building on earlier research by Dayan and Hinton, the study describes an EM-based reinforcement learning framework in language-model terms. Specifically, it first defines a binary optimality variable O such that p(O = 1 | x, y) ∝ f(r(x, y)), for a non-decreasing function f: ℝ → ℝ+. Maximizing the log-likelihood of observing O = 1 (i.e., obtaining high reward) then gives:

log p(O = 1; x) = log Σ_y p_θ(y | x) p(O = 1 | x, y)    (1)

However, the sum over output sequences y in the equation above is intractable. The paper therefore maximizes the ELBO L(p_θ, q) with respect to the parameters θ and a variational distribution q(y | x), instead of maximizing log p(O = 1; x) directly. Specifically:

L(p_θ, q) = E_{q(y|x)} [ log ( p(O = 1 | x, y) p_θ(y | x) / q(y | x) ) ]    (2)

The EM algorithm in formula (2) alternates between E-step (Expectation) and M-step (Maximization).
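Written out explicitly, the alternation is standard coordinate ascent on the ELBO; this is a paraphrase of the EM scheme in the notation above, not a verbatim formula from the paper:

```latex
% E-step: fix the current parameters \theta^{t} and choose the
% variational distribution that maximizes the ELBO:
q^{t+1} = \arg\max_{q} \; \mathcal{L}(p_{\theta^{t}}, q)

% M-step: fix q^{t+1} and update the model parameters:
\theta^{t+1} = \arg\max_{\theta} \; \mathcal{L}(p_{\theta}, q^{t+1})
```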

ReST^EM: Inspired by the EM framework, the paper then discusses a simplified version of the ReST method proposed by Gulcehre et al., which it calls ReST^EM for clarity. ReST^EM decouples data collection (the E-step) from policy optimization (the M-step) in the RL pipeline (Algorithm 1 in the paper).

Generation (E-step): In this step, a dataset D_i is generated by sampling output sequences from the current policy, y ∼ p_θ(y | x). The inputs x are resampled from the original dataset D. The output sequences in D_i are then scored using the binary reward function r(x, y).

Improvement (M-step): In the i-th iteration, the new dataset D_i from the E-step is used to fine-tune the policy p_θ. Unlike Gulcehre et al., this work always fine-tunes the base pre-trained language model, rather than the latest checkpoint, to minimize task-specific overfitting and deviation from the base model. Fine-tuning minimizes the reward-weighted negative log-likelihood loss −E_{(x,y)∼D_i}[r(x, y) log p_θ(y | x)]. Once the policy is improved, a new dataset with better-quality samples can be created again.
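A toy numeric sketch of the reward-weighted negative log-likelihood in the M-step. The `log_probs` and `rewards` values here are hypothetical per-sample quantities, standing in for log p_θ(y | x) under the current model and the binary rewards; with binary rewards, the loss reduces to supervised fine-tuning on the reward-1 (filtered) samples only.

```python
import math

def reward_weighted_nll(log_probs, rewards):
    # L(theta) = - sum_i r_i * log p_theta(y_i | x_i)
    return -sum(r * lp for lp, r in zip(log_probs, rewards))

loss = reward_weighted_nll(
    log_probs=[math.log(0.5), math.log(0.25)],
    rewards=[1, 0],  # the second sample is filtered out by the reward
)
```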

Experiments and Analysis

The main goal of conducting experiments in this paper is to answer the following questions:

  1. How effective is ReST^EM compared to fine-tuning on human-generated data?
  2. How many iterations are needed for the best performance? How quickly does ReST^EM overfit the training set?
  3. How does ReST^EM affect pass@k and majority-voting performance?
  4. If model-generated data is used for fine-tuning on a specific task, does the improvement transfer to other tasks? Does performance on a broad range of tasks degrade compared to the base model?
  5. Roughly how much input data is needed to obtain most of the performance gains from ReST^EM? Is one iteration of ReST^EM enough?
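For reference, pass@k is usually computed with the standard unbiased estimator (as popularized by the Codex evaluation, not a formula specific to this paper): given n sampled solutions of which c are correct, the probability that at least one of k randomly chosen samples is correct is 1 − C(n−c, k) / C(n, k).

```python
from math import comb

def pass_at_k(n, c, k):
    # n: total samples drawn, c: number that pass, k: budget.
    if n - c < k:
        # Fewer than k incorrect samples: any k-subset contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```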

The experiments used PaLM 2 models accessed through public APIs on Google Cloud: PaLM 2-S (Bison), PaLM 2-S* (Codey), and PaLM 2-L (Unicorn). Training data came from the MATH and APPS datasets.

Figure 2 and Figure 3 show the performance of ReST^EM trained on the MATH and APPS datasets, respectively. MATH benefits from multiple iterations of ReST^EM, both on the MATH test set and in transfer to GSM8K. For APPS, by contrast, most of the gains come from the first iteration, and further iterations degrade performance on both APPS and HumanEval.



The gap between training and test performance: Figure 4 shows that while training-set performance increases linearly with the number of ReST^EM iterations, test-set performance does not. For MATH, little improvement in test performance is observed after the first iteration, whereas for APPS, performance regresses in the second iteration. The study speculates that this regression is due to overfitting: since the APPS dataset is about one-third the size of the MATH dataset, it is more susceptible to the problem.


Figure 5 shows the performance of the PaLM 2-L model on the pass@k metric. The results show that the model obtained after ReST^EM fine-tuning is stronger for all values of k, with the performance gap generally largest at k = 1.
