
Does fine-tuning large models have to rely on human data? DeepMind: Self-training with feedback is better


Fine-tuning large models today still relies mainly on human-generated data. Google DeepMind has explored a more efficient way to reduce this dependence.


Large language models (LLMs) are changing the deep learning landscape, generating human-quality text and solving a wide variety of language tasks. While the industry has further improved performance on specific tasks through supervised fine-tuning on human-collected data, obtaining high-quality human data faces significant bottlenecks. This is especially true for tasks that involve solving complex problems, which require substantial resources and expertise.

How can this be addressed? Model-generated synthetic data is a promising alternative: it is scalable and cost-effective, provided the quality of the data is maintained.

While an LLM can in principle self-evaluate the data it generates, in this paper Google DeepMind explores a simpler setting that uses an external scalar feedback signal as a quality indicator for each generated sample.


Paper address: https://arxiv.org/pdf/2312.06585.pdf

To study training on model-generated data, the researchers consider a simple yet powerful self-training setup for language models that requires only two capabilities: generating samples from the model, and evaluating those samples with a scoring mechanism.

For clarity and consistency, the researchers adopt a reinforced self-training method, ReST^EM, and show that it can be viewed as expectation-maximization (EM) based reinforcement learning. Specifically, ReST^EM alternates between an expectation step and a maximization step:

  1. Generation (E-step): The language model generates multiple output samples for each input context; these samples are then filtered with a binary reward to collect a training dataset.
  2. Improvement (M-step): The original language model is supervised fine-tuned on the training dataset from the previous E-step and then used in the next E-step (a minimal code sketch of this loop is given below).
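To make the loop concrete, here is a minimal Python sketch of the two-step procedure. The helper callables sample_outputs, binary_reward, and finetune are hypothetical placeholders, not the paper's actual implementation:

```python
from typing import Callable, Iterable, List, Tuple

# Minimal sketch of the ReST^EM loop described above; the callables passed in
# (sample_outputs, binary_reward, finetune) are hypothetical placeholders.
def rest_em(
    base_model,
    train_inputs: Iterable[str],
    sample_outputs: Callable,   # (model, x, n) -> list of candidate outputs y
    binary_reward: Callable,    # (x, y) -> 0 or 1
    finetune: Callable,         # (base_model, dataset) -> fine-tuned model
    num_iterations: int = 3,
    samples_per_input: int = 32,
):
    policy = base_model
    for _ in range(num_iterations):
        # Generation (E-step): sample outputs from the current policy and keep
        # only those that receive a positive binary reward.
        dataset: List[Tuple[str, str]] = []
        for x in train_inputs:
            for y in sample_outputs(policy, x, samples_per_input):
                if binary_reward(x, y) == 1:
                    dataset.append((x, y))
        # Improvement (M-step): supervised fine-tuning on the filtered samples.
        # Each iteration fine-tunes the *base* pre-trained model (not the previous
        # iterate) to limit task-specific overfitting and drift from the base model.
        policy = finetune(base_model, dataset)
    return policy
```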

The researchers note that ReST^EM and its variants have successfully enhanced language models in various domains, including machine translation, semantic parsing, preference alignment, and elementary reasoning.

In addition, previous work mainly applied ReST^EM to relatively small models (up to 7 billion parameters), with limited scalability to larger models. Therefore, this paper explores the effectiveness and scalability of model-generated synthetic data versus human-generated data in two challenging but less studied areas: competition-level mathematical problem solving (MATH) and code generation (APPS).

Empirical results show that applying ReST^EM to PaLM 2 models of different sizes yields significant performance improvements on mathematical reasoning and code generation tasks. Models fine-tuned on model-generated synthetic data achieve larger gains than models trained on human-written data. Interestingly, performance degrades beyond a certain number of ReST^EM iterations, indicating potential overfitting on a small number of training problems.

In addition, models fine-tuned with ReST^EM improve both pass@k and majority-voting performance. These fine-tuned models also show gains on related but held-out benchmarks, including math (GSM8K and the Hungarian high-school finals exam), coding (HumanEval), and Big-Bench Hard tasks.
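For reference, pass@k is commonly computed with the unbiased estimator popularized by the HumanEval benchmark: generate n samples per problem, count the c correct ones, and estimate the probability that at least one of k randomly drawn samples is correct. A minimal sketch (not the paper's evaluation code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k samples,
    drawn without replacement from n generations of which c are correct, is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 40 samples per problem, 10 of them correct -> estimate pass@5.
print(pass_at_k(n=40, c=10, k=5))
```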

In summary, the results of this paper show that self-training with feedback is a promising method to reduce reliance on human data.

Expectation-Maximization (EM) for Reinforced Self-Training

First, this study builds on earlier work by Dayan and Hinton that frames reinforcement learning with an expectation-maximization objective, here instantiated with a language model. Specifically, they first define a binary optimality variable O such that p(O = 1 | x, y) ∝ f(r(x, y)) for some non-decreasing function f: ℝ → ℝ⁺. To maximize the probability of observing O = 1 (i.e., obtaining high reward), the following formula is obtained:

$$\log p(O = 1 \mid x) \;=\; \log \sum_{y} p_\theta(y \mid x)\, p(O = 1 \mid x, y) \tag{1}$$

However, the sum over output sequences y in the above equation is intractable. Therefore, instead of maximizing log p(O = 1 | x) directly, this paper maximizes its ELBO L(q, θ) with respect to the parameters θ and a variational distribution q(y | x). Specifically:

$$\mathcal{L}(q, \theta) \;=\; \mathbb{E}_{q(y \mid x)}\big[\log p(O = 1 \mid x, y)\big] \;-\; \mathrm{KL}\big[\,q(y \mid x)\,\|\,p_\theta(y \mid x)\,\big] \tag{2}$$

The EM algorithm in formula (2) alternates between E-step (Expectation) and M-step (Maximization).
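Written out, the two steps are coordinate-ascent updates on the ELBO; the following is a sketch of the standard EM-for-RL formulation and may differ slightly from the paper's exact notation. In the E-step, with the parameters fixed at θ^t, the optimal variational distribution reweights the current policy by the likelihood of obtaining a high reward:

$$q^{t+1}(y \mid x) \;\propto\; p_{\theta^t}(y \mid x)\, p(O = 1 \mid x, y)$$

In the M-step, with q fixed, maximizing the ELBO over θ reduces to a weighted maximum-likelihood problem, i.e., supervised fine-tuning:

$$\theta^{t+1} \;=\; \arg\max_{\theta}\; \mathbb{E}_{x \sim \mathcal{D},\; y \sim q^{t+1}(y \mid x)}\big[\log p_{\theta}(y \mid x)\big]$$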

ReST^EM: Inspired by the EM framework, the paper next discusses a simplified version of the ReST method proposed by Gulcehre et al. For clarity, this article calls the approach ReST^EM; it decouples data collection (E-step) from policy optimization (M-step) in the RL pipeline, as shown in Algorithm 1:

[Algorithm 1 from the paper: the ReST^EM procedure, alternating Generate (E-step) and Improve (M-step) steps]

Generation (E-step): In this step, the study generates a dataset D_i by sampling output sequences from the current policy p_θ, with the inputs x resampled from the original dataset D. The output sequences in D_i are then scored using the binary reward function r(x, y).
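As an illustration of such a binary reward, a checker for math problems can simply compare the model's final answer with the reference answer. The sketch below is a hypothetical simplification, not the paper's actual grading code:

```python
import re

def binary_reward_math(model_output: str, reference_answer: str) -> int:
    """Return 1 if the final \\boxed{...} answer in the model output matches the
    reference answer string, else 0. A simplified check; real answer extraction
    and equivalence testing would be more involved."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", model_output)
    if not matches:
        return 0
    return int(matches[-1].strip() == reference_answer.strip())
```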

Improvement (M-step): In iteration i, the study uses the new dataset D_i from the E-step to fine-tune the policy p_θ. Unlike Gulcehre et al., they always fine-tune the base pre-trained language model rather than the model from the previous iteration, which minimizes task-specific overfitting and drift from the base model. For fine-tuning, the study minimizes the reward-weighted negative log-likelihood loss −E_{(x, y) ∼ D_i}[ r(x, y) · log p_θ(y | x) ]. Once the policy has been improved, a new dataset of better-quality samples can be generated again.
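In code, this M-step objective amounts to a reward-weighted negative log-likelihood. Below is a minimal PyTorch-style sketch, assuming per-token log-probabilities of the sampled outputs are already available:

```python
import torch

def m_step_loss(token_logprobs: torch.Tensor,
                mask: torch.Tensor,
                rewards: torch.Tensor) -> torch.Tensor:
    """Reward-weighted negative log-likelihood over a batch of sampled sequences.

    token_logprobs: [batch, seq_len], log p_theta(y_t | x, y_<t) for each output token
    mask:           [batch, seq_len], 1 for real output tokens, 0 for padding
    rewards:        [batch], binary rewards r(x, y) from the E-step filter
    """
    seq_logprob = (token_logprobs * mask).sum(dim=-1)   # log p_theta(y | x) per sequence
    return -(rewards * seq_logprob).mean()              # -E[ r(x, y) * log p_theta(y | x) ]
```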

Experiments and Analysis

The main goal of conducting experiments in this paper is to answer the following questions:

  1. How effective is ReST^EM compared to fine-tuning on human-generated data?
  2. How many iterations are needed for the best performance? How quickly does ReST^EM overfit the training set?
  3. How does ReST^EM affect pass@k and majority-voting performance?
  4. If model-generated data is used for fine-tuning on a specific task, does the improvement transfer to other tasks? When the fine-tuned models are evaluated on a broad range of tasks, does performance degrade compared to the base model?
  5. Approximately how much input data is needed to obtain most of the performance gains from ReST^EM? Is one iteration of ReST^EM enough?

This study conducted experiments with PaLM 2 models accessed through public APIs on Google Cloud, including PaLM 2-S (Bison), PaLM 2-S* (Codey), and PaLM 2-L (Unicorn). Training uses the MATH and APPS datasets.

Figures 2 and 3 show the performance of ReST^EM trained on the MATH and APPS datasets, respectively. MATH benefits from multiple iterations of ReST^EM, both in performance on the MATH test set and in transfer to GSM8K. By contrast, most of the gains on APPS come from the first iteration, and performing more iterations degrades performance on both APPS and HumanEval.

[Figure 2: ReST^EM performance across iterations on the MATH test set and transfer to GSM8K]

[Figure 3: ReST^EM performance across iterations on APPS and transfer to HumanEval]

The gap between training and test performance. Figure 4 shows that while training-set performance increases linearly with the number of ReST^EM iterations, test-set performance does not. For MATH, little improvement in test performance is observed after the first iteration, whereas for APPS performance regresses in the second iteration. The study speculates that this regression may be due to overfitting: since the APPS dataset is about one-third the size of the MATH dataset, it is more susceptible to this problem.

[Figure 4: training vs. test performance across ReST^EM iterations on MATH and APPS]

Figure 5 shows the performance of the PaLM 2-L model on the pass@K metric. The results show that the model fine-tuned with ReST^EM is stronger for all values of K, with the performance gap generally being largest at K = 1.

[Figure 5: pass@K performance of the PaLM 2-L model with and without ReST^EM fine-tuning]

