


Meta develops System 2 distillation, bringing Llama 2 chat model accuracy on some tasks close to 100%
The researchers say that if System 2 distillation becomes an important feature of future continually learning AI systems, it will free those systems to devote System 2 effort to the reasoning tasks they cannot yet perform well.
Large language model (LLM) response strategies generally fall into two types: System 1 (fast, immediate responses) and System 2 (slow, deliberate thinking).
System 2 reasoning favors deliberate thought: generating intermediate reasoning steps allows the model (or human) to reason and plan in order to successfully complete a task or follow an instruction. System 2 reasoning requires effortful mental activity, especially in situations where the more automatic System 1 is likely to go wrong.
Accordingly, the researchers define System 1 as a direct application of the Transformer: it generates a response from the input without producing intermediate tokens. System 2 is defined as any method that generates intermediate tokens, including methods that perform search or issue multiple prompts before producing the final response.
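To make the distinction concrete, here is a minimal sketch of the two response modes, assuming a hypothetical `llm(prompt)` helper that returns a completion string (the prompt wording is illustrative, not from the paper):

```python
def system_1(llm, x: str) -> str:
    # System 1: a single direct generation, no intermediate tokens.
    return llm(f"Question: {x}\nAnswer:")

def system_2_cot(llm, x: str) -> str:
    # System 2 (chain-of-thought style): intermediate tokens z are
    # generated first, then the final response conditions on them.
    z = llm(f"Question: {x}\nLet's think step by step:")
    return llm(f"Question: {x}\nReasoning: {z}\nFinal answer:")
```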
The community has proposed a series of System 2 techniques, including Chain-of-Thought, Tree-of-Thoughts, Graph-of-Thoughts, Branch-Solve-Merge, System 2 Attention, and Rephrase and Respond (RaR). Many of these methods produce more accurate results thanks to explicit intermediate reasoning, but they often come with higher inference cost and response latency. As a result, many of them are not used in production systems, which mostly rely on System 1.
For humans, the process by which skills transfer from deliberate (System 2) to automatic (System 1) execution is known in psychology as automaticity, and relies on procedural memory. For example, when driving to work for the first time, people expend conscious effort planning and making decisions along the way. After a driver repeats the route, the driving process becomes "compiled" into the subconscious. Likewise, skills such as playing tennis can become "second nature."
In this work, researchers from Meta FAIR explore a similar approach for AI models. Their method, called System 2 distillation, performs this compilation in an unsupervised manner given a set of unlabeled examples: for each example, they apply a given System 2 method and then measure the quality of its prediction, also without supervision.
For example, for tasks with a unique answer, they apply self-consistency, sampling the System 2 method multiple times. When System 2's outputs are sufficiently consistent on an example, they assume the result should be distilled and add it to the distillation pool. System 1 is then fine-tuned to match the System 2 method's predictions on this pool of collected examples, but without generating intermediate steps. Figure 1 of the paper illustrates the overall process of distilling System 2 into System 1.
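To make the pipeline concrete, here is a minimal sketch under stated assumptions: `llm(prompt)` is a hypothetical generation helper that samples with nonzero temperature (so repeated calls can differ), `system_2` is any System 2 method such as the one sketched above, and the sample count and threshold are illustrative; none of these names come from the paper:

```python
from collections import Counter

def build_distillation_pool(llm, system_2, unlabeled_inputs,
                            n_samples=8, threshold=0.75):
    # Unsupervised curation: keep only examples where System 2 is
    # self-consistent, and store the final answer WITHOUT the
    # intermediate reasoning steps.
    pool = []
    for x in unlabeled_inputs:
        outputs = [system_2(llm, x) for _ in range(n_samples)]
        answer, count = Counter(outputs).most_common(1)[0]
        if count / n_samples >= threshold:
            pool.append((x, answer))
    return pool

# System 1 is then fine-tuned on `pool` to map each input x directly
# to its distilled answer.
```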
The researchers ran experiments on 4 different System 2 LLM methods and 5 different tasks. They found that their approach can distill System 2 reasoning back into System 1 in a variety of settings, sometimes with even better results than the System 2 teacher. Moreover, these predictions can now be produced at a fraction of the computational cost.
For example, they found that distillation succeeds on tasks involving biased opinions or irrelevant information (System 2 Attention), on clarifying and improving responses in certain reasoning tasks (Rephrase and Respond), and on fine-grained evaluation of LLMs (Branch-Solve-Merge).
However, not all tasks can be distilled into System 1, especially complex mathematical reasoning tasks that require chain-of-thought. The same is true of humans, who cannot perform some tasks without deliberate System 2 reasoning.
Paper: https://arxiv.org/pdf/2407.06023v2
Distilling System 2 back into System 1
Setup: System 1 and System 2 models
Given an input x, the researchers consider a single model, in their case a large language model (LLM), capable of two response modes:
System 1: directly generate the output y. Such approaches work by running a forward pass through the layers of the underlying autoregressive neural network (Transformer) to generate output tokens.
System 2: such methods use the underlying Transformer to generate intermediate tokens z of any kind before generating the final response tokens, and may involve multiple calls (prompts).
Formally, the researchers treat a System 2 model S_II as a function that takes an LLM p_θ and an input x, can repeatedly call the LLM to generate intermediate tokens z according to a specific algorithm, and then returns an output y: y = S_II(x; p_θ).
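In code terms, a System 2 method is simply a higher-order function over the LLM; a sketch of that interface (type names are illustrative, not from the paper):

```python
from typing import Callable

# The LLM p_theta, viewed abstractly as a prompt -> completion map.
LLM = Callable[[str], str]

# A System 2 method S_II takes the LLM and an input x and returns y,
# internally generating intermediate tokens z via one or more calls.
System2Method = Callable[[LLM, str], str]
```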
System 2 methods may involve multiple prompts, branching, iteration, and search, all while using the LLM to generate intermediate results for further processing. In contrast, the System 1 model takes only the original input x and produces the output y directly.
The researchers assume access to a set of unlabeled inputs X and apply the System 2 method to each one, treating its final response as a training label. These responses are, however, susceptible to noise: some may be of high quality while others are low quality or incorrect. For short question-answering and reasoning tasks, which usually have a unique correct (but unknown) answer, the researchers therefore add an unsupervised curation step to improve training-data quality. They consider the following two variants, both relying on a self-consistency criterion (a sketch of both follows the list):
Self-consistency of outputs: sample the System 2 method several times on the same input and take the majority-vote answer; if the samples are not sufficiently consistent, discard the example.
Self-consistency under input perturbation: perturb the input x^i in a way that should leave the output unchanged, for example by reordering the multiple-choice options in the prompt, and compute S_II for each perturbation; if the outputs disagree, discard the example.
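A minimal sketch of both filters, assuming a hypothetical System 2 callable `s2(llm, x)` (such as the earlier sketches), a stochastic `llm` so repeated calls can differ, and a task-specific `perturb(x)` returning label-preserving variants; all names and thresholds are illustrative:

```python
from collections import Counter

def consistent_output(llm, s2, x, n=8, threshold=0.75):
    # Variant 1: self-consistency of outputs via majority vote.
    outputs = [s2(llm, x) for _ in range(n)]
    answer, count = Counter(outputs).most_common(1)[0]
    # Keep the majority answer only if agreement clears the threshold.
    return answer if count / n >= threshold else None  # None = discard

def consistent_under_perturbation(llm, s2, x, perturb):
    # Variant 2: self-consistency under input perturbation.
    # perturb(x) is assumed to return label-preserving variants of x.
    answers = {s2(llm, v) for v in [x] + perturb(x)}
    return answers.pop() if len(answers) == 1 else None  # None = discard
```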
The researchers thus obtain a synthetic dataset (X_S_II, Y_S_II), where X_S_II is the filtered subset of X and Y_S_II holds the corresponding targets. The final step is supervised fine-tuning of the LLM p_θ on this distilled training set. They typically initialize the model from the current state of p_θ and continue training on the new dataset. After fine-tuning, they obtain a distilled System 1 model that produces the System 2 method's final outputs directly, without generating intermediate steps.
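For completeness, a minimal sketch of this supervised fine-tuning step using Hugging Face transformers; the model name, hyperparameters, and single-example loop are illustrative simplifications (the paper fine-tunes Llama-2-70B-chat, which requires a distributed setup), and real setups usually mask prompt tokens out of the loss:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def fine_tune(model_name: str, pool, epochs: int = 1, lr: float = 1e-5):
    # `pool` holds (input, distilled_answer) pairs, no intermediate steps.
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in pool:
            ids = tok(x + y, return_tensors="pt").input_ids
            # Simplification: loss over the whole sequence; in practice
            # the prompt tokens are masked and examples are batched.
            loss = model(input_ids=ids, labels=ids).loss
            loss.backward()
            opt.step()
            opt.zero_grad()
    return model
```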
Experimental results
Training and evaluation settings
The researchers used Llama-2-70B-chat as the base model for all experiments. They needed a base model capable enough to perform well when run as a System 2 model, while also having open weights they could fine-tune, hence this choice.
For System 1, the researchers use the instruction-tuned base model as a standard zero-shot baseline. They report task-specific metrics for each task, along with a “#Tokens” metric measuring the average number of tokens generated per input on the evaluation set; for System 2 methods, this count includes both the intermediate tokens and the final output tokens.
Rephrase and Respond Distillation
RaR is a System 2 method that first prompts the language model to rephrase the original question in a more elaborate way, and then generates a response based on the rephrased question, with the goal of producing a better output. To build the System 2 distillation dataset for RaR, the researchers used self-consistency of outputs: for each input they sampled eight times on the last letter task and eight times per stage on the coin flip task, then used majority voting to determine the final output.
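A minimal sketch of 2-Step RaR and its majority-vote distillation target, assuming the same hypothetical stochastic `llm(prompt)` helper (the prompt wording is illustrative, not the paper's):

```python
from collections import Counter

def rar_2step(llm, question: str) -> str:
    # Step 1: ask the model to rephrase and expand the question.
    rephrased = llm(
        f"Rephrase and expand this question to make it clearer:\n{question}"
    )
    # Step 2: answer using both the original and the rephrased question.
    return llm(
        f"Original question: {question}\n"
        f"Rephrased question: {rephrased}\nAnswer:"
    )

def rar_distillation_target(llm, question: str, n: int = 8) -> str:
    # Majority vote over n samples yields the distillation label.
    outputs = [rar_2step(llm, question) for _ in range(n)]
    return Counter(outputs).most_common(1)[0][0]
```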
Let's first look at the last letter concatenation task. This task focuses on symbolic reasoning and requires the model to concatenate the last letters of the given words. The overall results are shown in Table 1 of the paper.
The baseline System 1 model (Llama-2-70B-chat) achieves 30.0% accuracy, lower than System 2's 1-Step and 2-Step RaR methods (39.5% and 44.5%, respectively). Distilling the 2-Step RaR method back into the System 1 Llama-2-70B-chat model with this unsupervised technique achieves a striking 98.0% accuracy.
In other words, compared with the zero-shot chat model, the distilled model effectively learns how to solve the task from this training data. RaR distillation thus inherits the advantages of both systems: it retains System 2's accuracy while its inference cost matches System 1's.
Now consider the coin flip reasoning task. This symbolic reasoning task, often tested in the literature, involves determining the final face of a coin (heads or tails) from a known initial state, such as "the coin lands on heads," through a series of flips described in natural language. The overall results are shown in Table 1 of the paper. Llama-2-70B-chat (zero-shot) achieves a 56.1% success rate on this task, while 1-Step and 2-Step RaR achieve 58.5% and 77.2%, respectively, so the 2-Step approach brings a large improvement. Distilling 2-Step RaR back into System 1 Llama-2-70B-chat with the unsupervised technique yields 75.69%.
Thus, the distilled System 2 model performs comparably to System 2 (2-Step RaR) without having to execute a two-prompt LLM program.
System 2 Attention Distillation
Weston and Sukhbaatar (2023) proposed System 2 Attention (S2A), which helps reduce inference pitfalls such as relying on biased information in the input or attending to irrelevant context.
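A minimal sketch of the S2A idea, which regenerates the context before answering, assuming the same hypothetical `llm(prompt)` helper (prompt wording illustrative, not the paper's):

```python
def s2a(llm, context: str, question: str) -> str:
    # Step 1: rewrite the context, dropping biased or irrelevant parts.
    cleaned = llm(
        "Rewrite the following text, keeping only information that is "
        "relevant and unbiased for answering the question.\n"
        f"Text: {context}\nQuestion: {question}\nRewritten text:"
    )
    # Step 2: answer from the regenerated context only.
    return llm(f"Context: {cleaned}\nQuestion: {question}\nAnswer:")
```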
The researchers verified the feasibility of distilling S2A into System 1, specifically on the SycophancyEval question-answering task, whose inputs contain biased information known to hurt LLM performance.
The results are shown in Table 2 of the paper, reporting mean accuracy over 3 random seeds. As expected, the baseline (System 1) LLM is susceptible to biased input and has lower accuracy on the biased portion. S2A significantly improves performance on biased inputs, and System 2 distillation exhibits similarly strong performance to the System 2 method.
Please refer to the original paper for more experimental results.