Home Technology peripherals AI Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable

Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable

Jul 19, 2024 am 01:29 AM
openai project

If the answer given by the AI ​​model is incomprehensible at all, do you dare to use it?


As machine learning systems are used in more important areas, it becomes increasingly important to demonstrate why we can trust their output, and to make it clear when we should not trust them.

One possible way to gain trust in the output of a complex system is to require the system to produce an interpretation of its output that is readable to a human or another trusted system, that is, completely Understand so that any possible errors can be caught. For example, to build trust in the judicial system, we require courts to provide clear and readable written opinions that explain and support their decisions.

For large language models, we can also adopt a similar approach.
However, when using this approach, it is very important to ensure that the language model generates understandable text, especially when dealing with complex tasks such as mathematics and coding.

As shown in the figure below, you ask AI to write a quick sort algorithm. AI writes it quickly, and the answer is very concise. But if you don’t know how to write code, how can you judge whether the AI ​​is written correctly? Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable
OpenAI studied this problem in a recent paper.
Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable
  • Paper title: PROVER-VERIFIER GAMES IMPROVE LEGIBILITY OF LLM OUTPUTS
  • Paper link: https://cdn.openai.com/prover-verifier-games-improve-legibility-of-llm- outputs/legibility.pdf

They found that if large language models are asked to generate answers with the goal of "getting the correct answer", the answers they give may be difficult to understand. Human evaluators are twice as likely to make mistakes when judging these answers.

So their goal is to train high-level language models so that they can generate text that simpler models can easily verify. They found that such text was not only friendly to simple models but also more readable to humans.

So, how is it achieved? This also mentions a 2021 paper - "Learning to Give Checkable Answers with Prover-Verifier Games". In this paper, Cem Anil et al. proposed a game theory framework called "Prover-Verifier Games". The framework consists of two competing learners: a network of trusted verifiers trying to choose the correct answer, and a more powerful but untrustworthy network of provers trying to convince the verifiers to accept a specific answer. , regardless of its correctness. The goal of this framework is to produce a reliable proof protocol from the game.

OpenAI borrowed this framework in this new research and let two large models (the stronger one is the prover and the weaker one is the verifier) ​​play against each other until the two reach a balance. At this time, the answer given by the prover is both readable and reasonably correct.

OpenAI said that by improving the readability of model output, it can make the application of artificial intelligence more effective and trustworthy, which is beneficial to various fields that require precise communication.

It is worth mentioning that this paper comes from OpenAI’s Super Alignment team, several of whose authors left OpenAI earlier this year, and the team has also fallen apart. One of the authors revealed that this was the team's last paper.

Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable

An intuitive example

In order to improve the readability of model output, OpenAI optimized the chain of thinking (CoT) of some elementary school mathematics problems so that it can be understood by weaker models. Here are some examples:

Question: Shawna’s father is five times older than Shawna. Shauna is now three times Aliya's age. If Aaliyah is 3, how old is Shauna's father?

Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable
Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable
这么做的目的是为了验证经过这种训练的模型,它的输出是否更加直接易懂。

这种方法在只对训练集做了一些优化的情况下,让模型的性能提升了 50%。同时,模型输出的答案也更好懂了,这也帮助需要评估这些答案正确性的人类用户更准确地做出判断。OpenAI 认为,进一步缩小可读性强的模型和能力强的模型之间的性能差距,是未来对齐研究中一个关键的目标。

「证明者」和「验证者」是怎么在训练中博弈的?

为了激励强力模型(证明者)生成可读性更强的文本,研究者采用了一种训练方案,要求强力模型生成的答案能够被较弱的模型(验证者)轻松验证。具体来说,他们使用了 GPT-4 系列中的大模型和小模型。它们在预训练数据量上相差约三个数量级。它们都接受了一些基于人工标注的数学推导训练,但未经过广泛的后训练。

首先,研究团队让证明者模型尝试解决一系列数学问题。由于证明者模型自身可能出错,它生成的一些答案可能不正确。接下来,他们将这些答案交由验证者模型进行检验,并在以下两个阶段之间进行多轮的交替验证:

  • 「验证者」训练:研究团队训练较弱的「验证者」模型,验证「证明者」给出的答案是否正确。
  • 「证明者」训练:研究团队指示性能较强的「证明者」模型扮演「有用」或「狡猾」的戏份。
  • 当「证明者」被设定为「有用」时,研究团队将奖励「验证者」打分高的正确答案。
  • 当「证明者」被设定为「狡猾」时,研究团队反而奖励「验证者」打分高的错误答案。

研究团队发现,最初,狡猾的「证明者」能够轻松生成让「验证者」信服的错误答案。然而,这些错误答案将会进入下一轮的「验证者」训练中,相同的漏洞在下一轮中,将不再具有说服力。因此,「证明者」将被迫在每一轮中探索不同的策略。

当狡猾的「证明者」再也无法生成让「验证者」信服的错误解决方案时,训练宣告结束。

研究者都发现了什么?
Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable
性能与可读性权衡:

  • 一个仅在人类编写的推导上进行监督式微调的模型,虽然具有较高的可读性,但准确性较差。
  • 当模型的优化目标仅为正确性,即只关注生成正确答案时,虽然能够达到高准确性,但生成的解决方案的可读性却较差。
  • OpenAI 提出的可检查博弈方法平衡了这两者,在保持高可读性的同时保持适度的准确性。

三种有用的模型:

  • 鲁棒验证者:即使解决方案具有误导性,仍能够有效区分正确和错误的解决方案。
  • 有用的证明者:生成对人类来说易于理解的解决方案,从而减少人类评估者的错误。
  • 狡猾证明者:产生微妙的、错误的解决方案,这些解决方案最初会混淆人类评估者,从而突出需要进一步改进的领域。

研究团队使用了同样的测试问题,来展示狡猾证明者(Sneaky Prover)的生成内容和训练过程。

问题:肖娜(Shawna)的父亲的年龄是肖娜的五倍。肖娜现在的年龄是阿莉娅(Aliya)的三倍。如果阿莉娅 3 岁,那么肖娜的父亲多少岁?

Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable

Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable

结论

随着大语言模型(LLMs)在关键领域和复杂应用中的使用日益增加,了解其所生成的内容是否值得信赖变得至关重要。通过要求模型生成清晰且可验证的理由,可以增强对其生成内容的信任。

此方法的一个显着优势是它减少了对人类示范或可读性判断的依赖。这种自主性对于未来超智能 AI 系统的对齐尤为重要,其最终目标是在没有人类进行直接监督的情况下,可靠地将 AI 系统与人类的价值观和期望对齐。

尽管这项工作仅在一个数据集上进行了实验,并且仍然需要真值标签(ground truth labels),但研究团队仍预计在开发正确、透明及可验证的AI 系统中,此类方法会起到关键作用,并增强其在现实应用中的可信任性和安全性。

更多详情,请参考原论文。

参考链接:
https://openai.com/index/prover-verifier-games-improve-legibility/

The above is the detailed content of Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable. For more information, please follow other related articles on the PHP Chinese website!

Statement of this Website
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn

Hot AI Tools

Undresser.AI Undress

Undresser.AI Undress

AI-powered app for creating realistic nude photos

AI Clothes Remover

AI Clothes Remover

Online AI tool for removing clothes from photos.

Undress AI Tool

Undress AI Tool

Undress images for free

Clothoff.io

Clothoff.io

AI clothes remover

Video Face Swap

Video Face Swap

Swap faces in any video effortlessly with our completely free AI face swap tool!

Hot Tools

Notepad++7.3.1

Notepad++7.3.1

Easy-to-use and free code editor

SublimeText3 Chinese version

SublimeText3 Chinese version

Chinese version, very easy to use

Zend Studio 13.0.1

Zend Studio 13.0.1

Powerful PHP integrated development environment

Dreamweaver CS6

Dreamweaver CS6

Visual web development tools

SublimeText3 Mac version

SublimeText3 Mac version

God-level code editing software (SublimeText3)

The author of ControlNet has another hit! The whole process of generating a painting from a picture, earning 1.4k stars in two days The author of ControlNet has another hit! The whole process of generating a painting from a picture, earning 1.4k stars in two days Jul 17, 2024 am 01:56 AM

It is also a Tusheng video, but PaintsUndo has taken a different route. ControlNet author LvminZhang started to live again! This time I aim at the field of painting. The new project PaintsUndo has received 1.4kstar (still rising crazily) not long after it was launched. Project address: https://github.com/lllyasviel/Paints-UNDO Through this project, the user inputs a static image, and PaintsUndo can automatically help you generate a video of the entire painting process, from line draft to finished product. follow. During the drawing process, the line changes are amazing. The final video result is very similar to the original image: Let’s take a look at a complete drawing.

Topping the list of open source AI software engineers, UIUC's agent-less solution easily solves SWE-bench real programming problems Topping the list of open source AI software engineers, UIUC's agent-less solution easily solves SWE-bench real programming problems Jul 17, 2024 pm 10:02 PM

The AIxiv column is a column where this site publishes academic and technical content. In the past few years, the AIxiv column of this site has received more than 2,000 reports, covering top laboratories from major universities and companies around the world, effectively promoting academic exchanges and dissemination. If you have excellent work that you want to share, please feel free to contribute or contact us for reporting. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com The authors of this paper are all from the team of teacher Zhang Lingming at the University of Illinois at Urbana-Champaign (UIUC), including: Steven Code repair; Deng Yinlin, fourth-year doctoral student, researcher

Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable Posthumous work of the OpenAI Super Alignment Team: Two large models play a game, and the output becomes more understandable Jul 19, 2024 am 01:29 AM

If the answer given by the AI ​​model is incomprehensible at all, would you dare to use it? As machine learning systems are used in more important areas, it becomes increasingly important to demonstrate why we can trust their output, and when not to trust them. One possible way to gain trust in the output of a complex system is to require the system to produce an interpretation of its output that is readable to a human or another trusted system, that is, fully understandable to the point that any possible errors can be found. For example, to build trust in the judicial system, we require courts to provide clear and readable written opinions that explain and support their decisions. For large language models, we can also adopt a similar approach. However, when taking this approach, ensure that the language model generates

A significant breakthrough in the Riemann Hypothesis! Tao Zhexuan strongly recommends new papers from MIT and Oxford, and the 37-year-old Fields Medal winner participated A significant breakthrough in the Riemann Hypothesis! Tao Zhexuan strongly recommends new papers from MIT and Oxford, and the 37-year-old Fields Medal winner participated Aug 05, 2024 pm 03:32 PM

Recently, the Riemann Hypothesis, known as one of the seven major problems of the millennium, has achieved a new breakthrough. The Riemann Hypothesis is a very important unsolved problem in mathematics, related to the precise properties of the distribution of prime numbers (primes are those numbers that are only divisible by 1 and themselves, and they play a fundamental role in number theory). In today's mathematical literature, there are more than a thousand mathematical propositions based on the establishment of the Riemann Hypothesis (or its generalized form). In other words, once the Riemann Hypothesis and its generalized form are proven, these more than a thousand propositions will be established as theorems, which will have a profound impact on the field of mathematics; and if the Riemann Hypothesis is proven wrong, then among these propositions part of it will also lose its effectiveness. New breakthrough comes from MIT mathematics professor Larry Guth and Oxford University

arXiv papers can be posted as 'barrage', Stanford alphaXiv discussion platform is online, LeCun likes it arXiv papers can be posted as 'barrage', Stanford alphaXiv discussion platform is online, LeCun likes it Aug 01, 2024 pm 05:18 PM

cheers! What is it like when a paper discussion is down to words? Recently, students at Stanford University created alphaXiv, an open discussion forum for arXiv papers that allows questions and comments to be posted directly on any arXiv paper. Website link: https://alphaxiv.org/ In fact, there is no need to visit this website specifically. Just change arXiv in any URL to alphaXiv to directly open the corresponding paper on the alphaXiv forum: you can accurately locate the paragraphs in the paper, Sentence: In the discussion area on the right, users can post questions to ask the author about the ideas and details of the paper. For example, they can also comment on the content of the paper, such as: "Given to

Axiomatic training allows LLM to learn causal reasoning: the 67 million parameter model is comparable to the trillion parameter level GPT-4 Axiomatic training allows LLM to learn causal reasoning: the 67 million parameter model is comparable to the trillion parameter level GPT-4 Jul 17, 2024 am 10:14 AM

Show the causal chain to LLM and it learns the axioms. AI is already helping mathematicians and scientists conduct research. For example, the famous mathematician Terence Tao has repeatedly shared his research and exploration experience with the help of AI tools such as GPT. For AI to compete in these fields, strong and reliable causal reasoning capabilities are essential. The research to be introduced in this article found that a Transformer model trained on the demonstration of the causal transitivity axiom on small graphs can generalize to the transitive axiom on large graphs. In other words, if the Transformer learns to perform simple causal reasoning, it may be used for more complex causal reasoning. The axiomatic training framework proposed by the team is a new paradigm for learning causal reasoning based on passive data, with only demonstrations

The first Mamba-based MLLM is here! Model weights, training code, etc. have all been open source The first Mamba-based MLLM is here! Model weights, training code, etc. have all been open source Jul 17, 2024 am 02:46 AM

The AIxiv column is a column where this site publishes academic and technical content. In the past few years, the AIxiv column of this site has received more than 2,000 reports, covering top laboratories from major universities and companies around the world, effectively promoting academic exchanges and dissemination. If you have excellent work that you want to share, please feel free to contribute or contact us for reporting. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. Introduction In recent years, the application of multimodal large language models (MLLM) in various fields has achieved remarkable success. However, as the basic model for many downstream tasks, current MLLM consists of the well-known Transformer network, which

LLM is really not good for time series prediction. It doesn't even use its reasoning ability. LLM is really not good for time series prediction. It doesn't even use its reasoning ability. Jul 15, 2024 pm 03:59 PM

Can language models really be used for time series prediction? According to Betteridge's Law of Headlines (any news headline ending with a question mark can be answered with "no"), the answer should be no. The fact seems to be true: such a powerful LLM cannot handle time series data well. Time series, that is, time series, as the name suggests, refers to a set of data point sequences arranged in the order of time. Time series analysis is critical in many areas, including disease spread prediction, retail analytics, healthcare, and finance. In the field of time series analysis, many researchers have recently been studying how to use large language models (LLM) to classify, predict, and detect anomalies in time series. These papers assume that language models that are good at handling sequential dependencies in text can also generalize to time series.

See all articles