


A little coaxing can increase GPT-3's accuracy by 61 percentage points! Shocking research from Google and the University of Tokyo
When I woke up, the machine learning community was in an uproar.
The latest research has found that simply telling GPT-3 "Let's think step by step" lets it correctly answer questions it previously got wrong.
Take the following question:
Half of the 16 balls are golf balls, and half of those golf balls are blue. How many blue golf balls are there in total?
(The problem is not hard, but note that this is zero-shot learning: the AI never saw similar problems during its training stage.)
If you ask GPT-3 directly, "What is the answer?", it gives the wrong answer: 8.
But after adding the "spell" asking it to think step by step, GPT-3 first writes out its reasoning and then gives the correct answer: 4!
And this is no coincidence; the research team verified it thoroughly in the paper.
The question above comes from the classic MultiArith dataset, which specifically tests a language model's ability to solve math word problems. GPT-3 originally scored only 17% accuracy in the zero-shot setting.
The paper compares the 9 most effective prompt phrases. The top 6, all of which ask GPT-3 to think step by step, raise accuracy above 70%.
Even the simplest, "Let's think", lifts it to 57.5%.
It feels like a kindergarten teacher coaxing a child...
The technique requires no modification to GPT-3 itself. People have reproduced it on OpenAI's official demo, and it even works when the prompt is switched to Chinese.
An English question with a Chinese hint, and GPT-3 gives the correct answer in Chinese.
The Google researcher who first shared the paper on social media quipped that a new "all you need" had arrived.
Seeing this, people from all corners let their imaginations run wild and started making jokes.
What would happen if you encouraged the AI with "You can do it, I believe in you"?
What about threatening it with "Time is running out" or "There's a gun to your head"?
Would saying "drive more carefully" to the AI count as a self-driving solution?
Some also pointed out that this is much like the plot of the science-fiction classic The Hitchhiker's Guide to the Galaxy: the key to achieving general artificial intelligence is knowing how to ask the AI the right question.
So, what is going on with this magical phenomenon?
Large language models discovered to be zero-shot reasoners
This is a collaboration between Google Brain and the University of Tokyo exploring how large language models perform in zero-shot scenarios.
The paper's title, "Large Language Models are Zero-Shot Reasoners", is a nod to GPT-3's "Language Models are Few-Shot Learners".
The method belongs to Chain of Thought prompting (CoT), which the Google Brain team proposed just this January.
The earliest CoT was applied to few-shot learning: alongside the question, a worked step-by-step answer was supplied to guide the AI.
This latest research proposes zero-shot CoT, whose main change is to do away with the worked examples.
- In the first step, the question is rewritten in the form "Q: xxx. A: xxx", where the trigger sentence after A (such as "Let's think step by step") draws out the language model's reasoning process.
- In the second step, the generated reasoning is fed back in with the prompt "The answer is ...", prompting the language model to produce the final answer.
The biggest advantage of this approach is its generality: there is no need to craft dedicated examples for each problem type.
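The two steps above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: `complete` stands in for any function that sends a prompt to a language-model API, and `toy_model` is a hard-coded substitute used only so the example runs offline.

```python
def zero_shot_cot(question, complete):
    """Two-stage zero-shot CoT prompting.

    `complete` is any callable that sends a prompt to a language model
    and returns its text completion (a hypothetical stand-in here).
    """
    # Stage 1: the trigger sentence draws out a reasoning chain.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = complete(reasoning_prompt)

    # Stage 2: feed the reasoning back in and extract the final answer.
    answer_prompt = f"{reasoning_prompt} {reasoning}\nTherefore, the answer is"
    return complete(answer_prompt).strip()


def toy_model(prompt):
    # Hard-coded stand-in for a real model call, so the example runs offline.
    if "Therefore, the answer is" in prompt:
        return " 4."
    return ("There are 16 balls, so half are 8 golf balls. "
            "Half of the 8 golf balls are blue, so there are 4 blue golf balls.")


question = ("Half of the 16 balls are golf balls, and half of the golf balls "
            "are blue. How many blue golf balls are there?")
print(zero_shot_cot(question, toy_model))  # prints "4."
```

Because both stages reuse the same plain-text prompt format, the same wrapper works unchanged for any problem type.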
The paper runs thorough experiments on a wide range of problems, covering 12 test sets:
- 6 math test sets: SingleEq, AddSub, SVAMP, and the more challenging MultiArith, AQUA-RAT, and GSM8K.
- 2 common-sense reasoning test sets: CommonsenseQA and StrategyQA.
- 2 symbolic reasoning test sets: Last Letter Concatenation and Coin Flip.
- Plus the date-understanding and shuffled-object-tracking tasks from BIG-bench.
Compared with ordinary zero-shot prompting, zero-shot CoT achieves better results on 10 of the 12.
(In the results table, the value to the right of △ is from the additional answer-extraction experiment.)
On the harder MultiArith and GSM8K math tests, more in-depth experiments were run with the latest version of GPT-3, text-davinci-002 (175B).
If the model is given 8 attempts and the best result is kept, accuracy improves further, to 93%.
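One common way to realize "best of 8 attempts" is majority voting over sampled answers. The sketch below assumes a hypothetical `sample_answer` callable that queries the model with sampling enabled; `toy_sampler` is a random stand-in so the code runs without an API.

```python
import random
from collections import Counter


def majority_answer(question, sample_answer, n=8):
    # Sample n independent answers and keep the most common one.
    answers = [sample_answer(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]


def toy_sampler(question):
    # Stand-in for a sampling-enabled model call: right about 70% of the time.
    return "4" if random.random() < 0.7 else "8"


random.seed(0)  # fixed seed so the illustration is reproducible
print(majority_answer("How many blue golf balls are there?", toy_sampler))  # prints "4"
```

Voting filters out occasional wrong samples as long as the model is right more often than not.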
In the error analysis, the researchers also found that on many questions the AI's reasoning process was actually correct; the failures came when the answer could not converge to a single value and the model offered multiple candidates.
At the end of the paper, the team suggests that this work can serve not only as a baseline for zero-shot CoT, but also as a reminder to the academic community: before rushing to build fine-tuning datasets and few-shot prompt templates, fully explore the zero-shot capabilities that large language models already have.
The research team comes from the Matsuo Laboratory at the University of Tokyo.
Its head, Professor Yutaka Matsuo, is also the first AI expert on SoftBank's board of directors.
Among the team members, visiting professor Shixiang (Shane) Gu belongs to the Google Brain team; he studied under Geoffrey Hinton, one of the three giants of deep learning, as an undergraduate and earned his doctorate from the University of Cambridge.
Adding a little "magic" has become a new trend in AI circles
Why zero-sample CoT works remains to be explored.
However, one user concluded from his own experiments that the method seems to work only on GPT-3 text-davinci-002; he tried version 001 and found little effect.
He gave an example of his attempt.
Question: concatenate the last letters of each of the words "machine" and "learning".
Even with the prompt, GPT-3's answer concatenated all the letters of the two words.
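For reference, the Last Letter Concatenation task expects only the final letter of each word, which a couple of lines of Python make concrete (this illustrates the task itself, not GPT-3's behavior):

```python
def last_letter_concat(words):
    # Join the last letter of each word, e.g. "machine", "learning" -> "eg".
    return "".join(w[-1] for w in words)


print(last_letter_concat(["machine", "learning"]))  # prints "eg"
```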
In response, co-author Shixiang Gu replied that the "spell" in fact works on both the original and the improved versions of GPT-3, and that these results are reported in the paper.
Others have questioned whether deep learning has become a game of hunting for "magic spells".
Meanwhile, Marcus showed up among the critics once again.
He too listed a failure case: even with the "spell", GPT-3 could not work out whether Sally's cow would come back to life...
Still, it is worth noting that examples of a little "magic" producing an immediate improvement in an AI are not rare.
Some netizens shared that adding a few intermediate instructions when using GPT-3 does indeed yield more satisfactory results.
Previously, researchers from Google and MIT found that, without changing the underlying architecture, simply training the language model to set "breakpoints" the way programmers do while debugging rapidly improved its ability to read code and do arithmetic.
The principle is simple: for a program with many computation steps, the model encodes each step as text and records it in a temporary buffer called a "scratchpad".
As a result, the model's computation process becomes clearer and more orderly, and performance naturally improves substantially.
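To get a rough sense of what a scratchpad trace looks like, here is plain Python producing the step-by-step text for multi-digit addition (an illustration of the idea, not the paper's actual trace format):

```python
def scratchpad_add(a, b):
    """Write out multi-digit addition step by step, scratchpad-style:
    every intermediate result is recorded as a line of text."""
    da, db = str(a)[::-1], str(b)[::-1]   # digits, least significant first
    steps, result, carry = [], [], 0
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        carry, digit = divmod(x + y + carry, 10)
        result.append(str(digit))
        steps.append(f"digit {i}: {x} + {y} -> write {digit}, carry {carry}")
    if carry:
        result.append(str(carry))
        steps.append(f"final carry -> write {carry}")
    return "".join(reversed(result)), steps


total, trace = scratchpad_add(487, 56)
print(total)        # prints "543"
for line in trace:  # the "scratchpad": one text line per intermediate step
    print(line)
```

The point is that each carry and partial digit becomes explicit text, rather than something the model must hold implicitly.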
The InstructGPT model used in these experiments is another typical example.
Simply by having GPT-3 learn from human feedback through reinforcement learning, the rate of incorrect answers can be significantly reduced.
Specifically, the model is first fine-tuned on human-written demonstration answers; then several different outputs are collected for each question, humans rank them, and a reward model (RM) is trained on the rankings.
Finally, with the RM as the reward function, the Proximal Policy Optimization (PPO) algorithm fine-tunes the GPT-3 policy via reinforcement learning to maximize reward.
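The reward model in that recipe is typically trained with a pairwise ranking objective: the score of the human-preferred answer should beat the score of the rejected one. A minimal sketch of that loss (the scalar scores here are made-up numbers for illustration):

```python
import math


def pairwise_ranking_loss(score_preferred, score_rejected):
    # -log(sigmoid(r_preferred - r_rejected)): small when the preferred
    # answer scores well above the rejected one, large otherwise.
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))


# A wider margin between preferred and rejected scores means a smaller loss.
print(pairwise_ranking_loss(2.0, 0.0) < pairwise_ranking_loss(0.5, 0.0))  # prints True
```

Minimizing this loss over many ranked pairs teaches the RM to assign higher scalar rewards to answers humans rank higher, which PPO then optimizes against.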
Aran Komatsuzaki, the Twitter user who set off this whole discussion, was also the one who originally discovered that adding "Unreal Engine" to a prompt makes the quality of AI-generated images soar.
Eric Jang, formerly of Google's robotics team, likewise found earlier that reinforcement learning can apply similar thinking to improve efficiency.
Some people also remarked: isn't this kind of trick exactly what we do when using our own brains?
In fact, Bengio has previously argued, starting from brain science, that AI should operate more like the human brain.
Human cognitive tasks can be divided into System 1 and System 2 cognition.
System 1 tasks are those completed unconsciously: you can instantly recognize what you are holding in your hand, but you cannot explain to others how you did it.
System 2 tasks are those the brain completes through explicit steps: when you do an addition or subtraction, you can clearly explain how you reached the final answer.
The "spell" added this time pushes the AI one step in that direction, teaching it to think in steps.
Faced with this trend, some scholars believe that "prompt engineering is replacing feature engineering".
So will "prompt hunter" become the nickname of the next generation of NLP researchers?
Paper address: https://www.php.cn/link/cc9109aa1f048c36d154d902612982e2
Reference link:
[1] https://twitter.com/arankomatsuzaki/status/1529278580189908993
[2]https://evjang.com/2021/10/23/generalization.html