
Google and OpenAI scholars talk about AI: Language models are working hard to 'conquer' mathematics

PHPz
Release: 2023-04-13 11:37:02

Ask what computers are good at, and mathematics is sure to make the list. Yet after years of research, top scholars have reached some surprising conclusions about how well computers actually handle mathematics.

Take last year: researchers at the University of California, Berkeley, OpenAI, and Google made great strides with language models such as GPT-3 and DALL·E 2. Yet until recently, language models could not reliably solve even simple word problems, such as: "Alice has five more balls than Bob, and Bob has two balls after giving Charlie four. How many balls does Alice have?" Producing the correct answer can be surprisingly difficult for a language model.
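For a person, the arithmetic here is trivial; what trips a model up is parsing the relationships stated in the text. Below is a minimal sketch of the two steps the problem encodes, assuming the usual reading that "five more than Bob" refers to Bob's count after he gives Charlie four balls:

```python
# The word problem reduced to its two reasoning steps.
# Assumption: "five more than Bob" means Bob's current count,
# i.e. after he has given Charlie four balls.

balls_bob_gave_charlie = 4
balls_bob_has_now = 2  # stated directly in the problem
balls_bob_started_with = balls_bob_has_now + balls_bob_gave_charlie  # 6

balls_alice_has = balls_bob_has_now + 5  # "five more balls than Bob"
print(balls_alice_has)  # 7
```

A model that misreads either relationship, or drops a step, lands on a different number; there is no partial credit.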

"When we say computers are very good at math, we mean that they are very good at very specific things," said Guy Gur-Ari, a machine learning expert at Google. Computers are indeed good at arithmetic, but outside of narrow, fixed patterns they falter, and even simple questions described in words can stump them.

As Google researcher Ethan Dyer once put it, people assume that mathematicians operate on a rigorous reasoning system, with a clear line between what they know and what they do not.

Solving word problems and quantitative reasoning questions is tricky because, unlike many other tasks, they demand robustness and rigor: a mistake at any step of the process produces a wrong answer. DALL·E is impressive at drawing even though its images are sometimes off, with missing fingers or odd-looking eyes; we accept those flaws. For mathematical mistakes, though, our tolerance is much smaller. Vineet Kosaraju, a machine learning expert at OpenAI, expressed the same idea: "Our tolerance for mathematical errors made by language models (such as interpreting 10 as the digits 1 and 0 rather than the number ten) is still quite small."

"We study mathematics simply because we find it independent and very interesting," said Karl Cobbe, a machine learning expert at OpenAI.

As machine learning models are trained on larger data samples, they generally become more robust and make fewer errors. But quantitative reasoning appeared to gain little from scale alone. The researchers realized that the mistakes language models make seemed to call for a more targeted approach.

Last year, two research teams, from the University of California, Berkeley and from OpenAI, released the MATH and GSM8K datasets, respectively, which together contain thousands of problems in geometry, algebra, elementary mathematics, and more. "We wanted to see if this was a problem with the data set," said Steven Basart, a researcher at the Center for AI Safety. Language models were known to be bad at word problems, but how bad, exactly? And could the problem be fixed by introducing better-formatted, larger datasets?
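Both datasets are public. As a quick illustration (the dataset id, configuration name, and field names below match the copy hosted on the Hugging Face Hub, but treat them as assumptions worth verifying), GSM8K can be inspected in a few lines:

```python
# Peek at GSM8K via the Hugging Face `datasets` library.
# Assumes the hub id "gsm8k", its "main" configuration, and the
# "question"/"answer" fields used by the published dataset.
from datasets import load_dataset

gsm8k = load_dataset("gsm8k", "main", split="train")
example = gsm8k[0]
print(example["question"])  # a natural-language word problem
print(example["answer"])    # worked solution ending in "#### <number>"
```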

On the MATH dataset, the top language model reached 7% accuracy, compared with 40% for human graduate students and 90% for math olympiad champions. On GSM8K (grade-school-level problems), the model reached 20% accuracy. In its experiments, OpenAI applied two techniques, fine-tuning and verification, so that the model got to see many examples of its own mistakes, which proved to be a valuable lesson.
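Verification amounts to best-of-n reranking: sample many candidate solutions and let a separately trained verifier pick the one it rates most likely to be correct. Here is a minimal sketch, with `generate_solution` and `verifier_score` as hypothetical stand-ins for the real fine-tuned models:

```python
import random

def generate_solution(problem: str) -> str:
    """Hypothetical generator: one sampled step-by-step solution."""
    return random.choice(["... so the answer is 7", "... so the answer is 11"])

def verifier_score(solution: str) -> float:
    """Hypothetical verifier: estimated probability the solution is correct."""
    return 0.9 if solution.endswith("7") else 0.2

def solve_with_verifier(problem: str, n_samples: int = 100) -> str:
    # Sample many candidate solutions, then return the one the
    # trained verifier rates most likely to be correct.
    candidates = [generate_solution(problem) for _ in range(n_samples)]
    return max(candidates, key=verifier_score)

print(solve_with_verifier("Alice has five more balls than Bob..."))
```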

At the time, OpenAI estimated that its models would need to be trained on 100 times more data to reach 80% accuracy on GSM8K. But in June of this year, Google released Minerva, which reached 78% accuracy. The result exceeded expectations, and the researchers said it arrived faster than anticipated.


Paper address: https://arxiv.org/pdf/2206.14858.pdf

Minerva is built on Google's own Pathways Language Model (PaLM), further trained on mathematics-heavy data, including arXiv papers and sources in LaTeX and other mathematical formats. Another strategy Minerva employs is chain-of-thought prompting, in which it breaks a larger problem into smaller pieces. Minerva also uses majority voting: instead of asking the model for a single answer, it asks for 100 and then chooses the most common one.
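Majority voting is simple to express in code. The sketch below simulates it with a hypothetical `sample_answer` standing in for one chain-of-thought run of the model, here modeled as a noisy sampler that is right most of the time:

```python
import random
from collections import Counter

def sample_answer(problem: str) -> str:
    """Hypothetical stand-in for one sampled chain-of-thought run."""
    # Simulate a model that answers correctly 60% of the time.
    return random.choices(["7", "11", "2"], weights=[0.6, 0.3, 0.1])[0]

def majority_vote(problem: str, n_samples: int = 100) -> str:
    # Ask for many answers instead of one, then keep the most common.
    answers = [sample_answer(problem) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote("Alice has five more balls than Bob..."))  # usually "7"
```

The intuition: if the model's single-sample accuracy is better than chance, the most common answer across many independent samples is correct far more often than any individual sample.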

The gains from these new strategies were huge: Minerva scored up to 50% on MATH, and close to 80% on GSM8K as well as on MMLU (a more general set of STEM questions including subjects such as chemistry and biology). When Minerva was asked to redo slightly tweaked problems, it performed just as well, showing that its abilities do not come from memorization alone.

Minerva can produce weird, muddled reasoning and still arrive at the right answer. And while models like Minerva may reach the same answers as humans, the actual process they follow can be very different.

"I think there is this idea that people who do mathematics have some rigorous reasoning system, with a sharp distinction between knowing something and not knowing something," said Ethan Dyer, a machine learning expert at Google. But people give inconsistent answers, make mistakes, and fail to apply core concepts. At the frontier of machine learning, the boundaries are similarly blurry.

