Relationships among the industry's big names can be genuinely confusing.
Yesterday, someone discovered that OpenAI CEO Sam Altman had unfollowed Yann LeCun, Meta’s chief artificial intelligence scientist, on Twitter.
It is hard to pin down exactly when the unfollow happened, but the cause seems clear enough: a few days ago, Yann LeCun shared his views on ChatGPT at a small online gathering of media and executives:
"As far as the underlying technology is concerned, ChatGPT is nothing special. The innovation is not revolutionary. Many research laboratories are using the same technology to carry out the same work."
ZDNet's report, "ChatGPT is 'not particularly innovative,' and 'nothing revolutionary,' says Meta's chief AI scientist," revealed further details of LeCun's remarks, including some striking comments:
Seen in that light, Sam Altman's unfollow is understandable.
Four hours after the unfollow was noticed, Yann LeCun posted again, sharing yet another article that takes a dig at ChatGPT:
Why do large language models like ChatGPT spout endless nonsense? Their grasp of reality is very superficial.
Some people disagree: "ChatGPT is a source of extensive knowledge and tremendous creativity, having been trained on a large number of books and other information sources."
LeCun responded: "No one is saying LLMs are useless. I said so myself during FAIR's short-lived release of Galactica. People crucified it because it could generate nonsense. ChatGPT does the same thing. But again, that doesn't mean these models aren't useful."
In fact, this article from The Atlantic is a review of a paper co-authored by MIT cognitive scientists. Let's look at the research itself.
The title of this paper is "Dissociating Language and Thought in Large Language Models: a Cognitive Perspective", and the authors are from the University of Texas at Austin, MIT and UCLA.
Paper address: https://arxiv.org/pdf/2301.06627.pdf
We know that today's large language models (LLMs) can often generate text passages that are coherent, grammatical, and appear to make sense. This achievement has fueled speculation that these networks are already, or will soon become, "thinking machines" capable of performing tasks that require abstract knowledge and reasoning. To assess what LLMs can actually do, the authors distinguish two aspects of language use: formal linguistic competence (knowledge of the rules and statistical patterns of a language) and functional linguistic competence (using language to understand and communicate about the world).
Drawing on evidence from cognitive neuroscience, the authors show that formal competence in humans relies on specific language-processing mechanisms, while functional competence draws on multiple capacities beyond language, such as formal reasoning, world knowledge, situation modeling, and social cognition. Mirroring this distinction in humans, LLMs perform well (albeit imperfectly) on tasks that require formal linguistic competence, but tend to fail on many tests that require functional competence.
Based on this evidence, the authors argue that, first, modern LLMs should be taken seriously as models of formal linguistic competence, and second, models that aim to master real-life language use will need to incorporate or develop not only a core language module but also the multiple non-language-specific cognitive abilities required to model thought.
In summary, they argue that the distinction between formal and functional linguistic competence helps clarify the discussion around the potential of LLMs and offers a path toward building models that understand and use language in a human-like way. The failure of LLMs on many non-linguistic tasks does not undermine them as good models of language processing. If the human mind and brain are any guide, future progress toward AGI may depend on combining language models with models that represent abstract knowledge and support complex reasoning.
LLMs fall short on functional capabilities beyond language, such as reasoning, and OpenAI's ChatGPT is a case in point. Although OpenAI announced that its math ability had been upgraded, netizens complained that it could barely manage addition and subtraction within ten.

In a recent paper, "Mathematical Capabilities of ChatGPT," researchers from the University of Oxford, the University of Cambridge, and other institutions tested ChatGPT's mathematical abilities on publicly available and hand-crafted datasets, and measured its performance against models trained on mathematical corpora, such as Minerva. They also tested whether ChatGPT can serve as a useful assistant to professional mathematicians by simulating use cases that arise in mathematicians' daily work, such as question answering and theorem search.
Paper address: https://arxiv.org/pdf/2301.13867.pdf
The researchers introduced and released a new dataset, GHOSTS, the first natural-language mathematics dataset produced and curated by working mathematicians. It covers graduate-level mathematics and provides a comprehensive overview of language models' mathematical capabilities. They benchmarked ChatGPT on GHOSTS and evaluated its performance against fine-grained criteria.
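The exact-match style of evaluation described here can be sketched in a few lines. Note that `ask_model`, its canned answers, and the two sample questions below are hypothetical placeholders for illustration only; they are not drawn from the GHOSTS dataset, and a real harness would call the ChatGPT API instead of a stub:

```python
def ask_model(question: str) -> str:
    """Hypothetical model stub with canned answers; a real harness would
    query the ChatGPT API here. One answer is deliberately wrong, to mimic
    the failure mode described above (understands the question, answers it
    incorrectly)."""
    canned = {
        "What is 7 + 5?": "12",
        "What is 17 * 23?": "401",  # wrong on purpose; the correct answer is 391
    }
    return canned.get(question, "")

def score(dataset: list[tuple[str, str]]) -> float:
    """Return the fraction of questions whose model answer exactly
    matches the reference answer."""
    correct = sum(
        1 for question, reference in dataset
        if ask_model(question).strip() == reference
    )
    return correct / len(dataset)

# Two toy (question, reference-answer) pairs, standing in for benchmark items.
dataset = [
    ("What is 7 + 5?", "12"),     # the stub gets this right
    ("What is 17 * 23?", "391"),  # the stub answers 401, scored incorrect
]
print(score(dataset))  # 0.5
```

Real benchmarks like GHOSTS go well beyond exact matching, using fine-grained rubrics that grade partial credit on multi-step proofs, but the accuracy-over-a-labeled-dataset loop is the same shape.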
Test results show that ChatGPT's mathematical ability is significantly below that of an average mathematics graduate student. It can usually understand a question, but fails to give a correct answer.

ChatGPT Plus membership: $20 per month
OpenAI has just announced "ChatGPT Plus," a new paid subscription priced at US$20 per month.
Subscribers receive several benefits: access to ChatGPT even during peak hours, faster response times, and priority access to new features and improvements.
OpenAI said it will send out invitations to the service "in the coming weeks" to people in the United States on its waitlist, adding that it plans to roll the service out to other countries and regions later.
More than a week ago, reports suggested OpenAI would launch a "plus" or "pro" version of ChatGPT at US$42 per month, but the final price of US$20 per month clearly makes the service accessible to a much wider group, including students and businesses.
In a way, this will set the pricing benchmark for any paid AI chatbot that comes to market. Given that OpenAI is the pioneer in this field, any company trying to release a bot priced above $20 per month will first have to explain one thing: why is its chatbot worth more than ChatGPT Plus?
The above is the full text of "After being unfollowed by OpenAI's CEO, Yann LeCun takes another shot: ChatGPT's grasp of reality is very superficial." For more information, please follow other related articles on the PHP Chinese website!