The Infinite Monkey Theorem holds that a monkey pressing keys at random on a typewriter for an infinite amount of time will almost surely type any given text, such as the complete works of Shakespeare.
In this theorem, "almost surely" is a mathematical term with a precise meaning, and the "monkey" is not a real animal but a metaphor for an abstract device that produces an infinite random sequence of letters.
The theorem illustrates the danger of reasoning about a large but finite number as if it were infinite: even if the observable universe were filled with monkeys typing without pause, the probability of them producing a copy of Hamlet would still be less than 1 in 10^183,800.
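To see where a number of that size comes from, a back-of-the-envelope calculation helps. Assuming a 26-key typewriter and roughly 130,000 letters in Hamlet (illustrative figures, not taken from the article), the chance that a single random attempt types the play exactly is:

```latex
P(\text{Hamlet}) = \left(\tfrac{1}{26}\right)^{130\,000}
                 = 10^{-130\,000 \cdot \log_{10} 26}
                 \approx 10^{-183\,946}
```

which is the same vanishing order of magnitude as the bound above.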
Moreover, even given unlimited time, the countless monkeys would never learn to appreciate the Bard's poetic diction.
“The same goes for artificial intelligence (AI),” says Michael Wooldridge, professor of computer science at the University of Oxford.
In Wooldridge's view, although AI models such as GPT-3, with their tens or hundreds of billions of parameters, have shown surprising capabilities, their problem is not a shortage of processing power but a lack of experience of the real world.
For example, a language model might learn "rain is wet" very well; asked whether rain is wet or dry, it will most likely answer that rain is wet. But unlike a human, the model has never actually experienced the feeling of "wet". To it, "wet" is nothing more than a symbol that frequently co-occurs with words such as "rain".
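A toy illustration of the point, not from the paper: the corpus below is hypothetical, and counting sentence-level co-occurrence is a crude stand-in for what large models learn, but it shows the kind of evidence a text-only model actually sees.

```python
from collections import Counter
from itertools import combinations

# A tiny hypothetical corpus: all the "experience" a text-only model gets.
corpus = [
    "the rain is wet and cold",
    "the rain made the street wet",
    "the desert is dry and hot",
    "dry sand blew across the dry road",
]

# Count how often word pairs co-occur within the same sentence.
cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

# "rain" is statistically tied to "wet", never to "dry" -- that association,
# not any sensation of wetness, is all the model can learn from text.
print(cooccur[("rain", "wet")])  # 2
print(cooccur[("dry", "rain")])  # 0
```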
However, Wooldridge also emphasized that a lack of knowledge of the physical world does not make an AI model useless, nor does it prevent a model from becoming an expert in a particular domain. But on questions such as understanding, the possibility of AI models ever matching human capabilities is indeed doubtful.
The related paper, titled "What Is Missing from Contemporary AI? The World", has been published in the journal Intelligent Computing.
In the current wave of AI innovation, data and computing power have become the foundation of successful AI systems: a model's capabilities scale with its size, the compute used to train it, and the scale of its training data.
Regarding this phenomenon, DeepMind research scientist Richard S. Sutton has previously written that the "bitter lesson" of AI is that its progress relies mainly on ever-larger data sets and ever-greater computing resources.
Speaking of the overall development of the AI industry, Wooldridge was positive: "Over the past 15 years, the pace of development in the AI industry, and in the field of machine learning (ML) in particular, has repeatedly surprised me: we have had to keep recalibrating our expectations of what is possible, and of when it will be possible."
However, Wooldridge also pointed to a problem with the current AI industry: "While their achievements are commendable, I think most current large-scale ML models are limited by one key factor: they have never really experienced the real world."
In Wooldridge's view, most ML models are built in virtual worlds such as video games. They can be trained on massive data sets, but once applied to the physical world they lose important information: they are disembodied AI systems.
Take the AI behind self-driving cars as an example. It is unrealistic to let self-driving cars learn by trial and error on real roads, so for this and other reasons researchers usually build their models in virtual worlds, as sketched below.
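What "building a model in a virtual world" looks like in practice can be illustrated with the open-source Gymnasium simulator API; here CartPole is a stand-in for a far richer driving simulator, and the random policy is a placeholder for a real learning algorithm, both assumptions for illustration only.

```python
import gymnasium as gym

# A minimal "learning in a virtual world" loop. Every observation the
# agent ever receives comes from the simulation, never the physical world.
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print(f"reward collected entirely inside the simulation: {total_reward}")
```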
"But they simply don't have the ability to run in the most important environment of all, which is our world," Wooldridge said .
Language AI models suffer from the same limitations. They have, arguably, evolved from crude predictive text to Google's LaMDA, which made headlines earlier this year when a former Google engineer claimed the program was sentient.
"Whatever the validity of the engineer's conclusions, it's clear that he was impressed by LAMDA's conversational abilities - and that's well-documented Reasonable," Wooldridge said, but he does not believe that LAMDA is sentient, and AI is not close to such a milestone.
"These foundation models demonstrate unprecedented capabilities in natural language generation, producing fairly natural passages of text, and they also seem to have acquired some commonsense reasoning ability, which is one of the major developments in 60 years of AI research."
These models have enormous numbers of parameters and are trained on vast corpora; GPT-3, for example, was trained on hundreds of billions of words of English text from the internet. Combining massive training data with powerful computing power lets these models behave in ways loosely reminiscent of the human brain: they can move beyond narrow tasks and begin to recognize patterns and make connections that may seem unrelated to the primary task.
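At bottom, the training recipe behind such models is next-token prediction over that text. The sketch below is a minimal illustration of that objective in PyTorch; the tiny vocabulary, single-token context, and random stand-in corpus are assumptions for demonstration, not GPT-3's actual architecture or data.

```python
import torch
import torch.nn as nn

# Minimal next-token prediction, the objective large language models scale up.
# Here the "context" is a single token (a bigram model) and the corpus is
# random ids; real models condition on long contexts of real web text.
vocab_size, embed_dim = 100, 32

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),   # token id -> vector
    nn.Linear(embed_dim, vocab_size),      # vector -> logits over next token
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab_size, (1000,))  # stand-in for a text corpus

for step in range(100):
    inputs, targets = tokens[:-1], tokens[1:]   # predict each following token
    logits = model(inputs)
    loss = loss_fn(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```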
However, Wooldridge said, foundation models are a bet: "Training on massive data makes them useful across a range of fields, and they can then be specialized for specific applications."
" Symbolic AI is based on the assumption that 'intelligence is mainly a knowledge problem', while the basic model is based on the assumption that 'intelligence is mainly a data problem'. If enough training data is input into the large model, it is considered promising to improve The ability of the model."
Wooldridge believes that this "might is right" approach, which keeps scaling up AI models in pursuit of more intelligent AI, ignores the knowledge of the real physical world that truly advancing AI requires.
"To be fair, there are some signs that this is changing," Wooldridge said. In May, DeepMind announced Gato, a foundational model based on a large language set and robot data that can run in simple physical environments.
"It's great to see the underlying model taking its first steps into the physical world, but only a small step: to make AI work in our world, the challenges that need to be overcome are at least as high as to make AI in simulation. The challenges of working in an environment are as great, perhaps even greater."
At the end of the paper, Wooldridge writes: "We are not at the end of the road to AI, but we may have reached the end of the beginning."