16 top scholars debate AGI! Marcus, the father of LSTM and MacArthur Genius Grant winner gathered together

PHPz
Release: 2023-04-14 09:52:02

After a one-year hiatus, the annual Artificial Intelligence Debate organized by Montreal.AI and New York University Professor Emeritus Gary Marcus returned last Friday night and was once again held as an online conference like in 2020.

This year's debate, AI Debate 3: The AGI Debate, focused on the concept of artificial general intelligence, i.e., machines capable of integrating a wide range of near-human-level reasoning capabilities.


Video link: https://www.youtube.com/watch?v=JGiLz_Jx9uI&t=7s

The discussion lasted three and a half hours and focused on five topics related to artificial intelligence: cognition and neuroscience, common sense, architecture, ethics and morality, and policy and contribution.

Sixteen experts participated, including many big names in computer science as well as computational neuroscientist Konrad Kording.

This article briefly summarizes the views of five of the participants. Interested readers can watch the full video through the link above.

Moderator: Gary Marcus

As a well-known critic, Marcus cited his New Yorker article "Is 'Deep Learning' a Revolution in Artificial Intelligence?" and once again poured cold water on the development of AI.

Marcus said that, despite the decade-long wave of enthusiasm for artificial intelligence that followed the release of ImageNet by Fei-Fei Li's team, the wish to build all-purpose machines has not been realized.


DeepMind neuroscientist Dileep George

Dileep George, a neuroscientist at Google DeepMind, discussed the concept of "innateness."

Simply put, these are certain ideas that are "built in" to the human mind.

So should artificial intelligence research pay more attention to innateness?

In this regard, George said that any kind of growth and development from an initial state to a certain stable state involves three factors.

The first is the internal structure in the initial state, the second is the input data, and the third is universal natural laws.

"It turns out that innate structures play an extraordinary role in every area we discover."

Even for what is considered a classic example of learning, such as acquiring a language, once you start to break it down, you realize that the data has almost no impact on it.

Language has not changed since the dawn of man, as evidenced by the fact that any child in any culture can master it.

George believes that language will become the core of artificial intelligence, giving us the opportunity to figure out what makes humans such a unique species.

University of Washington professor Yejin Choi

Yejin Choi, a professor of computer science at the University of Washington, predicts that the performance of AI will become increasingly amazing in the next few years.

But because we do not know how deep these networks' understanding really goes, they will continue to make mistakes on adversarial and corner cases.


"For machines, the dark matter of language and intelligence may be common sense."

Of course, the dark matter mentioned here refers to things that are easy for humans but difficult for machines.

Jürgen Schmidhuber, the father of LSTM

Marcus said that we can now obtain a large amount of knowledge from large language models, but this paradigm needs to change, because language models are in fact "deprived" of many types of input.

Jürgen Schmidhuber, director of the Swiss AI lab IDSIA and the father of LSTM, responded that "most of what we are discussing today was, at least in principle, solved many years ago by general-purpose neural networks," though such systems remain "less than human."


Schmidhuber said that as computing power becomes cheaper every few years, the "old theories" are coming back: "We can now do many things with these old algorithms that we couldn't do at the time."

Then IBM researcher Francesca Rossi asked Schmidhuber: "If these techniques were defined long ago, why do we still see systems that lack the capabilities we want? Why haven't they made it into current systems?"

In response, Schmidhuber argued that the main issue is computing cost:

Recurrent networks can run arbitrary algorithms, and one of their most beautiful properties is that they can also learn learning algorithms. The big question is which learning algorithms they can learn; we may need better options for improving learning algorithms.

The first such system appeared in 1992, when I wrote my first paper on it. There was little we could do with it at the time. Today we can have millions or billions of weights.

Recent work with my students has shown that these old concepts, with a few improvements here and there, suddenly work so well that you can learn new learning algorithms that are better than backpropagation.

Jeff Clune, associate professor at the University of British Columbia

The topic discussed by Jeff Clune, associate professor of computer science at the University of British Columbia, is "AI Generating Algorithms: The Fastest Path to AGI."

Clune said that today's artificial intelligence follows an "artificial path," in which the various learning rules and objective functions must be designed by hand.

In this regard, he believes that in future practice, manual design methods will eventually give way to automatic generation.


Clune then proposed "three pillars" to advance AI: meta-learning architectures, meta-learning algorithms, and automatically generating effective learning environments and data.

Here, Clune suggested adding a "fourth pillar": utilizing human data. For example, models running in the Minecraft environment can achieve "huge improvements" by learning from videos of humans playing the game.

Finally, Clune predicts that we have a 30% chance of achieving AGI by 2030, and that doesn’t require a new paradigm.

It is worth noting that AGI is defined here as "the ability to complete more than 50% of economically valuable human work."

To summarize

At the end of the discussion, Marcus gave each participant 30 seconds to answer one question: "If you could give students one piece of advice, for example on which problem in artificial intelligence most needs study now, or on how to prepare for a world where artificial intelligence is increasingly mainstream and central, what would it be?"

Choi said: "We have to address the alignment of AI with human values, with particular emphasis on diversity; I think that is one of the really key challenges we face, along with, more broadly, challenges like robustness, generalization, and explainability."

George gave advice from the perspective of research direction: "First decide whether you want to engage in large-scale research or basic research, because they have different trajectories."

Clune: "AGI is coming. So for researchers developing AI, I encourage you to work on technologies based on engineering, algorithms, meta-learning, end-to-end learning, and so on, because these are the ones most likely to be absorbed into the AGIs we are creating. Perhaps most important for non-AI researchers is the question of governance: what are the rules as we develop AGI? Who decides those rules? And how do we get researchers around the world to follow them?"

At the end of the evening, Marcus recalled his remark from the previous debate: "It takes a village to raise artificial intelligence."

"I think that's even more true now," he said. "AI used to be a child, but now it's a bit like a rambunctious teenager who has not yet fully developed mature judgment."

He concluded: "This moment is both exciting and dangerous."


Source: 51cto.com