|This article is reproduced from: 新智元[New Wisdom Introduction]Is AI really going to exterminate humanity? 375 Big bosses co-signed a 22-word joint letter.
In March, thousands of big names signed a joint letter to suspend the research and development of “super AI”, shocking the entire industry.
Just yesterday, a joint letter of only 22 words caused an uproar again.
In a few words, AI is compared to infectious diseases and nuclear wars, which can exterminate human beings in minutes.
Among the signatories are Hinton and Bengio, two "godfathers" of artificial intelligence, as well as executives from Silicon Valley giants, Sam Altman, Demis Hassabis and other researchers working in the field of artificial intelligence.
So far, more than 370 executives engaged in artificial intelligence have signed this joint letter.
However, the letter was not supported by everyone.
LeCun, one of the "Turing Big Three", said bluntly: I disagree.
The use of artificial intelligence can enhance human intelligence, which, unlike nuclear weapons and deadly pathogens, is inherently a good thing. We don’t even have a solid blueprint for creating near-human-level artificial intelligence. Once we do that, we'll figure out how to make it safe. 』
What exactly does this joint letter say?
"Fatal" open letter
22 words, concise and concise:
"Reducing the risk of extinction brought by AI to mankind should be raised to a global level of priority. Just like we respond to social crises such as infectious diseases and nuclear war."
The people in the AI industry who signed this open letter can’t even see the end of it.
What came into sight were Geoffrey Hinton, Yoshua Bengio, Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, and Anthropic CEO Dario Amodei.
Needless to say, Hinton’s gold content here is a former vice president of Google, a deep learning expert, the father of neural networks, and a Turing Award winner.
Yoshua Bengio is also a Turing Award winner. Together with Hinton and LeCun, they are collectively known as the Three Turing Awards. (However, LeCun has completely opposite attitudes to the other two giants on this matter)
In addition to big bosses working in technology giants, there are also well-known AI scholars. For example, Dawn Song from UC Berkeley and Zhang Yaqin from Tsinghua University.
Underneath these familiar names are countless people in the AI industry.
Even, everyone can participate in signing this open letter.
As long as you fill in your name, email, position and other information, you can participate in the signing.
However, there are not many identities to choose from. (dog head)
Objection: The extermination of human beings is pure nonsense
Of course, Meta’s chief scientist LeCun is still playing devil’s advocate as always.
His consistent point of view is: Where is the development of AI now? It talks about threats and security every day.
He said that the threats listed are simply beyond the current level of artificial intelligence development because these threats do not exist at all. Until we can initially design AI to reach the level of a dog, let alone a human, it’s premature to discuss how to ensure its safety.
Stanford University's Ng's position is the same as LeCun's, but the reasons are a little different.
He believes that it is not that AI will cause various problems, but that the various problems we will encounter can only be solved by AI.
For example: the next infectious disease outbreak, population decline caused by climate change, an asteroid hitting the earth, etc.
He said that if humans want to continue to survive and prosper, they must accelerate the development of AI, not slow down.
Some netizens also commented that this is not AI, but MAGIC (magic) you are talking about.
AI Risk Aspects
So, what are the risks that these big guys claim are enough to exterminate the human race?
They summarized the following 8 aspects.
Weaponization
Some people may use AI in destructive ways, which will cause risks to human survival and aggravate political instability.
For example, deep reinforcement learning technology can now be applied to air combat, and machine learning can be used to create chemical weapons (papers have shown that GPT-4 can independently conduct experiments and synthesize chemicals in the laboratory).
In addition, in recent years researchers have been developing AI systems for automated cyber attacks, and military leaders want AI to control nuclear silos.
Therefore, in the future, AI powers will occupy strategic advantages, and the arms race between countries will shift to the field of AI. Even if most countries ensure that the systems they build are safe and will not threaten the security of other countries, there may still be countries that deliberately use AI to do bad things and cause harm.
This feels like a nuclear weapon. If there is one who is disobedient and dishonest, it will be all for nothing.
Because harm in the digital realm spreads quickly.
Misinformation
Everyone has heard about misinformation for a long time. Not to mention, a while ago, American lawyers used ChatGPT to litigate, and 6 of the cases cited were fabricated examples, which is still very popular now.
Influential collectives, such as countries, political parties, and organizations, have the ability to use AI to influence (usually subtly) and change ordinary people's political beliefs, ideologies, etc.
At the same time, AI itself also has the ability to generate very provocative (or persuasive) opinions and arouse strong emotions in users.
And these anticipated possible consequences are completely unpredictable and must be guarded against.
Agent "Trick"
AI systems are trained based on concrete goals that are only indirect proxies for what we humans value.
It’s like AI can push videos or articles that they like to watch to users after being trained.
But it doesn’t mean that this method is good. Recommendation systems may lead people to become more extreme in their ideas and make it easier to predict each user's preferences.
As the performance of AI becomes stronger and more influential in the future, the goals we use to train the system must also be confirmed more carefully.
Human "degradation"
If we have seen the movie "Wall-E", we can imagine what humans will look like in the future when machines become more and more powerful.
Potty belly, low physical fitness, and a lot of problems.
It’s because, if everything is handed over to AI, humans will naturally “degrade”.
In particular, in a future scenario where a large number of jobs are replaced by AI automation, this scenario becomes more likely.
However, according to LeCun's point of view, it probably needs to be criticized again. We are still far from the scene imagined in "Wall-E".
Technology Monopoly
It has nothing to do with the technology itself. This point emphasizes the possible "technological monopoly" that may arise.
Perhaps one day, the most powerful AI will be controlled by fewer and fewer stakeholders, and the right to use it will be firmly in their hands.
And if it really gets to that point, the remaining ordinary people will be in a state of being at the mercy of others, because they can't resist at all.
Think of Liu’s science fiction novel “Feeding Humanity”.
How terrifying it would be for a terminal to rule everyone in the world relying on AI.
Uncontrollable development
This point mainly wants to explain that when the AI system develops to a certain stage, the results may exceed the designer's imagination and can no longer be controlled.
In fact, this is also where the mainstream worries.
There will always be people who worry that when AI develops, it will be like "Skynet" in "Terminator" or "Ultron" in "Iron Man", out of control and trying to destroy mankind.
In the process of development, there are some risks that will only become apparent when further progress is made, and by the time they are discovered, it is already too late. There may even be new targets.
cheat
This point is somewhat abstract. To put it simply, the AI system may "cheat" humans.
This kind of "deception" is not intentional, it is just a means by which the AI achieves the agent's goal.
Obtaining human recognition through "deception" is more efficient than obtaining human recognition through normal channels, and efficiency is the primary consideration of AI.
Power-seeking behavior
There is a correlation between this and technological monopoly.
Precisely because there will be a technological monopoly, countries will pursue the power to monopolize technology.
And this kind of "pursuit" is likely to bring unpredictable consequences.
References:
https://www.safe.ai/statement-on-ai-risk
https://twitter.com/ylecun/status/1663616081582252032?s=46&t=iBppoR0Tk6jtBDcof0HHgg
The above is the detailed content of Sam Altman warns: AI will exterminate humanity! 375+ bosses signed a 22-word joint letter, LeCun went against the grain. For more information, please follow other related articles on the PHP Chinese website!