The Future of Life Institute has released an open letter calling for a six-month moratorium on some forms of artificial intelligence research. Citing "serious risks to society and humanity," the group asked AI labs to suspend research on AI systems more powerful than GPT-4 until more "safety guardrails" can be put in place around them.
In the open letter, published March 22, the Future of Life Institute wrote: "AI systems with human-competitive intelligence can pose profound risks to society and humanity... Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening."
The institute argues that without an AI governance framework, such as the Asilomar AI Principles (a set of 23 guidelines developed by the institute, often described as an expanded version of Asimov's Three Laws of Robotics), we lack the oversight mechanisms needed to ensure that AI develops in a planned and controlled manner. That, it says, is the situation we face today.
The letter continues: "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."
In the absence of a voluntary moratorium by AI researchers, the Future of Life Institute urges governments to take action to prevent harm from continued research into large-scale AI models.
Top artificial intelligence researchers are divided over whether to pause research. More than 5,000 people have signed in support of the open letter, including Turing Award winner Yoshua Bengio, OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak.
However, not everyone is convinced that pausing research on AI systems more powerful than GPT-4 is in our best interests.
"I did not sign this letter," tweeted Yann LeCun, Meta's chief AI scientist, who shared the 2018 Turing Award with Bengio and Geoffrey Hinton. "I disagree with its premise." The current AI boom grew out of earlier research on neural networks, when deep neural networks became a major focus of AI researchers around the world.
With the publication of Google's Transformer paper in 2017, AI research shifted into overdrive. Researchers soon noticed unexpected emergent abilities in large language models, such as learning arithmetic, chain-of-thought reasoning, and instruction following.
The public got a taste of what these LLMs can do in late November 2022, when OpenAI released ChatGPT to the world. Since then, the tech community has been racing to embed LLMs into everything it builds, and the "arms race" to construct bigger, more powerful models has gained additional momentum, as demonstrated by the release of GPT-4 on March 15.
While some AI experts have raised concerns about the negative impacts of LLMs, including their tendency to fabricate information, the risk of private data disclosure, and potential effects on employment, those warnings have done little to cool the overwhelming public appetite for new AI capabilities. Nvidia CEO Jensen Huang says we may be at an inflection point for artificial intelligence. Either way, the genie seems to be out of the bottle, and there is no telling where it will go next.