News on May 4: The pioneers who built the foundations of today's artificial intelligence technology are warning that it is potentially dangerous. But they have yet to reach a consensus on what the dangers are or what precautions to take.
Geoffrey Hinton, known as the "Godfather of Artificial Intelligence," has just announced his resignation from Google. He says he regrets some of his life's work because he fears machines will become smarter than humans and jeopardize humanity's survival.
Yoshua Bengio, a professor at the University of Montreal, is another pioneer in the field of artificial intelligence. Along with Hinton and Yann LeCun, Meta's chief artificial intelligence scientist, he received the 2018 Turing Award for breakthroughs in artificial neural networks, which are critical to today's artificial intelligence applications such as ChatGPT. On Wednesday, Bengio said he "strongly agrees" with Hinton's concerns about chatbots such as ChatGPT and related technologies, but worries that simply saying "we are doomed" will not solve the problem.
Bengio added: "The main difference is that he [Hinton] is a pessimist and I am more optimistic. I do think these dangers, both short-term and long-term, are very serious. They need to be taken seriously not only by a handful of researchers but also by governments and the public."
There are many signs that governments are listening. On Thursday, the White House summoned the CEOs of Google, Microsoft, and ChatGPT maker OpenAI for a "candid" discussion with US Vice President Kamala Harris focused on how to mitigate both the short-term and long-term risks of their technology. European lawmakers are also accelerating negotiations to pass sweeping AI regulations.
Meanwhile, some worry that hyping superintelligent machines that do not yet exist, and fantasizing about the most terrifying dangers they could cause, distracts from building practical safeguards for current artificial intelligence products, which often go unregulated.
Margaret Mitchell, the former head of Google's AI ethics team, was disturbed that Hinton did not speak out during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who studied the dangers of large language models before they were widely commercialized into products such as ChatGPT and Google Bard.
After Gebru's departure, Mitchell was also forced out of Google. Speaking of Hinton, she said: "He is fortunate to be removed from the propagation of discrimination, hateful language, the denigration of women, and nonconsensual pornography, all of which harm marginalized people in tech. He could skip over all of those things and worry about something more distant."
Of the three winners of the 2018 Turing Award, Bengio is the only one who does not work for a big tech company. For years he has voiced concerns about the near-term risks of artificial intelligence, including job-market instability, the dangers of autonomous weapons, and biased data sets.
But recently those concerns have grown more serious, prompting Bengio to join Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and other computer scientists and tech business leaders in calling for a six-month moratorium on developing artificial intelligence systems more powerful than OpenAI's latest model, GPT-4.
Bengio said on Wednesday that he believes the latest artificial intelligence language models have passed the "Turing test." The Turing test, named for the method British codebreaker and AI pioneer Alan Turing devised in 1950, is designed to gauge when artificial intelligence becomes indistinguishable from a human, at least on the surface.
Bengio said: "This is a milestone, but if we are not careful, it could have serious consequences. My main concern is how these technologies can be exploited for nefarious purposes such as launching cyberattacks and spreading disinformation. You can converse with these systems and think you are interacting with a real person. They are hard to tell apart."
Researchers dispute whether current artificial intelligence language systems can be smarter than humans, since these systems have many limitations, such as a tendency to fabricate information.
Aidan Gomez was a co-author of the seminal 2017 paper that introduced the so-called transformer technique for improving the performance of machine-learning systems, especially those that learn from passages of text. Gomez, then a 20-year-old intern at Google, remembers lying on a couch at the company's California headquarters around 3 a.m. when his team sent off the paper.
Gomez remembers a colleague telling him, "This is going to have a huge impact." Their research has since helped spur new systems that can generate prose resembling human writing.
Six years later, Gomez is now the CEO of Cohere, the artificial intelligence company he co-founded. He is enthusiastic about the potential applications of these systems but troubled by the fearmongering around the technology, which he says is disconnected from what the systems can actually do and relies on extraordinary leaps of imagination and reasoning.
"The idea that these models will somehow gain control of nuclear weapons and launch some kind of extinction-level event is unfounded, and unhelpful to the truly pragmatic policy efforts that are trying to do good," Gomez said.