News, April 5th: Over the past six months, powerful new artificial intelligence (AI) tools have been proliferating at an alarming rate. From chatbots that converse like real people, to coding bots that write software automatically, to image generators that conjure pictures out of thin air, so-called generative AI (AIGC) has suddenly become ubiquitous and increasingly powerful.
But last week, a backlash against this AI craze began to emerge. Thousands of technology experts and academics, led by Tesla and Twitter CEO Elon Musk, signed an open letter warning of "serious risks to humanity" and calling for a six-month moratorium on the development of AI language models.
At the same time, an AI research non-profit organization filed a complaint asking the U.S. Federal Trade Commission (FTC) to investigate OpenAI, the company that created ChatGPT, and stop further commercial release of its GPT-4 software.
Italy's data-protection regulator also took action, blocking ChatGPT outright on the grounds of data privacy violations.
Perhaps it is understandable that some people have called for a moratorium or slowdown on AI research. AI applications that seemed incredible or even unfathomable just a few years ago are now rapidly infiltrating social media, schools, workplaces and even politics. Faced with this dizzying change, some people have issued pessimistic predictions, believing that AI may eventually lead to the demise of human civilization.
The good news is that the hype and fear about omnipotent AI may be exaggerated. Although Google's Bard and Microsoft's Bing are impressive, they are still far from becoming "Skynet".
The bad news is that people's concerns about the rapid evolution of AI have already become a reality. This is not because AI will become smarter than humans, but because humans are already using AI to suppress, exploit, and deceive each other in ways that existing institutions are not prepared for. Furthermore, the more powerful people perceive AI to be, the more likely they and businesses are to delegate to it tasks it is not capable of handling.
In addition to those pessimistic doomsday predictions, we can get a preliminary understanding of the impact of AI on the foreseeable future from two reports released last week. The first report was released by the US investment bank Goldman Sachs and mainly assessed the impact of AI on the economy and labor market; the second report was released by Europol and focused on the possible criminal abuse of AI.
From an economic perspective, the latest AI trends focus on automating tasks that once could only be done by humans. Like power looms, mechanized assembly lines, and ATMs, AIGCs promise to complete certain types of work cheaper and more efficiently than humans can.
But cheaper and more efficient doesn’t always mean better, as anyone who’s dealt with a grocery store self-checkout machine, automated phone answering system or customer service chatbot can attest. Unlike previous waves of automation, AIGC is able to imitate humans and even impersonate humans in some cases. This could both lead to widespread deception and trick employers into thinking AI can replace human workers, even if this is not the case.
Goldman Sachs' research analysis estimates that AIGC will change about 300 million jobs around the world and cause tens of millions of people to lose theirs, though it will also drive significant economic growth. Goldman Sachs' estimates may not be accurate, however; the bank has a history of forecasting errors. In 2016, it predicted that virtual reality headsets could become as ubiquitous as smartphones.
The most interesting part of Goldman Sachs' AI analysis is its industry-by-industry breakdown of which jobs may be augmented by language models and which may be replaced outright. Goldman Sachs researchers ranked white-collar tasks on a difficulty scale of 1 to 7, judging that tasks up to level 4 could be automated: "reviewing forms for completeness" sits at level 1, while "ruling on a complex motion in court" is level 6. The conclusion is that administrative support and paralegal jobs are most likely to be replaced by AI, while occupations such as management and software development will become more productive.
The report optimistically predicts that this generation of AI could eventually increase global GDP by 7% as businesses get more out of employees with AI skills. But Goldman Sachs also predicts that, in the process, about 7% of Americans will find their jobs eliminated, and many more will have to learn the technology to stay employed. In other words, even if AIGC's overall impact is positive, the result may be large-scale job losses, with humans in offices and daily life gradually replaced by machines.
At the same time, many companies are already eager to take shortcuts and automate tasks that AI cannot handle. For example, the technology website CNET used AI to automatically generate financial articles that turned out to be full of errors. When AI goes wrong, already marginalized groups may be disproportionately affected. Despite the excitement around ChatGPT and its ilk, today's developers of large language models have yet to address the problem of biased datasets, which has already embedded racial bias into AI applications such as facial recognition and criminal risk assessment algorithms. Just last week, a Black man was once again wrongfully jailed because of a facial recognition mismatch.
What is even more worrying is that AIGC may be used to cause intentional harm in some cases. The Europol report details how AIGC is used to help people commit crimes, such as fraud and cyberattacks.
For example, chatbots can generate specific styles of language and even imitate certain people's voices, which could make them a powerful tool for phishing scams. The advantages of language models in writing software scripts may democratize the generation of malicious code. They provide personalized, contextual, step-by-step advice and can be a go-to guide for criminals looking to break into a home, blackmail someone, or build a pipe bomb. We’ve already seen how synthetic images can spread false narratives on social media, reigniting concerns that deepfakes could distort campaigns.
It's worth noting that what makes language models vulnerable to abuse is not their broad intelligence but their fundamental gaps in understanding. Current leading chatbots are trained to refuse when they detect attempts to use them for nefarious purposes. But as Europol points out, "Safeguards to prevent ChatGPT from serving potentially malicious code are only effective if the model understands what it is doing." As the wealth of documented jailbreak tricks and vulnerabilities demonstrates, self-awareness remains one of the technology's weaknesses.
Given all of these risks, you don't have to believe in doomsday scenarios to see the value of slowing AIGC's pace of development and giving society more time to adapt. OpenAI itself was founded as a nonprofit on the premise that AI could be built in a more responsible way, free from the pressure of hitting quarterly revenue targets.
But OpenAI is now the leader in a close race, the tech giants are laying off their AI ethicists, and the horse may have left the stable anyway. As academic AI experts Sayash Kapoor and Arvind Narayanan point out, the main driver of innovation in language models right now is not the push toward ever-larger models but the integration of the models we already have into various applications and tools. They argue that regulators should approach AI tools from the perspective of product safety and consumer protection, rather than trying to contain AI the way nuclear weapons are contained.
Perhaps the most important thing in the short term is for technology experts, business leaders, and regulators to set aside the panic and hype and develop a deeper understanding of AIGC's strengths and weaknesses, so they can be more cautious in adopting it. Whatever happens, AI's impact will be disruptive. But overestimating its capabilities will make that impact more harmful, not less.