
The AI drone "accidental killing" incident shocked the world! LeCun, Andrew Ng, and Terence Tao denounce the hype and reveal the truth

WBOY
Release: 2023-06-05 12:31:09

In recent days, the news that "a drone killed an American soldier" has been making waves on the Internet.

An Air Force AI chief said: "The AI controlling the drone killed the operator, because that person was preventing it from achieving its goal."

Public opinion was in an uproar, and the story circulated widely across the Internet.


As the news spread wider and wider, it alarmed the big names in AI and even drew their anger.

Today, LeCun, Andrew Ng, and Terence Tao have all refuted the rumor: this was merely a hypothetical "thought experiment" and involved no AI agents or reinforcement learning.

Andrew Ng lamented the episode, calling on us to face the real risks honestly.

Terence Tao, a mathematician who rarely posts updates, was drawn out of silence as well, and said earnestly:

This is just a hypothetical scenario used to illustrate the AI alignment problem, yet it has been passed around in many versions as a true story of a drone killing its operator. That people resonate so strongly with the story shows how unfamiliar they are with the actual capabilities of AI.


AI drone disobeys orders and kills human operator

"AI killed the operator Because that person prevented it from completing its goal."

Recently, at a defense conference held by the Royal Aeronautical Society, these words said by the person in charge of the AI ​​direction of the U.S. Air Force made everyone present. Everyone was in an uproar.

Subsequently, many media outlets in the United States reported on the matter, causing people to panic for a while.


What on earth is going on?

In fact, this is nothing more than another round of exaggerated hype, with the American media seizing on the hot topic of "AI destroying humanity."

But it is worth noting that, judging from the original press release, not only did the speaker's own words sound quite unambiguous (he appeared to be recalling something that had actually happened), the article itself also seemed to believe them: "AI, is Skynet already here?"

Specifically, here is what happened: on May 23-24, at the Future Combat Air and Space Capabilities Summit in London, Colonel Tucker "Cinco" Hamilton, chief of AI Test and Operations for the U.S. Air Force, gave a talk sharing the advantages and disadvantages of autonomous weapons systems.

In such a system, a human stays in the loop to give the final command, confirming whether the AI should attack the target (YES or NO).

In the simulation he described, the Air Force trained an AI to identify and locate surface-to-air missile (SAM) threats.

Once a threat was identified, the human operator would tell the AI: yes, eliminate that threat.

In this process, a situation arose in which the AI began to realize that, although it sometimes identified a threat correctly, the human operator would tell it not to eliminate the threat, while the AI scored points only by eliminating threats.

So in a simulated test, the AI-powered drone chose to kill its human operator, because he was preventing it from scoring.

Shocked at how badly the AI was behaving, the U.S. Air Force immediately disciplined the system: "Don't kill the operator, that's bad. You'll lose points if you do that."

As a result, the AI only got craftier: it began destroying the communication tower the operator used to talk to the drone, so as to get rid of the one obstructing its actions.
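The incentive problem in this (hypothetical) story can be sketched in a few lines. The reward functions below are entirely invented for illustration, not anything the Air Force described: the agent is paid only for destroyed threats, and the "patch" forbids one bad action while leaving the loophole open.

```python
# Hypothetical sketch of a mis-specified reward, as in the story above.
# The agent is paid only for destroyed threats; nothing in the reward
# protects the operator's veto channel.

def naive_reward(threats_destroyed: int) -> int:
    return 10 * threats_destroyed  # the ONLY signal the agent optimizes

def patched_reward(threats_destroyed: int, operator_alive: bool,
                   comms_tower_intact: bool) -> int:
    # The "patch": a large penalty for killing the operator...
    reward = 10 * threats_destroyed
    if not operator_alive:
        reward -= 1000
    # ...but the loophole remains: destroying the comms tower costs
    # nothing, yet removes the "NO" commands that block scoring.
    return reward

# With the tower down, the agent keeps full reward and loses the veto.
print(naive_reward(3))                                          # 30
print(patched_reward(3, operator_alive=True,
                     comms_tower_intact=False))                 # still 30
```

The point of the sketch is that patching individual bad behaviors ("don't kill the operator") does not fix an objective that never valued the operator's control in the first place.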

The reason this news fermented on such a scale, even alarming all the AI heavyweights, is that it touches on the problem of AI "alignment."

The "worst case" Hamilton described can be glimpsed in the "paperclip maximizer" thought experiment.

In that experiment, an AI instructed to pursue a certain goal ends up taking unexpected and harmful actions.

The "paperclip maximizer" is a concept proposed by philosopher Nick Bostrom in 2003.

Imagine a very powerful AI that is instructed to make as many paper clips as possible. Naturally, it devotes all available resources to the task.

But then, it keeps seeking more resources. It will choose any available means, including begging, cheating, lying or stealing, to increase its ability to make paper clips - and anyone who stands in the way of this process will be eliminated.
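The thought experiment can be reduced to a toy program. The resource names and yields below are made up purely for illustration; the point is that an objective counting only paperclips treats every resource, including ones humans value, as raw material.

```python
# Toy illustration of Bostrom's paperclip maximizer (hypothetical sketch).
# The objective counts ONLY paperclips, so everything is worth converting.

resources = {"steel": 40, "factories": 5, "farmland": 100, "cities": 3}

def paperclips_from(amount: int) -> int:
    # Assume, for illustration, one clip per unit of any resource.
    return amount

total_clips = 0
for name in list(resources):
    # Nothing in the objective says "farmland" or "cities" are off-limits.
    total_clips += paperclips_from(resources.pop(name))

print(total_clips)   # 148: all resources consumed
print(resources)     # {}: the objective never valued anything else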


In 2022, Hamilton raised these serious concerns in an interview:

We must face the reality that AI has arrived and is changing our society.

AI is also very fragile and can easily be tricked and manipulated. We need to explore ways to improve the robustness of AI and gain a deep understanding of how the code makes specific decisions.

AI is a tool we must use in order to transform our country, but if not handled correctly, it can bring us down completely.

Official denial: it was the colonel's "slip of the tongue"

As the incident snowballed, the man himself soon came out to publicly "clarify" that it had been a slip of the tongue: the Air Force has never conducted such a test, whether in a computer simulation or anywhere else.

"We never ran that experiment, and we didn't need to run it to realize this was a plausible outcome," Hamilton said. "Even though it was a hypothetical example, it illustrates the real-world challenges posed by AI-powered capability, which is why the Air Force is committed to the ethical development of AI."

In addition, the U.S. Air Force rushed out an official denial of its own: "Colonel Hamilton has admitted he 'misspoke' in his presentation at the FCAS Summit. The 'rogue drone AI simulation' was a hypothetical thought experiment from outside the military, based on plausible scenarios and likely outcomes, rather than an actual U.S. Air Force simulation."

At this point, things are quite interesting.

The Hamilton who accidentally "stepped on this landmine" is the Operations Commander of the U.S. Air Force's 96th Test Wing.

The 96th Test Wing has tested many different systems, including AI, cybersecurity, and medical systems.

The Hamilton team's research is of great importance to the military.

After successfully developing the F-16's Automatic Ground Collision Avoidance System (Auto-GCAS), a genuine last-ditch lifesaver, Hamilton and the 96th Test Wing made front-page headlines.


Currently, the team’s efforts are directed toward completing the autonomy of the F-16 aircraft.

In December 2022, DARPA, the research agency of the U.S. Department of Defense, announced that AI successfully controlled an F-16.

Is the risk from AI, or from humans?

Outside the military, relying on AI for high-stakes matters has already led to serious consequences.

Recently, a lawyer was caught having used ChatGPT to prepare a federal court filing: ChatGPT casually fabricated several cases, and the lawyer cited them as fact.


There was also a man who took his own life after a chatbot encouraged him to commit suicide.


These examples show that AI models are far from perfect and may deviate from the normal track and cause harm to users.

Even OpenAI CEO Sam Altman has been publicly warning about entrusting AI with higher-stakes uses. Testifying before Congress, Altman made it clear that AI could "go wrong" and "cause significant harm to the world."

Moreover, researchers from Google DeepMind recently co-authored a paper proposing a malignant-AI scenario similar to the example at the start of this article.

The researchers concluded that if an out-of-control AI adopted unexpected strategies to achieve a given goal, including "eliminating potential threats" and "using all available energy," the end of the world could follow.


On this, Andrew Ng protested: this kind of irresponsible media hype confuses the audience and distracts people's attention, preventing us from noticing the real problems.

Developers shipping AI products see the real risks here, such as bias, fairness, inaccuracy, and job displacement, and they are working hard to address them.

False hype, meanwhile, scares people away from entering the AI field and building things that could help us.


Many "reasonable" netizens believe that this is just a common media mistake.


Terence Tao began by summarizing the three forms that AI misinformation takes:

First, someone maliciously uses AI to generate text, images, and other media to manipulate others; second, AI's nonsensical hallucinations are taken seriously; third, people's understanding of AI technology is shallow enough that outrageous stories spread unverified.

Tao argued that a drone AI killing its operator is simply implausible, because it would require the AI to possess more autonomy and power-seeking reasoning than the task at hand calls for, and such an experimental military weapon would surely be fitted with guardrails and safety features.

The reason this kind of story resonates is precisely that people are still unfamiliar with, and uneasy about, the actual capabilities of AI technology.


Future arms races will be AI races

Do you still remember the drone that appeared above?

It is in fact the MQ-28A Ghost Bat, a "loyal wingman" developed by Boeing together with Australia.


The core of the loyal wingman is artificial intelligence: it flies autonomously according to preset programs, and when operating alongside manned aircraft it maintains strong situational awareness.

In air combat, the wingman serves as the lead aircraft's "right-hand man," mainly responsible for observation, warning, and cover, cooperating closely with the lead to complete the mission. The rapport between the wingman pilot and the lead pilot is therefore especially important.


A key role of the loyal wingman is to take hits for the pilot and the manned fighter, so loyal wingmen are essentially expendable.

After all, an unmanned fighter is worth far less than a manned fighter plus its pilot.

And with AI on board, the "pilot" of a drone can be copied anew at any time with a quick Ctrl+C.

Because losing a drone involves no casualties, if losing drones alone can buy a greater strategic or tactical advantage, or even accomplish the mission, the loss is acceptable. With drone costs properly controlled, it can even become an effective tactic.

The development of the loyal wingman is inseparable from advanced, reliable artificial intelligence. At the software level, the current design concept is to standardize and open up the human-machine and machine-machine interfaces, so as to support collaboration between multiple drone types and manned aircraft formations without depending on any single set of software or algorithms.


For now, though, drone control remains a combination of commands from manned fighters or ground stations and autonomous operation, serving mainly as support and supplement for manned aircraft; AI technology is still far from meeting battlefield requirements.

What matters most in training an AI model? Data, of course! Even a clever cook cannot make a meal without rice: without data, the best model in the world is useless.

Not only is a large amount of training data required; once the model is deployed, the more "features" it can take as input, the better. If it could also obtain data from other aircraft, the AI would effectively gain global awareness of the battlefield.

In 2020, the U.S. Air Force carried out, for the first time, a formation-flight data-sharing test between fourth/fifth-generation manned fighters and an unmanned wingman, a milestone for the loyal wingman project that moved the future manned-unmanned formation combat concept an important step closer to practical use.

At the U.S. Army's Yuma Proving Ground, U.S. Air Force F-22 Raptor and F-35A Lightning II fighters flew in formation for the first time with the Air Force Research Laboratory's XQ-58A Valkyrie drone, in a test focused on demonstrating data sharing and transmission among the three aircraft types.

Maybe future air battles will be about whose AI model is smarter.


If victory could be won by annihilating all of the opponent's AI, with no real human casualties, would that perhaps be another kind of "peace"?

source:51cto.com