
Attack AI with AI? Threats and Defenses of Adversarial Machine Learning


More and more enterprise organizations are applying artificial intelligence (AI) and machine learning (ML) in their projects, and protecting those projects is becoming increasingly important. A survey jointly conducted by IBM and Morning Consult found that of more than 7,500 multinational companies surveyed, 35% are already using AI, a 13% increase over last year, and another 42% are evaluating its feasibility. However, nearly 20% of companies report that difficulty securing the data used by AI systems is slowing the pace of AI adoption.

Securing AI and ML systems poses significant challenges, some of which are not caused by the AI technology itself. For example, AI and ML systems require data, and if that data contains sensitive or private information, it becomes a target for attackers. Machine learning models are exposed to adversarial attacks in cyberspace and can become the weakest link in a defense system, endangering the security of the whole system.

What is adversarial machine learning

Adversarial machine learning is not a type of machine learning; it is the set of techniques attackers use to attack ML systems. Adversarial machine learning exploits the vulnerabilities and peculiarities of ML models to carry out attacks. For example, it can be used to make ML trading algorithms make incorrect trading decisions, make fraudulent operations harder to detect, produce incorrect operational recommendations, and manipulate reports based on sentiment analysis.

Adversarial machine learning attacks fall into four types: poisoning attacks, evasion attacks, extraction attacks, and inference attacks.

1. Poisoning attack

In a poisoning attack, the attacker manipulates the training data set, for example by intentionally biasing it so that the machine learns the wrong lesson. Suppose your home is equipped with AI-based security cameras. An attacker could walk by your house at 3 a.m. every day and let his dog run across the lawn, triggering the security system. Eventually, you turn off the alerts that fire at 3 a.m. to avoid being woken by the dog. That dog walker is effectively supplying training data that teaches the security system that whatever happens at 3 a.m. is harmless. Once the system has been trained to ignore anything that happens at 3 a.m., the attacker strikes at exactly that hour.
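
As a rough illustration, here is a minimal sketch (synthetic data, scikit-learn, all names my own) of how injecting mislabeled samples into a training set can shift a classifier toward treating malicious inputs as benign:

```python
# Minimal poisoning sketch: injecting mislabeled points into the training set
# shifts the classifier's decision boundary. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: class 0 ("benign") around (0, 0), class 1 ("alert") around (3, 3).
X_clean = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y_clean = np.array([0] * 200 + [1] * 200)

# Attacker injects points from the class-1 region but labels them as class 0
# ("events at 3 a.m. are harmless"), biasing what the model learns.
X_poison = rng.normal(3, 0.5, (80, 2))
y_poison = np.zeros(80, dtype=int)

clean_model = LogisticRegression().fit(X_clean, y_clean)
poisoned_model = LogisticRegression().fit(
    np.vstack([X_clean, X_poison]), np.concatenate([y_clean, y_poison])
)

# A genuinely suspicious event near (3, 3) is now more likely to be scored benign.
probe = np.array([[3.0, 3.0]])
print("clean model P(alert):   ", clean_model.predict_proba(probe)[0, 1])
print("poisoned model P(alert):", poisoned_model.predict_proba(probe)[0, 1])
```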

2. Evasion attack

In an evasion attack, the model has already been trained, but the attacker makes small changes to the input to defeat it. A classic example involves a stop sign: the attacker attaches a small sticker to it, and the machine interprets it as a yield sign rather than a stop sign. In the dog-walking example above, a burglar could break into your home wearing a dog suit. An evasion attack is essentially an optical illusion played on the machine.
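
A minimal sketch of the idea, using the fast gradient sign method (FGSM) on a toy PyTorch model (the model, input, and epsilon are placeholders I chose; against a real trained classifier the perturbation is tuned to stay imperceptible to humans):

```python
# Evasion sketch: FGSM perturbs an input in the direction that increases the
# loss on the true label, nudging the model toward a different prediction.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier; in practice this would be a trained model.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 10)           # original input
true_label = torch.tensor([0])   # label the attacker wants the model to abandon
epsilon = 0.1                    # perturbation budget

x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), true_label)
loss.backward()

# Step in the sign of the gradient; with a real trained model this is what
# flips the prediction while barely changing the input.
x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

print("prediction on original: ", model(x).argmax(dim=1).item())
print("prediction on perturbed:", model(x_adv).argmax(dim=1).item())
```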

3. Extraction attack

In an extraction attack, the attacker obtains a working copy of the AI system. Sometimes the model can be extracted simply by observing its inputs and outputs and probing it to see how it reacts. If you can query the model enough times, you can teach a model of your own to behave in the same way.

For example, in 2019 a vulnerability was disclosed in Proofpoint's email protection system: generated email headers carried a score indicating how likely the message was to be spam. Using these scores, an attacker could build an imitation spam-detection engine and use it to craft spam that evades detection.
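
A hedged sketch of that pattern: query a black-box scorer, record its outputs, and fit a local surrogate. The "victim" below is a stand-in function of my own, not Proofpoint's actual system; a real attacker would call the target service instead.

```python
# Extraction sketch: train a local surrogate that imitates a remote scoring model.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

def victim_spam_score(features):
    """Stand-in for the remote model: returns a spam score per input."""
    weights = np.array([0.8, -0.3, 0.5, 0.1, -0.6])
    return 1 / (1 + np.exp(-(features @ weights)))

# Attacker sends many probe inputs and records the returned scores.
X_probe = rng.normal(size=(5000, 5))
scores = victim_spam_score(X_probe)

# Fit a surrogate on the observed input/score pairs.
surrogate = DecisionTreeRegressor(max_depth=8).fit(X_probe, scores)

# The surrogate can now be probed offline to craft inputs that score as "not spam".
X_test = rng.normal(size=(1000, 5))
gap = np.mean(np.abs(surrogate.predict(X_test) - victim_spam_score(X_test)))
print(f"mean absolute gap between surrogate and victim: {gap:.3f}")
```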

If a company uses commercial AI products, attackers can also obtain a copy of the model by purchasing or using the service. For example, there are platforms attackers can use to test their malware against antivirus engines. In the dog-walking example above, an attacker could use a pair of binoculars to see what brand of security camera is installed, then buy a camera of the same brand and figure out how to bypass it.

4. Inference attack

In an inference attack, the attacker works out which data set was used to train the system and then exploits vulnerabilities or biases in that data to mount an attack. If you can figure out the training data, you can use common sense or clever tricks to exploit it. Staying with the dog-walking example, an attacker might stake out the house to get a feel for nearby passers-by and vehicles. Once the attacker notices that a dog walker passes by at 3 a.m. every day and that the security system ignores him, he can exploit that blind spot to carry out an attack.
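
One concrete form of this is membership inference: deciding whether a particular record was in the training set. Below is a minimal sketch, under my own assumptions (synthetic data, a deliberately overfit model), of the confidence gap such attacks threshold on:

```python
# Inference sketch: an overfit model is more confident on its training points,
# which leaks whether a record was part of the training set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, y_train = X[:200], y[:200]
X_out, y_out = X[200:], y[200:]          # records never seen during training

# Deliberately overfit (deep, unpruned trees) to exaggerate the leakage.
model = RandomForestClassifier(n_estimators=50, max_depth=None).fit(X_train, y_train)

def confidence(model, X, y):
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]   # confidence assigned to the true label

print(f"avg confidence on training members: {confidence(model, X_train, y_train).mean():.3f}")
print(f"avg confidence on non-members:      {confidence(model, X_out, y_out).mean():.3f}")
# The gap between these two numbers is the signal an attacker thresholds on.
```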

In the future, attackers may also turn intelligent machine learning techniques against ordinary machine learning applications, for example with generative adversarial networks (GANs). Such systems are often used to create deepfake content: photos or videos so realistic that they pass for the real thing. Attackers commonly use them for online scams, but the same principle can be used to generate undetectable malware.

In a generative adversarial network, one side is called the discriminator and the other the generator, and they are pitted against each other. For example, an antivirus AI might try to determine whether an object is malware, while a malware-generating AI tries to create malware the first system cannot catch. Through repeated rounds between the two systems, the end result can be malware that is nearly impossible to detect.
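
A minimal generator-versus-discriminator training loop on toy 2-D data shows the adversarial dynamic described above (PyTorch; the architecture, data, and hyperparameters are illustrative choices of mine, not a malware-specific system):

```python
# Toy GAN loop: the generator learns to produce samples the discriminator can
# no longer separate from "real" ones; the same loop structure underlies
# detector-vs-generator scenarios.
import torch
import torch.nn as nn

torch.manual_seed(0)

real_dist = lambda n: torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])  # "real" data

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))   # generator
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real, noise = real_dist(64), torch.randn(64, 4)
    fake = G(noise)

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label generated samples as real.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated sample mean:", G(torch.randn(1000, 4)).mean(dim=0).detach())
print("target ('real') mean:  [2.0, 2.0]")
```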

How to defend against adversarial machine learning

The prevalence of adversarial activity in cyberspace poses a severe challenge to the practical application of machine learning. To defend against adversarial machine learning attacks, security researchers have begun studying adversarial machine learning itself, with the goal of improving the robustness of ML algorithms in real-world applications and ensuring the security of the systems built on them.

Research firm Gartner recommends that enterprises with AI and ML systems to protect take targeted security measures. First, to protect the integrity of AI models, enterprises should adopt trustworthy-AI principles and run validation checks on their models. Second, to protect the integrity of AI training data, they should use data poisoning detection techniques. In addition, many traditional security measures can also be applied to AI systems: for example, solutions that protect data from unauthorized access or destruction can also keep training data sets from being tampered with.
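
As one illustration of what "data poisoning detection" can mean in practice, here is a simple outlier-filtering sketch on synthetic data. It is my own example rather than anything Gartner prescribes, and real pipelines use more robust techniques:

```python
# Simple poisoning-detection idea: flag training samples whose features are
# outliers relative to the rest of their labeled class.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

X_clean = rng.normal(0, 1, (500, 4))      # legitimate class-0 training data
X_poison = rng.normal(4, 0.3, (25, 4))    # attacker-injected samples with the same label
X_class0 = np.vstack([X_clean, X_poison])

detector = IsolationForest(contamination=0.05, random_state=0).fit(X_class0)
flags = detector.predict(X_class0)        # -1 means "outlier"

print("flagged among clean samples:   ", int((flags[:500] == -1).sum()), "/ 500")
print("flagged among injected samples:", int((flags[500:] == -1).sum()), "/ 25")
```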

MITRE, well known for its standardized ATT&CK framework of adversarial tactics and techniques, has also created an attack framework for AI systems. Originally called the Adversarial Machine Learning Threat Matrix, it is now known as the Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) and covers 12 stages of attacks on ML systems.

In addition, some vendors have begun releasing security tools to help users protect AI systems and defend against adversarial machine learning. Microsoft released Counterfit in May 2021, an open source automation tool for security testing of AI systems. Counterfit started out as a library of attack scripts written for a single AI model and later became a general automation tool for attacking multiple AI systems at scale. The tool can automate techniques from MITRE's ATLAS attack framework, and it can also be used during the AI development phase to find vulnerabilities before they reach production.

IBM also has an open source adversarial machine learning defense tool called Adversarial Robustness Toolbox, which is now a project under the Linux Foundation. The project supports all popular ML frameworks and includes 39 attack modules divided into four categories: evasion, poisoning, extraction and inference.
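
Below is a hedged sketch of how such a toolkit is typically used: wrap a model, run an evasion attack against it, and measure the accuracy drop. The model and data are toy placeholders of mine, and the exact class names and arguments should be checked against the Adversarial Robustness Toolbox documentation.

```python
# Sketch of using the Adversarial Robustness Toolbox (ART): wrap a classifier,
# generate adversarial examples with FGSM, and compare accuracy before/after.
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(20,),
    nb_classes=2,
)

# Toy training data.
x = np.random.randn(200, 20).astype(np.float32)
y = (x[:, 0] > 0).astype(int)
classifier.fit(x, y, batch_size=32, nb_epochs=10)

# Run one of ART's evasion attacks against the wrapped model.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_adv = attack.generate(x=x)

clean_acc = (classifier.predict(x).argmax(axis=1) == y).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y).mean()
print(f"accuracy on clean inputs:       {clean_acc:.2f}")
print(f"accuracy on adversarial inputs: {adv_acc:.2f}")
```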

Given the attacks machine learning may face in cyberspace defense, enterprises should also introduce machine learning attacker models as early as possible, so that they can scientifically evaluate the security properties of their systems under specific threat scenarios. At the same time, organizations should fully understand how adversarial machine learning launches evasion attacks in the testing phase, poisoning attacks in the training phase, and privacy theft across the entire machine learning lifecycle, and should design and deploy defense methods that effectively strengthen the security of machine learning models in real adversarial environments.

Reference link:

https://www.csoonline.com/article/3664748/adversarial-machine-learning-explained-how-attackers-disrupt-ai-and-ml-systems.html
