Is a facial recognition system that claims 99% accuracy really unbreakable? In fact, such systems can often be fooled by small changes to a face photo that do not affect human visual judgment: for example, the girl next door and a male celebrity can be judged to be the same person. This is an adversarial attack. The goal of an adversarial attack is to find adversarial samples that look natural yet confuse the neural network; in essence, finding adversarial samples means finding the network's vulnerabilities.
Recently, a research team from Dongfang University of Technology proposed the generalized manifold adversarial attack (GMAA) paradigm, which promotes the traditional "point" attack mode to a "surface" attack mode. This greatly improves the generalization ability of the adversarial attack model and opens a new direction for adversarial attack research.
This research improves on previous work in two respects: the target domain and the adversarial domain. On the target domain side, the study finds more powerful, highly generalizable adversarial examples by attacking the set of states of the target identity. On the adversarial domain side, previous work looked for discrete adversarial samples, that is, a handful of individual "loopholes" (points) in the system, while this research looks for a continuous adversarial manifold, that is, an entire fragile "area" (surface) of the neural network. In addition, the study introduces domain knowledge of expression editing and proposes a new paradigm instantiated on the expression state space: by continuously sampling the generated adversarial manifold, one can obtain highly generalizable adversarial samples with continuously changing expressions. Compared with approaches such as makeup, lighting, or added perturbations, the expression state space is more universal and natural, and is not affected by gender or lighting. The paper has been accepted to CVPR 2023.
Code link: https://github.com/tokaka22/GMAA

Method
In the target domain, previous work designed adversarial samples against one specific photo of target identity A. However, as shown in Figure 2, when an adversarial sample generated this way is used to attack another photo of A, the attack effect drops significantly. Against such attacks, regularly changing the photos in the face recognition database is a natural and effective defense. The GMAA proposed in this study, by contrast, does not train only on a single sample of the target identity but looks for adversarial samples that can attack the whole set of the target identity's states, as sketched below. Such highly generalized adversarial samples retain better attack performance against an updated face recognition library. These more powerful adversarial examples also correspond to weaker regions of the neural network and are worth exploring in depth.
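To make the idea of attacking a state set rather than a single photo concrete, here is a minimal PyTorch sketch. The names (`face_encoder`, `generalized_target_loss`, the batching) are illustrative assumptions for exposition, not the paper's implementation:

```python
import torch.nn.functional as F

def generalized_target_loss(face_encoder, adv_image, target_photos):
    """Hypothetical sketch: push the adversarial image toward *every* state
    of the target identity, so the attack survives gallery-photo updates."""
    adv_emb = F.normalize(face_encoder(adv_image), dim=-1)       # (1, d)
    tgt_embs = F.normalize(face_encoder(target_photos), dim=-1)  # (N, d)
    # Maximize cosine similarity to all N target states at once,
    # i.e. minimize the negative mean similarity.
    return -(adv_emb @ tgt_embs.T).mean()
```

Averaging the similarity over several photos of the target identity steers the adversarial image toward a region that matches the identity itself rather than one particular shot.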
In previous adversarial research, people usually looked for one or several discrete adversarial samples, which is equivalent to finding one or several "points" in high-dimensional space where the neural network is vulnerable. This study, however, argues that the network may be vulnerable over an entire "surface", and that one should therefore find all the adversarial examples on that surface. The goal of this research is thus to find adversarial manifolds in high-dimensional space. To sum up, GMAA is a new attack paradigm that uses adversarial manifolds to attack the state set of the target identity.
Figure 1 illustrates the core idea of the paper. Specifically, the study introduces the Facial Action Coding System (FACS) as domain knowledge to instantiate the proposed attack paradigm. FACS is a system for encoding facial expressions: it divides the face into different muscle units, each element of the AU (Action Unit) vector corresponds to one muscle unit, and the magnitude of the element represents the activity of that unit, thereby encoding the expression state. For example, in the figure below, the first element of the AU vector, AU1, represents the degree to which the inner eyebrow is raised (figure from "Anatomy of Facial Expressions").
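As a toy illustration of this encoding, consider the snippet below. The 17-dimensional length and the index-to-unit mapping are assumptions made for the sketch; only the AU semantics themselves (AU1 inner brow raiser, AU12 lip corner puller, and so on) come from FACS:

```python
import numpy as np

# Illustrative index -> Action Unit mapping (not the paper's exact layout).
AU_NAMES = {0: "AU1 inner brow raiser", 1: "AU2 outer brow raiser",
            2: "AU4 brow lowerer", 3: "AU6 cheek raiser",
            4: "AU12 lip corner puller"}

au = np.zeros(17, dtype=np.float32)  # neutral expression: all units at rest
au[0] = 0.8                          # strongly raise the inner brow (AU1)
au[4] = 0.3                          # slight smile (AU12)
print({AU_NAMES[i]: float(au[i]) for i in (0, 4)})
```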
For the target domain, the research attacks a target set containing multiple expression states to achieve better attack performance on unseen target photos. For the adversarial domain, the research establishes an adversarial manifold in one-to-one correspondence with the AU space: adversarial samples can be drawn from the manifold by changing the AU value, and continuously varying the AU value generates adversarial samples with continuously changing expressions (see the sketch below). It is worth noting that the study instantiates the GMAA paradigm with the expression state space because expression is the most common state in human facial activity, and the expression state space is relatively stable and unaffected by race or gender (whereas lighting can change skin color, and makeup can be gender-dependent). In fact, as long as another suitable state space can be found, the paradigm can be generalized to other adversarial attack tasks.
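The following sketch shows what continuous sampling on such a manifold could look like, assuming a trained generator `G(source_img, au)` that realizes the one-to-one correspondence with AU space described above; `G` and its arguments are placeholders, not the GMAA codebase:

```python
import torch

def sample_manifold(G, source_img, au_start, au_end, steps=20):
    """Walk a straight line in AU space; each point yields one adversarial
    sample, so the sequence shows continuously changing expressions."""
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        au = (1 - t) * au_start + t * au_end  # interpolate the expression code
        frames.append(G(source_img, au.unsqueeze(0)))
    return torch.cat(frames, dim=0)           # (steps, C, H, W)
```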
Model results
The visual results are shown in the animation below. Each frame of the animation is an adversarial sample obtained by sampling on the adversarial manifold; continuous sampling yields a series of adversarial examples with continuously changing expressions (left). The red value in each frame is the similarity, under the face recognition system, between the current adversarial sample and the target sample (right).

Table 1 reports the black-box attack success rates against four face recognition models on two datasets. Here MAA is a reduced version of GMAA: MAA only extends the point attack mode to a manifold attack in the adversarial domain, while in the target domain it still attacks a single target photo. Attacking the state set of the target is a general experimental setting, so the article adds this setting to three methods, including MAA, in Table 2 (the bold entries are the results with this setting, and a "G" is prefixed to a method's name to distinguish it), which verifies that expanding the target domain improves the generalization of adversarial samples.

Figure 4 shows the results of attacking two commercial face recognition system APIs. The study also explores the impact of different expressions on attack performance, as well as the impact of the number of samples in the state set on attack generalization. Figure 6 compares the visual results of different methods; the MAA method sampled 20 adversarial samples on the adversarial manifold, and the results show that its visual effect is more natural.

Of course, not all datasets contain pictures of different states. In that case, how can the data in the target domain be expanded? The study proposes a feasible solution: use AU vectors and an expression editing model to synthesize a set of target states. The study also reports the results of attacking this synthesized target state set, which show improved generalization performance.
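As a hedged sketch of what a black-box success-rate evaluation like the one in Table 1 could look like: an attack counts as successful when a frozen face recognition model matches the adversarial sample to an unseen photo of the target identity. The cosine threshold of 0.3 and all names here are assumptions, not the paper's protocol:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def attack_success_rate(face_model, adv_images, unseen_targets, thresh=0.3):
    a = F.normalize(face_model(adv_images), dim=-1)      # (N, d) embeddings
    t = F.normalize(face_model(unseen_targets), dim=-1)  # (N, d) embeddings
    cos = (a * t).sum(dim=-1)                            # paired similarities
    return (cos > thresh).float().mean().item()          # fraction matched
```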
Principles and methods

The core of the model comprises a WGAN-GP-based generation module, an expression supervision module, a transferability enhancement module, and a generalized attack module. The generalized attack module implements the attack on the aggregated set of target states; the transferability enhancement module is drawn from previous work and, for fair comparison, is also added to all baseline models. The expression supervision module consists of four trained expression editors and achieves expression transformation of the adversarial samples through global structure supervision and local detail supervision.

For the expression supervision module, the paper's supplementary material provides ablation experiments verifying that local detail supervision reduces artifacts and blurring in the generated images, effectively improves the visual quality of adversarial samples, and also improves the accuracy of their synthesized expressions. In addition, the paper defines the concepts of continuous adversarial manifolds and semantically continuous adversarial manifolds, and proves in detail that the generated adversarial manifold is homeomorphic to the AU vector space.
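Since the generation module is WGAN-GP-based, the standard gradient penalty (Gulrajani et al., 2017) gives a feel for what such a module enforces. This is the textbook formulation rather than code from the GMAA repository; `critic`, `real`, and `fake` are placeholders:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on
    random interpolates between real and generated images."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(outputs=scores, inputs=x_hat,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

This penalty stabilizes generator training, which helps keep the generated adversarial samples sharp and natural-looking.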
Summary
To sum up, this research proposes a new attack paradigm, GMAA, which simultaneously expands the target domain and the adversarial domain, improving attack performance. For the target domain, GMAA improves generalization to the target identity by attacking a collection of its states instead of a single image. For the adversarial domain, GMAA extends discrete points to semantically continuous adversarial manifolds ("point to surface"). The study instantiates the GMAA paradigm by introducing domain knowledge of expression editing, and extensive comparative experiments show that GMAA achieves better attack performance and more natural visual quality than competing models.