
When AI encounters fraud

王林
Release: 2023-05-31 14:06:01

"Although I know that scams are common nowadays, I still can't believe that I actually encountered one." The telecom scam she experienced a few days ago still frightens reader Wu Jia (a pseudonym). In the scam, the fraudster used AI to take on the face of someone she knew well.

Wu Jia is not alone in encountering hard-to-detect AI fraud in daily life. A Beijing Business Daily reporter has noticed that telecom fraud schemes using AI technology have recently surged. Topics such as "The success rate of AI fraud is close to 100%" and "Technology company owner defrauded of 4.3 million yuan in 10 minutes" have trended on hot-search lists one after another, triggering discussion among users about the application of new technologies.


“AI face-changing” fraud

Artificial intelligence is in the spotlight again, this time in connection with telecom fraud. Wu Jia told the Beijing Business Daily reporter that on a social account she had not used for a long time, she suddenly received greetings from a high school "friend" she had not contacted in years. At first she was not suspicious, but after a brief exchange of pleasantries, the other party asked for an emergency loan of 2,000 yuan. Although a little puzzled, Wu Jia did not refuse outright because of their past relationship; instead, she asked to verify the friend's identity by voice and video.

"First she sent me a few voice messages, which sounded normal and more like her voice. Then I made a video call and she quickly connected. It was indeed her face." Wu Jia express. According to Wu Jia's recollection, due to frequent network lags, the conversation between the two was not smooth after the video call was connected. The other party only excused himself by saying that the network was not good, and also made remarks such as "You don't even recognize my face."

Wu Jia became wary because the other party did not explain in detail why she needed to borrow money, hung up hurriedly once her "face" had been confirmed, and otherwise behaved abnormally. Wu Jia then asked over the phone for the surname of a high school teacher they had both had, but the other party dodged the question and simply insisted, "I am who I am." Realizing she might be facing a scam, Wu Jia ended the video call and reached out to the classmate through other channels, only to be told that the classmate had never asked to borrow money.

Wu Jia is deeply unsettled by the scam. She said that although she knew telecom fraud took many forms, she did not expect that even video verification could fail. "I later reminded the elders in my family that they could easily be defrauded in the same situation, and stressed to them again not to transfer money through online channels casually."

Based on Wu Jia's account, it can be inferred that she encountered the kind of AI fraud that has attracted much attention recently. On May 22, topics such as "The boss of a technology company was defrauded of 4.3 million yuan in 10 minutes", "AI fraud success rate is close to 100%", and "AI fraud breaks out in many places across the country" trended online, and police in many regions released cases of telecom fraud involving AI technology.

According to information released by the Anti-Fraud Center of the Wenzhou Municipal Public Security Bureau on May 22, Mr. Chen, a Wenzhou resident, received a blackmail text message from a "private detective" reading "I trust you will not treat me badly. Contact me as soon as you receive the picture in this message." Attached was a screenshot of a supposed indecent video of Mr. Chen and a woman, but the video had in fact been synthesized by AI. Mr. Chen called the police for help, and the case is under investigation.

Coincidentally, Ping An Baotou's official WeChat account disclosed on May 20 that the Telecom Cybercrime Investigation Bureau of the Baotou City Public Security Bureau had recently cracked a telecom fraud case involving intelligent AI technology. Mr. Guo, the legal representative of a technology company in Fuzhou, received a WeChat video call from a "friend"; after confirming the friend's face and voice over the video, he transferred 4.3 million yuan to the other party's account within 10 minutes. Only when he contacted the friend after the transfer did Mr. Guo realize he had been cheated. With the assistance of the Baotou police, 3.3684 million yuan of the defrauded funds were intercepted in the fraudulent account, but a further 931,600 yuan had already been transferred away.

Su Xiaorui, a senior consultant in the financial industry at Analysys, said that as AI technology matures and the barrier to using it falls, impersonation fraud powered by AI has become a new trend among criminals.

Excessive disclosure of personal privacy

In fact, combating telecom fraud has become a key task for regulators in recent years. Detailed warnings about various telecom fraud methods have been issued through online and offline channels including the police, banks, communities, and the media. Even so, telecom frauds in their many forms remain hard to guard against.

Under these repeated warnings, users are becoming more vigilant against telecom fraud. In the discussion of AI fraud, many users also stressed the importance of protecting personal privacy and of never taking anything involving money lightly.

Wang Pengbo, chief analyst at Broadcom Consulting, likewise believes that while advancing technology has made telecom fraud harder to prevent, the root cause of its frequency is still the excessive leakage of users' personal information in the digital age. Beijing Business Daily reporters also spoke with many readers. One user who had encountered a refund scam on an e-commerce platform said that the scammer accurately stated his account name, the products he had purchased, and his delivery address over the phone, which ultimately led to heavy losses.

In an era of comprehensive digitalization, personal information is routinely collected by all kinds of platforms, which has opened the door to the new model of AI fraud and to hot-search claims that "when fraud meets AI, the success rate approaches 100%". Su Xiaorui pointed out that new technologies such as AI are neutral in themselves: while they bring convenience to production and daily life, the risks hidden within them must also be guarded against.

The impact on face-scanning and palm-scanning payments

According to police reports, fraud methods using AI technology mainly involve voice synthesis and AI face-swapping: scammers steal WeChat accounts, extract voice files or install unofficial (plug-in) versions of the app, and forward earlier voice recordings to the victim's WeChat friends to gain their trust. AI technology can also be used to screen potential victims and select targets.

Compared with traditional fraud methods, AI fraud involves biometric information such as faces and voiceprints. In the digital age, biometrics such as fingerprints, voiceprints, faces, and even irises have become the "keys" that open financial accounts. This means that, beyond conventional verification codes and ID numbers, the private information users must protect now increasingly includes personal biometric data.

Past investigations by Beijing Business Daily reporters show that, especially in the financial field, new technologies and novel services can easily become levers for scammers to exploit unsuspecting users. After the AI "face-swapping" fraud method emerged, many users expressed concern about the security of facial recognition and similar methods, asking whether scammers might use AI technology to break into personal financial accounts.

While AI fraud was being widely discussed, WeChat officially released a new payment method: palm payment. In discussing it, some users raised questions about palm print data collection and payment security. According to reports, unlike fingerprint recognition, which uses the surface ridges of the fingertips, palm recognition identifies users by the veins in the palm.

While enjoying the convenience of new technologies, institutions should also consider how to deal with the fraud those technologies enable. On the application of new technologies in finance, Wang Pengbo pointed out that an institution rolling out a new technology must strike a balance among personal privacy protection, anti-fraud measures, and ease of use, and should at a minimum prepare prevention and response mechanisms in advance.

"Take palm payment as an example. The large-scale collection and application of personal information actually lacks legal support. How to avoid excessive collection of information and how to prevent personal biometric information that should be top-secret from being leaked? How to store data, and what method must be used to ensure the security of personal information before it can be retrieved... There is currently no good explanation for these issues. Wang Pengbo said that as the majority of users gradually accept personal privacy protection, new technologies must be popularized for individuals. The difficulty for users has become higher.

Cross-verify before transferring money

For consumers, what matters most is keeping their wallets safe. Wang Pengbo said that consumers must first be more vigilant: before any transfer, they should carefully verify the authenticity of the request and promptly cross-check it with the relevant party through formal channels.

Su Xiaorui said the problem needs to be addressed at three levels. First, on the regulatory side, risk reminders and early warnings should be issued to the public in a timely manner, more technical talent should be brought into law enforcement teams, the characteristics and patterns of new electronic fraud should be summarized, and a number of major cases should be prosecuted to deter the market. Second, on the platform side, the shopping, social, and payment platforms where scammers operate should strengthen risk control, take reasonable interception measures, and promptly warn users when an account or its chat content appears abnormal. Third, on the user side, people should not trust unknown calls and text messages, and should verify the other party's identity through official channels before transferring money or making payments.

Beijing Business Daily reporter Liao Meng

Source: sohu.com