
ChatGPT is a double-edged sword in the field of network security

Apr 07, 2023, 02:57 PM

ChatGPT is an AI-driven prototype chatbot designed to support a wide range of use cases, including code development and debugging. One of its main attractions is that users can interact with the chatbot conversationally and get help with everything from writing software and understanding complex topics to drafting papers and emails, improving customer service, and testing business or market scenarios. But it can also be put to darker purposes.

Since OpenAI released ChatGPT, many security experts have predicted that it was only a matter of time before cybercriminals started using the AI chatbot to write malware and carry out other malicious activities. As with any new technology, given enough time and incentive, someone will find a way to exploit it. Just a few weeks later, that time appeared to have arrived: cybercriminals had begun using ChatGPT to quickly build hacking tools. Scammers were also testing ChatGPT's ability to build other chatbots designed to lure targets by posing as young women. In fact, researchers at Check Point Research (CPR) report that at least three black-hat hackers demonstrated in underground forums how they used ChatGPT to conduct malicious attacks.

In one documented example, Israeli security firm Check Point discovered a post on a popular underground hacking forum by a hacker who said he was experimenting with using the popular AI chatbot to "re-create malware."

ChatGPT responds to simple requests and suggestions, such as writing an email that appears to come from a hosting provider.

One hacker used ChatGPT to generate Android malware that could be compressed and spread across the network; the malware is reportedly capable of stealing files of interest. Another hacker demonstrated a tool that could install a backdoor on a computer and potentially infect it with additional malware.

Check Point noted in its assessment of the situation that some hackers were using ChatGPT to create their first scripts. In the forum mentioned above, another user shared a piece of Python code written with ChatGPT that can encrypt files on a victim's computer. While the code can be used for harmless purposes, Check Point states that "ChatGPT generates code that can be easily modified to fully encrypt files on a victim's computer without any user interaction." In addition, a hacker posted on an underground forum that he had used ChatGPT to create code that retrieves the latest cryptocurrency prices via a third-party API, for use in a dark web market payment system.
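The price-lookup component is the most mundane piece of that toolchain and easy to picture. A minimal sketch in Python, using only the standard library and assuming CoinGecko's public simple-price endpoint as the third-party API (the forum post did not name the actual service), might look like:

```python
import json
import urllib.request

# Assumed third-party API; the actual service the hacker used was not disclosed.
COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"


def parse_price(payload: dict, coin: str, currency: str = "usd") -> float:
    """Pull a single price out of a simple-price JSON response."""
    return float(payload[coin][currency])


def fetch_price(coin: str, currency: str = "usd") -> float:
    """Fetch the latest price of `coin` from the public API."""
    url = f"{COINGECKO_URL}?ids={coin}&vs_currencies={currency}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_price(json.load(resp), coin, currency)
```

Keeping the JSON parsing separate from the network call makes such a script trivial to test, and swapping in another price source only means changing the URL.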

The security firm emphasized that while the ChatGPT-coded hacking tools appear "very basic," "it's only a matter of time before more sophisticated threat actors enhance the way they use AI-based tools." Rik Ferguson, vice president of security intelligence at U.S. cybersecurity firm Forescout, said ChatGPT does not yet appear capable of writing anything as sophisticated as the major ransomware strains seen in recent high-profile hacking incidents, such as Conti, which was notoriously used to breach Ireland's national health service systems. However, he said, OpenAI's tool will lower the barrier to entry for newcomers to the illegal market by letting them build more basic but still effective malware.

Alex Holden, founder of cyber intelligence company Hold Security, said he has also seen dating scammers start using ChatGPT as cybercriminals try to create convincing personas. "They are planning to create chatbots impersonating mostly girls, trying to automate small talk for use in online scams."

The developers of ChatGPT have implemented filtering of malicious requests that blocks obvious attempts to have the AI build spyware. However, the chatbot came under more scrutiny after security analysts discovered that ChatGPT could be used to write grammatically correct phishing emails free of typos.

From writing malware to creating dark web markets

In one instance, a malware author revealed in a forum used by other cybercriminals how he was experimenting with ChatGPT to see whether he could reproduce known malware strains and techniques.

As one example of an attacker's success, this individual shared the code for a Python-based information stealer he developed using ChatGPT that can search for, copy, and exfiltrate 12 common file types, such as Office documents, PDFs, and images, from infected systems. The same malware author also showed how he used ChatGPT to write Java code to download the PuTTY SSH and Telnet client and run it covertly on a system via PowerShell.

Another threat actor published a Python script he generated with a chatbot to encrypt and decrypt data using the Blowfish and Twofish encryption algorithms. Security researchers found that while the code could be used for entirely benign purposes, a threat actor could easily tweak it to run on a system without any user interaction, turning it into ransomware in the process. Unlike the author of the information stealer, this attacker appears to have very limited technical skill; in fact, he claimed that the Python script he generated with ChatGPT was the first script he had ever created.
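The dual-use point is easy to see in code: a symmetric encrypt/decrypt routine is a legitimate utility right up until it is pointed at someone else's files. Python's standard library does not include Blowfish or Twofish (the actor's script would have pulled those from a third-party package), so the rough stdlib-only sketch below substitutes a SHA-256-derived keystream purely to illustrate the shape of such a script:

```python
import hashlib


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key + nonce + block counter.
    # This stands in for a real cipher such as Blowfish; it is an illustration,
    # not something to rely on for real security.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])


def encrypt(data: bytes, key: bytes, nonce: bytes) -> bytes:
    # XOR with the keystream; as in any stream cipher, applying the same
    # operation twice restores the original plaintext.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))


decrypt = encrypt  # symmetric by construction
```

The point Check Point makes is that nothing in a routine like this marks it as malicious; the difference between a backup utility and ransomware is the loop that feeds it files without the owner's consent.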

In a third instance, security researchers discovered a cybercriminal discussing how he had used ChatGPT to create a fully automated dark web marketplace for trading stolen bank account and payment card data, malware tools, drugs, ammunition, and various other illicit goods.

A near-zero barrier to generating malware


Since OpenAI released the AI tool, concerns about threat actors abusing ChatGPT have been widespread, and many security researchers believe the chatbot has significantly lowered the barrier to writing malware.

Check Point's threat intelligence group manager, Sergey Shykevich, reiterated that with ChatGPT, a malicious actor needs no coding experience to write malware: "You just need to know what functionality the malware or program should have. ChatGPT will write the code for you to perform that function. So the short-term concern is definitely that ChatGPT allows low-skilled cybercriminals to develop malware," Shykevich said. "In the longer term, I think more sophisticated cybercriminals will also adopt ChatGPT to make their campaigns more efficient, or to address different gaps they may have."

"From an attacker's perspective, the ability of AI systems to generate code lets malicious actors easily bridge whatever skills gap they encounter, by acting as a translator between languages," added Brad Hong, customer success manager at Horizon3.ai. Such tools provide an on-demand way to create code templates relevant to an attacker's goals, reducing the need to search developer sites like Stack Overflow and Git.

Even before threat actors were caught abusing ChatGPT, Check Point, like a number of other security vendors, demonstrated how adversaries could leverage the chatbot in malicious campaigns. In a blog post, the vendor described how its researchers created a perfectly legitimate-sounding phishing email simply by asking ChatGPT to write one that appeared to come from a fictitious web hosting service. The researchers also showed how they got ChatGPT to write VBS code that could be pasted into an Excel workbook to download an executable file from a remote URL.

The purpose of the test was to demonstrate how an attacker could abuse an AI model such as ChatGPT to create a complete infection chain, from the initial spear-phishing email to running a reverse shell on the affected system.

As things stand, ChatGPT cannot replace skilled threat actors—at least not yet. But security researchers say there is a lot of evidence that ChatGPT does help low-skilled hackers create malware, which will continue to raise public concerns about cybercriminals abusing the technology.

Bypassing ChatGPT’s Restrictions

Initially, some security researchers thought the restrictions in the ChatGPT user interface were weak and found that threat actors could easily bypass them. Since then, Shykevich said, OpenAI has been working to improve the chatbot's restrictions.

"We see the restrictions on the ChatGPT user interface getting stricter every week. As a result, it is now more difficult to use ChatGPT for malicious or abusive activity," he said.

But cybercriminals can still abuse the program by using, or deliberately avoiding, certain words and phrases that allow them to bypass the restrictions. Matt Lewis, commercial research director at NCC Group, calls interacting with the online models an "art form."

"If you avoid using the word malware and just ask it to show you an example of code that encrypts a file, in line with how malware is designed, that's what it's going to do," Lewis said. "It likes being directed, and there are some interesting ways to make it do what you want in many different ways."

In a presentation on a related topic, Lewis demonstrated how ChatGPT would "write an encryption script" that, while falling short of full ransomware, could still be dangerous. "It's going to be a hard problem to solve," Lewis said of preventing such bypasses, adding that regulating language for context and intent would be very difficult for OpenAI.

To further complicate matters, Check Point researchers observed threat actors using a Telegram bot wired to the API of a GPT-3 model called text-davinci-003, rather than ChatGPT itself, in order to get around the chatbot's restrictions.

ChatGPT is just a user interface for OpenAI's models. Developers can integrate the back-end models into their own applications through an API, and users consuming the models that way are not subject to the same restrictions as the protected chat interface.

"From what we've seen, the barriers and limitations OpenAI has put in place on the ChatGPT interface don't apply to those using these models through the API," Shykevich said.

Threat actors can also evade the restrictions through careful prompting. CyberArk has been testing ChatGPT since its launch and has found blind spots in its restrictions: with enough persistence and repeated requests, the chatbot will eventually deliver the desired code. CyberArk researchers also report that by continuously querying ChatGPT and receiving a new piece of code each time, users can create polymorphic malware that is highly evasive of detection.

Polymorphic viruses can be very dangerous. There are already online tools and frameworks that can generate such viruses. ChatGPT's ability to create code is most beneficial to unskilled coders and script kiddies.

This is not a new capability as far as attackers are concerned... nor is it a particularly effective way to generate malware variants, since better tools already exist. What ChatGPT offers is a new tool that lets less-skilled attackers generate potentially dangerous code.

Making it harder for cybercriminals


The developers of OpenAI and other similar tools have installed filters and controls, and continually improve them, in an attempt to limit misuse of their technology. For now at least, the AI tools remain glitchy and prone to what many researchers describe as outright errors, which could thwart some malicious efforts. Even so, many predict that the potential for misuse of these technologies will remain high in the long term.

To make it harder for criminals to abuse these technologies, developers need to train and improve their artificial intelligence engines to identify requests that could be used in malicious ways, Shykevich said. Another option, he said, is to implement authentication and authorization requirements to use the OpenAI engine. He noted that even something similar to what online financial institutions and payment systems currently use would be enough.

As for preventing criminal use of ChatGPT, Shykevich said that ultimately, "unfortunately, enforcement has to be through regulation." OpenAI has implemented controls that respond to obvious requests to build spyware with policy-violation warnings, although hackers and journalists have found ways to bypass these protections. Shykevich also said companies like OpenAI may have to be legally compelled to train their AI to detect such abuse.


This article is translated from: https://www.techtarget.com/searchsecurity/news/365531559/How-hackers-can-abuse-ChatGPT-to-create-malware
