On November 30, the research laboratory OpenAI launched the chatbot ChatGPT, which has become an overnight sensation in the field of artificial intelligence.
People with accounts are asking it all kinds of wild questions, people without accounts are asking for registration guides, and even Elon Musk publicly commented on Twitter that it is "scary good". As of December 5, local time, ChatGPT had more than 1 million users.
What can ChatGPT do for network security practitioners? Perhaps code auditing, vulnerability detection, software writing, or shellcode reversing.
According to OpenAI, ChatGPT is based on a model in the GPT-3.5 series and was trained on text and code data using an Azure AI supercomputer.
GPT stands for Generative Pre-trained Transformer. It is a natural language processing (NLP) model for text generation developed by OpenAI, an artificial intelligence research and development company. The current public version is GPT-3, released in May 2020; GPT-3.5 is a fine-tuned version of GPT-3 that OpenAI has not yet officially announced as a release.
According to public information, GPT-3 was the largest neural network at the time of its release: a natural-language deep learning model with 175 billion parameters.
Although ChatGPT seems to know everything from astronomy to geography, beyond answering questions and writing articles, is it of any real use to network security practitioners?
In fact, ChatGPT is not limited to question and answer. It can respond to any text, whether natural language or code, and many network security professionals have begun exploring what it can do. The following are some of the uses they have discovered.
ChatGPT can not only find errors in code but also repair them, explaining the fix in plain English.
ChatGPT can determine whether a piece of code contains a security vulnerability, and it will explain its reasoning in simple language. Some users pointed out that ChatGPT can detect XSS vulnerabilities in code samples, and perhaps the AI could be trained to go a step further and provide a PoC for the vulnerability.
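To make the XSS case concrete, here is an illustrative snippet of our own (not ChatGPT output) showing the kind of flaw such a check flags, together with the usual fix, sketched in Python:

```python
import html

# Vulnerable: user-controlled input is interpolated into HTML unescaped,
# so a payload like "<script>alert(1)</script>" would execute in the browser.
def greeting_vulnerable(name: str) -> str:
    return "<p>Hello, " + name + "</p>"

# Fixed: escape user-controlled data before embedding it in HTML.
def greeting_fixed(name: str) -> str:
    return "<p>Hello, " + html.escape(name) + "</p>"
```

A PoC for the vulnerable version is simply calling it with a script tag as the name; the escaped version renders the payload as inert text.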
Researcher Jonas Degrave showed how to turn ChatGPT into a full-fledged Linux terminal and interact with the "virtual machine" through the browser. In fact, the terminal does not run a real Linux virtual machine; the responses to command-line input are generated entirely by the AI within the conversation.
ChatGPT becomes a Linux terminal
In testing, the researcher gave ChatGPT a prompt requesting that it traverse dimensions, and ChatGPT's feedback was "The portal has been opened successfully."
Use ChatGPT to traverse dimensions
As with the virtual Linux terminal above, generating nmap scans with ChatGPT does not require running the real nmap application.
The researcher asked ChatGPT to "create a PHP program to scan open ports on the host" and got the following result.
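The original article showed ChatGPT's PHP output only as a screenshot. As a rough stand-in, a minimal TCP connect-scan of the kind requested could look like this sketch in Python (host and port list are the caller's choice; the timeout value is an arbitrary assumption):

```python
import socket

def scan_ports(host, ports):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)  # short timeout so closed ports fail fast
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

For example, `scan_ports("127.0.0.1", range(1, 1025))` checks the well-known ports on the local machine.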
Benjamin J Radford, a machine learning enthusiast and UNCC assistant professor, asked ChatGPT to "write the code for a tic-tac-toe game to a file, use gcc to compile the file, and then execute it." ChatGPT carried out the request.
ChatGPT PHP code written as required
ChatGPT is able to decode base64 strings and reverse the MD5 hashes of (known) strings, which is particularly helpful for reverse engineers and malware analysts reviewing obfuscated, encoded, or minified samples.
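The operations described here are, mechanically, ordinary library calls; a sketch in Python makes the distinction explicit. Note that MD5 cannot be "decoded" in general, only matched against a list of already-known candidate strings (the candidate list below is a hypothetical example):

```python
import base64
import hashlib

def decode_b64(s):
    """Decode a base64 string back to text."""
    return base64.b64decode(s).decode("utf-8")

def reverse_md5(digest, candidates):
    """MD5 is one-way; 'reversing' it means finding a known string that hashes
    to the given digest. Returns the match, or None if no candidate matches."""
    for word in candidates:
        if hashlib.md5(word.encode()).hexdigest() == digest:
            return word
    return None
```

This is also why ChatGPT can only "decode" MD5 hashes of strings it has effectively seen before.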
The researcher also used ChatGPT to decode randomly generated ASCII-encoded shellcode. ChatGPT not only explained what the shellcode does, but also rewrote it in C.
Of course, ChatGPT has obvious limitations, and its developers have acknowledged several current problems with the AI. Its training corpus only runs through 2021, so it cannot answer questions about events in 2022 and beyond. And while the service requires an Internet connection to use, the response content comes entirely from the offline-trained model: ChatGPT cannot, for example, tell you today's weather.
Researchers noted that ChatGPT sometimes gives answers that sound plausible but are incorrect. It is also sensitive to small wording changes in the input: when it cannot answer a question phrased one way, slightly rephrasing the question may get an answer.
The model's answers are also sometimes overly verbose, repeating certain phrases predictably. OpenAI says this may be the result of bias in the training data, as trainers preferred long, comprehensive answers.
When faced with an ambiguous question, the model also tends to guess the user's intent rather than ask for clarification.
The developers said the biggest problem with ChatGPT is that, even though OpenAI has trained the model to refuse inappropriate instructions and questions, it may still respond to harmful instructions or exhibit biased behavior.
To address these limitations, OpenAI said it plans to update the model regularly while collecting user feedback on problematic model output. OpenAI is particularly interested in feedback on "potentially harmful outputs, new risks, and possible mitigations," and the company also announced that it will host a ChatGPT feedback contest with a prize of $500 in API credits.