The most popular online platform of early 2023 is undoubtedly the new favorite in the field of artificial intelligence: the intelligent chatbot model ChatGPT, which has attracted countless netizens to converse with it in a "human-like" way. It can not only fully understand the questions humans ask and give answers that no longer feel "artificially stupid", but can also genuinely take part in many kinds of productive work, such as programming, drawing, scriptwriting, and even composing poetry and other copywriting.
For a time, people began to ask: will artificial intelligence represented by ChatGPT replace a large number of human occupations in the near future? In fact, before discussing that question, what deserves more vigilance is that ChatGPT was born in the online world and carries significant security risks, and even legal risks. As the joke circulating in the industry goes: play ChatGPT well enough, and sooner or later you may end up eating prison food.
ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence chatbot model launched by OpenAI, one of the world's leading AI companies, and was released on November 30, 2022. It is fine-tuned with reinforcement learning from human feedback, using human preferences as a reward signal, so that it continuously learns human language and can genuinely chat and communicate like a person. Precisely because it feels so much like a real person, ChatGPT quickly became popular around the world after its debut. By the end of January 2023, its monthly active users had exceeded 100 million, making it one of the fastest-growing applications in the world.
Even as ChatGPT shows off its versatile capabilities, people are growing more anxious: are their jobs in danger? Just as machines once freed up manpower by taking over a great deal of manual labor, historical development is likely to push artificial intelligence further into human work, completing the same tasks with less manpower. But at the current level of technology, there is no foreseeable timetable for reaching that point. Artificial intelligence represented by ChatGPT is still in an early exploratory stage. Although it can keep imitating and learning from conversations with humans, its foundation is still the continuous tuning of a learning model over a huge body of text. Its answers all come from the internet; it cannot truly and subjectively understand the meaning of human language, and it struggles to give realistic answers for specific real-life scenarios.
When ChatGPT encounters questions in professional fields for which it has not been trained on a large corpus, its answers often fail to address the actual question. Likewise, it lacks the ability to extrapolate that humans are good at. For example, in the picture below, when answering a question about meteorology, a relatively niche and complex field in which it has not been sufficiently trained, ChatGPT gave a reply with no substantive content, like a "marketing account". So even if ChatGPT can handle everyday tasks, they will be repetitive tasks that do not demand much creativity.
But even so, ChatGPT can still play a positive role in cybersecurity and is expected to become a powerful assistant for security personnel.
ChatGPT can help security personnel run security tests on code. In preliminary testing, if you paste in a piece of vulnerable code and ask ChatGPT to analyze it, the tool can quickly identify the security issues and give specific suggestions on how to fix them. While it has its shortcomings, it is a useful tool for developers who want to quickly check code snippets for vulnerabilities. As more and more companies push to build security into their software development workflows, ChatGPT's fast and reasonably accurate detection can meaningfully improve code security.
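As a minimal illustration (the snippet and its fix below are our own hypothetical example, not drawn from any specific test), this is the sort of vulnerable code one might paste into ChatGPT for review: a SQL query assembled by string concatenation, which is open to SQL injection, alongside the parameterized version that a reviewer, or ChatGPT, would typically suggest.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated directly into the SQL string,
    # so an input like "' OR '1'='1" changes the query's logic (SQL injection).
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Fixed: a parameterized query keeps user input as data, not as SQL code.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```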
ChatGPT is also expected to further automate scripting, policy orchestration, and security reporting, for example by writing remediation guidance for penetration-test reports. It can draft outlines faster, help add automated elements to reports, and even support cybersecurity training sessions for employees. In threat intelligence, ChatGPT is expected to significantly improve intelligence efficiency and real-time trust decisions, reaching real-time conclusions about access to systems and information with higher confidence than current access and identity models.
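A rough sketch of what such report automation could look like, assuming access to OpenAI's chat completions HTTP API (the model name, prompts, and helper function here are illustrative assumptions, not anything the article specifies). Any generated text would still need review by a security professional before going into a real report.

```python
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def draft_remediation_note(finding: str) -> str:
    """Ask the model to draft a short remediation note for one pentest finding."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative; any chat-capable model works
            "messages": [
                {"role": "system",
                 "content": "You are a security analyst writing remediation guidance."},
                {"role": "user",
                 "content": f"Write a short remediation note for this finding: {finding}"},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example usage (output must be reviewed before inclusion in a report):
# print(draft_remediation_note("Login form is vulnerable to SQL injection"))
```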
Any new technology has its drawbacks and risks. ChatGPT helps humans make security products smarter and assists in responding to security threats; at the same time, given that it works by collecting and reusing information, will it cross security boundaries and create new risks? Moreover, although ChatGPT's terms of service explicitly prohibit generating malware, including ransomware, keyloggers, viruses, or "other software designed to cause some degree of harm", will ChatGPT in the hands of hackers still become a "devil's claw"? Will network attacks become more widespread and more sophisticated?
Because ChatGPT must collect extensive network data and draw information from conversations with people to train its model, it also has the ability to acquire and absorb sensitive information. According to Silicon Valley media reports, Amazon's lawyers said they had found text closely resembling the company's internal secrets in content generated by ChatGPT, possibly because some Amazon employees had used ChatGPT to generate code and text; the lawyers worried that this data would become training data for ChatGPT's future iterations. Microsoft engineers likewise warned employees not to send sensitive data to OpenAI endpoints in their daily work, because OpenAI may use it to train future models.
This raises another question: is it legal to collect massive amounts of data from the internet? Alexander Hanff, a member of the European Data Protection Board (EDPB), has questioned this on the grounds that ChatGPT is a commercial product. Although a great deal of information is freely accessible on the internet, collecting large amounts of data from websites that prohibit third-party scraping may violate the relevant terms and does not count as fair use. Personal information protected by regulations such as the GDPR must also be considered: scraping such information is not compliant, and using massive amounts of raw data may violate the GDPR's "data minimization" principle.
This leads to yet another question: once ChatGPT has been trained on human language, can an individual's personal data be effectively deleted? The privacy policy published on OpenAI's official website does not mention data protection regulations such as the EU's GDPR. In its clause on the use of data, OpenAI acknowledges that it collects the data users enter while using the service, but does not further explain the purposes for which that data is used.
Under Article 17 of the GDPR, individuals have the right to request the deletion of their personal data, the "right to be forgotten" or "right to erasure". But can large language models such as ChatGPT "forget" the data used to train them? As to whether OpenAI can completely delete data from the model when an individual requests it, industry insiders believe it is difficult for such models to erase all traces of personal information. Moreover, training such models is expensive, and AI companies are unlikely to retrain an entire model every time an individual requests the deletion of some sensitive data.
With ChatGPT's popularity, many related WeChat public accounts and mini-programs have appeared in China. Since ChatGPT has not officially opened service in mainland China, and one cannot register and talk to it with a mainland mobile number, these mini-programs began claiming to offer conversations with ChatGPT as packaged services, ranging from 9.99 yuan for 20 conversations to 999.99 yuan for unlimited conversations. In other words, these mini-programs act as intermediaries between domestic users and ChatGPT. But even among intermediaries it is hard to tell genuine from fake: although some accounts claim to call the ChatGPT API, it cannot be ruled out that they actually use other chatbot models, with dialogue quality far inferior to ChatGPT.
A public-account mini-program offering a so-called "ChatGDT" dialogue service
Similarly, on e-commerce platforms some sellers offer ChatGPT accounts for sale and agent-registration services. Media investigations have found that some sellers buy ready-made accounts with changeable passwords in bulk from overseas websites at a cost of about 8 yuan each and resell them for up to 30 yuan; for agent registration, the cost can be as low as 1.2 yuan.
Quite apart from the risk of leaking personal privacy through these channels, privately setting up international gateways and openly providing cross-border services also violates the relevant provisions of the Cybersecurity Law. At present, most of these public accounts and mini-programs bearing the name ChatGPT have been banned by WeChat, and the relevant platforms are likewise blocking the buying and selling of such accounts.
As noted above, ChatGPT can help security personnel spot vulnerable code, and hackers can do the same. Researchers at the digital security investigation outlet Cybernews found that hackers can use ChatGPT to find security holes in websites: with its help, the researchers ran a penetration test on an ordinary website and, by combining the suggestions and code examples ChatGPT provided, completed the break-in within 45 minutes.
Furthermore, since ChatGPT can be used for programming, it can also be used to write malicious code. For example, one can have ChatGPT write VBA code and simply insert a malicious URL into it, so that when a user opens the Excel file, a malware payload is automatically downloaded. ChatGPT thus lets hackers with little technical skill craft weaponized emails with malicious attachments in just a few minutes and launch attacks on their targets.
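On the defensive side, this same pattern, an auto-executing macro that embeds a download URL, is exactly what macro scanners look for. Below is a rough sketch (our own example, using the open-source oletools library, which the article does not mention) that flags Office files whose VBA macros contain both an auto-run entry point and an embedded URL.

```python
import re
import sys

from oletools.olevba import VBA_Parser  # pip install oletools

# Macro names that run automatically when a document is opened.
AUTO_EXEC = re.compile(r"\b(AutoOpen|Auto_Open|Workbook_Open|Document_Open)\b", re.I)
URL = re.compile(r"https?://[^\s\"']+", re.I)

def scan_office_file(path: str) -> None:
    """Flag files whose macros both auto-run and embed a URL (download-cradle pattern)."""
    parser = VBA_Parser(path)
    try:
        if not parser.detect_vba_macros():
            print(f"{path}: no VBA macros")
            return
        for _, _, macro_name, code in parser.extract_macros():
            auto = AUTO_EXEC.search(code or "")
            urls = URL.findall(code or "")
            if auto and urls:
                print(f"{path}: SUSPICIOUS macro {macro_name}: "
                      f"auto-exec '{auto.group(0)}' plus URL(s) {urls}")
    finally:
        parser.close()

if __name__ == "__main__":
    for f in sys.argv[1:]:
        scan_office_file(f)
```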
Last December, hackers shared Python stealer code written with ChatGPT on underground forums. The stealer can search for common file types (such as MS Office documents, PDFs, and image files), copy them into a random subfolder under the Temp folder, compress them, and upload them to a hard-coded FTP server.
Hackers demonstrated on the forum how to use ChatGPT to create information stealing programs
Because ChatGPT can learn human language, it can make the grammar and phrasing of social-engineering lures such as phishing emails far more convincing. Researchers found that ChatGPT enables hackers to realistically simulate a wide range of social contexts, making targeted attacks more effective; even hackers who are not native speakers of the target language can produce highly convincing phishing emails that match the target country's grammar and idiom, with no spelling or formatting errors.
Phishing emails generated using ChatGPT
Taking stock of ChatGPT's cybersecurity benefits and risks above, it is clear that the two sides are nearly mirror images: the same capability that lets hackers find vulnerabilities and attack surfaces can also help security personnel detect and close gaps, narrowing the room for attack. At this stage, however, ChatGPT's security-assistance functions still require professional review and sign-off, which introduces lag, while financially motivated criminal hackers can use artificial intelligence to launch attacks directly, making the threat hard to estimate.
There is no doubt that artificial intelligence represented by ChatGPT is one of the standout technologies of recent years. But because of this new technology's dual nature, cybersecurity and IT professionals undoubtedly have mixed feelings about ChatGPT's arrival. According to a recent BlackBerry study, while 80% of businesses say they plan to invest in AI-driven cybersecurity by 2025, 75% still believe AI poses a serious threat to security.
Does this mean we urgently need a new batch of regulations specifically governing the development of the artificial intelligence industry? Some Chinese experts and scholars have pointed out that the sensation ChatGPT has created, and the rush of capital into the field, do not mean the AI industry has achieved a breakthrough: the tasks ChatGPT can currently handle are distinctly repetitive and template-like. There is therefore no need for regulators to treat ChatGPT supervision as the core task of cyberspace governance at this stage; otherwise China's legislative model would fall into the loop of "a new technology emerges, so a new regulation is drafted". For now, China has already enacted laws and regulations such as the Cybersecurity Law, the Provisions on the Ecological Governance of Network Information Content, and the Provisions on the Administration of Algorithm Recommendations for Internet Information Services, which lay down detailed legal obligations and a supervisory regime against the abuse of AI and algorithmic technology, sufficient to handle the network security risks ChatGPT may pose in the short term. At this stage, China's regulators can make full use of the existing legal system to strengthen supervision of ChatGPT-related industries; later, when experience has matured, legislation specific to AI technology applications can be considered to address the various potential security risks.
Abroad, the EU plans to introduce the Artificial Intelligence Act in 2024, which would require member states to appoint or establish at least one supervisory authority responsible for ensuring that "the necessary procedures are followed". The Act is expected to apply broadly, covering everything from self-driving cars and chatbots to automated factories.
In the end, the debate over ChatGPT's safety and whether human jobs will be replaced by artificial intelligence may boil down to one proposition: does artificial intelligence control humans, or do humans control artificial intelligence? If AI develops healthily, helping people work more efficiently and accelerating social progress, our networks will remain safe. But if humans cannot control AI and it develops in a disordered direction, the network environment will inevitably descend into chaotic attacks and threats; by then, network security will no longer be a matter of "red team versus blue team skirmishes" but a subversion of the entire order of human society.