The recent release of ChatGPT-4 by artificial intelligence developer OpenAI shocked the world again, but what it means in the field of data security has yet to be determined. On the one hand, generating malware and ransomware is easier than ever. On the other hand, ChatGPT can also provide a series of new defense use cases.
Industry media recently interviewed some of the world's top cybersecurity analysts, and they made the following predictions for the development of ChatGPT and generative artificial intelligence in 2023:
Steve Grobman, senior vice president and chief technology officer of McAfee, said, "ChatGPT lowers the bar to entry. Techniques that traditionally required highly skilled personnel and significant funding are now available to anyone with access to the internet. Unskilled cyberattackers now have the means to generate malicious code in bulk.
For example, they can ask the model to write code that generates text messages to hundreds of people, much as a non-criminal marketing team would. But instead of directing recipients to a safe website, the messages lead them to a site hosting a malicious payload. The code itself is not malicious, yet it can be used to deliver dangerous content.
Like any emerging technology or application, ChatGPT has pros and cons and will be used by both good and bad actors, so the cybersecurity community must remain vigilant about how it is being exploited."
Justin Greis, a partner at McKinsey & Company, said, "Broadly speaking, generative AI is a tool, and like all tools it can be used for good or for evil. Many use cases have already been cited, by threat actors and curious researchers alike: crafting more convincing phishing emails, generating malicious code and scripts to launch potential cyberattacks, or simply querying for better, faster intelligence.
But for every case of abuse, controls will continue to be put in place to counter it. That is the nature of cybersecurity: a never-ending race between attackers and defenders to outmaneuver one another.
As with any tool that can be used to cause harm, companies must put guardrails and safeguards in place to protect the public from abuse. There is a very fine ethical line between experimentation and exploitation."
David Hoelzer, a researcher at the SANS Institute, said, "ChatGPT is currently popular around the world, but we are only in the earliest stages of its impact on the cybersecurity landscape. It marks the beginning of a new era of AI/machine learning adoption on both sides of the divide, not so much because of what ChatGPT can do, but because it has pushed AI/ML into the public spotlight.
On the one hand, ChatGPT can potentially be used to democratize social engineering, giving inexperienced threat actors new capabilities to quickly and easily generate pretexts or scams and deploy sophisticated phishing attacks at scale.
On the other hand, when it comes to creating novel attacks or defenses, ChatGPT is much less capable. This is not a failure of the model; rather, people are asking it to do things it was not trained to do.
What does this mean for security professionals? Can we safely ignore ChatGPT? No. As security professionals, many of us have tested ChatGPT to see how well it performs basic tasks. Can it write a pen test scenario? Can it write a phishing pretext? How can it help build attack infrastructure and C2? So far, the results have been mixed.
However, the larger security conversation is not about ChatGPT. It's about whether we currently have security roles that understand how to build, use, and interpret AI/ML technologies."
Gartner analyst Avivah Litan said, "In cases where security personnel cannot verify the content of its output, ChatGPT will cause more problems than it solves. For example, it will inevitably miss some vulnerabilities and give enterprises a false sense of security.
Similarly, it can miss phishing attacks it is asked to detect and provide incorrect or outdated threat intelligence.
Therefore, in 2023 we will certainly see cases where ChatGPT is responsible for missed cyberattacks and vulnerabilities that lead to data breaches at the enterprises using it."
Rob Hughes, chief information security officer at RSA, said, "Like many new technologies, I don't think ChatGPT will introduce new threats. I think the biggest change it will make to the security landscape is to amplify, accelerate, and enhance existing threats, particularly phishing.
At a basic level, ChatGPT can provide cyber attackers with syntactically correct phishing emails, something we don’t see very often these days.
While ChatGPT remains an offline service, it is only a matter of time before cyber threat actors begin to combine internet access, automation and artificial intelligence to create persistent and advanced attacks.
With chatbots, humans don't need to write the lures for spam campaigns themselves. Instead, they could write a script that says, "Use internet data to get familiar with so-and-so and keep messaging them until they click the link."
Phishing remains one of the leading causes of cybersecurity breaches. Having a natural-language bot run a distributed spear-phishing tool at scale across hundreds of users' machines will make it harder for security teams to do their jobs."
Matt Miller, head of cybersecurity services at KPMG, said that as more enterprises explore and adopt ChatGPT, security will be top of mind. Here are some steps to help enterprises get a head start in 2023:
(1) Set expectations for how ChatGPT and similar solutions should be used in enterprise environments. Develop an acceptable-use policy; define a list of all approved solutions, use cases, and data that employees can rely on; and require checks to verify the accuracy of responses.
(2) Establish internal processes to review the impact and evolution of regulations governing the use of automated solutions, in particular around the management of intellectual property, personal data, and appropriate inclusion and diversity.
(3) Implement technical network controls, paying particular attention to testing the operational resilience of code and scanning for malicious payloads. Additional controls include, but are not limited to: multi-factor authentication with access limited to authorized users; application of data loss prevention measures; ensuring that all code generated by these tools goes through a standard review process and cannot be copied directly into production environments; and network filtering that alerts employees when they access unapproved solutions (a minimal sketch of such filtering follows this list).
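As a rough, hypothetical illustration of the network-filtering control mentioned above, the Python sketch below checks outbound requests against a list of approved generative AI services and raises an alert for anything else. The domain lists and the alert hook are assumptions for illustration only, not any specific product's configuration.

```python
# Minimal sketch of an acceptable-use egress check for generative AI services.
# Assumptions: domain lists and alerting are illustrative placeholders.

APPROVED_AI_DOMAINS = {"chat.openai.com"}   # solutions approved by policy
KNOWN_AI_DOMAINS = {"chat.openai.com", "bard.google.com", "claude.ai"}


def alert(user: str, domain: str) -> None:
    # In practice this would feed a SIEM or trigger an in-line user notification.
    print(f"[policy] {user} attempted to reach unapproved AI service: {domain}")


def check_request(user: str, domain: str) -> str:
    """Classify an outbound request against the acceptable-use policy."""
    if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
        alert(user, domain)
        return "blocked"
    return "allowed"


if __name__ == "__main__":
    print(check_request("alice", "claude.ai"))        # blocked, alert raised
    print(check_request("bob", "chat.openai.com"))    # allowed
```

In a real deployment this logic would live in a secure web gateway or proxy rather than a script, but the policy shape is the same: an explicit allowlist plus alerting on everything else.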
Doug Cahill, senior vice president of analyst services and senior analyst at ESG, said, "As with most new technologies, ChatGPT will become a resource for both cyber attackers and defenders, with adversarial use cases including reconnaissance and defensive use cases including the search for best practices and threat intelligence. As with other ChatGPT use cases, the fidelity of responses to user tests will vary as the AI system is trained on an already large and growing corpus of data.
While ChatGPT's use cases are broad, its use for sharing threat intelligence among team members, for threat hunting, and for updating rules and defense models is promising. However, ChatGPT is another example of AI augmenting (rather than replacing) the human element required in any kind of threat investigation."
Candid Wuest, vice president of global research at Acronis, said, "While ChatGPT is a powerful language-generation model, the technology is not a stand-alone tool and cannot operate on its own. It relies on user input and is limited by the data it was trained on.
For example, the phishing text generated by the model still needs to be sent from an email account and point to a website. Both are traditional indicators that can be analyzed to help detect the attack.
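To make that point concrete, here is a minimal, hypothetical Python sketch of indicator-based screening on the delivery channel (sender domain and embedded URLs). The indicator sets are placeholder values, not real IOCs; a real deployment would pull them from threat intelligence feeds.

```python
# Minimal sketch of indicator-based screening of a message's delivery channel.
# Assumptions: indicator sets and message format are hypothetical placeholders.
import re

MALICIOUS_DOMAINS = {"login-verify-example.com"}     # example URL indicator
SUSPICIOUS_SENDER_DOMAINS = {"mail-example.xyz"}      # example sender indicator

URL_PATTERN = re.compile(r"https?://([\w.-]+)")


def score_message(sender: str, body: str) -> int:
    """Return a crude risk score based on traditional delivery indicators."""
    score = 0
    sender_domain = sender.split("@")[-1].lower()
    if sender_domain in SUSPICIOUS_SENDER_DOMAINS:
        score += 2
    for domain in URL_PATTERN.findall(body):
        if domain.lower() in MALICIOUS_DOMAINS:
            score += 3
    return score


if __name__ == "__main__":
    msg = "Please confirm your account at https://login-verify-example.com/reset"
    print(score_message("it-support@mail-example.xyz", msg))  # prints 5 (high risk)
```

The point is Wuest's: however fluent the generated text, the message still has to traverse infrastructure that leaves analyzable traces.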
While ChatGPT has the ability to write exploits and payloads, testing has shown these capabilities are not as strong as initially suggested. The platform can also be used to write malware; although such code can already be found online and in various forums, ChatGPT makes it more accessible to the masses.
However, the variations it produces are still limited, so this malware remains easily detectable by behavior-based detection and other methods. ChatGPT is not specifically designed to target or exploit vulnerabilities, but it may increase the frequency of automated or imitation messages. It lowers the barrier to entry for cybercriminals, but it does not confront established businesses with entirely new attack methods."