Whether in novels or movies, artificial intelligence has been a fascinating subject for decades. While the synthetic humans envisioned by Philip K. Dick still exist only in science fiction, artificial intelligence is real and is playing an increasing role in many aspects of our lives.
While debates continue over robots with artificial intelligence brains, a more mundane but equally powerful form of AI is already playing a role in cybersecurity. The goal is to make AI a force multiplier for hard-working security professionals.
As the Devo SOC Performance Report™ shows, security operations center (SOC) analysts are often overwhelmed by the volume of alerts popping up on their screens every day. Alert fatigue has emerged as a cause of analyst burnout across the industry.
Ideally, AI would help SOC analysts keep up with (and stay ahead of) the smart but ruthless threat actors who are effectively using AI to commit crimes and conduct espionage. Unfortunately, that is not yet the reality.
Devo commissioned Wakefield Research to conduct a survey of 200 IT security professionals to determine their views on artificial intelligence. The survey covers AI implementation across a range of defense disciplines including threat detection, breach risk prediction and incident response/management.
Artificial intelligence is considered a force multiplier for cybersecurity teams contending with savvy malicious actors, talent shortages, and more. However, not all AI is as intelligent as advertised, and that's before taking into account the mismatch between organizational needs and AI capabilities.
All survey respondents said their organizations are using artificial intelligence in one or more areas. The most common use case is IT asset inventory management, followed by threat detection and breach risk prediction.
But in terms of using AI to directly combat threat actors, it’s not really a battle yet. Some 67% of respondents said their organization’s use of AI “only scratches the surface of the problem.”
Here’s a look at how respondents view their organization’s reliance on artificial intelligence in cybersecurity programs.
More than half of respondents believe their organization – at least for now – is too reliant on artificial intelligence. Fewer than a third believe their reliance on AI is appropriate, while the remainder believe their organizations are not doing enough with AI.
When asked about the challenges posed by the use of AI in their organizations, respondents were candid. Only 11% said they had no problems using AI for cybersecurity; the vast majority had a very different view.
When asked where AI-related challenges occur within their organization's security stack, core cybersecurity functions fared poorly. Fifty-three percent of respondents named IT asset inventory management as the top problem area for AI, and respondents also reported unsatisfactory results in three core cybersecurity categories.
Interestingly, only 13% of respondents cited incident response as a challenge posed by artificial intelligence.
It is clear that although artificial intelligence is already being used in cybersecurity, the results are mixed. The biggest misconception is that all AI is as "intelligent" as its name implies; in reality it is not, and that's before taking into account the mismatch between organizational needs and AI capabilities.
The cybersecurity industry has long searched for "silver bullet" solutions, and artificial intelligence is the latest candidate. Organizations must be thoughtful and results-driven when evaluating and deploying AI, and they must work with experienced AI practitioners, or they risk failing in a critical area that leaves little room for error.