Scientists have taken a critical step toward harnessing an artificial intelligence technology called deep reinforcement learning (DRL) to protect computer networks.
When confronted with sophisticated cyberattacks in a rigorous simulated environment, deep reinforcement learning effectively prevented adversaries from achieving their goals up to 95% of the time. The results offer hope for a role for autonomous AI in proactive cyber defense.
Scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL) documented their findings in a research paper presented February 14 at the Artificial Intelligence for Cybersecurity symposium, held during the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) in Washington, D.C.
The project's starting point was the development of a simulated environment for testing multi-stage attack scenarios involving different types of adversaries. Creating such a dynamic attack-and-defense simulation environment is an achievement in itself: it gives researchers a controlled setting in which to compare the effectiveness of different AI-based defenses.
Such tools are essential for evaluating the performance of deep reinforcement learning algorithms, an approach that is emerging as a powerful decision-support tool for cybersecurity experts: a defense model that can learn, adapt to rapidly changing circumstances, and make decisions autonomously. While other forms of AI have long been standard for detecting intrusions or filtering spam, deep reinforcement learning expands defenders' ability to orchestrate sequential decision-making plans in their daily confrontations with adversaries.
Deep reinforcement learning provides smarter network security, the ability to detect changes in the network environment earlier, and the opportunity to take preemptive measures to thwart cyberattacks.
"An effective cybersecurity AI agent needs to sense, analyze, act, and adapt based on the information it can gather and on the consequences of the decisions it makes," said Samrat Chatterjee, a data scientist who presented the team's work. "Deep reinforcement learning holds great potential here, where the number of system states and action choices can be large."
DRL combines reinforcement learning (RL) and deep learning (DL), making it especially well suited to situations that require a series of decisions in a complex environment. Much as a toddler learns from bumps and scrapes, DRL-based algorithms are trained by rewarding good decisions and penalizing bad ones: decisions that lead to desirable outcomes are reinforced with positive rewards (expressed as numerical values), while choices that lead to bad outcomes are discouraged with negative rewards.
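The reward-and-penalty idea can be sketched with a minimal tabular Q-learning loop. Everything here is illustrative, not PNNL's model: the states ("idle", "attacked"), actions ("wait", "block"), and reward values are hypothetical, chosen only to show how positive and negative rewards shape the learned values.

```python
import random

# Minimal tabular Q-learning sketch (illustrative only; states, actions,
# and rewards are hypothetical, not PNNL's model).
STATES = ["idle", "attacked"]
ACTIONS = ["wait", "block"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Good decision -> positive reward; bad decision -> negative reward.
    if state == "attacked":
        return 1.0 if action == "block" else -1.0
    return 0.0

def step(state):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    next_state = random.choice(STATES)  # toy transition model
    # Q-learning update: move the estimate toward reward + discounted future value.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    return next_state

random.seed(0)
state = "idle"
for _ in range(2000):
    state = step(state)

# After training, blocking while under attack carries the highest value.
```

Deep reinforcement learning replaces the lookup table `Q` with a neural network, which is what makes the approach viable when the number of states and actions is too large to enumerate.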
The team created a custom controlled simulation environment using the open source software toolkit OpenAI Gym as a foundation to evaluate the strengths and weaknesses of four deep reinforcement learning algorithms.
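A custom environment of this kind typically exposes the Gym `reset()`/`step()` interface. The stdlib-only skeleton below follows that convention (in practice one would subclass `gym.Env` and declare observation and action spaces); the stage names, mitigation success rate, and rewards are invented for illustration and are not the paper's model.

```python
import random

# Hypothetical multi-stage attack environment following the OpenAI Gym
# step()/reset() convention. Stage names, probabilities, and rewards are
# illustrative assumptions, not PNNL's environment.
STAGES = ["recon", "execution", "persistence", "exfiltration"]

class CyberDefenseEnv:
    def reset(self):
        self.stage = 0           # adversary starts at reconnaissance
        return self.stage        # observation: current attack stage

    def step(self, action):
        # action 0 = do nothing, action 1 = deploy a mitigation
        if action == 1 and random.random() < 0.7:
            # Mitigation succeeds: the attack is stopped.
            return self.stage, +1.0, True, {"outcome": "blocked"}
        self.stage += 1          # adversary advances one stage
        if STAGES[self.stage] == "exfiltration":
            # Adversary reached its goal: the defender is penalized.
            return self.stage, -1.0, True, {"outcome": "breached"}
        return self.stage, 0.0, False, {}

# One episode with a naive always-mitigate policy.
env = CyberDefenseEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done, info = env.step(1)
```

An agent trained against such an environment learns a policy over when and how to intervene, rather than a fixed rule like the always-mitigate loop above.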
The environment also draws on the MITRE ATT&CK framework developed by the MITRE Corporation, incorporating 7 tactics and 15 techniques deployed by three distinct adversaries. Defenders are equipped with 23 mitigation measures to try to halt or block an attack's progress.
The stages of an attack include tactics such as reconnaissance, execution, persistence, defense evasion, command and control, collection, and exfiltration (the transfer of data out of the system). If the adversary successfully reaches the final exfiltration stage, the attack is recorded as a win.
"Our algorithm operates in a competitive environment: a contest with an adversary intent on damaging the system," Chatterjee said. "It is a multi-stage attack in which the adversary can pursue multiple attack paths, and those paths may change over time as the adversary attempts to move from reconnaissance to exploitation. Our challenge is to show how defenses based on deep reinforcement learning can stop such an attack."

DQN outperforms other methods

The team trained defensive agents based on four deep reinforcement learning algorithms: DQN (Deep Q-Network) and three of its variants. The agents were trained on simulated cyberattack data and then tested against attacks they had not observed during training.
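DQN's two core ingredients, a parameterized Q-function in place of a lookup table and an experience replay buffer sampled in minibatches, can be sketched compactly. This is a toy illustration, not the paper's implementation: the state/action counts, reward rule, and linear stand-in for the Q-network are all assumptions.

```python
import random

# Toy sketch of DQN's core ingredients (illustrative only, not the
# paper's implementation): a parameterized Q-function plus an
# experience replay buffer sampled in minibatches.
N_STATES, N_ACTIONS = 4, 2
GAMMA, LR = 0.9, 0.1

# Tiny stand-in for the Q-network: one weight per (state, action) pair.
W = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

replay = []  # experience replay buffer of (s, a, r, s_next, done)

def train_step(batch_size=8):
    batch = random.sample(replay, min(batch_size, len(replay)))
    for s, a, r, s_next, done in batch:
        # TD target: immediate reward plus discounted best future value.
        target = r if done else r + GAMMA * max(W[s_next])
        W[s][a] += LR * (target - W[s][a])  # step toward the target

# Collect toy one-step episodes: mitigating (action 1) in state 3 pays off.
random.seed(0)
for _ in range(200):
    s, a = random.randrange(N_STATES), random.randrange(N_ACTIONS)
    r = 1.0 if (s == 3 and a == 1) else 0.0
    replay.append((s, a, r, 0, True))
    train_step()

# After training, the greedy policy prefers the mitigation in state 3.
```

In a real DQN the weight table `W` is a deep neural network, and a periodically synced target network stabilizes the TD targets; replay sampling breaks the correlation between consecutive transitions.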
DQN performed best:
Low-complexity attacks: DQN stopped 79% of attacks midway through the attack stages and 93% by the final stage.
Moderately complex attacks: DQN stopped 82% of attacks in the middle stages and 95% in the final stage.
The most complex attacks: DQN stopped 57% of attacks in the middle stages and 84% in the final stage, far higher than the other three algorithms.
"The goal is to create an autonomous defense agent that can understand an adversary's most likely next move, plan for it, and then react in the best possible way to protect the system," Chatterjee said.
Despite this progress, no one is ready to entrust cyber defense entirely to AI systems. Instead, DRL-based cybersecurity systems will need to work in concert with humans, said co-author Arnab Bhattacharya, formerly of PNNL. "AI is good at defending against specific strategies but less adept at understanding every approach an adversary might take. We are nowhere near the stage where AI can replace human cyber analysts. Human feedback and guidance remain important."
In addition to Chatterjee and Bhattacharya, authors of the workshop paper include PNNL's Mahantesh Halappanavar and former PNNL scientist Ashutosh Dutta. The work was funded by the Department of Energy's Office of Science, and some of the early work that motivated this research was funded by PNNL's Mathematics for Artificial Reasoning in Science initiative through the Laboratory Directed Research and Development program.