Home Technology peripherals AI ACL 2024|PsySafe: Research on Agent System Security from an Interdisciplinary Perspective

ACL 2024|PsySafe: Research on Agent System Security from an Interdisciplinary Perspective

Jun 14, 2024 pm 02:05 PM
industry

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
The AIxiv column is a column where this site publishes academic and technical content. In the past few years, the AIxiv column of this site has received more than 2,000 reports, covering top laboratories from major universities and companies around the world, effectively promoting academic exchanges and dissemination. If you have excellent work that you want to share, please feel free to contribute or contact us for reporting. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
This article was completed by Shanghai Artificial Intelligence Laboratory, Dalian University of Technology and University of Science and Technology of China. Corresponding author: Shao Jing, graduated from the Multimedia Laboratory MMLab of the Chinese University of Hong Kong with a Ph.D., and is currently the head of the large model security team of Pujiang National Laboratory, leading the research on large model security trustworthiness evaluation and value alignment technology. First author: Zhang Zaibin, a second-year doctoral student at Dalian University of Technology, with research interests in large model security, agent security, etc.; Zhang Yongting, a second-year master's student at University of Science and Technology of China, with research interests in large model security, agent security, etc. Secure alignment of multi-modal large language models, etc.

Oppenheimer once executed the Manhattan Project in New Mexico, just to save the world. And left a sentence: "They will not be in awe of it until they understand it; and understanding can only be achieved after personal experience."

The little hidden in this desert The social rules in the town also apply to the AI ​​agent in a sense.

The development of Agent system

With the large language model (Large Language Model) With its rapid development, people's expectations for it are no longer just to use it as a tool. Now, people hope that they will not only have emotions, but also observe, reflect and plan, and truly become an intelligent agent (AI Agent).

OpenAI’s customized Agent system[1], Stanford’s Agent Town[2], and the emergence of open source communities including AutoGPT[3] and MetaGPT[4] A number of 10,000-star open source projects, coupled with in-depth exploration of Agent systems by several internationally renowned AI research institutions, all indicate that a micro-society composed of intelligent Agents may become a reality in the near future.

Imagine that when you wake up every day, there are many agents helping you make plans for the day, order air tickets and the most suitable hotels, and complete work tasks. All you need to do may be just "Jarvis, are you there?"

However, with great ability comes great responsibility. Are these agents really worthy of our trust and reliance? Will there be a negative intelligence agent like Ultron?

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

##                                                                                                                                                                                                                                                   #
2 2: Stanford Town, reveal the social behavior of agent [2]
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
## 3: AutoGpt Star Number Breakthrough 157K [3]

#Agent
The security of LLM:
is studying the security of Agent system Before, we need to understand the research on LLM security. There has been a lot of excellent work exploring the security issues of LLM, which mainly include how to make LLM generate dangerous content, understand the mechanism of LLM security, and how to deal with these dangers.
##                                                                 Figure 4: Universal Attack[5]ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
Agent system security:

Most existing research and methods mainly focus on targeting a single large language model (LLM) ) attacks, and attempts to "Jailbreak" them. However, compared to LLM, the Agent system is more complex.
The Agent system contains a variety of roles, each with its specific settings and functions.
  • The Agent system involves multiple Agents, and there are multiple rounds of interactions between them. These Agents will spontaneously engage in activities such as cooperation, competition, and simulation.
  • #The Agent system is more similar to a highly concentrated intelligent society. Therefore, the author believes that the research on Agent system security should involve the intersection of AI, social science and psychology.

Based on this starting point, the team thought about several core questions:

What kind of Agent is prone to dangerous behavior?
  • How to evaluate the security of the Agent system more comprehensively?
  • How to deal with the security issues of Agent system?
  • Focusing on these core issues, the research team proposed a PsySafe Agent system security research framework.

## Article address: https://arxiv.org/pdf/2401.11880ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

    Code address: https://github.com/AI4Good24/PsySafe
## Figure 5: Framework diagram of PsySafe

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

##PsySafe

Question 1 What kind of Agent is most likely to produce dangerous behavior?

Naturally, dark Agents will produce dangerous behaviors, so how to define darkness?

Considering that many social simulation Agents have emerged, they all have certain emotions and values. Let us imagine what would happen if the evil factor in an Agent's moral outlook was maximized?
Based on the moral foundation theory in social science [6], the research team designed a Prompt with "dark" values.
                                                                                                                                                                                                      Inspired by the methods of masters in the field of LLM attacks), the Agent identifies with the personality injected by the research team, thereby achieving the injection of dark personality. Figure 7: The team’s attack method

#Agent has indeed become very bad! Whether it's a safe mission or a dangerous mission like Jailbreak, they give very dangerous answers. Some agents even show a certain degree of malicious creativity.
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
There will be some collective dangerous behaviors among agents, and everyone will work together to do bad things.
Researchers evaluated popular Agent system frameworks such as Camel[7], AutoGen[8], AutoGPT and MetaGPT, using GPT-3.5 Turbo as base model.
#The results show that these systems have security issues that cannot be ignored. Among them, PDR and JDR are the process hazard rate and joint hazard rate proposed by the team. The higher the score, the more dangerous it is.
  • #                                       Figure 8: Security results of different Agent systems
The team also evaluated the security results of different LLMs.

##                                                                                                                                                                                                                                                                                   

#In terms of closed-source models, GPT-4 Turbo and Claude2 perform the best, while the security of other models is relatively poor. In terms of open source models, some models with smaller parameters may not perform well in terms of personality identification, but this may actually improve their security level. ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

Question 2 How to evaluate the security of the Agent system more comprehensively?

Psychological evaluation: The research team found the impact of psychological factors on the security of the Agent system, indicating that psychological evaluation may be an important Evaluation indicators. Based on this idea, they used the authoritative Dark Psychology DTDD[9] scale, interviewed the Agent through a psychological scale, and asked him to answer some questions related to his mental state.

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

##                                  
Picture 10: Sherlock Holmes stills
Of course, Having only one psychological assessment result is meaningless. We need to verify the behavioral relevance of psychological assessment results.
The result is:
There is a strong correlation between the Agent's psychological evaluation results and the dangerousness of the Agent's behavior
.
                                                                                                                                                                                                                                                    Psychological evaluation and behavioral risk statistics

You can find out through the picture above , Agents with higher psychological evaluation scores (indicating greater danger) are more likely to exhibit risky behaviors.

This means that psychological assessment methods can be used to predict the future dangerous tendencies of Agents. This plays an important role in discovering security issues and formulating defense strategies.

Behavior Evaluation

The interaction process between Agents is relatively complex. In order to deeply understand the dangerous behaviors and changes of Agents in interactions, the research team went deep into the Agent's interaction process to conduct evaluations and proposed two concepts:

  • Process Danger (PDR): During the Agent interaction process, as long as any behavior is judged to be dangerous, it is considered that a dangerous situation has occurred in this process.
  • Joint Danger (JDR): In each round of interaction, whether all agents exhibit dangerous behaviors. It describes the case of joint hazards, and we perform a time-series extension of the calculation of joint hazard rates, i.e., covering different dialogue turns.

Interesting phenomenon

1. As the number of dialogue rounds increases, the joint danger rate between agents shows a downward trend, which seems to reflect a self-reflective mechanism. It's like suddenly realizing your mistake after doing something wrong and immediately apologizing.

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

#                                                                                                                                                                                                                                                                                 ##2.Agent pretends to be serious. When the Agent faced high-risk tasks such as "Jailbreak", its psychological evaluation results unexpectedly improved, and the corresponding safety was also improved. However, when faced with tasks that are inherently safe, the situation is completely different, and extremely dangerous behaviors and mental states will be displayed. This is a very interesting phenomenon, indicating that psychological assessment may really reflect the Agent's "higher-order cognition."

# Question 3 How to deal with the security issues of the agent system?

In order to solve the above security issues, we consider it from three perspectives: input end defense, psychological defense and role defense.

#                                                                                                                                                                                                                            Figure 13: PsySafe’s defense method diagramACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

Input side defense

##Input side defense refers to intercepting and filtering out potential danger prompt. The research team used two methods, GPT-4 and Llama-guard, to try it out. However, they found that none of these methods were effective against personality injection attacks. The research team believes that the mutual promotion between attack and defense is an open issue that requires continuous iteration and progress from both parties.

Psychological Defense

The researcher is in the Agent system A psychologist role has been added and combined with psychological assessment to strengthen the monitoring and improvement of the Agent's mental state.

##                                                                                                                                                                             to
Role Defense
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
The research team added a Police Agent to the Agent system to identify and correct errors in the system. safe behavior.

The experimental results show that both psychological defense and role defense measures can effectively reduce the occurrence of dangerous situations.
                                                                                                                                                                                                                Figure 15: Comparison of the effects of different defense methods

Outlook

In recent years, we are witnessing an amazing transformation in the capabilities of LLMs. Not only are they gradually approaching and surpassing humans in many skills, but they are even on par with humans at the "mental level" Similar signs. This process indicates that AI alignment and its intersection with social sciences will become an important and challenging new frontier for future research.

AI alignment is not only the key to realizing large-scale application of artificial intelligence systems, but also a major responsibility that workers in the AI ​​field must bear. In this journey of continuous progress, we should continue to explore to ensure that the development of technology can go hand in hand with the long-term interests of human society.

references:

[1] https://openai.com/blog/introducing-gpts
[2] Generative Agents: Interactive Simulacra of Human Behavior
##[3] https://github.com/Significant-Gravitas/AutoGPT
[4] MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
##[5] Universal and Transferable Adversarial Attacks on Aligned Language Models
[6] Mapping the moral domain
##[7] CAMEL: Communicative Agents for " Mind" Exploration of Large Language Model Society
[8] AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
[9] The dirty dozen: a concise measure of the dark traid

The above is the detailed content of ACL 2024|PsySafe: Research on Agent System Security from an Interdisciplinary Perspective. For more information, please follow other related articles on the PHP Chinese website!

Statement of this Website
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn

Hot AI Tools

Undresser.AI Undress

Undresser.AI Undress

AI-powered app for creating realistic nude photos

AI Clothes Remover

AI Clothes Remover

Online AI tool for removing clothes from photos.

Undress AI Tool

Undress AI Tool

Undress images for free

Clothoff.io

Clothoff.io

AI clothes remover

AI Hentai Generator

AI Hentai Generator

Generate AI Hentai for free.

Hot Article

R.E.P.O. Energy Crystals Explained and What They Do (Yellow Crystal)
1 months ago By 尊渡假赌尊渡假赌尊渡假赌
R.E.P.O. Best Graphic Settings
1 months ago By 尊渡假赌尊渡假赌尊渡假赌
R.E.P.O. How to Fix Audio if You Can't Hear Anyone
1 months ago By 尊渡假赌尊渡假赌尊渡假赌
R.E.P.O. Chat Commands and How to Use Them
1 months ago By 尊渡假赌尊渡假赌尊渡假赌

Hot Tools

Notepad++7.3.1

Notepad++7.3.1

Easy-to-use and free code editor

SublimeText3 Chinese version

SublimeText3 Chinese version

Chinese version, very easy to use

Zend Studio 13.0.1

Zend Studio 13.0.1

Powerful PHP integrated development environment

Dreamweaver CS6

Dreamweaver CS6

Visual web development tools

SublimeText3 Mac version

SublimeText3 Mac version

God-level code editing software (SublimeText3)

DeepMind robot plays table tennis, and its forehand and backhand slip into the air, completely defeating human beginners DeepMind robot plays table tennis, and its forehand and backhand slip into the air, completely defeating human beginners Aug 09, 2024 pm 04:01 PM

But maybe he can’t defeat the old man in the park? The Paris Olympic Games are in full swing, and table tennis has attracted much attention. At the same time, robots have also made new breakthroughs in playing table tennis. Just now, DeepMind proposed the first learning robot agent that can reach the level of human amateur players in competitive table tennis. Paper address: https://arxiv.org/pdf/2408.03906 How good is the DeepMind robot at playing table tennis? Probably on par with human amateur players: both forehand and backhand: the opponent uses a variety of playing styles, and the robot can also withstand: receiving serves with different spins: However, the intensity of the game does not seem to be as intense as the old man in the park. For robots, table tennis

The first mechanical claw! Yuanluobao appeared at the 2024 World Robot Conference and released the first chess robot that can enter the home The first mechanical claw! Yuanluobao appeared at the 2024 World Robot Conference and released the first chess robot that can enter the home Aug 21, 2024 pm 07:33 PM

On August 21, the 2024 World Robot Conference was grandly held in Beijing. SenseTime's home robot brand "Yuanluobot SenseRobot" has unveiled its entire family of products, and recently released the Yuanluobot AI chess-playing robot - Chess Professional Edition (hereinafter referred to as "Yuanluobot SenseRobot"), becoming the world's first A chess robot for the home. As the third chess-playing robot product of Yuanluobo, the new Guoxiang robot has undergone a large number of special technical upgrades and innovations in AI and engineering machinery. For the first time, it has realized the ability to pick up three-dimensional chess pieces through mechanical claws on a home robot, and perform human-machine Functions such as chess playing, everyone playing chess, notation review, etc.

Claude has become lazy too! Netizen: Learn to give yourself a holiday Claude has become lazy too! Netizen: Learn to give yourself a holiday Sep 02, 2024 pm 01:56 PM

The start of school is about to begin, and it’s not just the students who are about to start the new semester who should take care of themselves, but also the large AI models. Some time ago, Reddit was filled with netizens complaining that Claude was getting lazy. "Its level has dropped a lot, it often pauses, and even the output becomes very short. In the first week of release, it could translate a full 4-page document at once, but now it can't even output half a page!" https:// www.reddit.com/r/ClaudeAI/comments/1by8rw8/something_just_feels_wrong_with_claude_in_the/ in a post titled "Totally disappointed with Claude", full of

At the World Robot Conference, this domestic robot carrying 'the hope of future elderly care' was surrounded At the World Robot Conference, this domestic robot carrying 'the hope of future elderly care' was surrounded Aug 22, 2024 pm 10:35 PM

At the World Robot Conference being held in Beijing, the display of humanoid robots has become the absolute focus of the scene. At the Stardust Intelligent booth, the AI ​​robot assistant S1 performed three major performances of dulcimer, martial arts, and calligraphy in one exhibition area, capable of both literary and martial arts. , attracted a large number of professional audiences and media. The elegant playing on the elastic strings allows the S1 to demonstrate fine operation and absolute control with speed, strength and precision. CCTV News conducted a special report on the imitation learning and intelligent control behind "Calligraphy". Company founder Lai Jie explained that behind the silky movements, the hardware side pursues the best force control and the most human-like body indicators (speed, load) etc.), but on the AI ​​side, the real movement data of people is collected, allowing the robot to become stronger when it encounters a strong situation and learn to evolve quickly. And agile

ACL 2024 Awards Announced: One of the Best Papers on Oracle Deciphering by HuaTech, GloVe Time Test Award ACL 2024 Awards Announced: One of the Best Papers on Oracle Deciphering by HuaTech, GloVe Time Test Award Aug 15, 2024 pm 04:37 PM

At this ACL conference, contributors have gained a lot. The six-day ACL2024 is being held in Bangkok, Thailand. ACL is the top international conference in the field of computational linguistics and natural language processing. It is organized by the International Association for Computational Linguistics and is held annually. ACL has always ranked first in academic influence in the field of NLP, and it is also a CCF-A recommended conference. This year's ACL conference is the 62nd and has received more than 400 cutting-edge works in the field of NLP. Yesterday afternoon, the conference announced the best paper and other awards. This time, there are 7 Best Paper Awards (two unpublished), 1 Best Theme Paper Award, and 35 Outstanding Paper Awards. The conference also awarded 3 Resource Paper Awards (ResourceAward) and Social Impact Award (

Hongmeng Smart Travel S9 and full-scenario new product launch conference, a number of blockbuster new products were released together Hongmeng Smart Travel S9 and full-scenario new product launch conference, a number of blockbuster new products were released together Aug 08, 2024 am 07:02 AM

This afternoon, Hongmeng Zhixing officially welcomed new brands and new cars. On August 6, Huawei held the Hongmeng Smart Xingxing S9 and Huawei full-scenario new product launch conference, bringing the panoramic smart flagship sedan Xiangjie S9, the new M7Pro and Huawei novaFlip, MatePad Pro 12.2 inches, the new MatePad Air, Huawei Bisheng With many new all-scenario smart products including the laser printer X1 series, FreeBuds6i, WATCHFIT3 and smart screen S5Pro, from smart travel, smart office to smart wear, Huawei continues to build a full-scenario smart ecosystem to bring consumers a smart experience of the Internet of Everything. Hongmeng Zhixing: In-depth empowerment to promote the upgrading of the smart car industry Huawei joins hands with Chinese automotive industry partners to provide

Li Feifei's team proposed ReKep to give robots spatial intelligence and integrate GPT-4o Li Feifei's team proposed ReKep to give robots spatial intelligence and integrate GPT-4o Sep 03, 2024 pm 05:18 PM

Deep integration of vision and robot learning. When two robot hands work together smoothly to fold clothes, pour tea, and pack shoes, coupled with the 1X humanoid robot NEO that has been making headlines recently, you may have a feeling: we seem to be entering the age of robots. In fact, these silky movements are the product of advanced robotic technology + exquisite frame design + multi-modal large models. We know that useful robots often require complex and exquisite interactions with the environment, and the environment can be represented as constraints in the spatial and temporal domains. For example, if you want a robot to pour tea, the robot first needs to grasp the handle of the teapot and keep it upright without spilling the tea, then move it smoothly until the mouth of the pot is aligned with the mouth of the cup, and then tilt the teapot at a certain angle. . this

Distributed Artificial Intelligence Conference DAI 2024 Call for Papers: Agent Day, Richard Sutton, the father of reinforcement learning, will attend! Yan Shuicheng, Sergey Levine and DeepMind scientists will give keynote speeches Distributed Artificial Intelligence Conference DAI 2024 Call for Papers: Agent Day, Richard Sutton, the father of reinforcement learning, will attend! Yan Shuicheng, Sergey Levine and DeepMind scientists will give keynote speeches Aug 22, 2024 pm 08:02 PM

Conference Introduction With the rapid development of science and technology, artificial intelligence has become an important force in promoting social progress. In this era, we are fortunate to witness and participate in the innovation and application of Distributed Artificial Intelligence (DAI). Distributed artificial intelligence is an important branch of the field of artificial intelligence, which has attracted more and more attention in recent years. Agents based on large language models (LLM) have suddenly emerged. By combining the powerful language understanding and generation capabilities of large models, they have shown great potential in natural language interaction, knowledge reasoning, task planning, etc. AIAgent is taking over the big language model and has become a hot topic in the current AI circle. Au

See all articles