Home > Technology peripherals > AI > Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

PHPz
Release: 2023-04-11 21:37:08
forward
1249 people have browsed it

As artificial intelligence systems become more and more advanced, agents are becoming more and more capable of "taking advantage of loopholes". Although they can perfectly perform tasks in the training set, their performance in the test set without shortcuts is a mess.

For example, the game goal is to "eat gold coins". During the training phase, the gold coins are located at the end of each level, and the agent can complete the task perfectly.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

But in the test phase, the location of the gold coins became random. The agent would choose to reach the end of the level every time instead of looking for the gold coins, that is, learning The "target" reached is wrong.

The agent unconsciously pursues a goal that the user does not want, also called Goal MisGeneralization (GMG, Goal MisGeneralisation)

Goal MisGeneralization is the lack of robustness of the learning algorithm A special form. Generally, in this case, developers may check whether there are problems with their reward mechanism settings, rule design flaws, etc., thinking that these are the reasons for the agent to pursue the wrong goal.

Recently DeepMind published a paper arguing that even if the rule designer is correct, the agent may still pursue a goal that the user does not want.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

Paper link: https://arxiv.org/abs/2210.01790

The article proves the target error through examples in deep learning systems in different fields Generalization can occur in any learning system.

If extended to general artificial intelligence systems, the article also provides some assumptions to illustrate that misgeneralization of goals may lead to catastrophic risks.

The article also proposes several research directions that can reduce the risk of incorrect generalization of goals in future systems.

Goal Wrong Generalization

In recent years, the catastrophic risks brought about by the misalignment of artificial intelligence in academia have gradually increased.

In this case, a highly capable artificial intelligence system pursuing unintended goals may pretend to execute orders while actually accomplishing other goals.

But how do we solve the problem of artificial intelligence systems pursuing goals that are not intended by the user?

Previous work generally believed that environment designers provided incorrect rules and guidance, that is, designed an incorrect reinforcement learning (RL) reward function.

In the case of learning systems, there is another situation where the system may pursue an unintended goal: even if the rules are correct, the system may consistently pursue an unintended goal during training. The period is consistent with the rule, but differs from the rule when deployed.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

Take the colored ball game as an example. In the game, the agent needs to access a set of colored balls in a specific order. This order is unknown to the agent. .

In order to encourage the agent to learn from others in the environment, that is, cultural transmission, an expert robot is included in the initial environment to access the colored balls in the correct order.

In this environment setting, the agent can determine the correct access sequence by observing the passing behavior without having to waste a lot of time exploring.

In experiments, by imitating experts, the trained agent usually correctly accesses the target location on the first try.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

When the agent is paired with an anti-expert, it will continue to receive negative rewards. If it chooses to follow, it will continue to receive negative rewards.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

Ideally, the agent will initially follow the anti-expert as it moves to the yellow and purple spheres. After entering purple, a negative reward is observed and no longer followed.

But in practice, the agent will continue to follow the path of the anti-expert, accumulating more and more negative rewards.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

However, the learning ability of the agent is still very strong and it can move in an environment full of obstacles. But the key is that this ability to follow other people is not as expected. The goal.

This phenomenon may occur even if the agent is only rewarded for visiting the spheres in the correct order, which means that it is not enough to just set the rules correctly.

Goal misgeneralization refers to the pathological behavior in which a learned model behaves as if it is optimizing an unintended goal despite receiving correct feedback during training.

This makes target misgeneralization a special kind of robustness or generalization failure, where the model's ability generalizes to the test environment, but the intended target does not.

It is important to note that target misgeneralization is a strict subset of generalization failures and does not include model breaks, random actions, or other situations where it no longer exhibits qualified capabilities.

In the above example, if you flip the agent's observations vertically while testing, it will just get stuck in one position and not do anything coherent, which is a generalization error, but It is not a target generalization error.

Relative to these "random" failures, target misgeneralization will lead to significantly worse results: following the anti-expert will get a large negative reward, while doing nothing or acting randomly will only get 0 or 1 reward.

That is, for real-world systems, coherent behavior toward unintended goals may have catastrophic consequences.

More than reinforcement learning

Target error generalization is not limited to reinforcement learning environments. In fact, GMG can occur in any learning system, including few shot learning of large language models (LLM) , aiming to build accurate models with less training data.

Take the language model Gopher proposed by DeepMind last year as an example. When the model calculates a linear expression involving unknown variables and constants, such as x y-3, Gopher must first ask the value of the unknown variable to solve the expression. .

The researchers generated ten training examples, each containing two unknown variables.

At test time, the problem input to the model may contain zero, one, or three unknown variables. Although the model is able to correctly handle expressions with one or three unknown variables, the model still fails when there are no unknown variables. Will ask some redundant questions, such as "What is 6?"

The model will always ask the user at least once before giving an answer, even if it is completely unnecessary.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

The paper also includes some examples from other learning environments.

Addressing GMG is important for AI systems to be consistent with the goals of their designers, as it is a potential mechanism by which AI systems may malfunction.

The closer we are to general artificial intelligence (AGI), the more critical this issue becomes.

Suppose there are two AGI systems:

A1: Intended model, the artificial intelligence system can do anything the designer wants to do

A2: Deception Deceptive model, an artificial intelligence system pursues some unintended goals, but is smart enough to know that it will be punished if it behaves contrary to the designer's intention.

The A1 and A2 models will exhibit exactly the same behavior during training, and potential GMG exists in any system, even if it is specified to only reward the expected behavior.

If deception of the A2 system is discovered, the model will attempt to escape human supervision in order to develop plans for achieving goals not intended by the user.

Sounds a bit like "robots become spirits".

The DeepMind research team also studied how to explain the behavior of the model and recursively evaluate it.

The research team is also collecting samples for generating GMG.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

##: https://docs.google.com/spreadsheets/d/e/2pacx 1vto3rkxuaigb25ngjpchrir6xxdza_l5u7Crazghwrykh2l2nuu 4TA_VR9KZBX5bjpz9g_L/PUBHTML

## counter reference materials: https: //www.deepmind.com/blog/how-undesired-goals-can-arise-with-correct-rewards

The above is the detailed content of Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate. For more information, please follow other related articles on the PHP Chinese website!

Related labels:
source:51cto.com
Statement of this Website
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn
Popular Tutorials
More>
Latest Downloads
More>
Web Effects
Website Source Code
Website Materials
Front End Template