In today's sequence modeling tasks, the Transformer is arguably the most powerful neural network architecture, and a pre-trained Transformer model can use prompts as conditions, or in-context learning, to adapt to different downstream tasks.
The generalization ability of large-scale pre-trained Transformer models has been verified in multiple domains, such as text completion, language understanding, and image generation.
Since last year, work has shown that by treating offline reinforcement learning (offline RL) as a sequence prediction problem, a model can learn policies from offline data.
But current methods either learn a policy from data that contains no learning progress (such as a fixed expert policy obtained by distillation), or learn from data that does contain learning (such as an RL agent's replay buffer) but with a context too small to capture policy improvement.
DeepMind researchers observed that, in principle, the sequential nature of learning during the training of a reinforcement learning algorithm means that the RL learning process itself can be modeled as a "causal sequence prediction problem".
Specifically, if a Transformer's context is long enough to include the policy improvements brought about by learning updates, then it should be able not only to represent a fixed policy, but also to act as a policy improvement operator by attending to the states, actions, and rewards of previous episodes.
This also establishes a technical feasibility: any RL algorithm can be distilled into a sufficiently powerful sequence model via imitation learning and turned into an in-context RL algorithm.
Based on this, DeepMind proposed Algorithm Distillation (AD), which distills reinforcement learning algorithms into a neural network via causal sequence modeling.
## Paper link: https://arxiv.org/pdf/2210.14215.pdf
Algorithm Distillation treats learning to reinforcement-learn as a cross-episode sequence prediction problem: it generates a dataset of learning histories with a source RL algorithm, and then trains a causal Transformer to autoregressively predict actions with the learning history as context.
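To make the "learning history as context" idea concrete, here is a minimal, hypothetical sketch of how a learning history could be sliced into autoregressive training examples. The function name `make_ad_examples` and the toy triples are illustrative, not from the paper.

```python
# Hypothetical sketch: turning an RL learning history into autoregressive
# training examples for Algorithm Distillation. Each example asks the model
# to predict an action from the cross-episodic tokens that precede it.

def make_ad_examples(history, context_len):
    """history: list of (obs, action, reward) triples, ordered across
    the episodes of a source RL algorithm's training run."""
    examples = []
    for t in range(1, len(history)):
        start = max(0, t - context_len)
        context = history[start:t]   # cross-episodic context window
        target = history[t][1]       # the action taken at step t
        examples.append((context, target))
    return examples

# A toy two-episode history (two steps per episode) from a fictional source run.
hist = [(0, 1, 0.0), (1, 0, 1.0),   # episode 1
        (0, 1, 0.0), (1, 1, 1.0)]   # episode 2
pairs = make_ad_examples(hist, context_len=3)
```

In a real implementation these (context, target) pairs would be tokenized and fed to a causal Transformer trained with a cross-entropy loss on the target actions.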
Unlike architectures that predict policies from post-learning or expert sequences, AD can improve its policy entirely in context, without updating its network parameters.
The experimental results show that AD can perform reinforcement learning in a variety of environments with sparse rewards, combinatorial task structure, and pixel-based observations, and that AD learns more data-efficiently than the RL algorithm that generated the source data.
AD is also the first method to demonstrate in-context reinforcement learning by sequence-modeling offline data with an imitation loss.
## Algorithm Distillation

In 2021, researchers first discovered that a Transformer can learn single-task policies from offline RL data through imitation learning; this was later extended to extracting multi-task policies in both same-domain and cross-domain settings.
These works suggest a promising paradigm for extracting general multi-task policies: first collect a large and diverse dataset of environment interactions, then extract a policy from the data through sequence modeling.
The method of learning policies from offline RL data through imitation learning is also called offline policy distillation, or simply Policy Distillation (Policy Distillation, PD).
Although the idea behind PD is simple and easy to scale, it has a major flaw: the resulting policy does not improve from additional interaction with the environment.
For example, the Multi-Game Decision Transformer (MGDT) learned a return-conditioned policy that can play a large number of Atari games, and Gato learned a policy for solving tasks in different environments, but neither approach can improve its policy through trial and error.
MGDT adapts to new tasks by fine-tuning the model's weights, while Gato requires expert demonstration prompts to adapt to new tasks.
In short, the Policy Distillation method learns policies rather than reinforcement learning algorithms.
The researchers hypothesized that the reason Policy Distillation cannot improve through trial and error is that it is trained on data that does not show learning progress.
Algorithm Distillation (AD) is a method for learning an intrinsic policy improvement operator by optimizing a causal sequence prediction loss over the learning histories of an RL algorithm.
AD consists of two components:
1. Generating a large multi-task dataset by saving the training histories of an RL algorithm on many separate tasks;
2. Training a Transformer that models actions causally, using the preceding learning history as its context.
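The first component can be illustrated with a toy sketch: a simple source algorithm (here, a hypothetical epsilon-greedy Q-learner on a multi-armed bandit, standing in for the paper's actual source algorithms) is run on several tasks and its full training history is saved. All names and hyperparameters below are illustrative.

```python
import random

# Toy stand-in for AD's first component: save the complete training history
# of a simple source RL algorithm across many tasks.

def run_source_algorithm(goal_arm, n_arms=5, steps=200, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0] * n_arms
    history = []                       # the learning history we keep
    for _ in range(steps):
        if rng.random() < eps:         # explore
            a = rng.randrange(n_arms)
        else:                          # exploit current value estimates
            a = max(range(n_arms), key=lambda i: q[i])
        r = 1.0 if a == goal_arm else 0.0
        q[a] += 0.1 * (r - q[a])       # incremental Q update
        history.append((a, r))
    return history

# One history per task yields the multi-task dataset.
dataset = {task: run_source_algorithm(goal_arm=task, seed=task)
           for task in range(3)}
```

Crucially, the histories are kept in training order, so early (bad) actions and late (good) actions both appear, and the improvement between them is what the Transformer will learn to model.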
Because the policy keeps improving throughout the source RL algorithm's training, AD has to learn an improvement operator in order to accurately model the actions at any given point in the training history.
Crucially, the Transformer's context must be large enough (i.e., span across episodes) to capture the improvement in the training data.
In the experiments, to probe AD's in-context RL capabilities, the researchers focused on environments that cannot be solved by zero-shot generalization after pre-training: each environment must support multiple tasks, the task's solution cannot be easily inferred from observations alone, and episodes must be short enough that a causal Transformer can be trained across episodes.
As the experimental results on the four environments (Adversarial Bandit, Dark Room, Dark Key-to-Door, and DMLab Watermaze) show, by imitating gradient-based RL algorithms with a causal Transformer whose context is large enough, AD can reinforcement-learn new tasks entirely in context.
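To give a flavor of these sparse-reward environments, here is a minimal, hypothetical 1-D analogue of the Dark Room setting: the agent moves along a line and receives reward only at a hidden goal cell. This is a toy illustration of the task family, not the paper's actual environment code.

```python
# A toy Dark Room-style environment: reward is sparse and given only at a
# hidden goal position, so the task cannot be inferred from observations alone.

class DarkRoom1D:
    def __init__(self, size, goal):
        self.size, self.goal = size, goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):            # action: -1 (left) or +1 (right)
        self.pos = min(self.size - 1, max(0, self.pos + action))
        reward = 1.0 if self.pos == self.goal else 0.0
        return self.pos, reward

env = DarkRoom1D(size=5, goal=3)
obs = env.reset()
rewards = [env.step(+1)[1] for _ in range(4)]   # walk right past the goal
```

Because the goal is hidden, an agent (or an in-context learner) must explore to locate it, which is exactly the behavior AD has to reproduce from the learning histories.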
AD can perform in-context exploration, temporal credit assignment, and generalization, and the algorithm AD learns is more data-efficient than the source algorithm that generated the Transformer's training data.
## PPT Explanation

To make the paper easier to understand, Michael Laskin, one of its authors, published a slide explanation on Twitter.
Experiments on Algorithm Distillation show that the Transformer can improve its model through trial and error on its own, without weight updates, prompts, or fine-tuning. A single Transformer can collect its own data and maximize reward on new tasks.
Although many successful models have shown how a Transformer can learn in context, the Transformer had not yet been shown to reinforcement-learn in context.
To adapt to new tasks, developers either need to manually specify a prompt or need to adjust the model.
Wouldn't it be great if the Transformer could adapt via reinforcement learning, out of the box?
But Decision Transformers and Gato can only learn policies from offline data; they cannot automatically improve through trial and error.
A Transformer pre-trained with Algorithm Distillation (AD) can perform reinforcement learning in context.
First train multiple copies of a reinforcement learning algorithm to solve different tasks and save the learning history.
Once the learning-history dataset is collected, a Transformer can be trained to predict the actions in the prior learning history.
Since policies have improved historically, accurately predicting actions will force the Transformer to model policy improvements.
The whole process is that simple. The Transformer is trained purely by imitating actions: there are no Q-values as in common RL models, no long operation-action-reward sequences, and no return conditioning as in Decision Transformers.
In-context reinforcement learning adds no extra overhead; the model is then evaluated by whether AD can maximize reward on new tasks.
While Transformer explores, exploits, and maximizes returns in the context, its weights are frozen!
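The frozen-weight evaluation loop can be sketched as follows. Here `policy` stands in for the trained model, and `toy_policy`, `ToyBandit`, and all parameter values are hypothetical placeholders used only to show the shape of the loop: no gradient step ever runs, and the only thing that changes across episodes is the context.

```python
# Hedged sketch of AD's evaluation loop: weights are frozen, and "learning"
# happens only by conditioning on a growing cross-episodic context.

def evaluate_in_context(policy, env, n_episodes, steps_per_episode):
    context = []                            # grows across episodes; no updates
    returns = []
    for _ in range(n_episodes):
        obs, total = env.reset(), 0.0
        for _ in range(steps_per_episode):
            action = policy(context, obs)   # frozen model reads the context
            obs, reward = env.step(action)
            context.append((obs, action, reward))
            total += reward
        returns.append(total)
    return returns

class ToyBandit:                            # trivial stand-in environment
    def __init__(self, good_arm): self.good = good_arm
    def reset(self): return 0
    def step(self, a): return 0, (1.0 if a == self.good else 0.0)

def toy_policy(context, obs):
    # stand-in "in-context learner": repeat the last rewarded action,
    # otherwise cycle through the arms
    for o, a, r in reversed(context):
        if r > 0:
            return a
    return len(context) % 3

env = ToyBandit(good_arm=2)
rets = evaluate_in_context(toy_policy, env, n_episodes=3, steps_per_episode=3)
```

Even with this crude stand-in policy, the per-episode return rises across episodes purely because the context accumulates experience, which is the behavior AD exhibits with a real trained Transformer.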
Expert Distillation (the method most similar to Gato), on the other hand, can neither explore nor maximize returns.
AD can distill any RL algorithm; the researchers tried UCB, DQN, and A2C. An interesting finding is that AD learns in-context RL more data-efficiently than the source algorithms.
Users can also prompt the model with suboptimal demos, and it will automatically improve the policy until it reaches the optimal solution!
Expert Distillation (ED), by contrast, can only maintain the suboptimal demo's performance.
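This contrast can be caricatured in a few lines. The two toy policies below are illustrative stand-ins (not the paper's models): the AD-style one keeps improving past a suboptimal demo prefix, while the ED-style one only imitates the demo's action distribution.

```python
# Illustrative contrast between AD-style and ED-style behavior when the
# context is seeded with a suboptimal demonstration. Purely a toy sketch.

def ad_style(context, arm_count=3):
    # improve: exploit the best rewarded action seen, otherwise explore
    rewarded = [a for (_, a, r) in context if r > 0]
    return rewarded[-1] if rewarded else len(context) % arm_count

def ed_style(context, arm_count=3):
    # imitate: copy the most common action in context, never explore
    actions = [a for (_, a, _) in context]
    return max(set(actions), key=actions.count) if actions else 0

suboptimal_demo = [(0, 1, 0.0), (0, 1, 0.0), (0, 1, 0.0)]  # demo picks arm 1
good_arm = 2                                               # true best arm

def rollout(policy, demo, steps=5):
    context, total = list(demo), 0.0
    for _ in range(steps):
        a = policy(context)
        r = 1.0 if a == good_arm else 0.0
        context.append((0, a, r))
        total += r
    return total
```

Rolling out both policies from the same demo, the AD-style learner discovers the rewarding arm and exploits it, while the ED-style imitator stays stuck on the demo's suboptimal arm.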
In-context RL emerges only when the Transformer's context is long enough to span multiple episodes.
AD needs a sufficiently long history both to identify the task and to perform effective policy improvement.
In summary, the experiments show that AD can reinforcement-learn entirely in context: it explores, assigns credit over time, and improves even suboptimal prompted policies, all more data-efficiently than the source algorithm that generated its training data.
(The above is the detailed content of "Another revolution in reinforcement learning! DeepMind proposes 'Algorithm Distillation': an explorable pre-trained reinforcement learning Transformer", from the PHP Chinese website.)