
Training Large Language Models: From TRPO to GRPO

王林
Release: 2025-02-26 04:41:08

DeepSeek: A Deep Dive into Reinforcement Learning for LLMs

DeepSeek's recent success, achieving impressive performance at lower costs, highlights the importance of Large Language Model (LLM) training methods. This article focuses on the Reinforcement Learning (RL) aspect, exploring TRPO, PPO, and the newer GRPO algorithms. We'll minimize complex math to make it accessible, assuming basic familiarity with machine learning, deep learning, and LLMs.

Three Pillars of LLM Training


LLM training typically involves three key phases:

  1. Pre-training: The model learns to predict the next token in a sequence from preceding tokens using a massive dataset.
  2. Supervised Fine-Tuning (SFT): Targeted data refines the model, aligning it with specific instructions.
  3. Reinforcement Learning from Human Feedback (RLHF): This stage, the focus of this article, further refines responses to better match human preferences through direct feedback.

Reinforcement Learning Fundamentals


Reinforcement learning involves an agent interacting with an environment. The agent exists in a specific state, taking actions to transition to new states. Each action results in a reward from the environment, guiding the agent's future actions. Think of a robot navigating a maze: its position is the state, movements are actions, and reaching the exit provides a positive reward.
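The agent-environment loop above can be sketched in a few lines of Python. This is a toy illustration only (a 1-D corridor instead of a maze, with a made-up `step` function), not anything from a real RL library:

```python
import random

random.seed(0)  # deterministic for the sake of illustration

def step(state, action):
    """Environment: apply an action (-1 or +1) and return (new_state, reward).

    Positions run from 0 to 4; reaching position 4 (the exit) yields reward 1.
    """
    new_state = max(0, min(4, state + action))
    reward = 1.0 if new_state == 4 else 0.0
    return new_state, reward

state = 0
total_reward = 0.0
for _ in range(20):
    action = random.choice([-1, 1])        # a random policy, for illustration
    state, reward = step(state, action)    # environment transitions and rewards
    total_reward += reward
    if reward > 0:                         # stop once the exit is reached
        break
```

RL training would replace the random policy with one that learns, from accumulated rewards, to prefer actions that lead toward the exit.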

RL in LLMs: A Detailed Look


In LLM training, the components are:

  • Agent: The LLM itself.
  • Environment: External factors like user prompts, feedback systems, and contextual information.
  • Actions: The tokens the LLM generates in response to a query.
  • State: The current query and the generated tokens (partial response).
  • Rewards: Usually determined by a separate reward model trained on human-annotated data, ranking responses to assign scores. Higher-quality responses receive higher rewards. Simpler, rule-based rewards are possible in specific cases, such as DeepSeekMath.
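Reward models of the kind described above are commonly trained on ranked response pairs with a Bradley-Terry-style loss; the sketch below (all names illustrative, and the loss choice is an assumption rather than something this article specifies) shows the core idea: the loss shrinks as the preferred response's score pulls ahead of the rejected one's:

```python
import numpy as np

def pairwise_ranking_loss(score_preferred, score_rejected):
    """-log sigmoid(s_w - s_l): Bradley-Terry loss on one ranked pair.

    Written as log1p(exp(-gap)) for numerical stability.
    """
    gap = score_preferred - score_rejected
    return float(np.log1p(np.exp(-gap)))
```

With equal scores the loss is log 2; a larger gap in favor of the preferred response drives it toward zero, which is what pushes the reward model to assign higher scores to higher-quality responses.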

The policy determines which action to take. For an LLM, it's a probability distribution over possible tokens, used to sample the next token. RL training adjusts the policy's parameters (model weights) to favor higher-reward tokens. The policy is often represented as:

$$\pi_\theta(a_t \mid s_t)$$

where $s_t$ is the current state (the query plus the tokens generated so far), $a_t$ is the next token, and $\theta$ are the model's weights.

The core of RL is finding the optimal policy. Unlike supervised learning, we use rewards to guide policy adjustments.
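The sampling step described above can be sketched with NumPy. The "policy head" here is just a random linear map standing in for the LLM's output layer (all names are illustrative): it turns a state embedding into logits, softmax turns the logits into a distribution over tokens, and the next token is sampled from that distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden = 8, 16

# Hypothetical policy head: a linear map from a state embedding to logits.
W = rng.normal(size=(hidden, vocab_size))
state_embedding = rng.normal(size=hidden)     # stands in for the encoded query

logits = state_embedding @ W
probs = np.exp(logits - logits.max())         # numerically stable softmax
probs /= probs.sum()                          # pi_theta(token | state)

next_token = rng.choice(vocab_size, p=probs)  # sample the next token (action)
```

Training adjusts `W` (in a real LLM, all the model weights) so that tokens leading to higher-reward responses get higher probability.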

TRPO (Trust Region Policy Optimization)


TRPO uses an advantage function, analogous to the loss function in supervised learning, but derived from rewards:

$$A(s, a) = Q(s, a) - V(s)$$

where $Q(s, a)$ is the expected cumulative reward of taking action $a$ in state $s$, and $V(s)$ is the expected cumulative reward of state $s$ under the current policy. A positive advantage means the action is better than the policy's average behavior in that state.

TRPO maximizes a surrogate objective, constrained to prevent large policy deviations from the previous iteration, ensuring stability:

$$\max_\theta \; \mathbb{E}\!\left[\frac{\pi_\theta(a \mid s)}{\pi_{\theta_{\text{old}}}(a \mid s)}\, A(s, a)\right] \quad \text{subject to} \quad \mathbb{E}\left[D_{\mathrm{KL}}\big(\pi_{\theta_{\text{old}}}(\cdot \mid s) \,\|\, \pi_\theta(\cdot \mid s)\big)\right] \le \delta$$
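The two quantities TRPO balances, the ratio-weighted advantage and the KL constraint, can be computed directly for a single state with a small categorical policy. This is a sketch of the quantities only, not of TRPO's full optimization procedure (which uses a natural-gradient step); names and numbers are illustrative:

```python
import numpy as np

def surrogate(probs_new, probs_old, action, advantage):
    """Ratio-weighted advantage for one (state, action) sample."""
    return float(probs_new[action] / probs_old[action]) * advantage

def kl_divergence(probs_old, probs_new):
    """KL(pi_old || pi_new) for categorical distributions."""
    return float(np.sum(probs_old * np.log(probs_old / probs_new)))

pi_old = np.array([0.5, 0.3, 0.2])
pi_new = np.array([0.45, 0.35, 0.2])
delta = 0.01                                   # trust-region radius

inside_trust_region = kl_divergence(pi_old, pi_new) <= delta
```

TRPO maximizes the surrogate while keeping the KL divergence below `delta`, so a candidate update like `pi_new` is only acceptable if `inside_trust_region` holds.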

PPO (Proximal Policy Optimization)

PPO, now preferred for LLMs like ChatGPT and Gemini, simplifies TRPO by using a clipped surrogate objective, implicitly limiting policy updates and improving computational efficiency. The PPO objective function is:

$$L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\left[\min\big(r_t(\theta)\, A_t,\; \mathrm{clip}(r_t(\theta),\, 1 - \epsilon,\, 1 + \epsilon)\, A_t\big)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$$
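The clipped objective is short enough to write out directly. A minimal sketch (function and variable names are illustrative), working from per-token log-probabilities as real implementations do:

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate, averaged over tokens.

    logp_new / logp_old: log pi_theta(a_t|s_t) under the new and old policies.
    """
    ratio = np.exp(logp_new - logp_old)                    # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Taking the min means large ratios stop contributing extra gradient,
    # implicitly limiting how far the policy can move in one update.
    return float(np.minimum(unclipped, clipped).mean())
```

When the new and old policies agree, the ratio is 1 and the objective is just the mean advantage; when the ratio strays outside `[1-eps, 1+eps]`, the clipped term takes over and the update is capped.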

GRPO (Group Relative Policy Optimization)


GRPO streamlines training by eliminating the separate value model. For each query, it generates a group of responses and calculates the advantage as a z-score based on their rewards:

$$A_i = \frac{r_i - \mathrm{mean}(\{r_1, \ldots, r_G\})}{\mathrm{std}(\{r_1, \ldots, r_G\})}$$

where $r_1, \ldots, r_G$ are the rewards of the $G$ responses sampled for the same query.
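The group-relative advantage is simple to compute: score each of the G responses with the reward model, then normalize within the group. A minimal sketch (names illustrative; the small `eps` guarding against a zero standard deviation is an implementation assumption):

```python
import numpy as np

def group_advantages(rewards, eps=1e-8):
    """Z-score each reward within its group of G responses to one query."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# e.g. rewards for four responses sampled for the same prompt
adv = group_advantages([0.1, 0.4, 0.9, 0.2])
```

Responses scoring above the group mean get positive advantages and are reinforced; those below get negative advantages, all without training a separate value model.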

This simplifies the process and is well-suited for LLMs' ability to generate multiple responses. GRPO also incorporates a KL divergence term, comparing the current policy to a reference policy. The final GRPO formulation is:

$$J_{\mathrm{GRPO}}(\theta) = \mathbb{E}\left[\frac{1}{G}\sum_{i=1}^{G} \frac{1}{|o_i|}\sum_{t=1}^{|o_i|} \min\big(r_{i,t}(\theta)\, A_i,\; \mathrm{clip}(r_{i,t}(\theta),\, 1-\epsilon,\, 1+\epsilon)\, A_i\big) - \beta\, D_{\mathrm{KL}}\big(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big)\right]$$

where $r_{i,t}(\theta)$ is the token-level probability ratio for response $o_i$, and $\pi_{\mathrm{ref}}$ is the reference policy.

Conclusion

Reinforcement learning, particularly PPO and the newer GRPO, is crucial for modern LLM training. Each method builds upon RL fundamentals, offering different approaches to balance stability, efficiency, and human alignment. DeepSeek's success leverages these advancements, along with other innovations. Reinforcement learning is poised to play an increasingly dominant role in advancing LLM capabilities.


