
Integrating more than 200 related studies, the latest review of the large model 'lifelong learning' is here


The AIxiv column is where this site publishes academic and technical content. Over the past several years, the AIxiv column has received more than 2,000 submissions, covering top laboratories at major universities and companies around the world and effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit or contact us: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

All authors of this paper are members of Professor Ma Qianli's team at South China University of Technology, the Machine Learning and Data Mining Lab. The co-first authors are PhD student Zheng Junhao and master's students Qiu Shengjie and Shi Chengming, whose main research directions are large models and lifelong learning; the corresponding author is Professor Ma Qianli (Associate Editor of IEEE/ACM TASLP). In recent years, Professor Ma's team has published work in authoritative international journals (such as TPAMI) and at top international conferences (such as NeurIPS, AAAI, IJCAI, ACL, KDD, and ICDE), and collaborates extensively with well-known universities and research institutions at home and abroad.

As the application of large language models continues to expand across various fields, how to make these models continuously adapt to changes in data, tasks, and user preferences has become a key problem. Traditional training methods based on static datasets can no longer meet the dynamic needs of the real world.

Lifelong learning (also called continual learning) techniques emerged to address this challenge. They allow a large language model to keep learning and adapting throughout its operational lifetime, retaining previously learned information while integrating new knowledge and preventing catastrophic forgetting.

Recently, researchers at South China University of Technology surveyed, organized, and summarized lifelong learning methods for large language models (LLMs) and their development prospects in a comprehensive, state-of-the-art review.


  • Paper title: Towards Lifelong Learning of Large Language Models: A Survey
  • Institution: South China University of Technology
  • Paper address: https://arxiv.org/abs/2406.06391
  • Project address: https://github.com/qianlima-lab/awesome-lifelong-learning-methods-for-llm

Figure 1 shows an analogy between the application of lifelong learning in large language models and the human learning process. The figure depicts the evolution of humans and large language models in lifelong learning along two parallel learning paths.


Human Learning

1. Walk: Humans start learning from the most basic skills (such as walking).
2. Ride a Bike: As learning progresses, humans master more complex skills (such as riding a bicycle).
3. Drive a Car: Eventually, humans can master more complex and advanced skills (such as driving).

Each step represents the process by which humans continue to acquire new skills and knowledge in a lifelong learning process.

Large language model learning (LLMs Learning)

1. New language (New Language): Large language models start by learning new languages (such as learning to process different natural languages).
2. New Domain: Next, the model learns new domain knowledge (such as extending from natural language processing to the medical field).
3. New Information: Ultimately, the model can learn and integrate new information, whether it is language or domain.

Each step represents how the large language model continuously expands and updates its knowledge during lifelong learning. The diagram emphasizes that lifelong learning is a continuous, step-by-step evolution from basic to advanced: not a simple accumulation of knowledge, but a dynamic, evolving process.

In recent years, lifelong learning has become an increasingly popular research topic, and large-scale surveys on lifelong learning for neural networks have emerged. Most existing research focuses on the various application scenarios of lifelong learning for convolutional neural networks (CNNs) and for graph neural networks; only a small amount of literature addresses lifelong learning for language models. Although some recent reviews have collected the latest literature on lifelong learning, none of them cover scenarios such as continuous text classification, continuous named entity recognition, continuous relation extraction, and continuous machine translation, and there is little discussion of retrieval-based lifelong learning.

This review is the first comprehensive and systematic survey of lifelong learning methods for large language models starting from 12 scenarios.

Overall, the main contributions of the review include:

  • Novel classification: a detailed, structured framework is introduced that divides the extensive lifelong learning literature into 12 scenarios;
  • Universal techniques: techniques common to all lifelong learning settings are identified, and the existing literature within each scenario is divided into distinct technical groups;
  • Future directions: emphasis on emerging techniques such as model expansion and data selection, which were less explored in the pre-LLM era.

1. Introduction

This review systematically summarizes existing lifelong learning methods and, in Figure 2, divides them into two categories: internal knowledge and external knowledge.


  • Internal knowledge refers to the absorption of new knowledge into model parameters through full or partial training, including continuous pre-training and continuous fine-tuning.
  • External knowledge refers to incorporating new knowledge from external resources, such as Wikipedia or application programming interfaces, into the model without updating its parameters, including retrieval-based lifelong learning and tool-based lifelong learning.

Internal Knowledge

1. Continual Pretraining:

  • Continual Vertical Domain Pretraining: continuous pretraining for specific vertical domains (such as finance or medicine).
  • Continual Language Domain Pretraining: continuous pretraining for natural languages and programming languages.
  • Continual Temporal Domain Pretraining: continuous pretraining for time-sensitive data.

2. Continual Finetuning:

  • Task Specific:

  • Continuous Text Classification: continuous fine-tuning for text classification tasks.
  • Continual Named Entity Recognition: Continuous fine-tuning for named entity recognition tasks.
  • Continuous Relation Extraction: Continuous fine-tuning for relation extraction tasks.
  • Continuous Machine Translation: Continuous fine-tuning for machine translation tasks.

  • Task Agnostic:

  • Continuous Instruction-Tuning: Continuous learning of the model is achieved through instruction fine-tuning.
  • Continuous Knowledge Editing: Continuous learning for knowledge updating.
  • Continuous Alignment: continuous learning to align the model with new alignment objectives.

External Knowledge

1. Retrieval-Based Lifelong Learning: Lifelong learning achieved by retrieving external knowledge bases.

2. Tool-Based Lifelong Learning: Lifelong learning achieved by calling external tools.

2. Overview of Lifelong Learning

2.1 Problem Definition

The goal of lifelong learning is to learn a language model from a series of tasks, generating target outputs from natural-language inputs. Specifically, for generation tasks such as question answering, the input and output are the question and the answer; for machine translation, they are the source-language and target-language text; for text classification, the input is the text content and the output is a category label; and for the pre-training task of an autoregressive language model, the input is a sequence of tokens and the output is the corresponding next token.
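
As a concrete picture of this setup, here is a minimal Python sketch of the task-sequence protocol; the Task container and model_update callback are illustrative stand-ins, not interfaces defined in the paper.

```python
# A minimal sketch of the lifelong learning setup: the model receives tasks
# strictly in sequence and never revisits earlier data directly. The Task
# structure and model_update callback here are hypothetical, for illustration.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (input text, target output), e.g. (question, answer)

@dataclass
class Task:
    name: str               # e.g. "question answering", "English -> Chinese"
    train_pairs: List[Pair]
    test_pairs: List[Pair]

def lifelong_train(model_update: Callable[[List[Pair]], None],
                   tasks: List[Task]) -> None:
    for task in tasks:
        # Only the current task's data is available at each step; keeping
        # performance on all earlier tasks is the lifelong learning objective.
        model_update(task.train_pairs)
```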

2.2 Evaluation Metrics

The overview introduces metrics for evaluating lifelong learning performance, mainly from three perspectives: overall performance, stability, and plasticity:

  • Overall Measurement: including Average accuracy (AA) and average incremental accuracy (AIA). AA refers to the average performance of the model after learning all tasks, while AIA takes into account the historical changes after learning each task.
  • Stability Measurement: including forgetting measurement (FGT) and backward transfer (BWT). FGT evaluates the average performance degradation on old tasks, while BWT evaluates the average performance change on old tasks (negative BWT indicates forgetting).
  • Plasticity Measurement: including forward transfer (FWD), which is the average improvement in the model's performance on new tasks.
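
To make these definitions concrete, here is a minimal Python sketch, assuming the common convention of an accuracy matrix recorded during sequential training; the formulas follow the standard definitions in the continual learning literature rather than the survey's exact notation.

```python
# acc[i][j] = accuracy on task j after the model has learned tasks 0..i.
# Exact definitions vary slightly across papers; these are the standard forms.
from typing import List

def average_accuracy(acc: List[List[float]]) -> float:
    """AA: mean accuracy over all T tasks after the final task is learned."""
    T = len(acc)
    return sum(acc[T - 1][j] for j in range(T)) / T

def average_incremental_accuracy(acc: List[List[float]]) -> float:
    """AIA: the AA measured after each learning step, averaged over history."""
    T = len(acc)
    return sum(sum(acc[i][j] for j in range(i + 1)) / (i + 1) for i in range(T)) / T

def forgetting(acc: List[List[float]]) -> float:
    """FGT: average drop from each old task's best-ever accuracy to its final one."""
    T = len(acc)
    if T < 2:
        return 0.0
    return sum(
        max(acc[i][j] for i in range(j, T - 1)) - acc[T - 1][j] for j in range(T - 1)
    ) / (T - 1)

def backward_transfer(acc: List[List[float]]) -> float:
    """BWT: average change on old tasks since first learned (negative = forgetting)."""
    T = len(acc)
    if T < 2:
        return 0.0
    return sum(acc[T - 1][j] - acc[j][j] for j in range(T - 1)) / (T - 1)

# FWD additionally compares accuracy on a task before training it against an
# independently trained reference model; it needs that extra run, so it is
# omitted here.
```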

2.3 Common Techniques

Figure 3 summarizes four main lifelong learning methods for addressing the catastrophic forgetting problem when large language models handle tasks sequentially (Task t-1 to Task t). Here is an explanation of each method:


(a) Replay-Based Methods:

  • Meaning: when training on a new task, these methods replay data from previous tasks to consolidate the model's memory of old tasks. Usually the replayed data is stored in a buffer and trained on together with the current task's data. The main variants include:

– Experience Replay: reduces forgetting by saving a portion of the data samples from old tasks and reusing them during training on new tasks (a minimal buffer sketch follows below).

– Generative Replay: instead of saving old data, this method uses a generative model to create pseudo-samples, thereby introducing old-task knowledge into new-task training.

  • Illustration: Figure 3 shows the process from Task t-1 to Task t. While training on Task t, the model also uses the old data held in the buffer (Input t-1).
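
The replay idea can be shown in a few lines of Python; this is a generic sketch with a reservoir-sampled buffer and hypothetical example objects, not the code of any specific method in the survey.

```python
# A fixed-size buffer keeps a uniform random sample of old-task data via
# reservoir sampling; each new-task batch is mixed with replayed examples.
import random
from typing import Any, List

class ReplayBuffer:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.data: List[Any] = []
        self.seen = 0  # total examples offered to the buffer so far

    def add(self, example: Any) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Reservoir sampling keeps a uniform sample over everything seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, k: int) -> List[Any]:
        return random.sample(self.data, min(k, len(self.data)))

def mixed_batch(buffer: ReplayBuffer, new_batch: List[Any], replay_k: int) -> List[Any]:
    """Train on current-task data plus a few replayed old-task examples."""
    return new_batch + buffer.sample(replay_k)
```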

(b) Regularization-Based Methods:

  • Meaning: these methods impose regularization constraints on the model parameters to keep the model from over-adjusting parameters important to old tasks while learning a new one, helping it retain the memory of old tasks. The main variants include:

– Weight Regularization: imposes additional constraints on model parameters that limit the modification of important weights when training new tasks, thereby protecting old-task knowledge. L2 regularization and Elastic Weight Consolidation (EWC) are common techniques (an EWC sketch follows below).

– Feature Regularization: regularization can act not only on weights but also in the model's feature space, keeping the feature distributions of new and old tasks stable.

  • Illustration: Figure 3 shows the process from Task t-1 to Task t. While training on Task t, the model uses parameter regularization to maintain its performance on Task t-1.
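
A simplified sketch of EWC, the weight-regularization technique named above; it uses the standard diagonal Fisher approximation, and the model, data loader, and loss function are generic PyTorch stand-ins.

```python
import torch

def fisher_diagonal(model, old_loader, loss_fn):
    """Estimate per-parameter importance from squared gradients on old-task data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for inputs, targets in old_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    num_batches = max(len(old_loader), 1)
    return {n: f / num_batches for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """(lam / 2) * sum_i F_i * (theta_i - theta_i_old)^2, keeping weights that
    mattered for the old task close to their previous values."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# New-task training then minimizes: task_loss + ewc_penalty(model, fisher, old_params)
```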

(c) Architecture-Based Methods:


  • Meaning: This approach focuses on adapting the model structure to seamlessly integrate new tasks while minimizing interference with previously learned knowledge. It mainly includes the six methods in Figure 4:

– (a) Prompt Tuning: adds "soft prompts" before the model's input to guide generation or classification. Only a small number of parameters (the prompts themselves) need adjusting, without changing the model's backbone.

– (b) Prefix Tuning: adds trainable parameters to the prefix portion of the input sequence; these parameters are inserted into the self-attention mechanism of the Transformer layers to help the model better capture contextual information.

– (c) Low-Rank Adaptation (LoRA): adapts to new tasks by adding low-rank matrices at specific layers without changing the large model's main weights, greatly reducing the number of trainable parameters while maintaining performance (a minimal sketch follows after this list).

– (d) Adapters: trainable modules inserted between layers of the model that adapt to new tasks with a small number of additional parameters, without changing the original model weights. They are usually applied in the FFN (feed-forward network) and MHA (multi-head attention) parts.

– (e) Mixture of Experts: processes different inputs by selectively activating certain "expert" modules, which can be specific layers or subnetworks in the model; a router module decides which experts to activate.

– (f) Model Expansion: expands the model's capacity by adding new layers (New Layer) while retaining the original ones (Old Layer), allowing the model to gradually grow to accommodate more complex task requirements.

  • Illustration: Figure 3 shows the process from Task t-1 to Task t. When the model learns a new task, some parameters are frozen (Frozen) while newly added modules are trained (Trainable).
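
As promised above, a minimal LoRA layer in PyTorch; this is a simplified sketch of the published technique (frozen base weight plus a trainable low-rank update), not a reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # backbone weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init:
        self.scale = alpha / r               # initial behavior is unchanged

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scale * x A^T B^T; only the low-rank path is trained,
        # so each new task costs r * (d_in + d_out) parameters per layer.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrapped = LoRALinear(nn.Linear(768, 768), r=8)
```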

(d) Distillation-Based Methods:

  • Meaning: these methods transfer knowledge from the old model to the new model via knowledge distillation. When training on a new task, the new model not only learns the current task's data but also imitates the old model's outputs on old tasks, thereby maintaining old-task knowledge. The main variants include (a loss sketch follows below):

– Distillation from New Data: the student model learns the new task under the guidance of the teacher model, distilling the old model's knowledge on the new data to reduce forgetting of old knowledge.

– Distillation from Old Data: uses the teacher model's behavior on old data to guide the student model as it learns new tasks, thereby retaining old knowledge.

– Distillation from Pseudo-Old Data: generates pseudo-old data (Pseudo-Old Data) so that the student model can preserve its memory of old knowledge while learning new tasks.

  • Illustration: Figure 3 shows the transition from Task t-1 to Task t. While training on the new task, the model maintains old-task knowledge by imitating the old model's predictions.
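
A minimal sketch of the distillation objective shared by these variants, using the standard temperature-softened KL formulation; the surveyed methods differ mainly in which data (new, old, or pseudo-old) the teacher is run on.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # The standard t^2 scaling keeps gradient magnitudes comparable across
    # temperature choices.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Total loss on the new task: task_loss + beta * distillation_loss(...)
```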

3. Continuous pre-training

Continuous pre-training can update the internal knowledge of large language models without incurring the high cost of full pre-training from scratch, thereby enhancing their capabilities. Current research spans the vertical, language, and temporal domains, addressing difficult issues such as catastrophic forgetting and temporal adaptation. Techniques such as experience replay, knowledge distillation, parameter-efficient fine-tuning, model expansion, and re-warming have shown promise.

3.1 Continuous vertical field pre-training

Continuous vertical field pre-training (Continual Vertical Domain Pretraining) aims to ensure that the model performs well in multiple vertical fields or tasks by continuously training language models on a series of domain-specific data sets, while retaining previously acquired knowledge.

Main methods:

1. Parameter-Efficient Fine-Tuning:

  • Example: CorpusBrain++ uses a backbone-adapter architecture and experience replay strategy to tackle real-world knowledge-intensive language tasks.
  • Example: Med-PaLM introduces instruction prompt tuning in the medical field by using a small number of examples.

2. Model Expansion:

  • Example: ELLE adopts a feature-preserving model expansion strategy to improve the efficiency of knowledge acquisition and integration by flexibly expanding the width and depth of existing pre-trained language models.
  • Example: LLaMA Pro excels in general use, programming and math tasks by extending the Transformer block and fine-tuning it with a new corpus.

3. Re-warming:

  • Example: the strategy proposed by Gupta et al. adjusts the learning rate when introducing new datasets, preventing it from being too low during long-duration training and thereby improving adaptation to new datasets (a schedule sketch follows below).
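
To illustrate the re-warming idea, here is a sketch of a warmup-then-cosine-decay schedule that raises the learning rate again whenever a new dataset is introduced; the shape and constants are illustrative assumptions, not Gupta et al.'s exact recipe.

```python
import math

def rewarmed_lr(step: int, warmup_steps: int, total_steps: int,
                peak_lr: float = 3e-4, min_lr: float = 3e-5) -> float:
    """Linear warmup back up to peak_lr, then cosine decay to min_lr.
    Restart step at 0 each time a new dataset arrives to 're-warm'."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = min((step - warmup_steps) / max(total_steps - warmup_steps, 1), 1.0)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```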

4. Data Selection:

  • Example: RHO-1 is trained with Selective Language Modeling (SLM), which prioritizes tokens that have a greater impact on the training process.
  • Example: EcomGPT-CT enhances model performance on domain-specific tasks with semi-structured e-commerce data.

3.2 Continuous language domain pre-training

Continual Language Domain Pretraining aims to enable the language model to continuously integrate new data and adapt to changing language domains without forgetting previous knowledge.

Main methods:

1. Architecture-Based Methods:

  • Example: Yadav et al. improve prompt tuning by introducing a teacher forcing mechanism, creating a set of prompts to guide the fine-tuning of the model on new tasks.
  • Example: ModuleFormer and Lifelong-MoE use a mixture of experts (MoE) approach to enhance the efficiency and adaptability of LLM through modularity and dynamically increasing model capacity.
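
For intuition about the routing mechanism, here is a minimal top-k mixture-of-experts layer in PyTorch; it is a bare-bones sketch of the idea, far simpler than the designs used by ModuleFormer or Lifelong-MoE.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
        scores = self.router(x)                           # (batch, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)        # pick k experts per input
        weights = torch.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    # Capacity can grow by appending experts; existing experts
                    # (and their knowledge) are left untouched.
                    out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out
```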

2. Re-warming:

  • Example: the re-warming method proposed by Ibrahim et al. helps the model adapt to new languages faster by temporarily increasing the learning rate when training on new data.

3.3 Continuous time domain pre-training

Continual Temporal Domain Pretraining involves continuously updating the language model to maintain its accuracy and relevance on time-sensitive data.

Main challenges:

1. Performance degradation: the study by Lazaridou et al. shows that model performance on future data drops significantly, highlighting the difficulty LLMs have with temporal generalization.
2. Limited improvement: Röttger et al. found that although temporal adaptation yields slight gains on the masked language modeling task, the improvement in downstream task performance is not significant compared with pure domain adaptation.

Through these methods and studies, the authors demonstrate the approaches and challenges of continuous pre-training along different dimensions, and emphasize the necessity and effectiveness of lifelong learning in the vertical, language, and temporal domains.

4. Continuous fine-tuning

Continuous pre-training enhances the internal knowledge of large language models; building on this, continuous fine-tuning further adapts them to specific tasks such as text classification, named entity recognition, relation extraction, and machine translation, or to general generation settings such as instruction tuning, knowledge editing, and alignment with human preferences. To deal with challenges such as catastrophic forgetting and task interference, techniques including distillation, replay, regularization, and architecture-based and gradient-based methods are employed. The authors illustrate 7 continuous fine-tuning scenarios in Figure 5.


This diagram shows how seven different types of tasks are implemented in a large language model through continuous learning. The following is a detailed explanation of each part:

(a) Continuous Text Classification

  • Example: the continuous text classification task trains the model by gradually introducing new classification categories (such as Intent: Transfer -> Intent: Credit Score -> Intent: Fun Fact) so that it can adapt to changing classification needs.

(b) Continuous Named Entity Recognition

  • Example: the continuous named entity recognition task gradually introduces new entity types (such as Athlete -> Sports Team -> Politician), so that the model maintains its ability to recognize old entity types while learning to recognize new ones.

(c) Continuous relation extraction

  • Example: The continuous relationship extraction task shows how the model gradually expands its relationship extraction capabilities by continuously introducing new relationship types (such as Relation: Founded By -> Relation: State or Province of Birth -> Relation: Country of Headquarters).

(d) Continuous knowledge editing

  • Example: the continuous knowledge editing task continuously updates the model's knowledge so that it can accurately answer the latest facts (such as Who is the president of the US? -> Which club does Cristiano Ronaldo currently play for? -> Where was the last Winter Olympics held?).

(e) Continuous machine translation

  • Example: the continuous machine translation task demonstrates the model's adaptability in a multilingual environment by gradually extending its translation capability to different language pairs (such as English -> Chinese, English -> Spanish, English -> French).

(f) Continuous instruction fine-tuning

  • Example: The continuous instruction fine-tuning task trains the model's performance capabilities in multiple task types by gradually introducing new instruction types (such as Summarization -> Style Transfer -> Mathematics).

(g) Continuous alignment

  • Example: the continuous alignment task demonstrates the model's capacity for ongoing learning under different moral and behavioral standards by introducing new alignment goals (such as Helpful and Harmless -> Concise and Organized -> Positive Sentiment).

5. External knowledge

Continuous pre-training and continuous fine-tuning are crucial to the lifelong learning of LLMs. However, as LLMs grow larger and more powerful, two emerging directions are becoming increasingly popular; both provide new external knowledge to large language models without modifying their parameters. The authors consider retrieval-based lifelong learning and tool-based lifelong learning, as both are promising ways to achieve lifelong learning in LLMs. Figure 6 illustrates the two approaches.


Retrieval-Based Lifelong Learning

  • Introduction: as the world's information grows in scale and evolves rapidly, static models trained on historical data quickly become outdated and unable to understand or generate content about new developments. Retrieval-based lifelong learning addresses the critical need for large language models to acquire and assimilate the latest knowledge from external sources: the model supplements or updates its knowledge base by retrieving these external resources when needed. Such resources provide a large, current knowledge base, an important complementary asset for enhancing the static nature of pretrained LLMs.
  • Example: the external resources in the diagram are accessible and retrievable by the model. By accessing external information sources such as Wikipedia, books, and databases, the model can update its knowledge and adapt when it encounters new information (a minimal sketch follows below).
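
A minimal sketch of this retrieval loop, where embed and llm are hypothetical stand-ins for an embedding model and a frozen language model; the key point is that knowledge updates go into the index rather than into the model weights.

```python
from typing import Callable, List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb + 1e-9)

class RetrievalStore:
    def __init__(self, embed: Callable[[str], List[float]]):
        self.embed = embed
        self.docs: List[Tuple[str, List[float]]] = []

    def add(self, text: str) -> None:
        """Lifelong update: index new knowledge without touching model weights."""
        self.docs.append((text, self.embed(text)))

    def top_k(self, query: str, k: int = 3) -> List[str]:
        q = self.embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def answer(llm: Callable[[str], str], store: RetrievalStore, question: str) -> str:
    context = "\n".join(store.top_k(question))
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```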

Tool-Based Lifelong Learning

  • Introduction: tool-based lifelong learning arises from the need to extend a model's functionality beyond static knowledge and enable it to interact dynamically with the environment. In real-world applications, models are often required to perform tasks that involve operations beyond direct text generation or interpretation.
  • Example: the model in the figure uses tools to extend and update its own capabilities, enabling lifelong learning through interaction with external tools. For example, models can obtain real-time data through application programming interfaces, or interact with the external environment through physical tools to complete specific tasks or acquire new knowledge (a minimal sketch follows below).
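
A minimal sketch of the tool-calling loop; the JSON calling convention and the tool registry here are illustrative assumptions, not a specific framework from the survey.

```python
import json
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    # A real deployment might register a weather API, calculator, etc. here.
    "lookup_weather": lambda city: f"(live weather for {city} from an API)",
}

def run_with_tools(llm: Callable[[str], str], user_query: str) -> str:
    reply = llm(user_query)
    # Assumed convention: the model either answers directly or emits a JSON
    # tool call such as {"tool": "lookup_weather", "argument": "Guangzhou"}.
    try:
        call = json.loads(reply)
        result = TOOLS[call["tool"]](call["argument"])
        # Feed the tool result back so a frozen model can use information
        # newer than its training data.
        return llm(f"{user_query}\nTool result: {result}\nFinal answer:")
    except (ValueError, KeyError, TypeError):
        return reply  # no tool call detected; return the direct answer
```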

6. Discussion and Conclusion

6.1 Main Challenges

  • Catastrophic Forgetting: one of the core challenges of lifelong learning; newly introduced information may overwrite what the model learned previously.
  • Plasticity-Stability Dilemma: finding the balance between the model's learning ability and its stability is critical, as it directly affects the model's capacity to acquire new knowledge while retaining its broad general capabilities.
  • Expensive Computation Cost: The computational requirements for fully fine-tuning a large language model can be very high.
  • Unavailability of model weights or pre-trained data: Due to privacy, proprietary restrictions, or commercial licenses, raw training data or model weights are often unavailable for further improvements.

6.2 Current Trends

  • From specific tasks to general tasks: Research gradually shifts from focusing on specific tasks (such as text classification, named entity recognition) to a wider range of general tasks, such as instruction tuning, knowledge editing, etc.
  • From full fine-tuning to partial fine-tuning: In view of the high resource consumption of full fine-tuning, partial fine-tuning strategies (such as Adapter layer, Prompt tuning, LoRA) are becoming more and more popular.
  • From internal knowledge to external knowledge: to overcome the limitations of frequent internal updates, more and more strategies use external knowledge sources, such as retrieval-augmented generation and tool learning, enabling models to dynamically access and exploit current external data.

6.3 Future Direction

  • Multimodal lifelong learning: integrate multiple modalities beyond text (such as images, video, audio, time series data, and knowledge graphs) into lifelong learning to develop more comprehensive and adaptable models.
  • Efficient lifelong learning: Researchers are working on developing more efficient strategies to manage the computational requirements of model training and updates, such as model pruning, model merging, model expansion and other methods.
  • Universal lifelong learning: The ultimate goal is to enable large language models to actively acquire new knowledge and learn through dynamic interaction with the environment, no longer relying solely on static data sets.

6.4 Conclusion

The authors divide existing research into 12 lifelong learning scenarios and provide a comprehensive summary of each. The analysis also highlights the need to balance mitigating catastrophic forgetting, ensuring computational efficiency, and trading off specificity against generality in knowledge acquisition. As the field continues to evolve, the integration of these advanced strategies will play a key role in shaping the next generation of artificial intelligence systems, helping them move closer to truly human-like learning and adaptability.

Through a detailed study of these technological approaches and their respective categories, this review aims to highlight the integration of lifelong learning capabilities into large language models, thereby enhancing their adaptability, reliability, and overall performance in real-world applications. At the same time, it offers researchers and engineers a comprehensive perspective to help them better understand and apply lifelong learning techniques and to promote the further development of large language models. If you are interested, please see the original paper for more details of the research.
