Let large models no longer be "behemoths": the latest survey on parameter-efficient fine-tuning of large models

Apr 28, 2024

AIxiv is a column on this site for publishing academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com.

Recently, large-scale AI models such as large language models and text-to-image models have developed rapidly. Against this backdrop, quickly adapting large models to a variety of downstream tasks under rapidly changing requirements has become an important challenge. Constrained by computing resources, traditional full-parameter fine-tuning is often impractical, so more efficient fine-tuning strategies are needed. These challenges have driven the recent rapid development of parameter-efficient fine-tuning (PEFT) techniques.

To comprehensively summarize the development of PEFT and keep up with the latest research progress, researchers from Northeastern University, the University of California, Riverside, Arizona State University, and New York University recently investigated, organized, and summarized the applications and prospects of parameter-efficient fine-tuning (PEFT) for large models in a comprehensive and up-to-date survey.


Paper link: https://arxiv.org/pdf/2403.14608.pdf

PEFT offers an efficient way to adapt pre-trained models to downstream tasks. By freezing most of the pre-trained parameters and fine-tuning only a small number of them, large models can be deployed lightly and adapted quickly to a variety of downstream tasks, so that large models are no longer "behemoths".

The full text is 24 pages long and covers nearly 250 recent papers. It was cited by Stanford University, Peking University, and other institutions shortly after its release, and has attracted considerable attention on various platforms.


Specifically, the review comprehensively and carefully explains the development history and latest progress of PEFT at four levels: PEFT algorithm taxonomy, efficient PEFT design, cross-domain applications of PEFT, and PEFT system design and deployment. Whether you are a practitioner in a related field or a beginner in large-model fine-tuning, this review can serve as a comprehensive learning guide.


1. PEFT background introduction

The paper first takes the recently popular LLaMA model as a representative example to analyze the architecture and computation flow of large language models (LLMs) and other Transformer-based models, and defines the symbolic notation needed to analyze the various PEFT techniques that follow.


In addition, the author outlines a taxonomy of PEFT algorithms, dividing them by the operations involved into additive fine-tuning, selective fine-tuning, reparameterized fine-tuning, and hybrid fine-tuning. Figure 3 shows this taxonomy and the specific algorithms included in each category; the precise definition of each category is explained in detail later.


In the background section, the author also introduces common downstream benchmarks and data sets used to verify the performance of the PEFT method, making it easier for readers to become familiar with common task settings.

2. PEFT method classification

The author first defines additive fine-tuning, selective fine-tuning, reparameterized fine-tuning, and hybrid fine-tuning:

  • Additive fine-tuning adds learnable modules or parameters at specific positions of the pre-trained model, minimizing the number of trainable parameters when adapting to downstream tasks.
  • Selective fine-tuning updates only a subset of the model's parameters during fine-tuning while keeping the rest fixed. Compared with additive fine-tuning, selective fine-tuning does not require changing the architecture of the pre-trained model.
  • Reparameterized fine-tuning trains a (low-rank) reparameterization of the pre-trained model's weights. At inference time, these parameters are equivalently converted back into the pre-trained parameter structure, avoiding any additional inference latency.

The distinction between the three is shown in Figure 4:


Hybrid fine-tuning combines the advantages of various PEFT methods and analyzes the similarities of different methods to build a unified PEFT architecture or find optimal PEFT hyperparameters.

Next, the author further subdivides each PEFT category:

A. Additive fine-tuning:

1) Adapter

Adapters achieve parameter-efficient fine-tuning by adding small Adapter layers inside the Transformer block. Each Adapter layer contains a down-projection matrix, an activation function, and an up-projection matrix. The down-projection matrix maps the input features to a bottleneck dimension r, and the up-projection matrix maps the bottleneck features back to the original dimension d.


Figure 5 shows three typical insertion strategies for the Adapter layer. The serial Adapter is inserted sequentially after the Transformer module, while the parallel Adapter is inserted alongside the Transformer module in parallel. CoDA is a sparse Adapter method: for important tokens, CoDA runs both the pre-trained Transformer module and the Adapter branch; for unimportant tokens, it runs only the Adapter branch to save computation.
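As a concrete illustration of the bottleneck structure described above, here is a minimal PyTorch sketch of a serial Adapter layer; the class name, dimensions, and initialization are illustrative assumptions rather than code from any particular paper:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck Adapter sketch: down-project d -> r, apply a nonlinearity,
    up-project r -> d, and add a residual connection. Only these small
    matrices are trained; the Transformer backbone stays frozen."""
    def __init__(self, d_model: int, r: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, r)   # down-projection to bottleneck dimension r
        self.act = nn.GELU()
        self.up = nn.Linear(r, d_model)     # up-projection back to dimension d
        nn.init.zeros_(self.up.weight)      # start as a near-identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, h):                   # h: (batch, seq_len, d_model)
        return h + self.up(self.act(self.down(h)))  # serial insertion after a sub-layer
```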

2) Soft Prompt

Soft Prompt methods achieve parameter-efficient fine-tuning by prepending learnable vectors to the input sequence. Representative methods include Prefix-tuning and Prompt Tuning. Prefix-tuning fine-tunes the model's representations by adding learnable vectors in front of the key and value matrices of each Transformer layer, while Prompt Tuning inserts learnable vectors only at the input embedding layer, further reducing the number of trainable parameters.
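Below is a minimal sketch of the Prompt Tuning idea, assuming a PyTorch embedding layer; the number of prompt tokens and the initialization scale are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PromptTuningEmbedding(nn.Module):
    """Prompt Tuning sketch: prepend n_prompt learnable vectors to the token
    embeddings; only these vectors are trained, the backbone is frozen.
    (Prefix-tuning instead prepends learnable vectors to the keys/values of
    every attention layer.)"""
    def __init__(self, embed: nn.Embedding, n_prompt: int = 20):
        super().__init__()
        self.embed = embed
        self.embed.weight.requires_grad_(False)            # freeze the pre-trained embeddings
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt, embed.embedding_dim) * 0.02)

    def forward(self, input_ids):                           # input_ids: (batch, seq_len)
        tok = self.embed(input_ids)                          # (batch, seq_len, d)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return torch.cat([prompt, tok], dim=1)               # (batch, n_prompt + seq_len, d)
```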

3) Others

Beyond the two categories above, there are other PEFT methods that also introduce new parameters during training.


Two typical methods are shown in Figure 6. (IA)³ introduces three learnable scaling vectors to rescale the keys, the values, and the activations of the feed-forward network. SSF adjusts the model's activations through linear transformations: after each operation, SSF adds an SSF-ADA layer that scales and shifts the activation values.
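To illustrate the (IA)³ idea, here is a minimal PyTorch sketch of the three learnable scaling vectors; exactly where they are applied inside attention and the feed-forward network depends on the host model, and the names used here are assumptions:

```python
import torch
import torch.nn as nn

class IA3Scaling(nn.Module):
    """(IA)^3 sketch: element-wise learnable rescaling vectors applied to the
    attention keys, the attention values, and the intermediate feed-forward
    activations. Only the three vectors (l_k, l_v, l_ff) are trained."""
    def __init__(self, d_head: int, d_ff: int):
        super().__init__()
        self.l_k = nn.Parameter(torch.ones(d_head))
        self.l_v = nn.Parameter(torch.ones(d_head))
        self.l_ff = nn.Parameter(torch.ones(d_ff))

    def scale_keys(self, k):       # k: (..., d_head)
        return k * self.l_k

    def scale_values(self, v):     # v: (..., d_head)
        return v * self.l_v

    def scale_ffn(self, h):        # h: intermediate FFN activation (..., d_ff)
        return h * self.l_ff
```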

B. Selective fine-tuning:

1) Unstructured mask

This type of method determines which parameters can be fine-tuned by attaching a learnable binary mask to the model parameters. Many works, such as Diff pruning, FishMask, and LT-SFT, focus on how to compute the positions of the mask.
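A minimal sketch of how a fixed binary mask restricts updates to a subset of parameters is shown below; how the mask itself is computed (e.g., Fisher information in FishMask, or a learned sparse difference in Diff pruning) is not shown, and the helper name is hypothetical:

```python
import torch

def mask_gradients(model, masks):
    """Selective fine-tuning sketch: call after loss.backward() and before
    optimizer.step(). 'masks' maps parameter names to 0/1 tensors of the same
    shape; gradients are kept only where the mask is 1, so only that subset of
    parameters is actually updated."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None and name in masks:
                p.grad.mul_(masks[name])   # zero out gradients of unselected positions
```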

2) Structured mask

Unstructured masks place no restrictions on the mask's shape, but this makes them inefficient in practice. Therefore, some works, such as FAR, S-BitFit, and Xattn Tuning, impose structured constraints on the mask's shape. The difference between the two is shown in the figure below:


C. Re-parameterized fine-tuning:


1) Low-rank decomposition

This kind of method fine-tunes by finding a low-dimensional reparameterization of the pre-trained weight matrices that represents the full parameter space. The most typical method is LoRA, which adds two extra up- and down-projection matrices to build a low-rank representation of the original model parameters for training. After training, the extra parameters can be seamlessly merged into the pre-trained weights, avoiding any additional inference overhead. DoRA decomposes the weight matrix into magnitude and direction components and uses LoRA to fine-tune the direction matrix.
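The following is a minimal PyTorch sketch of the LoRA idea applied to a single linear layer, including the merge step that folds the low-rank update back into the frozen weight; the rank, scaling, and initialization choices are illustrative:

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """LoRA sketch: the pre-trained weight W stays frozen; only the low-rank
    factors A (r x in) and B (out x r) are trained, giving W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                         # freeze pre-trained weight and bias
        self.lora_A = nn.Parameter(torch.zeros(r, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))  # B starts at zero: no change at init
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

    @torch.no_grad()
    def merge(self):
        # Fold the low-rank update into the frozen weight so inference adds no latency.
        self.base.weight.add_((self.lora_B @ self.lora_A) * self.scaling)
```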

2) LoRA derivatives

The author divides LoRA-derived methods into two classes: dynamic selection of the LoRA rank, and improvements to LoRA in various aspects.
For dynamic rank, the typical method is DyLoRA, which trains over a range of ranks simultaneously during training, reducing the resources spent on searching for the optimal rank.

For LoRA improvements, the author lists the shortcomings of vanilla LoRA in various aspects and the corresponding solutions.

D. Hybrid fine-tuning:

This part studies how to integrate different PEFT techniques into a unified model and find an optimal design pattern. It also introduces approaches that use neural architecture search (NAS) to obtain optimal PEFT training hyperparameters.

3. Efficient PEFT design


In this section, the author discusses research on improving the efficiency of PEFT, focusing on the latency and peak memory overhead of training and inference, and describes how to improve PEFT efficiency from three perspectives:

PEFT pruning strategies: combining neural network pruning techniques with PEFT to further improve efficiency. Representative works include AdapterDrop and SparseAdapter.

PEFT quantization strategies: reducing model size by lowering numerical precision, thereby improving computational efficiency. When combined with PEFT, the main difficulty is how to properly handle the quantization of both the pre-trained weights and the new PEFT modules. Representative works include QLoRA and LoftQ (a QLoRA-style sketch is given after this list).

Memory-efficient PEFT design: although PEFT updates only a small number of parameters during training, its memory footprint is still large because gradients must be computed and backpropagated through the frozen backbone. To address this, some methods reduce memory overhead by bypassing gradient computation inside the pre-trained weights, such as Side-Tuning and LST, while others avoid backpropagation within the LLM altogether, such as HyperTuning and MeZO (a simplified zeroth-order example also follows below).
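As an example of combining quantization with PEFT, below is a minimal sketch in the style of QLoRA using the Hugging Face transformers/peft/bitsandbytes stack; the model identifier and hyperparameters are placeholders, and exact argument names may differ across library versions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the frozen backbone in 4-bit NF4 precision (QLoRA-style);
# the LoRA adapters themselves stay in higher precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "base-model-name",                      # placeholder model identifier
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach trainable low-rank adapters to the attention projections.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # assumed module names for a LLaMA-like model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the LoRA parameters are trainable
```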
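To illustrate how backpropagation can be avoided entirely, here is a simplified zeroth-order update in the spirit of MeZO; it estimates a gradient from two forward passes along a random direction (the actual method regenerates the perturbation from a shared random seed to keep memory at inference level, which this sketch does not reproduce):

```python
import torch

def zeroth_order_step(model, loss_fn, batch, eps=1e-3, lr=1e-6):
    """One MeZO-style update: no backward pass is needed.
    loss_fn(model, batch) must return a scalar loss and is called under no_grad."""
    params = [p for p in model.parameters() if p.requires_grad]
    zs = [torch.randn_like(p) for p in params]        # random perturbation direction
    with torch.no_grad():
        for p, z in zip(params, zs):                   # theta + eps * z
            p.add_(eps * z)
        loss_plus = loss_fn(model, batch)
        for p, z in zip(params, zs):                   # theta - eps * z
            p.sub_(2 * eps * z)
        loss_minus = loss_fn(model, batch)
        for p, z in zip(params, zs):                   # restore theta
            p.add_(eps * z)
        g = (loss_plus - loss_minus) / (2 * eps)       # scalar projected-gradient estimate
        for p, z in zip(params, zs):
            p.sub_(lr * g * z)                         # SGD step along the sampled direction
```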

4. Cross-domain applications of PEFT

In this chapter, the author explores the applications of PEFT in different fields and discusses how to design better PEFT methods to improve the performance of specific models or tasks. The section focuses on various large-scale pre-trained models, including LLMs, vision Transformers (ViT), vision-language models, and diffusion models, and describes in detail the role of PEFT in adapting these pre-trained models to downstream tasks.

For LLMs, the author introduces how to use PEFT to fine-tune an LLM to accept visual instruction inputs, with representative work such as LLaMA-Adapter. The author also explores the application of PEFT in continual learning for LLMs and mentions how PEFT fine-tuning can be used to extend an LLM's context window.

For ViT, the author describes how to use PEFT technology to adapt it to downstream image recognition tasks, and how to use PEFT to give ViT video recognition capabilities.

For vision-language models, the author introduces many works that apply PEFT to fine-tune vision-language models for open-set image classification tasks.

For diffusion models, the author identifies two common scenarios, adding conditioning inputs other than text and achieving personalized generation, and describes the applications of PEFT in each of these two types of tasks.

5. System design challenges of PEFT

In this chapter, the author first describes the challenges faced by cloud-based PEFT services, which mainly include the following points:

Centralized PEFT query serving: in this mode, the cloud server stores a single copy of the LLM together with multiple PEFT modules. Based on the task requirements of different PEFT queries, the cloud server selects the corresponding PEFT module and integrates it with the LLM.

Distributed PEFT query serving: in this mode, the LLM is stored on the cloud server, while the PEFT weights and datasets are stored on the user's device. The user device fine-tunes the LLM with the PEFT method and then uploads the fine-tuned PEFT weights and dataset to the cloud server.

Multi-PEFT training: challenges include how to manage memory for gradients and model weight storage, and how to design efficient kernels for batched PEFT training.


In view of the above system design challenges, the author lists three detailed system design cases to provide a more in-depth analysis of these challenges and feasible solution strategies.

Offsite-Tuning: Mainly solves the data privacy dilemma and the problem of massive resource consumption when fine-tuning LLM.

PetS: provides a unified serving framework with a unified management and scheduling mechanism for PEFT modules.


PEFT parallel training framework: Introduces two parallel PEFT training frameworks, including S-LoRA and Punica, and how they improve the training efficiency of PEFT.

6. Future research directions

The author believes that although PEFT techniques have achieved success on many downstream tasks, there are still some shortcomings that need to be addressed in future work.

Establish a unified evaluation benchmark: although some PEFT libraries already exist, there is no comprehensive benchmark for fairly comparing the effectiveness and efficiency of different PEFT methods. Establishing a recognized benchmark would foster innovation and collaboration within the community.

Enhance training efficiency: the number of trainable parameters in PEFT is not always consistent with the actual compute and memory savings during training. As discussed in the Efficient PEFT Design section, future research could further explore ways to optimize memory and computational efficiency.

Explore scaling laws: many PEFT techniques were developed on smaller Transformer models, and their effectiveness does not necessarily carry over to today's models with much larger parameter counts. Future research could explore how to adapt PEFT methods to large models.

Serve more models and tasks: with the emergence of more large-scale models, such as Sora and Mamba, PEFT technology can unlock new application scenarios. Future research could focus on designing PEFT methods for specific models and tasks.

Enhanced Data Privacy: Centralized systems may face data privacy issues when serving or fine-tuning personalized PEFT modules. Future research could explore encryption protocols to protect personal data and intermediate training/inference results.

PEFT and model compression: the impact of model compression techniques such as pruning and quantization on PEFT methods has not been fully studied. Future research could focus on how PEFT methods perform on compressed models.

