Table of Contents
Lightning Attention-2 Introduction
Lightning Attention-2 Accuracy Comparison
Lightning Attention-2 Speed Comparison
Summary

New generation attention mechanism Lightning Attention-2: unlimited sequence length, constant computing power overhead, higher modeling accuracy

Jan 18, 2024, 02:12 PM

Current applications of large language models are constrained by limits on sequence length, which restricts their use in areas such as multi-turn dialogue, long-text understanding, and multi-modal data processing and generation. The root cause of this limitation is that the Transformer architecture underlying today's large language models has computational complexity that grows quadratically with sequence length, so the demand for computing resources rises sharply as sequences get longer. Efficiently processing long sequences has therefore remained one of the standing challenges for large language models.

Past approaches have mainly focused on adapting large language models to longer sequences at the inference stage. One approach uses ALiBi or similar relative position encodings so that the model can handle input sequences of different lengths. Another uses RoPE or similar relative position encodings with interpolation, briefly fine-tuning an already trained model to extend its sequence length. These methods give large models some long-sequence modeling ability, but the training and inference overhead is not reduced.

The OpenNLPLab team has open-sourced a new linear attention mechanism called Lightning Attention-2, designed to solve the long-sequence problem of large language models. It keeps the cost of training and inference on long sequences consistent with that of 1K-length sequences, enabling a set-and-forget solution: before hitting the memory bottleneck, increasing the sequence length does not slow down model training, which makes pretraining on unlimited lengths possible. In addition, the inference cost of very long texts is comparable to or even lower than that of 1K tokens, greatly reducing the inference cost of current large language models. As shown in the figure below, at model sizes of 400M, 1B and 3B, the training speed of LLaMA backed by FlashAttention2 begins to drop rapidly as the sequence length increases, while the speed of TransNormerLLM backed by Lightning Attention-2 remains almost unchanged.

Figure 1

  • Paper: Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models
  • Paper address: https://arxiv.org/pdf/2401.04658.pdf
  • Open source address: https://github.com/OpenNLPLab/lightning-attention

Lightning Attention-2 Introduction

Keeping the pre-training speed of large models consistent across different sequence lengths sounds like an impossible task. In fact, since the advent of linear attention in 2020, researchers have been working to bring the practical efficiency of linear attention in line with its theoretical linear computational complexity. Until mid-2023, work on linear attention mainly focused on aligning its accuracy with that of the Transformer architecture, and with improved linear attention mechanisms it finally became comparable in accuracy to state-of-the-art Transformers. However, the most critical computational trick in linear attention, switching from "left multiplication" to "right multiplication", is much slower in practice than the straightforward left-multiplication algorithm: the right-multiplication implementation requires a cumulative sum (cumsum) built from many loop iterations, and the resulting large number of I/O operations makes it far less efficient than left multiplication. Keeping the pre-training speed of large models consistent across different sequence lengths therefore remained an open challenge, and further work on the implementation of linear attention was needed to improve its computational efficiency and reduce I/O, as the sketch below illustrates.
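
To make the left- vs. right-multiplication distinction concrete, here is a minimal NumPy sketch for the causal (one-way) case; it is an illustration rather than the authors' code, and all shapes and names are chosen for readability. The two functions compute the same output, but the right-multiplication form needs a sequential accumulation over positions, which is exactly the loop-heavy cumsum that makes naive implementations slow on GPUs despite the better asymptotic complexity.

```python
import numpy as np

def linear_attn_left(Q, K, V):
    """Left multiplication: O(N^2 d) work, but a single masked matmul (GPU-friendly)."""
    N = Q.shape[0]
    M = np.tril(np.ones((N, N)))           # causal mask
    return ((Q @ K.T) * M) @ V             # O = ((QK^T) ⊙ M) V

def linear_attn_right(Q, K, V):
    """Right multiplication: O(N d^2) work, but a sequential scan (cumsum) over positions."""
    d, d_v = K.shape[1], V.shape[1]
    kv = np.zeros((d, d_v))                # running sum of k_t v_t^T
    out = np.zeros_like(V)
    for t in range(Q.shape[0]):            # many small ops and memory round-trips: slow in practice
        kv += np.outer(K[t], V[t])
        out[t] = Q[t] @ kv
    return out

Q, K, V = (np.random.randn(128, 32) for _ in range(3))
print(np.allclose(linear_attn_left(Q, K, V), linear_attn_right(Q, K, V)))  # True
```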

Figure 2

To better understand the idea of Lightning Attention-2, let us first review the formula of traditional softmax attention: O = softmax((QK^T) ⊙ M) V, where Q, K, V, M and O are the query, key, value, mask and output matrices respectively. In one-way (causal) tasks such as GPT, M is a lower-triangular all-ones matrix; in two-way tasks such as BERT it can be ignored, i.e. there is no mask matrix.

The author summarizes the overall idea of Lightning Attention-2 in the following three points:

1. One of the core ideas of linear attention is to remove the computationally expensive softmax operator, so that the attention formula can be written as O = ((QK^T) ⊙ M) V. However, because of the mask matrix M in one-way tasks, this form can still only be computed by left multiplication, so O(N) complexity cannot be obtained. For bidirectional tasks there is no mask matrix, and the linear attention formula simplifies to O = (QK^T) V. The subtlety of linear attention is that, simply by using the associativity of matrix multiplication, this can be transformed into O = Q (K^T V). This form is called right multiplication, and the previous one left multiplication. From Figure 2 we can intuitively see that linear attention can achieve an attractive O(N) complexity in bidirectional tasks; a small check of the equivalence is shown below.
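
As a small illustration of point 1 (a sketch, not code from the paper; sizes are arbitrary), the snippet below verifies the associativity trick for the bidirectional, mask-free case: Q (K^T V) produces the same result as (QK^T) V while only ever forming a d × d intermediate, so the work is linear in the sequence length N.

```python
import numpy as np

N, d = 2048, 64
Q, K, V = (np.random.randn(N, d) for _ in range(3))

left = (Q @ K.T) @ V      # O(N^2 d): materializes the N x N attention matrix
right = Q @ (K.T @ V)     # O(N d^2): only a d x d intermediate, linear in N

print(np.allclose(left, right))   # True, by associativity of matrix multiplication
```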

2. However, as the decoder-only GPT model has gradually become the de facto standard for LLMs, how to exploit the right-multiplication property of linear attention to accelerate one-way tasks has become an urgent problem. To solve it, the authors propose a "divide and conquer" idea: split the computation of the attention matrix into a diagonal part and a non-diagonal part, and compute them in different ways. As shown in Figure 3, Lightning Attention-2 uses the tiling idea common in computer science to divide the Q, K and V matrices into the same number of blocks. Within a block (intra-block), because of the mask matrix, the computation keeps the left-multiplication form, with a cost quadratic in the block size; between blocks (inter-block), where no mask matrix is involved, the right-multiplication form can be used to enjoy O(N) complexity. After the two parts are computed separately, they are added to obtain the linear attention output O_i for the i-th block, while the KV state is accumulated via cumsum for use in the next block's computation. The overall complexity of Lightning Attention-2 is thus a trade-off between the quadratic intra-block term and the linear inter-block term, and the quality of this trade-off is determined by the tiling block size. A minimal sketch of this block-wise computation is given below.
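
The following is a minimal single-head NumPy sketch of point 2, written for readability rather than speed (the released CUDA kernels at the open-source address above are the reference implementation; the block size B and all names here are illustrative). Each block combines an intra-block left multiplication under the local causal mask with an inter-block right multiplication against the accumulated KV state.

```python
import numpy as np

def lightning_attention2_sketch(Q, K, V, B=64):
    """Block-wise causal linear attention: intra-block left-multiply, inter-block right-multiply.

    A readability-first sketch of the tiling idea; in the real kernels each Q_i, K_i, V_i
    block is staged from HBM into SRAM and these steps are fused on-chip (point 3).
    """
    N, d = Q.shape
    out = np.zeros_like(V)
    kv = np.zeros((d, V.shape[1]))              # accumulated K^T V state from previous blocks
    mask = np.tril(np.ones((B, B)))             # causal mask within a block
    for i in range(0, N, B):
        Qi, Ki, Vi = Q[i:i+B], K[i:i+B], V[i:i+B]
        b = len(Qi)
        intra = ((Qi @ Ki.T) * mask[:b, :b]) @ Vi   # quadratic in B, mask is needed here
        inter = Qi @ kv                              # linear term, no mask needed
        out[i:i+B] = intra + inter
        kv += Ki.T @ Vi                              # cumsum of the KV state for later blocks
    return out

# sanity check against the plain masked (left-multiplication) form
N, d = 512, 32
Q, K, V = (np.random.randn(N, d) for _ in range(3))
ref = ((Q @ K.T) * np.tril(np.ones((N, N)))) @ V
print(np.allclose(lightning_attention2_sketch(Q, K, V), ref))   # True
```

The choice of B governs the trade-off the text describes: larger blocks spend more time in the quadratic intra-block matmul, while smaller blocks require more inter-block state updates.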

3. Careful readers will notice that the above describes only the algorithmic part of Lightning Attention-2. The name "Lightning" comes from the fact that the authors fully considered the efficiency of executing this algorithm on GPU hardware. Inspired by the FlashAttention line of work, when actually performing the computation on the GPU, the authors move the split Q_i, K_i, V_i tensors from the GPU's slower, larger-capacity HBM to its faster, smaller-capacity SRAM and perform the computation there, thereby eliminating a large amount of memory I/O overhead. Once a block's linear attention computation is complete, its output O_i is moved back to HBM. This process is repeated until all blocks have been processed.

Readers who want more details can carefully read Algorithm 1 and Algorithm 2 in the paper, along with its detailed derivations. Both the algorithms and the derivations distinguish the forward and backward passes of Lightning Attention-2, which helps build a deeper understanding.

Figure 3

Lightning Attention-2 Accuracy Comparison

The researchers first compared the accuracy of Lightning Attention-2 and Lightning Attention-1 on a small-scale (400M-parameter) model. As shown below, there is almost no difference between the two.


The researchers then compared TransNormerLLM backed by Lightning Attention-2 (TNL-LA2) with other advanced non-Transformer architectures, as well as with LLaMA backed by FlashAttention2, at the 1B and 3B scales on the same corpus. As shown in the figure below, TNL-LA2 and LLaMA follow a similar trend, with TNL-LA2 achieving better loss. This experiment indicates that, in language modeling, Lightning Attention-2 achieves accuracy no worse than the state-of-the-art Transformer architecture.


On large language model tasks, the researchers compared TNL-LA2 15B and Pythia on common benchmarks for large models of similar size. As shown in the table below, when trained on the same number of tokens, TNL-LA2 is slightly better than the Softmax-attention-based Pythia model in common-sense reasoning and in aggregate multiple-choice ability.


Lightning Attention-2 Speed Comparison

The researchers compared the single-module speed and memory usage of Lightning Attention-2 with Lightning Attention-1 and FlashAttention2. As shown in the figure below, the runtime of Lightning Attention-2 scales strictly linearly with sequence length, unlike the other two. In terms of memory usage, all three show a similar, approximately linear trend, but Lightning Attention-2 has a smaller memory footprint.


The authors note that the main focus of this article is the training speed of linear attention networks, achieving a training speed on sequences of arbitrary length similar to that on 1K sequences; inference speed is not discussed at length. This is because linear attention can be losslessly converted to an RNN mode during inference, achieving a similar effect: the inference speed for a single token is constant. For a Transformer, by contrast, the inference speed of the current token depends on the number of tokens before it. A minimal sketch of this recurrent decoding mode follows.
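
The sketch below illustrates that recurrent decoding form (an illustration under simple assumptions, not the released implementation): a fixed-size kv state is updated once per token, so each decoding step costs O(d^2) regardless of how many tokens precede it, whereas softmax attention must attend over a cache that grows with position.

```python
import numpy as np

class LinearAttnRNNState:
    """Recurrent (RNN-mode) decoding for linear attention: constant cost per token."""
    def __init__(self, d, d_v):
        self.kv = np.zeros((d, d_v))        # fixed-size state, independent of context length

    def step(self, q_t, k_t, v_t):
        self.kv += np.outer(k_t, v_t)       # fold the new token into the state
        return q_t @ self.kv                # output for the current token, O(d * d_v)

d = 64
state = LinearAttnRNNState(d, d)
for _ in range(10_000):                     # per-step cost does not grow with position
    q, k, v = (np.random.randn(d) for _ in range(3))
    o = state.step(q, k, v)
```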

The authors compared the inference speed of TransNormerLLM-7B backed by Lightning Attention-1 with common 7B models. As shown in the figure below, at a similar parameter scale, the throughput of Lightning Attention-1 is 4 times that of Baichuan and more than 3.5 times that of ChatGLM, showing an excellent advantage in inference speed.


Summary

Lightning Attention-2 represents a major advance in linear attention mechanisms, allowing it to replace traditional Softmax attention in terms of both accuracy and speed, providing sustainable scaling for ever-larger models and a path toward processing infinitely long sequences with higher efficiency. The OpenNLPLab team will next study sequence-parallel algorithms based on linear attention to address the memory barrier currently encountered.
