Table of Contents
Model Framework
Experimental results

Transformer model optimization method for long code sequences to improve performance in long code scenarios

Apr 29, 2023, 08:34 AM

Alibaba Cloud's Machine Learning Platform for AI (PAI), together with Professor Ming Gao's team at East China Normal University, published the structure-aware sparse attention Transformer model SASA at SIGIR 2022. SASA is a Transformer optimization method for long code sequences, aimed at improving both effectiveness and efficiency in long-code scenarios. Because the complexity of the self-attention module grows quadratically with sequence length, most programming-based pretrained language models (PPLMs) simply truncate code sequences. SASA instead sparsifies the self-attention computation and incorporates the structural characteristics of code, improving performance on long-sequence tasks while reducing memory usage and computational complexity.

Paper: Tingting Liu, Chengyu Wang, Cen Chen, Ming Gao, and Aoying Zhou. Understanding Long Programming Languages with Structure-Aware Sparse Attention. SIGIR 2022.

Model Framework

The following figure shows the overall framework of SASA:

[Figure: overall framework of SASA]

SASA consists of two stages: a preprocessing stage and a Sparse Transformer training stage. The preprocessing stage produces two token-interaction matrices: a top-k frequency matrix and an AST pattern matrix. The top-k frequency matrix records how often pairs of tokens attend to each other, measured with a code pre-trained language model on the CodeSearchNet corpus. The AST pattern matrix is obtained by parsing the code into an Abstract Syntax Tree (AST) and deriving token-to-token interaction information from the connectivity of the tree. The training stage uses a Transformer encoder as the backbone, replaces full self-attention with structure-aware sparse self-attention, and computes attention only between token pairs that match specific patterns, which reduces the computational complexity.
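The following is an illustrative sketch, not the authors' released code, of how these two interaction matrices could be represented as boolean masks. The function names, tensor shapes, and the way AST edges are supplied are assumptions made for illustration.

```python
import torch

def build_topk_matrix(attn_freq: torch.Tensor, k: int) -> torch.Tensor:
    """For every token, keep only its k most frequent attention partners.
    attn_freq: [seq_len, seq_len] attention-interaction frequencies collected
    with a code pre-trained language model on the CodeSearchNet corpus."""
    topk_idx = attn_freq.topk(k, dim=-1).indices             # [seq_len, k]
    mask = torch.zeros_like(attn_freq, dtype=torch.bool)
    rows = torch.arange(attn_freq.size(0)).unsqueeze(-1)     # [seq_len, 1]
    mask[rows, topk_idx] = True
    return mask

def build_ast_matrix(ast_edges, seq_len: int) -> torch.Tensor:
    """Token pairs that are connected in the parsed Abstract Syntax Tree.
    ast_edges: iterable of (i, j) token-index pairs that share an edge
    (e.g. parent/child) in the syntax tree produced by a code parser."""
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    for i, j in ast_edges:
        mask[i, j] = mask[j, i] = True
    return mask
```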

SASA sparse attention includes the following four modules:

  • Sliding window attention: computes self-attention only between tokens inside a sliding window, preserving the local context. The computational complexity is O(n × w), where n is the sequence length and w is the sliding window size.
  • Global attention: designates certain tokens as global tokens that attend to, and are attended by, every token in the sequence, capturing global information. The computational complexity is O(n × g), where g is the number of global tokens.
  • Top-k sparse attention: attention interactions in Transformer models are sparse and long-tailed, so each token attends only to the top-k tokens it interacts with most frequently. The computational complexity is O(n × k).
  • AST-aware structure attention: unlike natural language, code has strong structural characteristics; the code is parsed into an Abstract Syntax Tree (AST), and the scope of the attention computation is determined by the connectivity of the tree. (A sketch that combines these four patterns into a single mask is shown after this list.)
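As referenced above, the four patterns can be viewed as the union of four boolean masks over token pairs. The sketch below is a simplified, token-level illustration under assumed inputs (window size, global token indices, and the two precomputed masks); the paper's actual implementation operates block-wise.

```python
import torch

def build_sparse_mask(seq_len: int, window: int,
                      global_idx: torch.Tensor,
                      topk_mask: torch.Tensor,
                      ast_mask: torch.Tensor) -> torch.Tensor:
    """Union of the four SASA attention patterns (True = attention is computed)."""
    idx = torch.arange(seq_len)
    # 1) sliding window: tokens within `window` positions of each other
    mask = (idx[None, :] - idx[:, None]).abs() <= window
    # 2) global tokens attend to, and are attended by, every token
    mask[global_idx, :] = True
    mask[:, global_idx] = True
    # 3) top-k frequency pairs and 4) AST-connected pairs
    mask = mask | topk_mask | ast_mask
    return mask

# Disallowed positions are masked out before the softmax:
#   scores = scores.masked_fill(~mask, float("-inf"))
```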

To match the parallel-computing characteristics of modern hardware, we divide the sequence into blocks rather than computing attention token by token. Each query block computes attention only with its neighboring sliding-window blocks, the global blocks, and the selected top-k and AST blocks, so the overall computational complexity grows linearly with the sequence length instead of quadratically, where b is the block size used for the block-wise computation.
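As a rough back-of-the-envelope illustration of this point (the exact formula and constants in the paper may differ), the sketch below compares the number of token-pair interactions of dense self-attention with the block-wise sparse scheme; all parameter values are made up for illustration.

```python
def attention_cost(seq_len: int, window: int, num_global: int,
                   top_k: int, block_size: int):
    """Rough operation counts (token-pair interactions) for dense vs.
    block-wise sparse attention; a back-of-the-envelope comparison only."""
    dense = seq_len * seq_len                          # full self-attention
    n_query_blocks = seq_len // block_size
    key_blocks_per_query = (window // block_size       # sliding-window blocks
                            + num_global // block_size  # global blocks
                            + top_k)                    # top-k / AST blocks
    sparse = n_query_blocks * key_blocks_per_query * block_size * block_size
    return dense, sparse

dense, sparse = attention_cost(seq_len=4096, window=256, num_global=64,
                               top_k=8, block_size=64)
print(f"dense: {dense:,}  sparse: {sparse:,}")  # sparse grows linearly with seq_len
```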

Each sparse attention pattern corresponds to an attention matrix. Taking sliding window attention as an example, the attention score between tokens i and j (the scaled dot product of q_i and k_j) is computed only when |i − j| ≤ w; all positions outside the window are masked out before the softmax and receive zero weight.

SASA pseudocode:

[Figure: SASA pseudocode]
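Since the pseudocode figure is not reproduced here, the following is a minimal, token-level sketch of the sparse self-attention computation, assuming a combined boolean mask like the one built earlier; it is not the authors' block-wise implementation.

```python
import torch
import torch.nn.functional as F

def sparse_self_attention(q, k, v, sparse_mask):
    """Scaled dot-product attention restricted to the positions allowed by
    the combined sliding-window / global / top-k / AST mask."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5         # [batch, seq, seq]
    scores = scores.masked_fill(~sparse_mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ v                                   # [batch, seq, d]

# Toy usage: sliding-window-only mask with token 0 acting as a global token
seq_len, d_model = 1024, 64
q = k = v = torch.randn(1, seq_len, d_model)
idx = torch.arange(seq_len)
mask = (idx[None, :] - idx[:, None]).abs() <= 32
mask[0, :] = mask[:, 0] = True
print(sparse_self_attention(q, k, v, mask).shape)        # torch.Size([1, 1024, 64])
```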

Experimental results

We evaluate on four tasks from CodeXGLUE[1]: code clone detection, defect detection, code search, and code summarization. From each dataset we extract the examples whose sequence length exceeds 512 tokens to form long-sequence datasets.
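As a hedged sketch of how such a long-sequence subset could be constructed, the snippet below filters a CodeXGLUE dataset by tokenized length. The dataset identifier, the "func" field name, and the choice of tokenizer are illustrative assumptions, not necessarily the paper's exact setup.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
dataset = load_dataset("code_x_glue_cc_defect_detection", split="train")

# Keep only examples whose tokenized code exceeds 512 tokens
long_subset = dataset.filter(
    lambda ex: len(tokenizer(ex["func"])["input_ids"]) > 512
)
print(len(dataset), "->", len(long_subset), "long-sequence examples")
```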

The experimental results are as follows:

[Table: main results on the long-sequence code tasks]

The experimental results show that SASA significantly outperforms all baselines on the three datasets. Among the baselines, RoBERTa-base[2], CodeBERT[3], and GraphCodeBERT[4] handle long sequences by truncation, which discards part of the context. Longformer[5] and BigBird[6] are long-sequence methods from natural language processing, but they do not take the structural characteristics of code into account, so transferring them directly to code tasks yields suboptimal results.

To verify the effect of the top-k sparse attention and AST-aware sparse attention modules, we conducted ablation experiments on the BigCloneBench and Defect Detection datasets. The results are as follows:

[Table: ablation results on BigCloneBench and Defect Detection]

The sparse attention module not only improves performance on long-code tasks but also greatly reduces GPU memory usage. On the same device, SASA can use a larger batch size, whereas the full self-attention model runs out of memory. The detailed memory usage is as follows:

[Table: GPU memory usage comparison]

As a sparse attention module, SASA can be ported to other Transformer-based pre-trained models to handle long-sequence natural language processing tasks. It will be integrated into the open-source framework EasyNLP (https://github.com/alibaba/EasyNLP) and contributed to the open-source community.

Paper link:
https://arxiv.org/abs/2205.13730
