
Tian Yuandong's new work: Opening the first layer of Transformer black box, the attention mechanism is not so mysterious

Jun 12, 2023 01:56 PM

The Transformer architecture has swept across many fields, including natural language processing, computer vision, speech, and multi-modality. However, while the experimental results are impressive, research on the working principles of the Transformer remains very limited.

The biggest mystery is: how can a Transformer, relying only on a "simple prediction loss", develop efficient representations through its gradient training dynamics?

Recently, Dr. Tian Yuandong announced his team's latest research results: a mathematically rigorous analysis of the SGD training dynamics of a 1-layer Transformer (one self-attention layer plus one decoder layer) on the next-token prediction task.


Paper link: https://arxiv.org/abs/2305.16380

This paper opens the black box of how self-attention layers combine input tokens during training, and reveals the nature of the underlying inductive bias.

Specifically, under the assumptions of no positional encoding, long input sequences, and a decoder layer that learns faster than the self-attention layer, the researchers prove that self-attention acts as a discriminative scanning algorithm:

Starting from uniform attention, for a specific next token to be predicted, the model gradually attends to the distinct key tokens associated with it, and pays less attention to common tokens that appear across multiple next-token windows.

Among the distinct tokens, the model progressively drops attention weights, following the order of co-occurrence between key tokens and the query token in the training set, from low to high.

Interestingly, this process does not end in winner-take-all: it is slowed down by a phase transition controlled by the learning rates of the two layers, and eventually settles into an (almost) fixed token combination. This dynamic is also verified on synthetic and real-world data.

Dr. Tian Yuandong is a researcher and research manager at Meta AI and the leader of its Go AI project. His research focuses on deep reinforcement learning and its applications in games, as well as theoretical analysis of deep learning models. He received his bachelor's and master's degrees from Shanghai Jiao Tong University in 2005 and 2008, and his doctorate from the Robotics Institute at Carnegie Mellon University in 2013.

He received a Marr Prize Honorable Mention at the 2013 International Conference on Computer Vision (ICCV) and an Outstanding Paper Honorable Mention at ICML 2021.

After completing his Ph.D., he published a "Five-Year Doctoral Summary" series reflecting on his doctoral years, covering topics such as choosing research directions, reading and accumulation, time management, work attitude, income, and sustainable career development.

Revealing the 1-layer Transformer

Pre-trained models based on the Transformer architecture are usually trained on very simple supervised tasks, such as predicting the next word or filling in blanks, yet they provide remarkably rich representations for downstream tasks, which is mind-boggling.

Although previous work has proven that the Transformer is essentially a universal approximator, many previously common machine learning models, such as kNN, kernel SVMs, and multi-layer perceptrons, are also universal approximators, so this theory cannot explain the huge performance gap between the two classes of models.


The researchers argue that it is important to understand the training dynamics of the Transformer, that is, how the parameters change over time during training.

The article first gives a rigorous mathematical formalization of the SGD training dynamics of a 1-layer, position-encoding-free Transformer on next-token prediction (the common training paradigm of the GPT series).

The 1-layer Transformer consists of a softmax self-attention layer and a decoder layer that predicts the next token.
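To make the setup concrete, here is a minimal PyTorch sketch of such a 1-layer model: a single softmax self-attention layer (no positional encoding, matching the paper's assumption) followed by a linear decoder that produces next-token logits. The class and parameter names are illustrative only and are not taken from the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneLayerTransformer(nn.Module):
    """1-layer Transformer: softmax self-attention + linear decoder,
    with no positional encoding (as assumed in the paper's analysis)."""

    def __init__(self, vocab_size: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # single-head self-attention parameters
        self.W_q = nn.Linear(d_model, d_model, bias=False)
        self.W_k = nn.Linear(d_model, d_model, bias=False)
        self.W_v = nn.Linear(d_model, d_model, bias=False)
        # decoder layer mapping the attended representation to next-token logits
        self.decoder = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, seq_len) integer ids; the last token acts as the query
        x = self.embed(tokens)                       # (B, T, d)
        q = self.W_q(x[:, -1:, :])                   # query from the last position
        k, v = self.W_k(x), self.W_v(x)
        attn = F.softmax(q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5, dim=-1)
        ctx = attn @ v                               # (B, 1, d) attended context
        logits = self.decoder(ctx.squeeze(1))        # (B, vocab) next-token logits
        return logits, attn.squeeze(1)               # return attention for inspection
```

Returning the attention weights alongside the logits makes it easy to observe the dynamics discussed below.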


Assuming that the sequence is long and that the decoder learns faster than the self-attention layer, the paper proves the following dynamic behaviors of self-attention during training:

1. Frequency Bias

The model gradually attends more to key tokens that co-occur frequently with the query token, and reduces its attention to tokens that co-occur rarely.

2. Discriminative Bias

The model pays more attention to distinct tokens that appear uniquely in the next token to be predicted, and loses interest in common tokens that appear across multiple next tokens.

These two properties show that self-attention implicitly runs a discriminative scanning algorithm with an inductive bias: it favors distinct key tokens that frequently co-occur with the query token.

In addition, although the self-attention layer tends to become sparser during training, as the frequency bias suggests, the model does not collapse to a one-hot attention pattern, because of a phase transition in the training dynamics.
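One hedged way to probe these two biases empirically is to track, during training, how much attention mass the query assigns to "distinct" key tokens (tied to a single next token) versus "common" ones (shared across several next tokens), along with the attention entropy as a sparsity proxy. The helper below is a toy illustration built around the sketch above, not the paper's exact measurement protocol.

```python
import torch

def attention_bias_stats(attn: torch.Tensor,
                         tokens: torch.Tensor,
                         distinct_ids: set,
                         common_ids: set):
    """attn: (batch, seq_len) attention of the last-position query over the context.
    tokens: (batch, seq_len) token ids.
    Returns average attention mass on distinct vs. common key tokens,
    plus the attention entropy (lower entropy = sparser attention)."""
    distinct_mask = torch.zeros_like(tokens, dtype=torch.bool)
    common_mask = torch.zeros_like(tokens, dtype=torch.bool)
    for tid in distinct_ids:
        distinct_mask |= tokens.eq(tid)
    for tid in common_ids:
        common_mask |= tokens.eq(tid)

    mass_distinct = (attn * distinct_mask).sum(dim=-1).mean().item()
    mass_common = (attn * common_mask).sum(dim=-1).mean().item()
    entropy = -(attn.clamp_min(1e-9).log() * attn).sum(dim=-1).mean().item()
    return mass_distinct, mass_common, entropy
```

If frequency and discriminative bias hold, the mass on distinct, frequently co-occurring tokens should rise over training while the entropy falls, without ever reaching zero (i.e., without becoming one-hot).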


The final stage of learning does not converge to a saddle point with zero gradient; instead, it enters a region where attention changes only slowly (i.e., logarithmically over time), with the parameters effectively frozen and learned.

The results further show that the onset of the phase transition is controlled by the learning rates: a larger learning rate produces sparser attention patterns, while with the self-attention learning rate fixed, a larger decoder learning rate leads to a faster phase transition and denser attention patterns.
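Since the phase transition is governed by the two layers' learning rates, a natural way to experiment with this in practice is to give the self-attention parameters and the decoder different learning rates via SGD parameter groups. The specific values below are placeholders, not the paper's settings, and the model is the illustrative sketch from above.

```python
import torch

model = OneLayerTransformer(vocab_size=1000)  # from the earlier sketch

attn_params = [p for n, p in model.named_parameters() if n.startswith("W_")]
dec_params = [p for n, p in model.named_parameters() if n.startswith("decoder")]
# The embedding is left out of the optimizer (kept fixed) purely to simplify
# this illustration.

# Separate learning rates: per the paper's observation, with the attention
# rate fixed, a larger decoder rate triggers the phase transition earlier.
optimizer = torch.optim.SGD([
    {"params": attn_params, "lr": 1e-2},   # self-attention layer
    {"params": dec_params, "lr": 1e-1},    # decoder layer (the faster learner)
])
```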

The researchers named the SGD dynamics discovered in their work "scan and snap":

Scan phase: self-attention concentrates on key tokens, i.e., distinct tokens that frequently co-occur with the next token to be predicted, while attention to all other tokens decreases.

Snap phase: attention is almost frozen, and the token combination becomes fixed.


This phenomenon has also been verified in simple real-world experiments: observing the lowest self-attention layer of 1-layer and 3-layer Transformers trained with SGD on WikiText, the researchers found that even with a constant learning rate throughout training, the attention freezes at some point and becomes sparse.
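A lightweight way to look for this freezing in one's own runs is to log an attention-sparsity statistic (here, the entropy of the last-position attention) at regular intervals during plain SGD training. This sketch assumes the model and optimizer defined in the earlier illustrations and a generic data loader yielding (context tokens, next token) pairs; all of these are assumptions for illustration, not the paper's experimental code.

```python
import torch.nn.functional as F

def train_and_log(model, optimizer, data_loader, steps=10_000, log_every=500):
    """Train on next-token prediction with SGD and log attention entropy.
    A lasting drop to a near-constant low value is the 'snap' signature:
    attention has frozen into a sparse pattern."""
    history = []
    step = 0
    for tokens, next_token in data_loader:        # tokens: (B, T), next_token: (B,)
        logits, attn = model(tokens)
        loss = F.cross_entropy(logits, next_token)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if step % log_every == 0:
            entropy = -(attn.clamp_min(1e-9).log() * attn).sum(-1).mean().item()
            history.append((step, entropy))
        step += 1
        if step >= steps:
            break
    return history
```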
