
Revealed new version: Mathematical principles of Transformer that you have never seen before

Jan 12, 2024, 11:48 PM

Recently, a paper posted on arXiv offered a new interpretation of the mathematical principles behind Transformers. The content is long and dense, and reading the original text is recommended.

In 2017, "Attention Is All You Need", published by Vaswani et al., became an important milestone in the development of neural network architectures. The core contribution of that paper is the self-attention mechanism, the innovation that distinguishes Transformers from traditional architectures and underpins their excellent practical performance.

In fact, this innovation has become a key catalyst for progress in artificial intelligence in areas such as computer vision and natural language processing, and it played a key role in the emergence of large language models. Therefore, understanding Transformers, and in particular the mechanisms by which self-attention processes data, is a crucial but still largely understudied area.

Paper address: https://arxiv.org/pdf/2312.10794.pdf

Deep neural networks (DNNs) share a common feature: input data is processed layer by layer in sequence, forming a discrete-time dynamical system (for details, see "Deep Learning", published by MIT Press and known in China as the "Flower Book"). This perspective has been successfully used to model residual networks as continuous-time dynamical systems, called neural ordinary differential equations (neural ODEs). In a neural ODE, an input x(0) in R^d, such as an image, evolves according to a given time-varying velocity field, x'(t) = v_t(x(t)), over a time interval (0, T). A DNN can therefore be regarded as a flow map that sends x(0) to x(T), from R^d to R^d. Strong similarities appear between flow maps even when the velocity fields are constrained by classical DNN architectures.
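
To make the flow-map picture concrete, the following is a minimal Python sketch of a neural-ODE-style flow map computed by explicit Euler integration. The velocity field v and the weight matrix W are hypothetical stand-ins for illustration, not anything taken from the paper.

```python
import numpy as np

def flow_map(x0, velocity, T=1.0, steps=100):
    """Approximate the flow map x(0) -> x(T) of dx/dt = v_t(x) by Euler steps."""
    x, dt = x0.astype(float).copy(), T / steps
    for k in range(steps):
        x = x + dt * velocity(k * dt, x)   # one "layer" of the network
    return x

# Hypothetical ResNet-style velocity field: v_t(x) = tanh(W x).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) / np.sqrt(8)
v = lambda t, x: np.tanh(W @ x)            # time-homogeneous here for simplicity
x_T = flow_map(rng.standard_normal(8), v)
print(x_T)
```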

The researchers found that Transformers are in fact flow maps on P(R^d), that is, maps between spaces of probability measures over R^d. To realize this flow map between measure spaces, Transformers establish a mean-field interacting particle system.

Specifically, each particle (which can be understood as a token in the deep-learning context) follows the flow of a vector field, and this flow depends on the empirical measure of all particles. In turn, these equations determine the evolution of the particles' empirical measure, a process that unfolds over long time horizons and rewards sustained attention.
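
Written out, the dynamics take the following form (a transcription of the paper's simplified self-attention model with the query, key, and value matrices set to the identity; the notation may differ from the paper in minor details):

```latex
\dot{x}_i(t) = \mathbf{P}^{\perp}_{x_i(t)}\!\left( \frac{1}{Z_{\beta,i}(t)} \sum_{j=1}^{n} e^{\beta \langle x_i(t),\, x_j(t) \rangle}\, x_j(t) \right),
\qquad
Z_{\beta,i}(t) = \sum_{k=1}^{n} e^{\beta \langle x_i(t),\, x_k(t) \rangle},
```

where P⊥_x denotes the projection onto the tangent space of the unit sphere at x, and β plays the role of an inverse temperature.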

The researchers' main observation is that the particles tend to cluster together over time. This phenomenon is particularly relevant in learning tasks such as unidirectional prediction (i.e., predicting the next word in a sequence): the output measure encodes the probability distribution of the next token, and the clustering singles out a small number of likely outcomes.

The results in the paper show that the limiting distribution is actually a point mass, with no diversity or randomness, which seems inconsistent with what is observed in practice. This apparent paradox is resolved by the fact that the particles remain in metastable states for long periods. As Figures 2 and 4 show, Transformers exhibit two distinct time scales: in a first phase, all tokens quickly form several clusters; in a second, much slower phase, the clusters merge pairwise until all tokens eventually collapse to a single point.
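
This two-scale behavior can be reproduced in a toy simulation. Below is a minimal Python sketch, assuming the simplified dynamics written out above and discretizing them with explicit Euler steps; the values of beta, the step size, and the cluster-counting tolerance are arbitrary illustrative choices.

```python
import numpy as np

def attention_step(X, beta=4.0, dt=0.1):
    """One Euler step of the simplified self-attention dynamics on the sphere."""
    A = np.exp(beta * X @ X.T)                   # attention kernel e^{beta <x_i, x_j>}
    A /= A.sum(axis=1, keepdims=True)            # softmax over j (empirical-measure weights)
    V = A @ X                                    # softmax-weighted average of all particles
    V -= (V * X).sum(axis=1, keepdims=True) * X  # project onto the sphere's tangent space
    X = X + dt * V
    return X / np.linalg.norm(X, axis=1, keepdims=True)  # renormalize (layer-norm role)

def n_clusters(X, tol=1e-2):
    """Crude cluster count: connected components of the 'distance < tol' graph."""
    n = len(X)
    close = np.linalg.norm(X[:, None] - X[None, :], axis=-1) < tol
    seen, count = np.zeros(n, bool), 0
    for i in range(n):
        if not seen[i]:
            count, stack = count + 1, [i]
            while stack:
                j = stack.pop()
                if not seen[j]:
                    seen[j] = True
                    stack.extend(np.flatnonzero(close[j] & ~seen))
    return count

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # 64 random tokens on the sphere in R^3
for t in range(20001):
    if t % 4000 == 0:
        print(t, n_clusters(X))  # typically drops fast to a few clusters, then slowly to 1
    X = attention_step(X)
```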

The goal of the article is twofold. On the one hand, it aims to provide a general and accessible framework for studying Transformers from a mathematical perspective. In particular, the structure of these interacting particle systems allows researchers to make concrete connections to established topics in mathematics, including nonlinear transport equations, Wasserstein gradient flows, models of collective behavior, and optimal configurations of points on a sphere. On the other hand, the paper describes several promising research directions, with a special focus on clustering phenomena over long time spans. The main mathematical results it presents are new, and open questions that the researchers consider interesting are raised throughout the paper.

The main contributions of this article are divided into three parts.

Part 1: Modeling. The paper defines an idealized model of the Transformer architecture that treats the number of layers as a continuous time variable. This abstraction is not new; it parallels the treatment of classical architectures such as ResNets. The model focuses on just two key components of the Transformer architecture: the self-attention mechanism and layer normalization. Layer normalization effectively confines the particles to the unit sphere, while self-attention couples the particles nonlinearly through the empirical measure. In turn, the empirical measure evolves according to a continuity equation, a partial differential equation. The paper also introduces a simpler, more tractable surrogate model for self-attention: a Wasserstein gradient flow of an energy functional whose optimal point configurations on the sphere are already well studied.
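
To give a feel for the energy view, here is a hedged Python sketch of an interaction energy evaluated on an empirical measure of particles; the exact functional and its normalization in the paper may differ, so treat the constants below as assumptions. Configurations in which all particles coincide make this energy largest, which is consistent with the clustering picture.

```python
import numpy as np

def interaction_energy(X, beta=1.0):
    """Interaction energy of the empirical measure of the rows of X
    (unit vectors): an average of e^{beta <x, y>} over all pairs.
    The 1/(2*beta*n^2) normalization is an illustrative assumption."""
    n = len(X)
    return np.exp(beta * X @ X.T).sum() / (2 * beta * n * n)

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 8))
X /= np.linalg.norm(X, axis=1, keepdims=True)
print(interaction_energy(X))                       # spread-out particles: lower energy
print(interaction_energy(np.tile(X[0], (16, 1))))  # all at one point: maximal energy
```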

Part 2: Clustering. In this part, the researchers present new mathematical results on token clustering over long time spans. As Theorem 4.1 shows, in high dimension, a group of n particles randomly initialized on the unit sphere clusters to a single point as t → ∞. A precise description of the rate at which the particle clusters contract complements this result. Specifically, the researchers plot histograms of the distances between all pairs of particles, along with the times at which clustering is essentially complete (see Section 4 of the original paper). They also obtain clustering results without assuming the dimension d is large (see Section 5 of the original paper).
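
A toy version of that experiment, under the same assumed dynamics as the sketch above: in a high-dimensional setting like that of Theorem 4.1, the histogram of pairwise distances drains into the bin nearest zero as the particles merge. The dimension, particle count, and beta below are illustrative choices.

```python
import numpy as np

def attention_step(X, beta=1.0, dt=0.1):  # same assumed dynamics as the earlier sketch
    A = np.exp(beta * X @ X.T)
    A /= A.sum(axis=1, keepdims=True)
    V = A @ X
    V -= (V * X).sum(axis=1, keepdims=True) * X
    X = X + dt * V
    return X / np.linalg.norm(X, axis=1, keepdims=True)

rng = np.random.default_rng(1)
n, d = 32, 128                       # high dimension relative to n, as in Theorem 4.1
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
for t in range(3001):
    if t % 1000 == 0:
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        hist, _ = np.histogram(D[np.triu_indices(n, 1)], bins=5, range=(0, 2))
        print(t, hist)               # pairwise-distance histogram collapses toward 0
    X = attention_step(X)
```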

Part 3: Looking ahead. The paper proposes potential avenues for future research, mainly in the form of open questions substantiated by numerical observations. The researchers first study the case of dimension d = 2 (see Section 6 of the original paper) and draw a connection to Kuramoto oscillators. They then briefly show how hard problems of optimization on the sphere can be addressed through simple and natural modifications of the model (see Section 7 of the original paper). The following sections explore interacting particle systems that make it possible to tune the parameters of the Transformer architecture, which may later lead to practical applications.
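
For readers who have not met Kuramoto oscillators, the following is a minimal Python sketch of the classical model with identical oscillators; synchronization of the phases is the circle (d = 2) analogue of token clustering. This is an illustrative analogy, not the paper's exact reduction.

```python
import numpy as np

def kuramoto_step(theta, K=1.0, dt=0.05):
    """One Euler step of d(theta_i)/dt = (K/n) * sum_j sin(theta_j - theta_i),
    the classical Kuramoto model with identical natural frequencies."""
    n = len(theta)
    return theta + dt * (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)

theta = np.random.default_rng(2).uniform(0.0, 2.0 * np.pi, 16)
for _ in range(2000):
    theta = kuramoto_step(theta)
r = abs(np.exp(1j * theta).mean())   # order parameter: r -> 1 at full synchronization
print(r)
```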

