Table of Contents
Constructing a deep Transformer that is trainable without shortcuts
Experiment

Papers that were highly praised by the reviewers during the ICLR blind review stage: Will it be a major innovation in the Transformer architecture?

Apr 12, 2023, 05:31 PM

Despite many notable achievements, practical progress in training deep neural networks (DNNs) has advanced largely independently of its theoretical foundations. Most successful modern DNNs rely on specific arrangements of residual connections and normalization layers, yet the general principles for using these components in new architectures remain unknown, and their role in existing architectures is still not fully understood.

Residual architectures are the most popular and successful. They were originally developed in the context of convolutional neural networks (CNNs) and later became ubiquitous in the self-attention-based transformer architecture. One reason for their success is better signal propagation compared with plain DNNs, where signal propagation refers to the transmission of geometric information through the layers of a DNN and is represented by a kernel function.

Recently, using signal propagation principles to train deep DNNs without the residual connections and/or normalization layers of residual architectures has become an area of community interest. The reasons are twofold: first, it would validate the signal propagation hypothesis for the effectiveness of residual architectures, clarifying our understanding of DNNs; second, it may yield general principles and methods for DNN trainability beyond the residual paradigm.


For CNNs, Xiao et al. (2018) showed that improved signal propagation through better initialization makes it possible to train very deep vanilla networks, albeit significantly more slowly than residual networks. Martens et al. (2021) proposed Deep Kernel Shaping (DKS), which controls signal propagation through activation function transformations and, combined with strong second-order optimizers such as K-FAC, trains vanilla networks on ImageNet as fast as residual networks. Zhang et al. (2022) extended DKS to a larger class of activation functions and achieved near-parity in generalization as well.

The key quantity in signal propagation analysis is the DNN's kernel at initialization time, or more precisely, its approximation in the infinite-width limit. For multilayer perceptrons (MLPs), and for CNNs with Delta initialization, this kernel can be written as a simple layer-wise recursion involving only two-dimensional functions, which makes the analysis straightforward. The kernel evolution across transformer layers is more complicated, so existing methods such as DKS are not applicable to transformers, or indeed to any architecture containing self-attention layers.
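
For context, the following is a minimal numpy sketch of this layer-wise kernel recursion for an infinite-width MLP. It illustrates standard signal-propagation analysis and is not code from the paper; the function names and the Monte-Carlo estimation are our own assumptions.

```python
import numpy as np

def kernel_recursion(phi, q0=1.0, sigma_w2=2.0, sigma_b2=0.0,
                     depth=50, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of the 1D kernel (activation-variance) recursion
    q_{l+1} = sigma_w^2 * E_{z ~ N(0, q_l)}[phi(z)^2] + sigma_b^2
    for an infinite-width MLP at initialization."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_samples)      # reuse one batch of standard normals
    q = q0
    trace = [q]
    for _ in range(depth):
        q = sigma_w2 * np.mean(phi(np.sqrt(q) * z) ** 2) + sigma_b2
        trace.append(q)
    return trace

relu = lambda x: np.maximum(x, 0.0)
print(kernel_recursion(relu)[-1])                    # ~1.0: He init (sigma_w^2 = 2) preserves the signal
print(kernel_recursion(np.tanh, sigma_w2=1.0)[-1])   # well below 1: the signal decays with depth
```

The recursion tracks only the scalar activation variance q_l, which is exactly why the MLP case is easy to analyze and the transformer case, where a full kernel matrix evolves, is not.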

In an MLP, signal propagation is judged by the behavior of its (one-dimensional) kernel, whereas signal propagation in a transformer must be judged by the evolution of a (high-dimensional) kernel matrix across the network's layers.

The analysis must avoid situations where the diagonal entries of the kernel matrix grow or shrink rapidly with depth, which corresponds to uncontrolled activation norms and can lead to saturated losses or numerical problems, as well as rank collapse of the kernel matrix. Avoiding rank collapse is necessary for deep transformers to be trainable, but whether deep residual-free transformers can actually be trained has remained an open question.
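
To make the rank-collapse phenomenon concrete, here is a minimal numpy sketch (our own illustration, not the paper's setup) that stacks attention-only blocks with random weights and tracks how the token representations align onto a single direction, i.e. how the kernel matrix approaches rank one:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mean_offdiag_cosine(X):
    """Average cosine similarity between distinct token representations:
    values close to 1 indicate the kernel matrix X X^T is close to rank 1."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = Xn @ Xn.T
    T = C.shape[0]
    return (C.sum() - T) / (T * (T - 1))

rng = np.random.default_rng(0)
T, d, depth = 32, 64, 40                      # tokens, width, depth
X = rng.standard_normal((T, d))

for l in range(1, depth + 1):
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))   # vanilla softmax attention
    X = A @ X @ Wv                                     # attention-only block
    if l % 10 == 0:
        print(f"depth {l:2d}: mean off-diagonal cosine = {mean_offdiag_cosine(X):.3f}")
```

Because each softmax attention matrix is row-stochastic, repeated application averages the token representations together, and the off-diagonal cosine similarity drifts toward 1 as depth grows.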

This paper, under blind review at ICLR 2023, addresses this problem and demonstrates for the first time that deep transformers can be successfully trained without residual connections or normalization layers. To this end, the authors study signal propagation and rank collapse in deep residual-free transformers and derive three approaches to prevent them. Specifically, the methods combine parameter initialization, bias matrices, and position-dependent rescaling, and the paper highlights several complications specific to signal propagation in transformers, including interactions with positional encoding and causal masking. The researchers demonstrate empirically that their methods produce deep, trainable, residual-free transformers.

In the experiments, on the WikiText-103 and C4 datasets, the researchers show that with their main method, Exponential Signal Preserving Attention (E-SPA), a residual-free transformer can match the training loss of a standard residual transformer by training roughly five times longer. Moreover, by combining this method with residual connections, they also show that transformers without normalization layers can train as fast as standard transformers.


Paper address: https://openreview.net/pdf?id=NPrsUQgMjKK

Commenting on the paper, Rohan Anil, a principal engineer at Google AI, called it a big step forward for the Transformer architecture and a fundamental improvement.


Constructing a deep Transformer that is trainable without shortcuts

So far, the only strategy for correcting rank collapse in transformers has relied on residual connections, which sidestep the inherent trainability problems of the self-attention layer rather than addressing them. In contrast, this work tackles the problem directly: it first develops a better understanding of signal propagation through attention layers, then uses those insights to modify the architecture so that signal is transmitted faithfully in deep transformers, which can then be trained with or without residual connections.

Specifically, the study first considers a simple setting: a deep, attention-only vanilla transformer, assuming either a single head (h = 1) or a multi-head setup in which the attention matrix A does not vary across heads. If block l ≤ L has attention matrix A_l at initialization, the representation after the final block, X_L, is:

Papers that were highly praised by the reviewers during the ICLR blind review stage: Will it be a major innovation in the Transformer architecture?

For the above formula, if the value and output projection matrices W_l^V and W_l^O are orthogonally initialized, then their product W_1^V W_1^O ⋯ W_L^V W_L^O is orthogonal at initialization.

Under the above assumptions, if Σ_0 denotes the input kernel matrix across positions, then after some simplification the kernel matrix of the final block is:

Σ_L = [A_L A_{L−1} ⋯ A_1] Σ_0 [A_L A_{L−1} ⋯ A_1]^T

From this simplified formula for the kernel matrix of a deep attention-only transformer, three requirements on (A_l)_l can be identified (a small sketch checking these conditions follows the list):

  1. Σ_l must be well-behaved in every block, avoiding degenerate situations such as rank collapse and exploding or vanishing diagonal values;
  2. A_l must be element-wise nonnegative ∀l;
  3. A_l should be lower triangular ∀l to be compatible with causally masked attention.
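
As referenced above, here is a small sketch (our own illustration, not code from the paper) that runs the kernel recursion Σ_l = A_l Σ_{l−1} A_l^T implied by the formula above and checks the three requirements for two simple candidate choices of A_l:

```python
import numpy as np

def check_requirements(A_list, Sigma0):
    """Check the three conditions on (A_l)_l and track the kernel matrix
    Sigma_l = A_l Sigma_{l-1} A_l^T implied by the formula above."""
    Sigma = Sigma0.copy()
    for A in A_list:
        assert np.all(A >= 0), "requirement 2: A_l must be elementwise nonnegative"
        assert np.allclose(A, np.tril(A)), "requirement 3: A_l must be lower triangular"
        Sigma = A @ Sigma @ A.T
    diag = np.diag(Sigma)
    rank = np.linalg.matrix_rank(Sigma, tol=1e-8)
    return diag.min(), diag.max(), rank

T, L = 32, 30
Sigma0 = np.eye(T)                               # i.i.d. inputs across positions

# Uniform causal averaging: nonnegative and lower triangular, but Sigma_L
# collapses to (near) rank 1, violating requirement 1.
A_unif = np.tril(np.ones((T, T))) / np.arange(1, T + 1)[:, None]
print(check_requirements([A_unif] * L, Sigma0))    # rank collapses toward 1

# Identity attention trivially satisfies all three requirements (but ignores context).
print(check_requirements([np.eye(T)] * L, Sigma0))  # diagonal stays 1, rank stays T
```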

Sections 3.1 and 3.2 then focus on finding attention matrices that satisfy these requirements. The authors propose three methods, E-SPA, U-SPA and Value-SkipInit, each of which controls the attention matrices of the transformer so that signal propagates faithfully even at large depth. Section 3.3 then shows how to modify softmax attention to realize these attention matrices.
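
The exact constructions for E-SPA, U-SPA, and Value-SkipInit are given in the paper. As a loose illustration of the general mechanism summarized earlier (a fixed bias matrix added to masked softmax attention so that, at initialization, the attention matrix equals a chosen signal-preserving target), the following numpy sketch uses an exponentially decaying, lower-triangular target of our own choosing; both the target and the log-bias trick are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def decaying_causal_target(T, gamma=0.5):
    """Illustrative target attention matrix: lower triangular, nonnegative,
    row-stochastic, with weights decaying exponentially in position distance.
    (A stand-in target only -- NOT the paper's E-SPA construction.)"""
    i, j = np.indices((T, T))
    A = np.where(j <= i, np.exp(-gamma * np.abs(i - j)), 0.0)
    return A / A.sum(axis=1, keepdims=True)

def biased_masked_attention(Q, K, target):
    """Softmax attention with an additive bias matrix B = log(target).
    When the query-key logits are ~0 at initialization, softmax(logits + B)
    is approximately the target matrix; the -inf entries enforce causality."""
    B = np.full_like(target, -np.inf)
    B[target > 0] = np.log(target[target > 0])
    logits = (Q @ K.T) / np.sqrt(Q.shape[1]) + B
    return softmax(logits)

rng = np.random.default_rng(0)
T, d = 16, 32
target = decaying_causal_target(T)
Q = 1e-3 * rng.standard_normal((T, d))      # tiny query-key logits at initialization
K = 1e-3 * rng.standard_normal((T, d))
A = biased_masked_attention(Q, K, target)
print(np.abs(A - target).max())             # tiny (~1e-6): attention matches the target at init
```

During training the learned query-key logits move away from zero, so a scheme like this only pins down the behavior at initialization, which is the regime that the signal-propagation analysis concerns.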

In the figure below, the study validates the two proposed SPA schemes, U-SPA and E-SPA. The results show that they successfully avoid rank collapse in attention-only vanilla transformers even when the network is deep.


Experiment

WikiText-103 baseline: First, the study verifies that a standard deep transformer without residual connections is untrainable even with normalization layers (LN) and transformed activations, whereas the proposed methods solve this problem. As Figure 2 clearly shows, removing the residual connections from a standard transformer makes it untrainable, with the training loss plateauing around 7.5; and as shown in Figure 1, the standard transformer also suffers from rank collapse.


On the other hand, the proposed E-SPA method outperforms U-SPA and Value-SkipInit. However, the default transformer with residual connections and LN still retains a training-speed advantage over the residual-free methods.

In Table 1, the study evaluates the impact of different activation functions in the MLP block, as well as the use of LN, in residual-free transformers trained with the proposed method. At depth 36, the method achieves good training performance with a range of activations: DKS-transformed GeLU, TAT-transformed Leaky ReLU, and untransformed GeLU, but not untransformed Sigmoid. The experiments also show that layer normalization is relatively unimportant for training speed and can even be harmful when using SPA with transformed activations, since SPA already has built-in mechanisms for controlling activation norms.


Figure 3 shows that one way to match the training loss of the default transformer without requiring extra iterations is to use normalized residual connections.


Table 2 shows that E-SPA with normalized residuals and LN outperforms the default PreLN transformer.


Figure 4(a) below shows that E-SPA again outperforms the other methods, and Figure 4(b) shows that the training-loss gap to the default transformer can be closed simply by training for longer.

