
A paper highly praised by reviewers in the ICLR blind review stage: Will it be a major innovation in the Transformer architecture?

WBOY
Release: 2023-04-12 17:31:03

Despite many notable achievements, practical progress in training deep neural networks (DNNs) has been largely independent of theory. Most successful modern DNNs rely on specific arrangements of residual connections and normalization layers, but the general principles for using these components in new architectures are still unknown, and their role in existing architectures is still not fully understood.

Residual architectures are the most popular and successful. They were originally developed in the context of convolutional neural networks (CNNs) and later became ubiquitous in the attention-based transformer architecture. One reason for their success is better signal propagation compared with plain DNNs, where signal propagation refers to the transmission of geometric information through the layers of a DNN, represented by a kernel function.

Recently, using signal propagation principles to train deeper DNNs without the residual connections and/or normalization layers of residual architectures has become an area of community interest. The reasons are twofold: first, it would validate the signal propagation hypothesis for the effectiveness of residual architectures, clarifying our understanding of DNN interpretability; second, it could yield general principles and methods for DNN trainability beyond the residual paradigm.


For CNNs, the work of Xiao et al. (2018) showed that improved signal propagation through better initialization makes it possible to train plain deep networks, albeit significantly more slowly than residual networks. Martens et al. (2021) proposed Deep Kernel Shaping (DKS), which uses activation function transformations to control signal propagation and, with strong second-order optimizers such as K-FAC, trains plain networks on ImageNet at speeds equal to residual networks. Zhang et al. (2022) extended DKS to a larger class of activation functions and achieved near parity in generalization as well.

The key quantity to analyze in signal propagation is the DNN's kernel at initialization time, or more precisely, its approximation in the infinite-width limit. For multilayer perceptrons (MLPs), and for CNNs using Delta initialization, this kernel can be written as a simple layer-by-layer recursion involving only two-dimensional functions, making it straightforward to analyze. The evolution of the kernel across transformer layers is more complex, so existing methods such as DKS are not suitable for transformers, or indeed for any architecture containing self-attention layers.
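
To make the idea of a layer-wise kernel recursion concrete, the snippet below propagates the infinite-width kernel of a ReLU MLP through several layers using the standard closed-form ReLU expectation (the order-1 arccosine kernel). This is a minimal illustration for intuition only, not code from the paper, and the weight/bias variances are assumptions (He-style initialization):

```python
import numpy as np

def relu_kernel_step(K, sigma_w2=2.0, sigma_b2=0.0):
    """One layer of the infinite-width kernel recursion for a ReLU MLP.

    Uses the closed form of E[relu(u) relu(v)] for (u, v) jointly Gaussian
    with covariance K (the order-1 arccosine kernel).
    """
    diag = np.sqrt(np.diag(K))
    outer = np.outer(diag, diag)
    c = np.clip(K / outer, -1.0, 1.0)          # correlations, clipped for numerical safety
    theta = np.arccos(c)
    relu_expect = (outer / (2 * np.pi)) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
    return sigma_w2 * relu_expect + sigma_b2

# Kernel of two unit-norm inputs with correlation 0.5.
K = np.array([[1.0, 0.5],
              [0.5, 1.0]])
for _ in range(10):
    K = relu_kernel_step(K)
print(K)  # diagonal stays ~1 under He-style variance; the off-diagonal drifts toward 1 with depth
```

Tracking how such a kernel degenerates or stays well conditioned with depth is exactly the kind of analysis that becomes harder once self-attention enters the picture.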

In MLPs, signal propagation is judged by looking at the behavior of the (one-dimensional) kernel, whereas signal propagation in transformers is judged by looking at how the (high-dimensional) kernel matrix evolves across the network's layers.

The analysis must avoid situations where the diagonal elements grow or shrink rapidly with depth, which corresponds to uncontrolled activation norms and can lead to saturated losses or numerical problems. Avoiding rank collapse is necessary for deep transformers to be trainable, but whether deep residual-free transformers can actually be trained remained an open question.
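
Rank collapse can be made concrete by tracking the cross-position kernel matrix X_l X_l^⊤ through a stack of randomly initialized softmax self-attention layers and measuring how close its normalized version gets to rank one. The sketch below is an illustration under standard initialization assumptions, not the paper's exact setup; the sizes and the collapse measure are choices made here for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, depth = 32, 64, 30   # sequence length, width, number of attention-only layers

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def rank_collapse_measure(K):
    """Fraction of the normalized kernel's spectrum in its top eigenvalue.
    Values near 1 mean the kernel is close to rank one (rank collapse)."""
    corr = K / np.sqrt(np.outer(np.diag(K), np.diag(K)))
    eig = np.linalg.eigvalsh(corr)
    return eig[-1] / eig.sum()

X = rng.standard_normal((T, d)) / np.sqrt(d)
for l in range(depth):
    Wq, Wk = rng.standard_normal((2, d, d)) / np.sqrt(d)
    Wvo = np.linalg.qr(rng.standard_normal((d, d)))[0]    # orthogonal value/output product
    A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))       # vanilla softmax attention matrix
    X = A @ X @ Wvo
    if (l + 1) % 10 == 0:
        K = X @ X.T
        print(f"depth {l + 1}: collapse={rank_collapse_measure(K):.3f}, mean diag={np.diag(K).mean():.2e}")
```

Both pathologies named above show up in such a trace: the collapse measure climbs toward 1 while the diagonal entries shrink away from their initial scale.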

This paper, from the ICLR 2023 blind review stage, solves this problem and demonstrates for the first time that deep transformers can be successfully trained without residual connections or normalization layers. To this end, the authors study signal propagation and rank collapse in deep residual-free transformers and derive three methods to prevent them. Specifically, the approach uses a combination of parameter initialization, bias matrices, and position-dependent rescaling, and highlights several complexities specific to signal propagation in transformers, including the interaction with positional encoding and causal masking. The researchers empirically demonstrate that their methods yield deep, trainable, residual-free transformers.

In the experiments, on the WikiText-103 and C4 datasets, the researchers show that with their main method, Exponential Signal Preserving Attention (E-SPA), a residual-free transformer can match the training loss of a standard residual transformer by training roughly five times longer. By combining the method with residual connections, they also show that transformers without normalization layers can train as fast as standard transformers.


Paper address: https://openreview.net/pdf?id=NPrsUQgMjKK

Regarding this paper, Rohan Anil, chief engineer of Google AI, believes that it is a big step forward for the Transformer architecture and a fundamental improvement.


Constructing a deep Transformer that is trainable without shortcuts

So far, the only strategy for correcting rank collapse in transformers has relied on residual connections, which sidestep the inherent trainability problems of self-attention layers rather than addressing them. This study tackles the question directly: it first develops a better understanding of signal propagation through attention layers, and then uses those insights to modify deep transformers so that signals are transmitted faithfully, whether or not residual connections are used.

Specifically, the study first considers a simplified setting: a deep vanilla transformer consisting only of attention blocks, either with a single head (h = 1) or with a multi-head setup in which the attention matrix A does not differ across heads. If block l ≤ L has attention matrix A_l at initialization, then the representation after the final block, X_L, is:

X_L = [∏_{l=1}^{L} A_l] X_0 [∏_{l=1}^{L} W_l^V W_l^O]

For the above formula, if the value and output projection matrices W_l^V and W_l^O use orthogonal initialization, then their product ∏_{l=1}^{L} W_l^V W_l^O is itself orthogonal at initialization.

Under these assumptions, if Σ_0 = X_0 X_0^⊤ denotes the cross-position input kernel matrix, then after some simplification the following formula is obtained:

Σ_L = [∏_{l=1}^{L} A_l] Σ_0 [∏_{l=1}^{L} A_l]^⊤
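
This simplification can be checked numerically: with orthogonally initialized value and output projections, the projections cancel inside the kernel and only the attention matrices remain. Below is a minimal sketch of that check; the A_l here are arbitrary nonnegative, lower-triangular, row-normalized matrices chosen for illustration, not the paper's SPA constructions:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, L = 8, 16, 5   # sequence length, width, depth

def random_orthogonal(n):
    return np.linalg.qr(rng.standard_normal((n, n)))[0]

X0 = rng.standard_normal((T, d))
X = X0.copy()
A_prod = np.eye(T)
for _ in range(L):
    A = np.tril(rng.random((T, T)))        # nonnegative and lower triangular
    A /= A.sum(axis=1, keepdims=True)      # row-normalize for stability
    Wv, Wo = random_orthogonal(d), random_orthogonal(d)
    X = A @ X @ Wv @ Wo                    # one attention-only block at initialization
    A_prod = A @ A_prod                    # accumulates A_L ... A_1

Sigma_L = X @ X.T
Sigma_pred = A_prod @ (X0 @ X0.T) @ A_prod.T
print(np.allclose(Sigma_L, Sigma_pred))    # True: the orthogonal projections drop out of the kernel
```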

From this simplified formula (the kernel matrix of the deep attention-only transformer), three requirements on (A_l)_l can be identified (a small numerical check of these conditions is sketched after the list):

  1. Σ_l must be well behaved in each block, avoiding degenerate situations such as rank collapse and exploding/vanishing diagonal values;
  2. A_l must be element-wise nonnegative ∀l;
  3. A_l should be lower triangular ∀l, to be compatible with causally masked attention.
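
A quick way to build intuition for these requirements is to test a candidate attention matrix against them directly: check nonnegativity, check lower-triangularity, and apply the kernel update repeatedly to see whether the diagonal stays controlled and the kernel avoids becoming (numerically) rank one. The check below is an illustration written for this article; the thresholds and the fixed-across-layers A are arbitrary choices, not criteria from the paper:

```python
import numpy as np

def check_attention_matrix(A, Sigma0, depth=50):
    """Test one attention matrix (held fixed across layers, for illustration)
    against the three requirements listed above."""
    nonneg = bool(np.all(A >= 0))
    causal = bool(np.allclose(A, np.tril(A)))
    Sigma = Sigma0.copy()
    for _ in range(depth):
        Sigma = A @ Sigma @ A.T                      # kernel update of the attention-only model
    diag = np.diag(Sigma)
    corr = Sigma / np.sqrt(np.outer(diag, diag))
    eig = np.linalg.eigvalsh(corr)
    well_behaved = bool(diag.max() < 1e3 and diag.min() > 1e-3 and eig[-1] / eig.sum() < 0.99)
    return nonneg, causal, well_behaved

T = 16
Sigma0 = np.eye(T)
identity = np.eye(T)                                                       # trivially signal preserving
uniform_causal = np.tril(np.ones((T, T))) / np.arange(1, T + 1)[:, None]   # plain causal averaging

print(check_attention_matrix(identity, Sigma0))        # (True, True, True)
print(check_attention_matrix(uniform_causal, Sigma0))  # (True, True, False): kernel collapses toward rank one
```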

In Sections 3.1 and 3.2, the study focuses on finding attention matrices that satisfy the above requirements, proposing three methods: E-SPA, U-SPA, and Value-Skipinit. Each method controls the transformer's attention matrices so that signal propagates faithfully even at large depth. Section 3.3 then shows how softmax attention can be modified to implement these attention matrices.
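
The details of the modified attention are in Section 3.3 of the paper. As a hedged illustration of the general mechanism (not a reproduction of the paper's E-SPA or U-SPA construction), note that any target row-stochastic attention matrix can be imposed exactly at initialization by adding its elementwise logarithm as a bias inside the softmax while the query/key contribution starts at zero:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def biased_attention(X, Wq, Wk, target_A, mask_value=-1e9):
    """Softmax attention with an additive bias chosen so that, when the
    query/key logits are zero at initialization, the attention matrix
    equals the (row-stochastic) target exactly."""
    T, d = X.shape
    bias = np.where(target_A > 0, np.log(np.maximum(target_A, 1e-30)), mask_value)
    logits = (X @ Wq) @ (X @ Wk).T / np.sqrt(d) + bias
    return softmax(logits)

T, d = 8, 32
rng = np.random.default_rng(2)
X = rng.standard_normal((T, d))
# Any row-stochastic causal target; here, uniform causal averaging as a stand-in.
target = np.tril(np.ones((T, T))) / np.arange(1, T + 1)[:, None]

# Zero-initialized query/key projections => the logits come entirely from the bias.
A0 = biased_attention(X, np.zeros((d, d)), np.zeros((d, d)), target)
print(np.allclose(A0, target))  # True
```

During training, the learned query/key logits perturb the attention away from the target, so a construction like this only pins down the behavior at initialization.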

In the figure below, the study verifies the two proposed SPA schemes, U-SPA and E-SPA. The results show that both successfully avoid the rank collapse phenomenon seen in attention-only vanilla transformers, even when the network is deep.


Experiments

WikiText-103 baseline: First, the study verifies that a standard deep transformer without residual connections is untrainable even when it has normalization layers (LN) and transformed activations, and that the methods in this paper solve this problem. As shown in Figure 2, removing the residual connections from a standard transformer makes it untrainable, with the training loss plateauing at around 7.5. As shown in Figure 1, the standard transformer suffers from rank collapse.


Among the proposed approaches, E-SPA outperforms U-SPA and Value-Skipinit. However, the default transformer with residuals and LN still retains a training-speed advantage over the residual-free methods.

In Table 1, the study evaluates the impact of different activation functions in the MLP block under the proposed method, as well as the use of LN in residual-free transformers. At depth 36, the method achieves good training performance for a range of activations: DKS-transformed GeLU, TAT-transformed Leaky ReLU, and untransformed GeLU, but not untransformed sigmoid. Experiments also show that layer normalization is relatively unimportant for training speed and can even hurt transformed activations when using SPA, which already has built-in mechanisms for controlling activation norms.


In Figure 3, we see that one way to match the default transformer's training loss without requiring more iterations is to use normalized residual connections.
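
A normalized residual connection scales the skip and residual branches so that variance is preserved when they are combined. A common parameterization, shown below purely as an illustration and not necessarily the paper's exact formulation, uses coefficients α and β with α² + β² = 1:

```python
import numpy as np

def normalized_residual(x, block, alpha=0.95):
    """Skip connection with branch weights satisfying alpha**2 + beta**2 = 1,
    so that two (roughly) independent, variance-preserving branches combine
    into an output with the same variance as the input."""
    beta = np.sqrt(1.0 - alpha ** 2)
    return alpha * x + beta * block(x)

# Toy check with a variance-preserving "block" (an orthogonal map).
rng = np.random.default_rng(3)
d = 256
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
x = rng.standard_normal((10_000, d))
y = normalized_residual(x, lambda h: h @ Q)
print(round(x.var(), 3), round(y.var(), 3))  # both close to 1
```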


Table 2 shows that E-SPA with normalized residuals and LN outperforms the default PreLN transformer.


Figure 4(a) below shows that E-SPA again outperforms the other methods; Figure 4(b) shows that the gap in training loss can be eliminated simply by training for longer.


