[Paper Translation]

Original paper: https://arxiv.org/pdf/1706.03762v7

Provided proper attribution is provided, Google hereby grants permission to reproduce the tables and figures in this paper solely for use in journalistic or scholarly works.

Attention Is All You Need

Ashish Vaswani∗ Google Brain avaswani@google.com

Noam Shazeer∗ Google Brain noam@google.com

Niki Parmar∗ Google Research nikip@google.com

Jakob Uszkoreit∗ Google Research usz@google.com

Aidan N. Gomez∗ † University of Toronto aidan@cs.toronto.edu

Łukasz Kaiser∗ Google Brain lukasz.kaiser@google.com

Illia Polosukhin∗ ‡ illia.polosukhin@gmail.com

Abstract


The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.

1 Introduction


Recurrent neural networks, long short-term memory [13] and gated recurrent [7] neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation [35, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [38, 24, 15].


Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_{t}$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.

Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19]. In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network.


In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.

2 Background


The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22].

End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].

3 Model Architecture


Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 35]. Here, the encoder maps an input sequence of symbol representations $(x_{1},...,x_{n})$ to a sequence of continuous representations $\mathbf{z}=(z_{1},...,z_{n})$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_{1},...,y_{m})$ of symbols one element at a time. At each step the model is auto-regressive [10], consuming the previously generated symbols as additional input when generating the next.


Figure 1: The Transformer - model architecture.


The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.


3.1 Encoder and Decoder Stacks


Encoder: The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x+\mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $d_{\mathrm{model}}=512$.
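
As a reading aid, here is a minimal NumPy sketch of this residual-plus-normalization wrapper; it is our own simplification (no learnable gain/bias in the normalization), and the names `layer_norm` and `sublayer_connection` are ours, not the authors':

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's d_model-dimensional vector to zero mean, unit variance.
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def sublayer_connection(x, sublayer):
    # Output of each sub-layer: LayerNorm(x + Sublayer(x)).
    return layer_norm(x + sublayer(x))

# Example: wrap an identity "sub-layer" around a (seq_len, d_model) activation.
x = np.random.randn(10, 512)
y = sublayer_connection(x, lambda t: t)  # shape (10, 512)
```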

Decoder: The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
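
The masking itself can be sketched as follows; this is an illustrative fragment under our own naming, showing only how future positions are blocked before the softmax (the paper implements this inside scaled dot-product attention):

```python
import numpy as np

def causal_mask(seq_len):
    # mask[i, j] is True where position i may attend to position j (j <= i).
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def masked_logits(logits, mask):
    # Illegal (future) positions get -1e9 so softmax assigns them ~0 weight.
    return np.where(mask, logits, -1e9)

logits = np.random.randn(5, 5)                 # query position x key position
masked = masked_logits(logits, causal_mask(5))
```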

3.2 Attention


An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum



Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.


of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.


3.2.1 Scaled Dot-Product Attention


We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension $d_{k}$ , and values of dimension $d_{v}$ . We compute the dot products of the query with all keys, divide each by $\sqrt{d_{k}}$ , and apply a softmax function to obtain the weights on the values.


In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$ . The keys and values are also packed together into matrices $K$ and $V$ . We compute the matrix of outputs as:


$$
{\mathrm{Attention}}(Q,K,V)=\operatorname{softmax}({\frac{Q K^{T}}{\sqrt{d_{k}}}})V
$$


The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_{k}}}$ . Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.


While for small values of $d_{k}$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_{k}$ [3]. We suspect that for large values of $d_{k}$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_{k}}}$.
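
The equation above can be read directly as code. The sketch below is a plain NumPy reference for scaled dot-product attention, not the released implementation; the optional `mask` argument is our own addition:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)             # (n_q, n_k) compatibility scores
    if mask is not None:
        scores = np.where(mask, scores, -1e9)   # optionally block some positions
    weights = softmax(scores, axis=-1)          # attention weights over the values
    return weights @ V                          # (n_q, d_v)

Q = np.random.randn(4, 64); K = np.random.randn(6, 64); V = np.random.randn(6, 64)
out = scaled_dot_product_attention(Q, K, V)     # shape (4, 64)
```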

3.2.2 Multi-Head Attention


Instead of performing a single attention function with $d_{\mathrm{model}}$ -dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_{k}$ , $d_{k}$ and $d_{v}$ dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_{v}$ -dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.


Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.


$$
\begin{aligned}
\mathrm{MultiHead}(Q,K,V) &= \mathrm{Concat}(\mathrm{head}_{1},...,\mathrm{head}_{h})W^{O} \\
\mathrm{where\ head}_{i} &= \mathrm{Attention}(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V})
\end{aligned}
$$

Where the projections are parameter matrices $W_{i}^{Q}\in\mathbb{R}^{d_{\mathrm{model}}\times d_{k}}$, $W_{i}^{K}\in\mathbb{R}^{d_{\mathrm{model}}\times d_{k}}$, $W_{i}^{V}\in\mathbb{R}^{d_{\mathrm{model}}\times d_{v}}$ and $W^{O}\in\mathbb{R}^{h d_{v}\times d_{\mathrm{model}}}$.

In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_{k}=d_{v}=d_{\mathrm{model}}/h=64$. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
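
A compact NumPy sketch of multi-head attention under the dimensions given above; the weight layout (per-head projection matrices collected in a dict) is an illustrative choice of ours, not the authors' parameterization:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention on already-projected inputs.
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1) @ V

def multi_head_attention(Q, K, V, weights, h=8):
    # weights: per-head projection matrices W_Q, W_K, W_V plus the output matrix W_O.
    heads = []
    for i in range(h):
        q = Q @ weights["W_Q"][i]            # (n_q, d_k)
        k = K @ weights["W_K"][i]            # (n_k, d_k)
        v = V @ weights["W_V"][i]            # (n_k, d_v)
        heads.append(attention(q, k, v))     # (n_q, d_v) per head
    return np.concatenate(heads, axis=-1) @ weights["W_O"]  # (n_q, d_model)

d_model, h = 512, 8
d_k = d_v = d_model // h                     # 64, as in the base configuration
rng = np.random.default_rng(0)
weights = {
    "W_Q": [rng.normal(size=(d_model, d_k)) for _ in range(h)],
    "W_K": [rng.normal(size=(d_model, d_k)) for _ in range(h)],
    "W_V": [rng.normal(size=(d_model, d_v)) for _ in range(h)],
    "W_O": rng.normal(size=(h * d_v, d_model)),
}
x = rng.normal(size=(10, d_model))                # self-attention: Q = K = V = x
y = multi_head_attention(x, x, x, weights, h=h)   # shape (10, 512)
```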

3.2.3 Applications of Attention in our Model


The Transformer uses multi-head attention in three different ways:

- In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence.
- The encoder contains self-attention layers, in which all of the keys, values and queries come from the output of the previous encoder layer, so each position in the encoder can attend to all positions in the previous layer.
- Self-attention layers in the decoder allow each position to attend to all positions up to and including that position; illegal (leftward) connections are masked out inside scaled dot-product attention to preserve the auto-regressive property.

3.3 Position-wise Feed-Forward Networks


In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.


$$
\mathrm{FFN}(x)=\operatorname*{max}(0,x W_{1}+b_{1})W_{2}+b_{2}
$$


While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $d_{\mathrm{model}}=512$ , and the inner-layer has dimensionality $d_{f f}=2048$ .

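A minimal sketch of this position-wise feed-forward network in NumPy, assuming the base-model dimensions $d_{\mathrm{model}}=512$ and $d_{ff}=2048$; the function name is ours:

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # FFN(x) = max(0, x W1 + b1) W2 + b2, applied identically at every position.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

d_model, d_ff = 512, 2048
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
x = rng.normal(size=(10, d_model))           # 10 positions
y = position_wise_ffn(x, W1, b1, W2, b2)     # shape (10, 512)
```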

3.4 Embeddings and Softmax


Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $d_{\mathrm{model}}$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30]. In the embedding layers, we multiply those weights by $\sqrt{d_{\mathrm{model}}}$.
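
A rough sketch of the weight sharing and the $\sqrt{d_{\mathrm{model}}}$ scaling described above; the small vocabulary, initialization scale, and helper names below are our own assumptions for illustration (the shared source-target vocabulary in Section 5.1 is about 37000 tokens):

```python
import numpy as np

vocab_size, d_model = 100, 512            # small vocab here purely for illustration
rng = np.random.default_rng(0)
E = rng.normal(scale=d_model ** -0.5, size=(vocab_size, d_model))  # shared weight matrix

def embed(token_ids):
    # Input/output embedding lookup, scaled by sqrt(d_model).
    return E[token_ids] * np.sqrt(d_model)

def output_logits(decoder_states):
    # Pre-softmax linear transformation reuses the same matrix (transposed).
    return decoder_states @ E.T

tokens = np.array([5, 42, 7])
h = embed(tokens)            # (3, 512)
logits = output_logits(h)    # (3, 100)
```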

Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. $n$ is the sequence length, $d$ is the representation dimension, $k$ is the kernel size of convolutions and $r$ the size of the neighborhood in restricted self-attention.

Layer Type | Complexity per Layer | Sequential Operations | Maximum Path Length
Self-Attention | $O(n^{2}\cdot d)$ | $O(1)$ | $O(1)$
Recurrent | $O(n\cdot d^{2})$ | $O(n)$ | $O(n)$
Convolutional | $O(k\cdot n\cdot d^{2})$ | $O(1)$ | $O(\log_{k}(n))$
Self-Attention (restricted) | $O(r\cdot n\cdot d)$ | $O(1)$ | $O(n/r)$

3.5 Positional Encoding


Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d_{\mathrm{model}}$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [9].


In this work, we use sine and cosine functions of different frequencies:


$$
\begin{aligned}
PE_{(pos,2i)} &= \sin(pos/10000^{2i/d_{\mathrm{model}}}) \\
PE_{(pos,2i+1)} &= \cos(pos/10000^{2i/d_{\mathrm{model}}})
\end{aligned}
$$

where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000\cdot2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
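
A small NumPy sketch that builds this sinusoidal encoding table (assuming an even $d_{\mathrm{model}}$); the function name and vectorized layout are ours:

```python
import numpy as np

def positional_encoding(max_len, d_model):
    # PE[pos, 2i] = sin(pos / 10000^(2i/d_model)); PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    pos = np.arange(max_len)[:, None]                   # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]                # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)   # (max_len, d_model/2)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

pe = positional_encoding(max_len=100, d_model=512)  # added to the input embeddings
```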

We also experimented with using learned positional embeddings [9] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.


4 Why Self-Attention


In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations $(x_{1},...,x_{n})$ to another sequence of equal length $(z_{1},\ldots,z_{n})$, with $x_{i},z_{i}\in\mathbb{R}^{d}$, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.

One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.

The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [12]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.


As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires $O(n)$ sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length $n$ is smaller than the representation dimensionality $d$ , which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece [38] and byte-pair [31] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size $r$ in the input sequence centered around the respective output position. This would increase the maximum path length to $O(n/r)$ . We plan to investigate this approach further in future work.


A single convolutional layer with kernel width $k<n$ does not connect all pairs of input and output positions. Doing so requires a stack of $O(n/k)$ convolutional layers in the case of contiguous kernels, or $O(\log_{k}(n))$ in the case of dilated convolutions [18], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of $k$. Separable convolutions [6], however, decrease the complexity considerably, to $O(k\cdot n\cdot d+n\cdot d^{2})$. Even with $k=n$, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.

As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.

5 Training


This section describes the training regime for our models.


5.1 Training Data and Batching


We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [38]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.

5.2 Hardware and Schedule


We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).

5.3 Optimizer


We used the Adam optimizer [20] with $\beta_{1}=0.9$ , $\beta_{2}=0.98$ and $\epsilon=10^{-9}$ . We varied the learning rate over the course of training, according to the formula:


$$
lrate = d_{\mathrm{model}}^{-0.5}\cdot\min\left(step\_num^{-0.5},\; step\_num\cdot warmup\_steps^{-1.5}\right)
$$

This corresponds to increasing the learning rate linearly for the first $warmup\_steps$ training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used $warmup\_steps=4000$.
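
The schedule is easy to reproduce; a small sketch (ours, with the base-model $d_{\mathrm{model}}=512$ as a default) follows:

```python
def transformer_lrate(step_num, d_model=512, warmup_steps=4000):
    # Linear warmup for the first warmup_steps steps, then inverse-sqrt decay.
    step_num = max(step_num, 1)
    return d_model ** -0.5 * min(step_num ** -0.5, step_num * warmup_steps ** -1.5)

# Example: the rate peaks at step == warmup_steps and decays afterwards.
rates = [transformer_lrate(s) for s in (100, 4000, 100000)]
```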

5.4 Regularization

We employ three types of regularization during training:

Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.

Model | BLEU (EN-DE) | BLEU (EN-FR) | Training Cost (FLOPs, EN-DE) | Training Cost (FLOPs, EN-FR)
ByteNet [18] | 23.75 | | |
Deep-Att + PosUnk [39] | | 39.2 | | $1.0\cdot10^{20}$
GNMT + RL [38] | 24.6 | 39.92 | $2.3\cdot10^{19}$ | $1.4\cdot10^{20}$
ConvS2S [9] | 25.16 | 40.46 | $9.6\cdot10^{18}$ | $1.5\cdot10^{20}$
MoE [32] | 26.03 | 40.56 | $2.0\cdot10^{19}$ | $1.2\cdot10^{20}$
Deep-Att + PosUnk Ensemble [39] | | 40.4 | | $8.0\cdot10^{20}$
GNMT + RL Ensemble [38] | 26.30 | 41.16 | $1.8\cdot10^{20}$ | $1.1\cdot10^{21}$
ConvS2S Ensemble [9] | 26.36 | 41.29 | $7.7\cdot10^{19}$ | $1.2\cdot10^{21}$
Transformer (base model) | 27.3 | 38.1 | $3.3\cdot10^{18}$ | $3.3\cdot10^{18}$
Transformer (big) | 28.4 | 41.8 | $2.3\cdot10^{19}$ | $2.3\cdot10^{19}$

Residual Dropout We apply dropout [33] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of $P_{d r o p}=0.1$ .


Label Smoothing During training, we employed label smoothing of value $\epsilon_{l s}=0.1$ [36]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.

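A sketch of how such smoothed targets can be formed; whether the $\epsilon_{ls}$ mass is spread over all vocabulary entries or only the non-target ones is an implementation detail not spelled out here, so the version below (mass on the non-target entries) is an assumption of ours:

```python
import numpy as np

def smoothed_targets(labels, vocab_size, eps_ls=0.1):
    # Replace one-hot targets with (1 - eps_ls) on the true token and
    # eps_ls spread uniformly over the remaining vocabulary entries.
    targets = np.full((len(labels), vocab_size), eps_ls / (vocab_size - 1))
    targets[np.arange(len(labels)), labels] = 1.0 - eps_ls
    return targets

t = smoothed_targets(np.array([2, 0]), vocab_size=5)  # each row sums to 1
```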

6 Results


6.1 Machine Translation


On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.


On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than $1/4$ the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate $P_{d r o p}=0.1$ , instead of 0.3.


For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 4 and length penalty $\alpha=0.6$ [38]. These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length $+50$, but terminate early when possible [38].
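
Checkpoint averaging can be sketched as a simple element-wise mean over saved parameter sets; the dict-of-arrays representation below is an illustrative assumption, not the format the authors used:

```python
import numpy as np

def average_checkpoints(checkpoints):
    # checkpoints: list of dicts mapping parameter name -> np.ndarray.
    # Returns a single parameter set equal to the element-wise mean.
    names = checkpoints[0].keys()
    return {n: np.mean([ckpt[n] for ckpt in checkpoints], axis=0) for n in names}

ckpts = [{"W": np.full((2, 2), float(i))} for i in range(5)]  # stand-in for the last 5 checkpoints
averaged = average_checkpoints(ckpts)                          # {"W": 2.0 * ones}
```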

Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU.

6.2 Model Variations


To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3.

Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base model. All metrics are on the English-to-German translation development set, newstest2013. Listed perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to per-word perplexities.

 | N | d_model | d_ff | h | d_k | d_v | P_drop | ε_ls | train steps | PPL (dev) | BLEU (dev) | params ×10^6
base | 6 | 512 | 2048 | 8 | 64 | 64 | 0.1 | 0.1 | 100K | 4.92 | 25.8 | 65
(A) | | | | 1 | 512 | 512 | | | | 5.29 | 24.9 |
 | | | | 4 | 128 | 128 | | | | 5.00 | 25.5 |
 | | | | 16 | 32 | 32 | | | | 4.91 | 25.8 |
 | | | | 32 | 16 | 16 | | | | 5.01 | 25.4 |
(B) | | | | | 16 | | | | | 5.16 | 25.1 | 58
 | | | | | 32 | | | | | 5.01 | 25.4 | 60
(C) | 2 | | | | | | | | | 6.11 | 23.7 | 36
 | 4 | | | | | | | | | 5.19 | 25.3 | 50
 | 8 | | | | | | | | | 4.88 | 25.5 | 80
 | | 256 | | | 32 | 32 | | | | 5.75 | 24.5 | 28
 | | 1024 | | | 128 | 128 | | | | 4.66 | 26.0 | 168
 | | | 1024 | | | | | | | 5.12 | 25.4 | 53
 | | | 4096 | | | | | | | 4.75 | 26.2 | 90
(D) | | | | | | | 0.0 | | | 5.77 | 24.6 |
 | | | | | | | 0.2 | | | 4.95 | 25.5 |
 | | | | | | | | 0.0 | | 4.67 | 25.3 |
 | | | | | | | | 0.2 | | 5.47 | 25.7 |
(E) | positional embedding instead of sinusoids | | | | | | | | | 4.92 | 25.7 |
big | 6 | 1024 | 4096 | 16 | | | 0.3 | | 300K | 4.33 | 26.4 | 213

In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.


In Table 3 rows (B), we observe that reducing the attention key size $d_{k}$ hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings [9], and observe nearly identical results to the base model.


6.3 English Constituency Parsing


To evaluate if the Transformer can generalize to other tasks we performed experiments on English constituency parsing. This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes [37].


We trained a 4-layer transformer with $d_{model}=1024$ on the Wall Street Journal (WSJ) portion of the Penn Treebank [25], about 40K training sentences. We also trained it in a semi-supervised setting, using the larger high-confidence and BerkeleyParser corpora with approximately 17M sentences [37]. We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens for the semi-supervised setting.

We performed only a small number of experiments to select the dropout, both attention and residual (section 5.4), learning rates and beam size on the Section 22 development set, all other parameters remained unchanged from the English-to-German base translation model. During inference, we increased the maximum output length to input length $+300$ . We used a beam size of 21 and $\alpha=0.3$ for both WSJ only and the semi-supervised setting.


Table 4: The Transformer generalizes well to English constituency parsing (Results are on Section 23 of WSJ)

Parser | Training | WSJ 23 F1
Vinyals & Kaiser et al. (2014) [37] | WSJ only, discriminative | 88.3
Petrov et al. (2006) [29] | WSJ only, discriminative | 90.4
Zhu et al. (2013) [40] | WSJ only, discriminative | 90.4
Dyer et al. (2016) [8] | WSJ only, discriminative | 91.7
Transformer (4 layers) | WSJ only, discriminative | 91.3
Zhu et al. (2013) [40] | semi-supervised | 91.3
Huang & Harper (2009) [14] | semi-supervised | 91.3
McClosky et al. (2006) [26] | semi-supervised | 92.1
Vinyals & Kaiser et al. (2014) [37] | semi-supervised | 92.1
Transformer (4 layers) | semi-supervised | 92.7
Luong et al. (2015) [23] | multi-task | 93.0
Dyer et al. (2016) [8] | generative | 93.3

Our results in Table 4 show that despite the lack of task-specific tuning our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar [8].


In contrast to RNN sequence-to-sequence models [37], the Transformer outperforms the BerkeleyParser [29] even when training only on the WSJ training set of 40K sentences.


7 Conclusion


In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.


For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles.


We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goal of ours.

The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.

Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration.

References


Attention Visualizations


Figure 3: An example of the attention mechanism following long-distance dependencies in the encoder self-attention in layer 5 of 6. Many of the attention heads attend to a distant dependency of the verb ‘making’, completing the phrase ‘making...more difficult’. Attentions here shown only for the word ‘making’. Different colors represent different heads. Best viewed in color.



Figure 4: Two attention heads, also in layer 5 of 6, apparently involved in anaphora resolution. Top: Full attentions for head 5. Bottom: Isolated attentions from just the word ‘its’ for attention heads 5 and 6. Note that the attentions are very sharp for this word.



Figure 5: Many of the attention heads exhibit behaviour that seems related to the structure of the sentence. We give two such examples above, from two different heads from the encoder self-attention at layer 5 of 6. The heads clearly learned to perform different tasks.

