[論文速速讀]Attention Is All You Need

Posted by John on 2020-04-14
Words 1.5k and Reading Time 7 Minutes

〖To see more Chinese paper walkthroughs, head over to the [論文速速讀] series introduction, where all published articles are listed!〗

Overview

paper: Attention Is All You Need

Any list of key NLP techniques from recent years has to include attention: after the attention mechanism was proposed, nearly every NLP benchmark was swept again by attention-based models.

Although this is not the paper that first proposed the attention mechanism, the various Sesame Street models that came afterwards (BERT and friends) are all extensions built on this work.

For how attention has developed over time, see my earlier post [DL]Attention Mechanism學習筆記; this post mainly summarizes the key points of the paper.

Abstract

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.

  • Previous sequence transduction models were complex RNN/CNN models built around an encoder and a decoder
    • The best-performing ones additionally connected the encoder and decoder through an attention mechanism (but were still RNN/CNN-based)
  • This paper proposes the Transformer, a network architecture that drops CNN/RNN entirely and relies solely on attention mechanisms
    • Its architecture still follows the encoder-decoder concept, just without any RNN/CNN

Introduction

Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.

  • RNNs model relationships within a sequence by aligning sequence positions with computation time steps, generating each hidden state from the previous one
  • The big problem: this sequential nature makes parallel computation within a training example impossible

Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19]. In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network.

Background

Model Architecture

Encoder

  • Composed of N identical blocks (N = 6 in the paper), each with two sub-layers:
    • Multi-head self-attention
    • Position-wise fully connected feed-forward network
  • Each sub-layer is wrapped with a residual connection + layer normalization (see the sketch right after this list)
Decoder

  • Composed of N identical blocks (N = 6 in the paper), each with three sub-layers:
    • Masked multi-head self-attention
    • Multi-head attention over the encoder's output
    • Position-wise fully connected feed-forward network
  • Each sub-layer is again wrapped with a residual connection + layer normalization (see the sketch right after this list)
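Under the same assumptions as the encoder sketch above, a minimal sketch of one decoder block; DecoderLayer and the argument names are again my own, and note that Keras' attention_mask convention is the opposite of the look_ahead_mask built later in this post.

class DecoderLayer(tf.keras.layers.Layer):
    def __init__(self, d_model=512, num_heads=8, dff=2048):
        super().__init__()
        self.masked_mha = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=d_model // num_heads)
        self.cross_mha = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=d_model // num_heads)
        self.ffn = tf.keras.Sequential([
            tf.keras.layers.Dense(dff, activation='relu'),
            tf.keras.layers.Dense(d_model),
        ])
        self.norm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.norm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.norm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    def call(self, x, enc_output, causal_attention_mask):
        # sub-layer 1: masked self-attention; Keras expects 1/True = "may attend",
        # i.e. the complement of the look_ahead_mask shown later in this post
        attn1 = self.masked_mha(query=x, value=x, key=x, attention_mask=causal_attention_mask)
        x = self.norm1(x + attn1)
        # sub-layer 2: encoder-decoder attention, Q from the decoder, K/V from the encoder output
        attn2 = self.cross_mha(query=x, value=enc_output, key=enc_output)
        x = self.norm2(x + attn2)
        # sub-layer 3: position-wise feed-forward network
        return self.norm3(x + self.ffn(x))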

We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$

  • A mask ensures that the attention at position i cannot look at data after position i
    • because at inference time you simply do not have future data
    • look_ahead_mask

look_ahead_mask

import tensorflow as tf

# Build a 2-D matrix of shape (size, size) whose mask is a triangle of ones in the upper-right corner
def create_look_ahead_mask(size):
    mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
    return mask  # (seq_len, seq_len)

Calling create_look_ahead_mask(10) gives:
[[0. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[0. 0. 1. 1. 1. 1. 1. 1. 1. 1.]
[0. 0. 0. 1. 1. 1. 1. 1. 1. 1.]
[0. 0. 0. 0. 1. 1. 1. 1. 1. 1.]
[0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]
[0. 0. 0. 0. 0. 0. 1. 1. 1. 1.]
[0. 0. 0. 0. 0. 0. 0. 1. 1. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]

This mask comes into play right before the softmax inside attention: wherever the mask is 1, a value approaching negative infinity is added, so after the softmax those positions end up close to 0.

def scaled_dot_product_attention(q, k, v, mask):
    ...
    # add the mask to the logits before they are fed into softmax
    if mask is not None:
        scaled_attention_logits += (mask * -1e9)

Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

  • A mapping from a query vector and a set of key-value vector pairs to an output vector
    • when q, k and v are all derived from the same vectors, this is what makes it self-attention

Scaled Dot-Product Attention

$Attention(Q,K,V)=softmax(\frac{QK^T}{\sqrt{d_k}})V$
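A minimal sketch of this formula in TensorFlow, filling out the scaled_dot_product_attention snippet quoted earlier; only the names q, k, v, mask and scaled_attention_logits come from that snippet, the rest is my own assumption.

import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    # QK^T: similarity of every query against every key
    matmul_qk = tf.matmul(q, k, transpose_b=True)            # (..., seq_len_q, seq_len_k)
    # scale by sqrt(d_k) so the dot products do not blow up as the key dimension grows
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
    # positions where mask == 1 get a huge negative value, so softmax pushes them towards 0
    if mask is not None:
        scaled_attention_logits += (mask * -1e9)
    attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # each row sums to 1
    return tf.matmul(attention_weights, v)                   # weighted sum of the values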

Multi-Head Attention


$MultiHead(Q,K,V)=Concat(head_1,…,head_h)W^O$
where $head_i=Attention(QW_i^Q,KW_i^K,VW_i^V)$

  • The data is projected into Q, K, V, each first going through a linear transformation
  • After each head's attention, the heads are concatenated and passed through one more linear transformation ($W^O$)
  • Different heads can attend to different kinds of information (local, global, ...); see the sketch after this list
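A from-scratch sketch of those three steps (split into h heads, run scaled dot-product attention, concatenate, project back); this MultiHeadAttention class and split_heads are my own illustrative implementation reusing the scaled_dot_product_attention sketch above, not the paper's code.

class MultiHeadAttention(tf.keras.layers.Layer):
    def __init__(self, d_model=512, num_heads=8):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.depth = d_model // num_heads
        # linear transformations applied to Q, K, V before attention
        self.wq = tf.keras.layers.Dense(d_model)
        self.wk = tf.keras.layers.Dense(d_model)
        self.wv = tf.keras.layers.Dense(d_model)
        # final linear transformation W^O applied after concatenation
        self.wo = tf.keras.layers.Dense(d_model)

    def split_heads(self, x, batch_size):
        # (batch, seq_len, d_model) -> (batch, num_heads, seq_len, depth)
        x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def call(self, q, k, v, mask=None):
        batch_size = tf.shape(q)[0]
        q = self.split_heads(self.wq(q), batch_size)
        k = self.split_heads(self.wk(k), batch_size)
        v = self.split_heads(self.wv(v), batch_size)
        # every head runs scaled dot-product attention independently
        attn = scaled_dot_product_attention(q, k, v, mask)   # (batch, num_heads, seq_len_q, depth)
        # concat the heads back into (batch, seq_len_q, d_model), then apply W^O
        attn = tf.transpose(attn, perm=[0, 2, 1, 3])
        concat = tf.reshape(attn, (batch_size, -1, self.num_heads * self.depth))
        return self.wo(concat)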

Applications of Attention in our Model

Multi-head attention is used in the following three places (see the short usage sketch after this list):

  • encoder layers: Q, K, V all come from the same input (the previous encoder layer's output)
  • decoder layers: Q, K, V also come from the same input, with the look_ahead_mask applied so the decoder cannot see future positions
  • “encoder-decoder attention” layers: Q comes from the previous decoder layer's output; K and V come from the output of the encoder's final layer
    • this lets every decoder position attend over the entire input sequence from the encoder
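In terms of the MultiHeadAttention sketch above, the three uses differ only in where q, k, v (and the mask) come from; the toy tensors below are purely hypothetical.

import tensorflow as tf

# hypothetical toy tensors: batch of 2, source length 7, target length 5, d_model 512
x = tf.random.uniform((2, 7, 512))           # encoder layer input
enc_output = tf.random.uniform((2, 7, 512))  # encoder final output
dec_x = tf.random.uniform((2, 5, 512))       # decoder layer input

mha = MultiHeadAttention(d_model=512, num_heads=8)

# encoder self-attention: Q = K = V = the same input
enc_self = mha(x, x, x)
# decoder masked self-attention: same input, plus the look_ahead_mask
dec_self = mha(dec_x, dec_x, dec_x, create_look_ahead_mask(tf.shape(dec_x)[1]))
# encoder-decoder attention: Q from the decoder, K and V from the encoder's final output
dec_cross = mha(dec_self, enc_output, enc_output)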

Position-wise Feed-Forward Networks

It is just a fully connected network applied to each position separately and identically; different layers have their own parameters (see the formula below).
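For reference, the paper defines it as two linear transformations with a ReLU activation in between:

$FFN(x)=max(0, xW_1+b_1)W_2+b_2$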

Embeddings and Softmax

Just the usual word embeddings and a softmax output layer (the paper additionally shares the weight matrix between the two embedding layers and the pre-softmax linear transformation).

Positional Encoding

“Self-attention looks at every element of the sequence anyway, so does it even matter whether a token sits at the first position or the last?”

  • Ex: without position information, “天涯若比鄰” and “比天若涯鄰” would give the same result

In order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence.
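Concretely, the paper injects this with sinusoidal positional encodings, where $pos$ is the position and $i$ is the dimension:

$PE_{(pos,2i)}=sin(pos/10000^{2i/d_{model}})$
$PE_{(pos,2i+1)}=cos(pos/10000^{2i/d_{model}})$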

Plotted out, the encodings form a rather magical-looking pattern of interleaved sine and cosine waves.

Why Self-Attention

References

