# [Paper Speed Read] Efficient Estimation of Word Representations in Vector Space

Posted by John on 2020-06-29
Words 2.3k and Reading Time 10 Minutes

〖For more paper walkthroughs, see the [Paper Speed Read] series introduction, which lists all published articles!〗

Word2vec is a word embedding technique. Why do we need word embeddings? Because they turn the words of an NLP task into numeric vectors, which can then be analyzed and computed on.
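As a minimal sketch of the idea (the vocabulary, dimensionality, and random values here are assumptions for illustration, not from the paper), each word maps to one row of an embedding matrix:

```python
import random

# Toy vocabulary and dimensionality (assumed for illustration).
vocab = ["king", "queen", "man", "woman"]
word2idx = {w: i for i, w in enumerate(vocab)}

dim = 3  # embedding dimensionality (D in the paper's notation)
rng = random.Random(0)
embeddings = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in vocab]

def vector(word):
    """Look up the dense vector for a word: one row of the matrix."""
    return embeddings[word2idx[word]]

print(vector("king"))  # a 3-dimensional numeric vector
```

In word2vec this matrix is what gets learned; here it is just random numbers to show the lookup.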

## Abstract

We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.

This is the word2vec paper. It proposes Skip-gram and CBOW, which train dramatically faster than the traditional methods.

## Introduction

Many current NLP systems and techniques treat words as atomic units - there is no notion of similarity between words, as these are represented as indices in a vocabulary. This choice has several good reasons - simplicity, robustness and the observation that simple models trained on huge amounts of data outperform complex systems trained on less data.

However, the simple techniques are at their limits in many tasks. For example, the amount of relevant in-domain data for automatic speech recognition is limited - the performance is usually dominated by the size of high quality transcribed speech data (often just millions of words). In machine translation, the existing corpora for many languages contain only a few billions of words or less. Thus, there are situations where simple scaling up of the basic techniques will not result in any significant progress, and we have to focus on more advanced techniques.

• A technique for learning word vectors from huge datasets (on the order of billions of words)
• To the authors' knowledge, no previously proposed architecture had been successfully trained on more than a few hundred million words with word vectors of dimensionality 50-100
• The goal is for similar words to end up close to each other, and for words to have multiple degrees of similarity
• Multiple inflected forms of a noun can be found close together in the vector space
• The new models support linear operations on the vectors
• vector(”King”) - vector(”Man”) + vector(”Woman”) results in a vector that is closest to the vector representation of the word Queen
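The analogy above can be sketched with hand-crafted toy vectors (real word2vec vectors are learned from a corpus; these values and the `cosine` helper are assumptions for illustration):

```python
import math

# Hand-crafted toy vectors (assumed; real embeddings are learned).
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.2, 0.8],
    "apple": [0.5, 0.5, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# vector("King") - vector("Man") + vector("Woman")
target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]

# Nearest word (excluding the three query words) should be "queen".
best = max((w for w in vecs if w not in ("king", "man", "woman")),
           key=lambda w: cosine(target, vecs[w]))
print(best)
```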

… Another interesting architecture of NNLM was presented in [13, 14], where the word vectors are first learned using neural network with a single hidden layer. The word vectors are then used to train the NNLM. Thus, the word vectors are learned even without constructing the full NNLM.

word2vec is an extension built on this idea.

## Model Architectures

For all the models below, training complexity is proportional to O = E × T × Q, where:

• E: number of training epochs
• T: number of words in the training set
• Q: the model-specific term, defined separately for each architecture below
• A common choice is E = 3-50 and T up to one billion

### Feedforward Neural Net Language Model (NNLM)

(The figure above is from the original paper (A Neural Probabilistic Language Model), so its notation may not match the word2vec notation used below; below we use word2vec's own notation.)

• Input layer: the N preceding words are one-hot encoded into V-dimensional vectors, where V is the vocabulary size
• Note that only the previous N words are used here, not all surrounding words. This is the difference from the CBOW projection layer in word2vec!!
• Projection layer: the input (N×V) is projected into the projection layer P (N×D) using a single shared projection matrix (V×D)
• D is the dimensionality after projection
• Since one projection matrix is shared, the cost here is still fairly low
• Hidden layer: computes the probability distribution over all words, with H neurons
• Output layer: has V neurons

• The per-word complexity is Q = N×D + N×D×H + H×V, and the output layer's H×V term matters most
• There are optimizations such as hierarchical softmax, which uses binary tree representations of the vocabulary (a Huffman tree) to reduce that term to H×$\log_2(V)$
• After that, most of the computation is actually in the hidden layer (the N×D×H term)
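Plugging toy sizes (assumed values, roughly in the paper's typical range) into the terms of Q shows which part dominates, with and without hierarchical softmax:

```python
import math

# Toy sizes (assumed): N context words, D-dim projection,
# H hidden units, V-word vocabulary.
N, D, H, V = 10, 500, 500, 1_000_000

q_projection = N * D             # input -> projection layer
q_hidden = N * D * H             # projection -> hidden layer
q_output = H * V                 # hidden -> output layer (full softmax)
q_output_hs = H * math.log2(V)   # output cost with hierarchical softmax

# With a full softmax the H*V output term dominates; after hierarchical
# softmax shrinks it to H*log2(V), the N*D*H hidden term is the bottleneck.
print(q_projection, q_hidden, q_output, round(q_output_hs))
```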

### Recurrent Neural Net Language Model (RNNLM)

• The word vector dimensionality D is the same as that of the hidden layer H
• The per-word complexity is Q = H×H + H×V; with hierarchical softmax + a Huffman tree, H×V can be reduced to H×$\log_2 V$, so most of the complexity comes from D×H (= H×H)

## New Log-linear Models

### Continuous Bag-of-Words Model

• All words are projected to the same position (their vectors are averaged). Word order information is discarded, which is why it is called a bag-of-words model
• The order of the words does not affect the projection
• Future words are used as well: for example, with a window of 2, training the center word's vector uses the 4 surrounding context words (2 before and 2 after)
• The output layer is a log-linear classifier

• The log(V) comes from hierarchical softmax + a Huffman tree; the full per-word complexity is Q = N×D + D×$\log_2 V$
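The averaging step can be sketched as follows (toy sizes and the integer-valued matrix are assumptions for illustration); the hidden vector comes out identical no matter how the context words are ordered:

```python
# A minimal sketch of the CBOW projection step: the context word
# vectors are averaged into one hidden vector, discarding word order.
V, D = 6, 4  # toy vocabulary size and embedding dimensionality (assumed)

# Input->projection matrix: one D-dim row per vocabulary word.
W_in = [[i + j for j in range(D)] for i in range(V)]

def cbow_hidden(context_ids):
    """Average the projections of the context words (order-independent)."""
    h = [0.0] * D
    for idx in context_ids:
        for j in range(D):
            h[j] += W_in[idx][j]
    return [x / len(context_ids) for x in h]

# Window = 2 around the center word: 2 words before, 2 after.
h1 = cbow_hidden([0, 1, 3, 4])
h2 = cbow_hidden([4, 3, 1, 0])  # shuffled context
print(h1 == h2)  # True: CBOW ignores word order
```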

### Continuous Skip-gram Model

• Experiments show that a larger window gives better results (but also higher cost)
• More distant words are usually less related to the center word, so fewer samples are drawn from them (lower probability), giving distant words less weight

• C is twice the window size, i.e. the number of words to predict
• In other words, predicting each word costs D + D×$\log_2 V$, for a total of Q = C×(D + D×$\log_2 V$)
• Seeing the log(V) tells you hierarchical softmax is used
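The distance downweighting can be sketched as dynamic window sampling (the `skipgram_pairs` helper is a hypothetical name, not from the paper's code): for each center word, a radius R is drawn uniformly from [1, C], so a word at distance d is included with probability (C - d + 1)/C:

```python
import random

def skipgram_pairs(tokens, C, rng):
    """Generate (center, context) pairs with a dynamic window:
    for each center word, draw R in [1, C], so distant words are
    included less often than adjacent ones."""
    pairs = []
    for i, center in enumerate(tokens):
        R = rng.randint(1, C)  # inclusive on both ends
        for j in range(max(0, i - R), min(len(tokens), i + R + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

rng = random.Random(42)
pairs = skipgram_pairs(["the", "quick", "brown", "fox", "jumps"], C=2, rng=rng)
print(pairs[:4])
```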

## Results

We follow previous observation that there can be many different types of similarities between words, for example, word big is similar to bigger in the same sense that small is similar to smaller. Example of another type of relationship can be word pairs big - biggest and small - smallest [20]. We further denote two pairs of words with the same relationship as a question, as we can ask: ”What is the word that is similar to small in the same sense as biggest is similar to big?”

## Further Discussion: Word2vec Implementation Details

### Hierarchical Softmax vs. Negative Sampling

#### Hierarchical Softmax

• With a Huffman tree + hierarchical softmax, each leaf of the tree is a word, and the softmax value is obtained by walking step by step from the root down to the word's leaf
• Going from root to leaf takes only log(V) steps
• This greatly speeds up computing the softmax
• Drawback: if the center word w of a training sample is a very rare word, the walk down the Huffman tree becomes painfully long
• Hence, Negative Sampling
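The root-to-leaf walk can be illustrated by building a Huffman tree over hypothetical word counts and comparing code lengths (all names and counts here are assumptions for illustration):

```python
import heapq
import itertools

def huffman_code_lengths(freqs):
    """Build a Huffman tree over word frequencies and return the
    depth (code length) of each word's leaf. Frequent words end up
    near the root; rare words end up deep in the tree."""
    counter = itertools.count()  # tie-breaker so heapq never compares dicts
    heap = [(f, next(counter), {w: 0}) for w, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {w: d + 1 for w, d in a.items()}
        merged.update({w: d + 1 for w, d in b.items()})
        heapq.heappush(heap, (fa + fb, next(counter), merged))
    return heap[0][2]

# Hypothetical counts: "the" is common, "zyzzyva" is rare.
lengths = huffman_code_lengths({"the": 5000, "cat": 300, "sat": 200, "zyzzyva": 1})
print(lengths)  # the common word gets a shorter root-to-leaf path
```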

#### Negative sampling

• Only the positive samples inside the window, plus a certain number of sampled negative samples, are needed to train the model

• When designing this sampling distribution, the probability of drawing a word should depend on its frequency: the more often a word appears in the corpus, the more likely it is to be drawn

$$P(w_i) = \frac{f(w_i)^{3/4}}{\sum_{j} f(w_j)^{3/4}}$$

where $f(w_i)$ is the count (frequency) of $w_i$; the 3/4 exponent was found experimentally.

• For example, if the word with index 100 appears 1000 times in the corpus, it occupies $1000^{0.75} \approx 177$ slots in the unigram table
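A minimal sketch of building the unigram table (toy counts assumed): each word gets a number of slots proportional to its count raised to the 3/4 power:

```python
import random

# Toy counts (assumed): each word occupies slots
# proportional to count ** 0.75 in the unigram table.
counts = {"the": 1000, "cat": 100, "zyzzyva": 1}

table = []
for word, count in counts.items():
    table.extend([word] * int(count ** 0.75))

# Negative samples are drawn uniformly from this table, so frequent
# words are drawn more often, but less than linearly in their frequency.
rng = random.Random(0)
negatives = [rng.choice(table) for _ in range(5)]
print(negatives)
```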

Our experiments indicate that values of k in the range 5–20 are useful for small training datasets, while for large datasets the k can be as small as 2–5.

## 結論

word2vec is seriously impressive. Also, Skip-gram generally seems to work better than CBOW.
