Problem Set 7 (beta version)
Due at 3pm on Monday, Apr. 17. Please submit to Blackboard and follow the general guidelines regarding homework assignments.
Theoretical exercises
Word embeddings are representations of words as vectors. The embeddings are obtained using unsupervised learning, and are used as input features for other language models trained via supervised learning. Word2Vec is a popular word embedding model introduced by Mikolov et al. 2013 and featured in a TensorFlow tutorial. Word2Vec has two main variants, Continuous Bag-of-Words (CBOW) and Skip-Gram (SG).
- The CBOW model is trained to predict the middle word $w_0$ from a context window of surrounding words $w_{-k},\ldots,w_{-1},w_1,\ldots,w_k$. Here each word is represented by an integer from 1 to $n$, where $n$ is the size of the vocabulary. The prediction of the model is \[ o_{w} = \sum_{a=1}^d \sum_{i=-k\atop i\neq 0}^k U_{aw}V_{aw_i} \] followed by a softmax. Here $U$ and $V$ are $d\times n$ parameter matrices, and $d$ is the embedding dimension.
- Write the CBOW model as a multilayer perceptron with softmax output. Suppose that the input is represented by word counts $x_v$, for all words $v$ in the vocabulary. Use the notation $h_a$ for the activities of the hidden layer, and the notation $\hat{y}_w$ for the activities of the output layer following the softmax.
- How many neurons are in the hidden layer?
- How many neurons are in the output layer?
- What are the connections from the input layer to the hidden layer?
- What are the connections from the hidden layer to the output layer?
- Write down the log loss for this model as a function of the prediction $\hat{y}$ and the correct word $w_0$.
We can regard the $w$th column of the matrix $U$ as an embedding of the $w$th word of the vocabulary in $d$-dimensional space. The same is true of the columns of the matrix $V$. Therefore training the CBOW model yields two word embeddings. This is an example of dimensionality reduction, since the embedding dimension $d$ is generally chosen much smaller than the vocabulary size $n$.
In the Skip-Gram model, the central word $w_0$ is used to predict the surrounding words $w_{-k},\ldots,w_{-1},w_1,\ldots,w_k$. You can think of this as $2k$ separate prediction problems organized into a minibatch of size $2k$. This can also be regarded as a multilayer perceptron with a single hidden layer.
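For reference, in both variants the softmax turns the scores $o_w$ into a probability distribution over the vocabulary. With the notation introduced above, the standard softmax is
\[ \hat{y}_w = \frac{e^{o_w}}{\sum_{v=1}^{n} e^{o_v}}. \]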
Programming exercises
The theoretical exercise above gives a rough idea of how word embeddings are trained. (For other important details you’ll have to consult the original papers.) In the following exercises you will visualize and use pre-trained word embeddings. For convenience we will use an embedding model called GloVe rather than Word2Vec: pre-trained GloVe embeddings are available for a relatively small vocabulary, which requires less RAM and fewer computational cycles. Download glove.6B.zip, which is the smallest of the pre-trained GloVe embeddings. If you unzip this file, you will see several txt files. Use glove.6B.200d.txt for the following exercises. Each line of this file is a word followed by 200 numbers, which constitute the embedding of that word.
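The following is a minimal loading sketch, not a required part of the assignment: it reads the file described above into a Python dictionary mapping each word to its 200-dimensional NumPy vector. The function name load_glove and the use of NumPy are our own choices.

```python
import numpy as np

def load_glove(path="glove.6B.200d.txt"):
    """Load GloVe vectors into a dict mapping word -> 200-dimensional numpy array."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            # First token is the word; the remaining 200 tokens are its embedding.
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

# Example usage:
# vectors = load_glove()
# print(len(vectors), vectors["mathematics"].shape)   # vocabulary size, (200,)
```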
- Visualization of Word Embeddings.
- Consider the following 10 seed words: mathematics, rose, cherry, toy, fear, approval, insect, steel, invention, music. For each seed word, list the 9 nearest neighbors based on cosine similarity of word embeddings. You will end up with a total of 100 words.
- Compute the top two principal components of the 100 word embeddings, and project the embeddings onto the 2D subspace spanned by the principal components. Use the projections to plot the 100 words as points in a 2D scatter plot. Use one plot symbol for seed words, and another for neighbors. Assign a distinct color to each seed word along with its nearest neighbors. Include a legend mapping the seed words to their colors. (A starting-point code sketch for this exercise appears after the problem list below.)
- Solving analogy problems with vector arithmetic. In the analogy problem, you are given three words (e.g. “king”, “man”, and “woman”) as input, and your task is to generate a fourth word (“queen”) that completes the analogy (“king” is to “man” as “queen” is to “woman”). One possible solution is to use word embeddings. The idea is that the “king” to “man” relationship is represented by the difference between their word embeddings, and that an analogous relationship should have approximately the same difference between word embeddings. In other words, it should be the case that $v_{\text{king}} - v_{\text{man}} \approx v_{\text{queen}} - v_{\text{woman}}$, where $v_w$ denotes the embedding of word $w$. Therefore, minimizing the distance between $v_w$ and $v_{\text{king}} - v_{\text{man}} + v_{\text{woman}}$ over all words $w$ in the vocabulary ought to recover “queen.” Solve this list of analogies using word embeddings. Your code should predict the fourth word from the first three words in each analogy. Report the accuracy of your code. You can ignore analogies that contain words that are missing from the GloVe vocabulary. (A starting-point sketch for this exercise also appears below.)
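The following is a rough starting-point sketch for the visualization exercise, not a definitive implementation. It finds nearest neighbors by cosine similarity and projects the 100 embeddings onto their top two principal components via an SVD of the centered data matrix. It assumes `vectors` is the word-to-vector dictionary produced by the loading sketch above; the helper name `nearest_neighbors` and the use of matplotlib are our own choices.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumes `vectors` is the word -> np.float32 array dict from the loading sketch above.
vocab = list(vectors)
M = np.stack([vectors[w] for w in vocab])
M = M / np.linalg.norm(M, axis=1, keepdims=True)   # unit-normalize rows once for cosine similarity

def nearest_neighbors(word, k=9):
    """Return the k words most similar to `word` by cosine similarity (excluding `word` itself)."""
    v = vectors[word] / np.linalg.norm(vectors[word])
    sims = M @ v
    order = np.argsort(-sims)                       # indices sorted by decreasing similarity
    return [vocab[i] for i in order if vocab[i] != word][:k]

seeds = ["mathematics", "rose", "cherry", "toy", "fear",
         "approval", "insect", "steel", "invention", "music"]
groups = {s: [s] + nearest_neighbors(s) for s in seeds}   # each seed plus its 9 neighbors

# PCA via SVD: project the 100 embeddings onto the top two principal components.
words100 = [w for s in seeds for w in groups[s]]
X = np.stack([vectors[w] for w in words100])
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[:2].T                                # (100, 2) coordinates in the principal subspace

for i, s in enumerate(seeds):
    pts = proj[10 * i:10 * (i + 1)]
    plt.scatter(pts[0, 0], pts[0, 1], marker="*", color=f"C{i}", label=s)   # the seed word
    plt.scatter(pts[1:, 0], pts[1:, 1], marker=".", color=f"C{i}")          # its neighbors
plt.legend(fontsize="small")
plt.show()
```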
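For the analogy exercise, the sketch below is one possible starting point: it picks the word (other than the three inputs) whose embedding has the highest cosine similarity to $v_a - v_b + v_c$, a common alternative to the distance minimization described above. It reuses `vectors`, `vocab`, and the normalized matrix `M` from the previous sketch; looping over the analogy list and computing accuracy (skipping analogies with out-of-vocabulary words) is left to you.

```python
import numpy as np

# Reuses `vectors`, `vocab`, and the row-normalized matrix `M` from the sketch above.
def solve_analogy(a, b, c):
    """Return the word d (other than a, b, c) most similar to v_a - v_b + v_c."""
    target = vectors[a] - vectors[b] + vectors[c]
    target = target / np.linalg.norm(target)
    sims = M @ target                       # cosine similarity to every word in the vocabulary
    for i in np.argsort(-sims):             # best candidates first
        if vocab[i] not in (a, b, c):       # exclude the input words, which often rank highest
            return vocab[i]

# Example: "king" is to "man" as ? is to "woman" -- ideally prints "queen".
# print(solve_analogy("king", "man", "woman"))
```

Excluding the three input words matters, since $v_a - v_b + v_c$ is often closest to $v_a$ or $v_c$ itself.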