Problem Set 4 (alpha version)
\( \DeclareMathOperator*{\argmax}{argmax} \)
Due at 3pm on Wednesday, Mar. 8. Please submit to Blackboard and follow the general guidelines regarding homework assignments.
For Exercises 1-3, download the ConvolutionBasics notebook in the github repo. Your task is to fill in the missing parts of the code. Submit the completed notebook.
- Exercise 1 is intended to help you think about the definition of 1D convolution. The answer is somewhat different depending on whether you use 1-based indexing for arrays (Julia) or 0-based indexing (Python).
- Exercise 2 is intended to help you think about how convolution generalizes to 2D (and higher).
- Exercise 3 is supposed to illustrate that 1D convolution is equivalent to multiplication by a Toeplitz matrix, which is a way of seeing that a convolutional net is a special case of a neural net. (A rough NumPy sketch of these operations appears after this list.)
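To make the operations concrete, here is a minimal NumPy sketch (not taken from the ConvolutionBasics notebook) of 1D convolution with 0-based indexing, its 2D analogue, and the equivalent Toeplitz-matrix multiplication. The function names, the "valid" output size, and the cross-correlation convention (no kernel flip, as is usual in ConvNets) are assumptions; the notebook's own definitions may differ.

```python
import numpy as np

def conv1d_valid(x, w):
    """Naive 1D convolution, 0-based: y[i] = sum_j w[j] * x[i + j], "valid" positions only.
    (With 1-based indexing, as in Julia, the same sum is typically written with x[i + j - 1].)"""
    n, k = len(x), len(w)
    y = np.zeros(n - k + 1)
    for i in range(n - k + 1):
        for j in range(k):
            y[i] += w[j] * x[i + j]
    return y

def conv2d_valid(X, W):
    """2D analogue: Y[i, j] = sum_{a, b} W[a, b] * X[i + a, j + b]."""
    n1, n2 = X.shape
    k1, k2 = W.shape
    Y = np.zeros((n1 - k1 + 1, n2 - k2 + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = np.sum(W * X[i:i + k1, j:j + k2])
    return Y

def conv1d_toeplitz(x, w):
    """Same 1D operation written as T @ x, where each row of the Toeplitz
    matrix T holds a shifted copy of the kernel."""
    n, k = len(x), len(w)
    T = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        T[i, i:i + k] = w
    return T @ x

x = np.array([1., 2., 3., 4., 5.])
w = np.array([1., 0., -1.])
assert np.allclose(conv1d_valid(x, w), conv1d_toeplitz(x, w))
```

Because T is just a weight matrix with tied, sparse entries, multiplying by it is an ordinary fully connected layer, which is the sense in which a convolutional layer is a special case of a neural net layer.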
Design a ConvNet that detects this pattern anywhere in a binary input image: \[ \begin{array}{cccc} 1 &1 &1 &1\\ 1 &0 &0 &1\\ 1 &0 &0 &1\\ 1 &1 &1 &1 \end{array} \] (When detecting this pattern, it doesn’t matter what the input is outside the block.) The output of the ConvNet should be a single feature map that encodes, at each location in the input image, whether the above pattern is present there. Here’s a constraint on your solution: the maximum allowable size of your kernels is . Use the Heaviside step function as your nonlinearity (since we are modeling Boolean functions and don’t care about backprop here). Write code to implement your ConvNet on input images. (You can use sample code like the ToyConvolutionPooling notebook in the github repo.) A rough sketch of the convolve-and-threshold mechanics appears after this problem.
- In your submitted notebook, one cell should contain the kernels in each layer, along with biases/thresholds. (Hint: no pooling is necessary.) Explain very briefly why this is a correct answer.
- Show tests of your code on a few input images.
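As an illustration of the convolve-and-threshold mechanics only, here is a hedged NumPy sketch. It is not a solution to the problem: it uses a single 4x4 kernel and ignores the kernel-size constraint, and the function names and "valid" output size are assumptions.

```python
import numpy as np

def heaviside(u):
    # Step nonlinearity: 1 where the argument is >= 0, else 0.
    return (u >= 0).astype(float)

# +1 on the border of the 4x4 block, -1 on the 2x2 interior.
pattern = np.array([[1, 1, 1, 1],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [1, 1, 1, 1]], dtype=float)
kernel = np.where(pattern == 1, 1.0, -1.0)
bias = -11.5  # all 12 border pixels must be on and no interior pixel may be on

def detect(image):
    """Feature map with 1 wherever the 4x4 pattern occurs (valid positions only)."""
    H, W = image.shape
    out = np.zeros((H - 3, W - 3))
    for i in range(H - 3):
        for j in range(W - 3):
            out[i, j] = heaviside(np.sum(kernel * image[i:i + 4, j:j + 4]) + bias)
    return out

img = np.zeros((8, 8))
img[2:6, 3:7] = pattern          # embed the pattern at offset (2, 3)
assert detect(img)[2, 3] == 1.0
```

The design choice illustrated is the usual one for exact template matching with a step nonlinearity: weight pixels that should be on with +1, pixels that should be off with -1, and set the threshold just below the number of pixels that must be on.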
The LeNet notebook in the github repo contains sample code to train LeNet on MNIST. Modify the code to include biases in all the layers, and add code for training the biases. Note that all neurons in a feature map share the same bias. Run the code to show that the learning curve goes down. (This code is written for pedagogical purposes, not efficiency. It’s slow so we are not bothering with any time-consuming model selection. We will do that later with a deep learning framework.)
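Here is a hedged NumPy sketch (not the LeNet notebook's actual code; the array shapes and function names are assumptions) of how a bias shared by all neurons in a feature map enters the forward pass and how its gradient is accumulated.

```python
import numpy as np

def add_feature_map_bias(pre_activation, b):
    """pre_activation: (C, H, W) conv outputs; b: (C,) one bias per feature map.
    The same b[c] is added at every position of map c."""
    return pre_activation + b[:, None, None]

def feature_map_bias_grad(delta):
    """delta: (C, H, W) gradient of the loss w.r.t. the biased pre-activation.
    Since b[c] is shared across all positions of map c, its gradient is the
    sum of delta over those positions."""
    return delta.sum(axis=(1, 2))

# Toy check on two 5x5 feature maps.
z = np.zeros((2, 5, 5))
b = np.array([0.1, -0.2])
assert np.allclose(add_feature_map_bias(z, b)[1], -0.2)                    # every entry of map 1 shifted by b[1]
assert np.allclose(feature_map_bias_grad(np.ones_like(z)), [25.0, 25.0])   # H*W per map
```

The key point is the summation: because one scalar b[c] is added at every position of map c, the chain rule sums the upstream gradient over all of those positions (and over the minibatch as well, if there is a batch axis).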