Deep Convolutional Neural Networks (CNNs) have recently re-entered the computer vision community, especially after the breakthrough paper of Krizhevsky et al. [1], which demonstrated large-scale image category recognition with remarkable success. In 2012, their CNN-based algorithm outperformed competing teams from many renowned institutions by a significant margin. This success initiated enormous interest in neural networks in computer vision, to the extent that most successful methods nowadays use neural networks.
The convolutional network is an extremely flexible classifier, capable of fitting very complex recognition/regression problems with good generalization ability. The network consists of a nested ensemble of non-linear functions. It is usually deep, i.e. it has many layers, and typically has more parameters than there are samples in the training set. There are mechanisms to prevent overfitting. One of the basic tricks is the use of convolutional layers: the network learns shift-invariant filters instead of individual weights for every input pixel, so far fewer parameters are required, since the weights are shared.
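As a rough back-of-the-envelope illustration of the savings (the filter count and image size below are chosen only for this example and are not taken from [1]):

% 20 shift-invariant 5x5 filters on a single-channel input: the parameter
% count does not depend on the image size.
conv_params = 5*5*1*20 + 20;      % weights + biases = 520

% A fully connected layer mapping a 224x224 single-channel image directly
% to 20 outputs needs one weight per input pixel and output.
fc_params = 224*224*1*20 + 20;    % = 1003540

fprintf('convolutional: %d parameters, fully connected: %d parameters\n', ...
        conv_params, fc_params);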
Fig. 1: Architecture of a Deep Convolutional Neural Network. Figure adapted from [1].
Usually, the architecture of an image classification CNN is composed of several convolutional layers (which are meant to learn a representation) followed by a few fully connected layers (which implement the non-linear classification stage on top of the invariant representation), see figure 1.
In the following two labs, you will get hands-on experience with CNNs. In the first part, you will work with a pre-trained network, while in the second part you will train your own network from scratch. We recommend using the MatConvNet toolbox from Oxford [2].
Download cnn-1.zip, which contains all necessary files: the MatConvNet toolbox, the EdgeBoxes toolbox, pre-trained models, test images, and test scripts. Extract the content of the archive into a separate directory.
The main script is test.m. First, the script initializes the MatConvNet toolbox and, if necessary, compiles it on your machine. Later, you will implement a couple of functions that are called by the test script.
We will use the model imagenet-vgg-f. This is a fast and well-performing network for large-scale image categorization. It was trained to recognize 1000 classes from ImageNet. The network has the same architecture as the original network proposed in [1], but was trained from scratch by the MatConvNet authors.
%% Load pre-trained CNN model
model = 'imagenet-vgg-f' ;
net = load(sprintf('models/%s.mat', model)) ;

% display net structure
vl_simplenn_display(net)

% display filters
figure(1);
filter_img = vl_imarraysc(net.layers{1}.weights{1});
imagesc(filter_img);
title('First layer filters')
The above code loads the model, displays the architecture layer by layer, and visualizes the first-layer filters, see figure 2. Make sure you understand the meaning and functionality of all the layer types: input, conv, relu, mpool, softmx. Refer to the user's manual if you are unsure.
Fig. 2: First layer filters.
The test image below is first resized to a fixed size of 224×224 pixels and the average image (computed over the training set) is subtracted.
I = imread('grocery.jpg');
figure(2); imagesc(I); axis image
title('Input image')

% normalize image
im = imresize(I, net.meta.normalization.imageSize(1:2));
im = single(im) - net.meta.normalization.averageImage;

% run network
res = vl_simplenn(net, im, [], [], 'mode', 'test');
Then the network is executed and all responses of the network, including the final output, are stored in the structure res. The test script shows the responses of the first-layer filters. Classification scores for all 1000 classes are found in the last layer. The code below prints the top 5 scoring classes.
% gather results
r = squeeze(gather(res(end).x));
[rs, id] = sort(r, 'descend');
fprintf('\n');
for i = 1:5
  fprintf('%.3f %s \n', rs(i), net.meta.classes.description{id(i)});
end
fprintf('\n');
The output should be:
0.380 grocery store, grocery, food market, market
0.210 pineapple, ananas
0.147 banana
0.102 custard apple
0.038 strawberry
Feel free to switch the input image to one of the other attached test images or to your favorite image. You should get an intuition of what the network can recognize and where its limitations lie.
The network correctly classified the above image. However, the image apparently contains multiple objects of various categories. The most straightforward approach to detecting multiple classes is to use scanning windows.
The idea is that the image is exhaustively scanned with windows, each of which defines a sub-image. All the sub-images are cropped, normalized to the fixed size, and passed into the network. A problem is that there are far too many possible sub-images. Luckily, we do not need to evaluate all of them, since the network is to some extent insensitive to the precise alignment of an object in the image. Therefore, we can scan the image with a small overlap between scanning windows.
Your task will be to write the function scanning_windows.m that takes the input image, a minimum size of the scanning window, a stride of the scan, and a multi-scale factor, and outputs a list of square bounding boxes. See the function template for the exact format.
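A minimal sketch of one possible implementation follows; the argument names and the exact output format (4×N columns [x1; y1; x2; y2]) are our assumptions, the provided template defines the required interface:

function bboxes = scanning_windows(I, min_size, stride, scale_factor)
% SCANNING_WINDOWS  Sketch: square windows over all positions and scales.
bboxes = zeros(4, 0);
win = min_size;
while win <= min(size(I,1), size(I,2))       % assumes scale_factor > 1
  for y = 1 : stride : size(I,1) - win + 1
    for x = 1 : stride : size(I,2) - win + 1
      bboxes(:, end+1) = [x; y; x+win-1; y+win-1];
    end
  end
  win = round(win * scale_factor);           % enlarge the window for the next scale
end
end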
The set of scanning-window bounding boxes will be used by the test script to prepare a batch of images that is then fed into the network. The top-scoring classes of each bounding box are collected, and scores above a threshold are displayed by show_detections.m. Note that you can click on an object to highlight its bounding box and textual description, which might be useful in case of multiple overlapping detections.
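For illustration, the batch preparation might look roughly as follows (a sketch only; the variable names are ours and the actual test script may differ):

% bboxes: 4xN boxes [x1; y1; x2; y2], I: input image, net: loaded model
sz = net.meta.normalization.imageSize(1:2);
batch = zeros(sz(1), sz(2), 3, size(bboxes, 2), 'single');
for k = 1:size(bboxes, 2)
  b = round(bboxes(:, k));
  crop = single(I(b(2):b(4), b(1):b(3), :));            % cut out the sub-image
  crop = imresize(crop, sz);                            % normalize to the fixed size
  batch(:, :, :, k) = crop - net.meta.normalization.averageImage;
end
res = vl_simplenn(net, batch, [], [], 'mode', 'test');
scores = squeeze(gather(res(end).x));                   % 1000 x N class scores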
Having scores of all 1000 classes in all bounding boxes, the test script shows a response map of a particular class (over all scanned locations and all scales); e.g. a pineapple in the figure below.
A drawback of the exhaustive scanning-window approach is its high computational cost, since even homogeneous (textureless) regions of the image are evaluated. One way to avoid the expensive search is to use Edge Boxes [3]. EdgeBoxes delivers a list of promising bounding boxes where an object is likely to be present. The algorithm works independently of the object class. It is based on the simple observation that an object bounding box contains many edges that lie entirely inside it, but very few edges cross its boundary.
The following code runs EdgeBoxes (using the authors' implementation) and transforms the output into the same format as the scanning-window bounding boxes.
addpath dep/edges-master

model = load('dep/edges-master/models/forest/modelBsds');
model = model.model;
model.opts.multiscale = 0;
model.opts.sharpen = 2;
model.opts.nThreads = 4;

% set up opts for edgeBoxes (see edgeBoxes.m)
opts = edgeBoxes;
opts.alpha = .65;     % step size of sliding window search
opts.beta = .75;      % nms threshold for object proposals
opts.minScore = .01;  % min score of boxes to detect
opts.maxBoxes = 1e4;  % max number of boxes to detect

% detect Edge Box bounding box proposals (see edgeBoxes.m)
fprintf('\n')
tic, fprintf('Generating EdgeBoxes...')
bbs = edgeBoxes(I, model, opts);
toc
bboxes = double([bbs(:,1), bbs(:,2), bbs(:,1)+bbs(:,3), bbs(:,2)+bbs(:,4)]');
After the EdgeBoxes output is extracted, normalized, and fed into the network, the result should look as follows.
Many bounding boxes overlap each other, which makes the result a bit chaotic. Your task will be to implement a simple algorithm that selects the highest-scoring detections while suppressing lower-scoring detections that overlap them. Implement the function stable_detections.m that takes a list of bounding boxes (delivered by EdgeBoxes), a corresponding list of scores (given by the network output), and an overlap threshold (intersection-over-union, IoU) as parameters, and outputs the indices of the finally selected bounding boxes. Refer to the function template for the exact specification.
The stable detection algorithm proceeds as follows: sort the bounding boxes by score in descending order; repeatedly take the highest-scoring box not yet processed, add it to the output, and suppress all remaining boxes whose IoU with it exceeds the threshold; stop when no boxes remain.
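A minimal sketch of this greedy suppression (the interface below is an assumption, follow the provided template):

function idx = stable_detections(bboxes, scores, iou_thr)
% STABLE_DETECTIONS  Sketch: greedy non-maximum suppression on 4xN boxes.
[~, order] = sort(scores, 'descend');
idx = [];
while ~isempty(order)
  i = order(1);
  idx(end+1) = i;                          % keep the best remaining box
  keep = false(size(order));
  for k = 2:numel(order)
    keep(k) = bbox_iou(bboxes(:,i), bboxes(:,order(k))) <= iou_thr;
  end
  order = order(keep);                     % drop the current box and its overlaps
end
end

function o = bbox_iou(a, b)
% Intersection over union of two boxes [x1; y1; x2; y2].
iw = max(0, min(a(3), b(3)) - max(a(1), b(1)) + 1);
ih = max(0, min(a(4), b(4)) - max(a(2), b(2)) + 1);
inter = iw * ih;
area = @(r) (r(3)-r(1)+1) * (r(4)-r(2)+1);
o = inter / (area(a) + area(b) - inter);
end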
The final result should look similar to the figure below:
You are supposed to upload the functions scanning_windows.m and stable_detections.m, together with all non-standard functions you have created and used.
To test your code, run test_publish.m, which calls the main test script test.m and generates an HTML page. Compare your results with ours.
In this lab, we will experiment with training a convolutional network for hand-written digit recognition. Neural networks with convolutional layers have been the state of the art in this task for a long time [4].
Download cnn-2.zip, which contains all necessary files: the MNIST dataset of labeled hand-written digits and the test.m script. Extract the content into a separate directory at the same level as the previous lab. We will again use the MatConvNet toolbox, which was installed last time.
The MNIST dataset is loaded into the structure imdb. The dataset contains 70k labeled images of size 28×28 pixels. The training/test split is already given in imdb.images.set, where 1 denotes training and 2 denotes test (validation).
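For instance, a few training digits can be inspected like this (a minimal sketch based on the imdb fields described above):

train_ids = find(imdb.images.set == 1);
figure; colormap gray;
for k = 1:25
  subplot(5, 5, k);
  imagesc(imdb.images.data(:, :, 1, train_ids(k)));
  axis image; axis off;
  title(sprintf('label %d', imdb.images.labels(train_ids(k))));
end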
A sample from the dataset is shown here:
The structure of the network needs to be set up first. The following code initializes the network and prints its structure layer by layer.
net = cnn_mnist_init();  % set up the network structure
vl_simplenn_display(net);
The network is relatively shallow, much simpler than the ImageNet category network, but has demonstrated excellent performance.
     layer|    0|   1|    2|    3|   4|    5|    6|   7|    8|   9|  10|     11|
      type|input|conv|bnorm|mpool|conv|bnorm|mpool|conv|bnorm|relu|conv|softmxl|
      name|  n/a|    |     |     |    |     |     |    |     |    |    |       |
----------|-----|----|-----|-----|----|-----|-----|----|-----|----|----|-------|
   support|  n/a|   5|    1|    2|   5|    1|    2|   4|    1|   1|   1|      1|
  filt dim|  n/a|   1|  n/a|  n/a|  20|  n/a|  n/a|  50|  n/a| n/a| 500|    n/a|
 num filts|  n/a|  20|  n/a|  n/a|  50|  n/a|  n/a| 500|  n/a| n/a|  10|    n/a|
    stride|  n/a|   1|    1|    2|   1|    1|    2|   1|    1|   1|   1|      1|
       pad|  n/a|   0|    0|    0|   0|    0|    0|   0|    0|   0|   0|      0|
----------|-----|----|-----|-----|----|-----|-----|----|-----|----|----|-------|
   rf size|  n/a|   5|    5|    6|  14|   14|   16|  28|   28|  28|  28|     28|
 rf offset|  n/a|   3|    3|  3.5| 7.5|  7.5|  8.5|14.5| 14.5|14.5|14.5|   14.5|
 rf stride|  n/a|   1|    1|    2|   2|    2|    4|   4|    4|   4|   4|      4|
----------|-----|----|-----|-----|----|-----|-----|----|-----|----|----|-------|
 data size|   27|  23|   23|   11|   7|    7|    3|   0|    0|   0|   0|      0|
data depth|    1|  20|   20|   20|  50|   50|   50| 500|  500| 500|  10|      1|
  data num|    1|   1|    1|    1|   1|    1|    1|   1|    1|   1|   1|      1|
----------|-----|----|-----|-----|----|-----|-----|----|-----|----|----|-------|
  data mem|  3KB|41KB| 41KB|  9KB|10KB| 10KB|  2KB|  0B|   0B|  0B|  0B|     0B|
 param mem|  n/a| 2KB| 320B|   0B|98KB| 800B|   0B| 2MB|  8KB|  0B|20KB|     0B|

parameter memory|2MB (4.3e+05 parameters)|
     data memory|116KB (for batch size 1)|
In the following, we will keep the network structure fixed. The weights were initialized randomly. The training process will now iteratively optimize the soft-max loss (the empirical multi-class error) by stochastic gradient descent (SGD), a.k.a. back-propagation.
trainOpts = [];
trainOpts.batchSize = 100 ;           % number of images in the SGD step
trainOpts.numEpochs = 15 ;            % number of iterations over all data samples
trainOpts.continue = true ;           % resume if true
trainOpts.learningRate = 0.001 ;      % scalar that scales the gradient
trainOpts.expDir = 'mnist/baseline' ; % working directory

% Call training function in MatConvNet
[net,info] = cnn_train(net, imdb, @getBatch, trainOpts) ;
The function cnn_train takes the network (with initial weights) net, the image dataset imdb, a function getBatch that extracts a batch of images with labels, and the training options trainOpts.
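For reference, a getBatch callback is typically only a few lines; a minimal sketch for this imdb layout could look like this (not necessarily the exact code shipped with the lab):

function [im, labels] = getBatch(imdb, batch)
% GETBATCH  Sketch: return the images and labels of the requested batch.
im = imdb.images.data(:, :, :, batch);
labels = imdb.images.labels(batch);
end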
During training, several statistics are measured after every batch. The toolbox plots the objective, the top-1 error, and the top-5 error for both training and validation data after each epoch. The training takes a couple of minutes, and the final training curves should look similar to the following:
Our network achieved a 0.014 validation error. Now, let us try to use the network to read a hand-written phone number.
A scanning window that exhaustively evaluates all possible positions of the digits in the image is implemented; the search is horizontal only. Similarly to last time, we prepare a stack of images that is fed into the trained network.
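A minimal sketch of such a horizontal scan (the variable names are ours, the test script may organize this differently, and we assume the final softmaxloss layer is switched to a plain softmax for testing):

% I: grayscale image of height 28 with the phone number, net: trained network
net.layers{end}.type = 'softmax';        % assumption: replace the loss layer for testing
n = size(I, 2) - 28 + 1;                 % number of horizontal window positions
stack = zeros(28, 28, 1, n, 'single');
for x = 1:n
  % any normalization used during training (e.g. mean subtraction) belongs here too
  stack(:, :, 1, x) = single(I(:, x:x+27));
end
res = vl_simplenn(net, stack, [], [], 'mode', 'test');
scores = squeeze(gather(res(end).x));    % classes x positions response map
[best_score, best_label] = max(scores, [], 1);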
In the above visualization, we see the response map of all the characters, the best-scoring label, and the best score at each position of the sliding window. The results are not as good as you might have expected. What is wrong?
Obviously, the network cannot recognize blank space and consistently outputs the digit '1' instead, as it is the sparsest digit. Moreover, the network was trained on isolated digits, while scanning windows often contain parts of adjacent characters, so the network is confused by the context. An example is the two digits '5', which are usually recognized as the digit '8'.
To quantitatively measure the network's performance in the surroundings of other digits, we prepared a small set of 1000 digits with simulated context. A sample is shown below.
Although the error rate of the baseline network on isolated digits was only 0.014, the error rate climbs to 0.079 on the context dataset.
A remedy is to train the network to recognize digits in context. We will train a new network that has one more class for blank space and that is given labeled examples with simulated context during training, so that it becomes insensitive to it. This approach is called data augmentation. The following code performs the training:
% Add "space" character to capture blank space
imdb.images.data(:,:,:,end:end+5000) = 0;
imdb.images.labels(end:end+5000) = 11;
imdb.images.set(end:end+3000) = 1;
imdb.images.set(end:end+2000) = 2;
imdb.meta.classes{end+1} = ' ';

net = cnn_mnist_init('num_classes', 11);  % one more class added
vl_simplenn_display(net);

trainOpts = [];
trainOpts.batchSize = 100 ;
trainOpts.numEpochs = 15 ;
trainOpts.continue = true ;
trainOpts.learningRate = 0.001 ;
trainOpts.expDir = 'mnist/context' ;

% Call training function in MatConvNet
[net,info] = cnn_train(net, imdb, @getBatchWithContext, trainOpts) ;
Your task will be to implement the function getBatchWithContext, which replaces the original getBatch of the baseline network and delivers images with simulated context. We recommend composing the simulated images from random adjacent digits (taken from the same batch) with randomly tight margins between the characters. Your images should look similar to those in our context test set.
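One possible starting point is sketched below; it is a rough illustration only, and the exact margin range as well as the handling of the new blank-space class are up to you:

function [im, labels] = getBatchWithContext(imdb, batch)
% GETBATCHWITHCONTEXT  Sketch: paste random neighbours into the side margins.
im = imdb.images.data(:, :, :, batch);
labels = imdb.images.labels(batch);
for k = 1:numel(batch)
  % pick random digits from the same batch as the left and right neighbours
  left  = im(:, :, 1, randi(numel(batch)));
  right = im(:, :, 1, randi(numel(batch)));
  m = randi([2 8]);                        % random margin width in pixels (assumed range)
  im(:, 1:m, 1, k)         = left(:, end-m+1:end);
  im(:, end-m+1:end, 1, k) = right(:, 1:m);
end
end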
After the training, the result on the context set should improve. You should achieve an error rate of around 0.018 on the context set, while keeping an error rate of around 0.013 for isolated recognition.
Results should also improve for scanning windows on the phone-number image. Notice that, besides the spaces being correctly found, there is much less confusion in the response map, and the digits are always correctly recognized when the window is well aligned with a digit.
The above image shows a row of 9 digits that could be a hand-written phone number. The previous experiments show promising results in the network's ability to recognize isolated digits or digits perturbed by surrounding context. However, a practical task would be to read the number from the input image, i.e. to design an algorithm that takes an input image and outputs a string of 9 recognized digits. This will be your task.
Download the dataset phone_numbers.mat. The dataset contains two variables: images (28×250×1000), which contains 1000 images of phone numbers similar to the one above, and labels, a 1000×9 char array of the corresponding image labels. More precisely, each row of this array contains the 9 digits of one image.
The true labels are given for the first 50 images only; your task is to complete the labels for the remaining 950 images (denoted by the '?' symbol). It is completely up to you how you approach the problem. To motivate you to come up with a high-quality solution, you will compete with your colleagues in a contest. You will be awarded bonus points depending on your success.
We know the ground-truth labels of the 950 unlabeled images. We will measure your average number of digit misclassifications over the entire set of phone-number images, i.e. the best possible error is 0 if no mistake is made, while the theoretical maximum is 9 if all digits are always recognized incorrectly.
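On the 50 labeled images you can compute the same metric yourself, e.g. as follows (a minimal sketch; pred and gt are our names for the predicted and ground-truth 50×9 char arrays):

% pred, gt: 50x9 char arrays of predicted and ground-truth digits
per_image_errors = sum(pred ~= gt, 2);   % number of wrong digits per phone number
mean_error = mean(per_image_errors);     % average digit misclassifications per number
fprintf('mean digit misclassifications per number: %.3f\n', mean_error);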
You are supposed to upload into task 11_contest a zip-archive that contains the following files:
results.mat
approach.txt
contest.m
It is important that you strictly preserve the upload format, since the error is computed automatically immediately after you upload the results. The system will report your error. You can also see the current leaderboard; note that you must be logged in to BRUTE in order to see the scoring table. You can re-upload up to 100 times before the deadline, but not after the deadline, when the contest is finished. Only your last results are considered for the contest.
The evaluation. You will get points based on your final ranking in the leaderboard. The winner, i.e. the student with rank 1 achieving the minimum error, will get 7 points. Each following rank gets one point less (rank 2 - 6 points, rank 3 - 5 points, ..., rank 7 - 1 point). Negative points are not given. All students who upload results achieving a mean error better by 20% than a random guess will get 1 point; otherwise, no points are given.
The rules. Please read the following rules carefully, since a violation may result in disqualification.
You are supposed to upload the function getBatchWithContext.m and your trained networks as files mnist/baseline/net-epoch-X.mat and mnist/context/net-epoch-Y.mat, where X and Y are the epochs at which the best performance is reached for the baseline and the context network, respectively. Your zip archive must contain only those two mat-files, placed in folders following the directory tree shown in figure 3. Do not forget to include all non-standard functions you have created and used.
Fig. 3: Example of directory tree.
For the phone-reading contest, upload everything into the '11_contest' task. The zip file will contain 'results.mat', 'approach.txt', and 'contest.m', together with all non-standard functions you have created.
Jan Čech 2016/04/26 17:07