The machine learning task has two parts: **symbol classification** and **determination of the optimal classifier parameter**. Check the Upload system for the due date and note that the two tasks are submitted **separately**.

Data provided by Eyedea Recognition.

You have to implement your own methods and must not use MATLAB's built-in machine learning functions (e.g., `fitcknn`, `predict`).

Your task is to classify letters from car license plates. We assume the plate has already been found (see Fig. 3 and 4). The letter images are normalized to the size of 10×10 pixels. You can use training data that were chosen at random. The brightness values of the pixels are ordered into row vectors column by column, see Fig. 2. Thus one row of the input MAT-file corresponds to one image.
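The column-by-column ordering matches MATLAB's default column-major `reshape`. A small NumPy sketch (illustrative only; the toy 3×3 image stands in for the 10×10 letters, and your actual solution must be in MATLAB) shows the flattening and its inverse:

```python
import numpy as np

# Toy 3x3 "image" standing in for a 10x10 letter.
img = np.array([[1, 4, 7],
                [2, 5, 8],
                [3, 6, 9]])

# Column-major (MATLAB-style) flattening: first column, then second, etc.
row_vector = img.flatten(order='F')
print(row_vector)  # [1 2 3 4 5 6 7 8 9]

# The inverse operation recovers the image.
restored = row_vector.reshape(3, 3, order='F')
assert (restored == img).all()
```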

The file `train.mat` contains the feature vectors and the file `train_labels.mat` contains the class labels. The pictures are only meant as a preview; the data contained in the MAT-files are what matters. The result of the classification should be in the same format as the variables in `train_labels.mat`. The rows of `train.mat` correspond, in order, to the labels in `train_labels.mat`.

This is the usual extent of the data made available by a client of Eyedea Recognition. After the company (now you) writes its code, the client can come up with testing data in order to evaluate the solution.

Test data will be in `test.mat` and `test_labels.mat`. You can simulate this setting by splitting the training data into your own training and testing sets.
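Such a split can be done, for instance, by randomly permuting the rows and holding some of them out. A NumPy sketch of the idea (the variables `train` and `train_labels` here are random stand-ins for the real MAT-file contents; the actual assignment must be written in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the real data: 20 samples with 100 features (10x10 pixels)
# and their labels. Shapes mimic train.mat / train_labels.mat.
train = rng.random((20, 100))
train_labels = np.array(list("ABCDEFGHIJABCDEFGHIJ"))

# Hold out 25% of the rows as a private test set.
perm = rng.permutation(len(train))
n_test = len(train) // 4
test_idx, tr_idx = perm[:n_test], perm[n_test:]

my_test, my_test_labels = train[test_idx], train_labels[test_idx]
my_train, my_train_labels = train[tr_idx], train_labels[tr_idx]
print(my_train.shape, my_test.shape)  # (15, 100) (5, 100)
```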

Use MATLAB for this assignment. The files are available here: kui_classification_students.zip.

Implement the following algorithms and use them to solve the problem described above.

**Nearest neighbor classifier** Implement the classifier using the nearest neighbor principle. Test its properties on the training data.
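As a sketch of the principle (in NumPy for illustration only; your submission must be your own MATLAB implementation), 1-NN simply returns the label of the closest training vector:

```python
import numpy as np

def nn_classify(train, train_labels, test):
    """Label each test row with the label of its nearest training row
    (Euclidean distance)."""
    labels = []
    for x in test:
        dists = np.linalg.norm(train - x, axis=1)
        labels.append(train_labels[np.argmin(dists)])
    return np.array(labels)

# Tiny example: two well-separated classes.
train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
train_labels = np.array(['A', 'A', 'B', 'B'])
test = np.array([[0.2, 0.1], [4.8, 5.2]])
print(nn_classify(train, train_labels, test))  # ['A' 'B']
```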

**Bayes classification** Implement the naive Bayes classifier. Assume the pixel intensities are independent; within each class it therefore holds:

$$P(\vec{x}|s)=P(x_1|s) \cdot P(x_2|s) \cdots P(x_n|s)$$

where $x_i$ is the intensity of the $i$th pixel.

**Note** In order to avoid zero probabilities (caused by small training sets), you need to add a small positive value to $P(x_i|s)$ for each $i$. Otherwise even a single zero value of $P(x_i|s)$ makes $P(\vec{x}|s)=0$.
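The following NumPy sketch illustrates the idea for binary pixel intensities, including a small smoothing constant `eps` and the use of log-probabilities to avoid numerical underflow (an implementation detail not mandated by the assignment, but commonly useful). The function names and toy data are illustrative; your submission must be in MATLAB:

```python
import numpy as np

def bayes_learn(train, labels, n_vals=2, eps=1e-3):
    """Estimate per-class, per-pixel distributions P(x_i|s),
    adding eps to every count so no probability is exactly zero."""
    model = {}
    for s in np.unique(labels):
        X = train[labels == s]
        # counts[v, i] = how often pixel i takes value v in class s
        counts = np.stack([(X == v).sum(axis=0) for v in range(n_vals)])
        probs = (counts + eps) / (counts.sum(axis=0) + n_vals * eps)
        model[s] = np.log(probs)   # log-space: products become sums
    return model

def bayes_classify(model, test):
    out = []
    for x in test:
        idx = np.arange(len(x))
        scores = {s: lp[x, idx].sum() for s, lp in model.items()}
        out.append(max(scores, key=scores.get))
    return out

# Toy binary images: class 'I' has the middle pixel on, 'O' the outer ones.
train = np.array([[0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]])
labels = np.array(['I', 'I', 'O', 'O'])
model = bayes_learn(train, labels)
print(bayes_classify(model, np.array([[0, 1, 0], [1, 0, 1]])))  # ['I', 'O']
```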

You have one example solution at your disposal, which uses a perceptron for classification. An example of its use can be found in the `main.m` script.

The script shows how to load the input files, train the perceptron, use the trained perceptron for classification, and obtain the results of the classification (a confusion matrix).

The basic functions are `perceptronLearn()`, `perceptronClassify()` and `confusionMatrix()`.

Figure 1: *Examples of normalized letter images of registration plates.*

Figure 2: *Pixels are represented by a row vector of concatenated columns. First comes the first column, then the second etc. It is obvious that dark columns of the letter J, which are in the extreme right part of the image, are at the end of the data vector.*

The script `main` calls the learning and classification functions of all classifiers. It also loads the input data and calls the function that prints the results of the classifications, so all the necessary function calls are already in place. However, do not forget to comment out the calls to functions you have not implemented yet.

This is the concrete implementation of the Perceptron classifier. See the implementation and the comments for more details.

**Perceptron learning**

`perceptron = perceptronLearn(train, train_labels)` - the inputs of the function are the training data and their labels. The output is a structure with the learned classifier. The function first builds a mapping from the character class labels to numeric class labels (a "conversion table") and converts the char labels to numeric ones. The learning itself then proceeds according to the following algorithm:

1. Set $\vec{w}_y=\vec{0}$ and $b_y=0$ for all $y \in Y$ ($Y$ - the set of all possible labels).
2. Pick a random incorrectly classified input. If there is no such input, STOP: the learning has finished by finding parameters for an error-free classification of the input data.
3. Let $(\vec{x}_t, y_t)$ be a misclassified input and $\hat{y}$ the classification of $\vec{x}_t$ by the current classifier. Adapt the parameters of the classifier according to the following formulae:
$$\begin{aligned} \vec{w}_{y_t} &= \vec{w}_{y_t} + \vec{x}_t \\ b_{y_t} &= b_{y_t} + 1 \\ \vec{w}_{\hat{y}} &= \vec{w}_{\hat{y}} - \vec{x}_t \\ b_{\hat{y}} &= b_{\hat{y}} - 1 \end{aligned}$$
4. Continue with step 2.
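The loop above can be sketched as follows (NumPy, for illustration only; for brevity the random pick of a misclassified input in step 2 is replaced by an in-order scan, which does not affect convergence on separable data):

```python
import numpy as np

def perceptron_learn(train, labels, max_epochs=100):
    """Multiclass perceptron: one weight vector w_y and bias b_y per class,
    updated on every misclassified sample as in steps 1-4 above."""
    classes = list(np.unique(labels))
    W = {y: np.zeros(train.shape[1]) for y in classes}
    b = {y: 0.0 for y in classes}
    for _ in range(max_epochs):
        errors = 0
        for x, y_t in zip(train, labels):
            y_hat = max(classes, key=lambda y: x @ W[y] + b[y])
            if y_hat != y_t:           # step 3: adapt the parameters
                W[y_t] += x; b[y_t] += 1
                W[y_hat] -= x; b[y_hat] -= 1
                errors += 1
        if errors == 0:                # step 2: no misclassified input left
            break
    return W, b

# Linearly separable toy data.
train = np.array([[0.0, 1.0], [0.2, 0.9], [1.0, 0.0], [0.9, 0.1]])
labels = np.array(['A', 'A', 'B', 'B'])
W, b = perceptron_learn(train, labels)

# Classification is the argmax of the per-class linear scores.
preds = [max(W, key=lambda y: x @ W[y] + b[y]) for x in train]
print(preds)  # ['A', 'A', 'B', 'B']
```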

**Perceptron classification**

`classLabels = perceptronClassify(perceptron, test)` - the inputs of the function are the structure with the learned classifier and the testing data. The output of the function is an array with the class labels. Classification follows according to

$$\hat{y}=\arg \max\limits_{y \in Y} \vec{x}_t^\top\vec{w}_y+b_y$$

with the result being converted from the number labels to the char labels and then returned as an output.

`confusionMatrix()` - the function prints the result of the classification and the confusion matrix. The inputs of the function are the given labels and the labels obtained by the classification.
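A minimal sketch of such a confusion matrix (rows = true classes, columns = predicted classes; in NumPy for illustration, since the provided `confusionMatrix()` is already implemented for you in MATLAB):

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels):
    """M[i, j] = number of samples of true class i predicted as class j."""
    classes = sorted(set(true_labels) | set(pred_labels))
    idx = {c: i for i, c in enumerate(classes)}
    M = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        M[idx[t], idx[p]] += 1
    return classes, M

classes, M = confusion_matrix(['A', 'A', 'B', 'B'], ['A', 'B', 'B', 'B'])
print(classes)  # ['A', 'B']
print(M)        # [[1 1]
                #  [0 2]]
```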

The functions `nnLearn` and `nnClassify` for the 1-nearest neighbor classifier are ready for implementation.

`nn = nnLearn(train, trainLabels)` - the inputs of the function are the training data and their labels. The output of the function is the structure with the learned classifier.

`classLabels = nnClassify(nn, test)` - the inputs of the function are the structure with the learned classifier and the testing data. The output of the function is an array with the class labels.

The functions `bayesLearn` and `bayesClassify` for the naive Bayes classifier are ready for implementation.

`bayes = bayesLearn(train, train_labels)` - the inputs of the function are the training data and their labels. The output of the function is the structure with the learned classifier.

`classLabels = bayesClassify(bayes, test)` - the inputs of the function are the structure with the learned classifier and the testing data. The output of the function is an array with the class labels.

Fig. 3: *Automatic text localization from pictures. More information available at http://cmp.felk.cvut.cz/~zimmerk/lpd/index.html.*

Fig. 4: *Industry application for license plate recognition. Videos are available at http://cmp.felk.cvut.cz/cmp/courses/X33KUI/Videos/RP_recognition.*

Download the ZIP file kui_classification_students.zip. You are given a simple binary classifier for the letter “I” (file `simpleIClassif.m`). The classification depends on the value of a parameter that is an input to the classifier. Your task is to determine the optimal value of this parameter. Use the files `dataI.mat` (images of letters) and `dataI_labels.mat` (labels indicating whether the image contains an I or not). The format of the files is similar to those you worked with in the previous section of this assignment.

Write a short **PDF report** (at most one A4 page) describing the method you used to choose the optimal parameter. Try to use proper terms such as sensitivity, false positive rate, or the ROC curve. Upload the report to Brute.
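One way to organize such an experiment is a parameter sweep that records sensitivity (true positive rate) and the false positive rate for every candidate value, as in this NumPy sketch. The scores and labels below are made-up stand-ins for `dataI.mat`, the swept threshold plays the role of the classifier's parameter, and accuracy is only one possible selection criterion; your report should justify the criterion you actually use:

```python
import numpy as np

# Hypothetical classifier scores and true labels (1 = the letter is an I).
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])
truth  = np.array([0,   0,   1,    1,   1,   0  ])

best = None
for thr in np.unique(scores):
    pred = scores >= thr
    tp = ((pred == 1) & (truth == 1)).sum()
    fp = ((pred == 1) & (truth == 0)).sum()
    fn = ((pred == 0) & (truth == 1)).sum()
    tn = ((pred == 0) & (truth == 0)).sum()
    tpr = tp / (tp + fn)          # sensitivity
    fpr = fp / (fp + tn)          # false positive rate
    acc = (tp + tn) / len(truth)  # selection criterion used here
    if best is None or acc > best[0]:
        best = (acc, thr, tpr, fpr)

print(best)  # best accuracy, its threshold, and the (tpr, fpr) ROC point
```

Plotting the collected (fpr, tpr) pairs for all thresholds gives the ROC curve to discuss in the report.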

Christopher M. Bishop. *Pattern Recognition and Machine Learning.* Springer Science+Business Media, New York, NY, 2006.

T.M. Cover and P.E. Hart. Nearest neighbor pattern classification. *IEEE Transactions on Information Theory,* 13(1):21–27, January 1967.

Richard O. Duda, Peter E. Hart, and David G. Stork. *Pattern classification.* Wiley Interscience Publication. John Wiley, New York, 2nd edition, 2001.

Vojtěch Franc and Václav Hlaváč. *Statistical pattern recognition toolbox for Matlab.* Research Report CTU–CMP–2004–08, Center for Machine Perception, K13133 FEE, Czech Technical University, Prague, Czech Republic, June 2004. http://cmp.felk.cvut.cz/cmp/software/stprtool/index.html.

Michail I. Schlesinger and Václav Hlaváč. *Ten Lectures on Statistical and Structural Pattern Recognition.* Kluwer Academic Publishers, Dordrecht, The Netherlands, 2002.

Both tasks are evaluated separately:

- The nearest neighbor classifier (1-NN) is evaluated according to the table below. [0–3 points]
- The naive Bayes classifier also follows the table below. [0–6 points]
- Code quality: [0–3 points]

| 1-NN: correctly classified | points |
|---|---|
| >95% | 3 |
| >80% | 2 |
| >60% | 1 |
| ≤60% | 0 |

| Naive Bayes: correctly classified | points |
|---|---|
| >82% | 6 |
| >75% | 5 |
| >70% | 4 |
| >65% | 3 |
| >60% | 2 |
| >55% | 1 |
| ≤55% | 0 |

**Clean code** - you can follow the rules for clean code even in MATLAB. Have a look at the Datatool guide.

- Submit a PDF report in which you determine the optimal parameter. Points are awarded based on the chosen parameter and the method of choosing it. [0–3 points]

courses/be5b33kui/labs/machine_learning/start.txt · Last modified: 2018/06/04 17:57 by svarnpet