====== Principal Component Analysis (PCA) ======

{{ :courses:ae4b33rpz:labs:12_pca:data_set_sample_3d_pca_.jpg?nolink&700 |}}
Figure 1. Dimensionality reduction by PCA((Images taken from [[http://gael-varoquaux.info/scientific_computing/ica_pca/|Identifying relevant variables: PCA and ICA]])).

So far in this course we have been using measurements of various dimensionality; sometimes their dimensionality was even equal to the number of pixels in the image. But are all those dimensions actually necessary? Isn't there a more compact representation of the data which we could use instead without sacrificing its expressiveness? Have a look at Figure 1 for an example. Although the data are 3-dimensional, they actually lie in a 2D plane. If we were able to compute an orthogonal basis of this plane (red arrows in Figure 1, right), we could represent the data using just 2D measurements.

Principal component analysis (PCA) is a method for such feature space dimensionality reduction (also called feature extraction). In today's labs, the PCA method will be used to extract a new set of "more compact" measurements from a set of images of faces. We are given a set of face images of size 300x250 pixels, but in the following we will represent each of them as a column vector of intensities of length $n$ = 75000 (= 300*250) - the image columns stacked one after another. Thus, each image can be seen as a point in an $n$-dimensional space. The goal of PCA is then to represent the images in an $m$-dimensional space, where $m \ll n$. We will then use this compact representation to classify the faces of several individuals using the nearest neighbour classifier.

===== Theory =====

The input to PCA is a set $X=\{\mathbf{x}_i|\mathbf{x}_i \in \mathbb{R}^n, i=1,\ldots,k\}$ of $k$ training vectors, in our case $k$ vectorised images of faces. To be able to apply PCA, the data should be centred at the origin, i.e. have zero mean, $\bar{\mathbf{x}} = \frac{1}{k}\sum_{i=1}^{k} \mathbf{x}_i = \mathbf{0}$. This is usually not the case for real data, so the first step is always to centre the data by subtracting their mean: $\tilde{\mathbf{x}}_i = \mathbf{x}_i - \bar{\mathbf{x}}$. To simplify the notation, we will keep writing $\mathbf{x}_i$ instead of $\tilde{\mathbf{x}}_i$ in the following and assume that the data have already been centred.
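For illustration, a minimal MATLAB sketch of this vectorisation and centring step is shown below. It assumes that ''trn_data.images'' stores the training faces as a 300x250x$k$ array (analogously to ''tst_data.images'' used in the tasks); the exact variable and field names are given by the template.

<code matlab>
% Vectorise the k training images (column-stacking each 300x250 image)
% and centre them at the origin. The 300x250xk layout of trn_data.images
% is an assumption - check the template data.
X = double(reshape(trn_data.images, [], size(trn_data.images, 3)));  % n x k matrix
x_mean = mean(X, 2);               % the mean face, an n x 1 column vector
X = bsxfun(@minus, X, x_mean);     % centred data: the mean is subtracted from every column
</code>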
**However, be aware that every time you work with the data this centring is implicitly present and you have to take it into account in the code!**

PCA searches for a set $Y = \{\mathbf{y}_j|\mathbf{y}_j \in \mathbb{R}^n, j=1,\ldots,m \}$ of $m$ vectors ($m \ll n$) forming an orthonormal basis of a low-dimensional subspace such that the projection of the centred data onto this subspace preserves as much of their variance as possible. These basis vectors are the eigenvectors of the data covariance matrix corresponding to its $m$ largest eigenvalues. Each centred image $\mathbf{x}_i$ is then represented by its vector of weights $w_{ij} = \mathbf{y}_j^\top \mathbf{x}_i$, $j=1,\ldots,m$, and the original image is approximately reconstructed as $\bar{\mathbf{x}} + \sum_{j=1}^{m} w_{ij}\,\mathbf{y}_j$. Since in our case $n \gg k$, the $n \times n$ covariance matrix (proportional to $\mathbf{X}\mathbf{X}^\top$, where the centred images form the columns of $\mathbf{X}$) is far too large to decompose directly. The standard trick (see [[#references|[3]]]) is to compute the eigenvectors $\mathbf{v}_j$ of the much smaller $k \times k$ matrix $\mathbf{X}^\top\mathbf{X}$ and to map them back as $\mathbf{y}_j = \mathbf{X}\mathbf{v}_j$ (normalised to unit length); the non-zero eigenvalues of both matrices coincide.

===== The assignment =====

To fulfil this assignment, you need to submit these files (all packed in a single ''.zip'' file) into the [[https://cw.felk.cvut.cz/sou/ | upload system]]:
  * **''answers.txt''** - answers to the Assignment Questions
  * **''assignment_12.m''** - a script for data initialisation, calling the implemented functions and plotting their results (for your convenience, will not be checked)
  * **''pca_basis.m''** - a function that computes the principal component vectors from the training data
  * **''compact_representation.m''** - a function that converts the original images to vectors of weights
  * **''reconstruct_images.m''** - a function that performs a linear combination of the basis vectors in order to reconstruct the images
  * **''classify_nn_pca.m''** - a function that classifies the reduced representation of the images by the nearest neighbour search
  * **''eigenfaces.png''**, **''cumulative_sum_graph.png''**, **''partial_reconstructions.png''** and **''error_vs_reduction_graph.png''** - images specified in the tasks

Start by downloading the **[[https://cw.felk.cvut.cz/courses/RPZ/assignment_templates/assignment_pca_template.zip|template]] of the assignment**.
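For orientation, the functions above are typically wired together in ''assignment_12.m'' roughly as in the following hedged sketch. Field names such as ''trn_data.images'' and ''trn_data.labels'' are assumptions (only ''tst_data.images'' and ''labels_tst_gt'' are named explicitly in the tasks below); check the template for the exact names.

<code matlab>
% Hedged sketch of the overall pipeline in assignment_12.m (this script is not
% checked by the upload system). trn_data.images / trn_data.labels are assumed names.
X_trn = double(reshape(trn_data.images, [], size(trn_data.images, 3)));
X_trn_mean = mean(X_trn, 2);                    % mean face
X_trn = bsxfun(@minus, X_trn, X_trn_mean);      % centred training data (n x k)

[Y, lambdas] = pca_basis(X_trn);                % principal components and eigenvalues

m = 10;                                         % number of components, to be tuned
W_trn = compact_representation(X_trn, Y, m);    % weights of the first m components
Z = reconstruct_images(W_trn, Y, X_trn_mean);   % approximated vectorised images

labels_trn = trn_data.labels;                   % assumed field name
labels_tst = classify_nn_pca(tst_data.images, Y, X_trn_mean, W_trn, labels_trn, m);
test_error = mean(labels_tst(:) ~= labels_tst_gt(:));   % error w.r.t. ground-truth labels
</code>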
  - Implement the function **''[Y, lambdas] = pca_basis(X)''** using the images from ''trn_data'' as input, i.e. find the eigenvectors of the training samples (recall the trick introduced above; a minimal sketch is given after the task list). The input **''X''** is the matrix of vectorised images (with the mean already subtracted), the output **''Y''** is a matrix containing the set of normalised eigenvectors as column vectors, and **''lambdas''** is the array of the corresponding normalised (summing to one) eigenvalues. Both **''Y''** and **''lambdas''** must be sorted from the largest to the smallest eigenvalue. \\ \\ **Tip 1:** In MATLAB, eigenvectors and eigenvalues are computed by the function ''eig''.\\ \\ **Tip 2:** Normalise the computed eigenvectors to unit length, so that projecting the data points onto the principal directions yields meaningful coordinates.\\ \\ **Tip 3:** The ''eig()'' function returns different eigenvectors in different versions of MATLAB. They differ only in sign, so all such solutions are valid. To unify the results, the template code enforces the first non-zero component of each eigenface vector to be positive. \\ \\
  - Display the eigenvectors (eigenfaces) as images and generate the image file **''eigenfaces.png''**. Pick the first 10 eigenfaces, i.e. those corresponding to the largest eigenvalues. \\ \\ **Tip:** Reshape the vectors $\mathbf{y}_j$ (columns of **''Y''**) to matrices of the same size as the input images.\\ \\ {{ courses:ae4b33rpz:labs:12_pca:eigenfaces.png?800 }}\\
  - Plot the cumulative sum of the normalised eigenvalues (**''lambdas''**), using the output of the function **''pca_basis''**. Empirically, one typically selects the number of eigenvectors used to approximate the data so that the cumulative sum of their eigenvalues is about 65%-90% of the sum of all eigenvalues. This usually works well; however, it always depends on the problem. Save the plot as an image named **''cumulative_sum_graph.png''**.\\ \\ {{ :courses:ae4b33rpz:labs:11_pca:cumulative_sum_graph.png?500 |}} \\
  - Implement the function **''W = compact_representation(X, Y, m)''** that computes the weights of the principal components. The inputs are **''X''**, the matrix with the vectorised images of faces, **''Y''**, the matrix with the orthonormal set of principal components as column vectors, and **''m''**, the number of (most dominant) components to be used. The output **''W''** is a matrix with the compact representation coefficients.\\ \\
  - Implement the function **''Z = reconstruct_images(W, Y, x_mean)''**, where the inputs **''W''** and **''Y''** are the same as above and **''x_mean''** is the vectorised mean face image. The output **''Z''** is a matrix containing the approximated vectorised images as columns.\\ \\
  - Using the ''compact_representation'' and ''reconstruct_images'' functions, generate partial reconstructions for $m=1,2,\ldots$ and save them as a single image **''partial_reconstructions.png''**. \\ {{ :courses:ae4b33rpz:labs:12_pca:partial_reconstructions.png?nolink&700 |}}
  - Build a nearest neighbour classifier for the ''tst_data'' in the reduced-representation space (use the function **''compact_representation''** first) inside the function **''[labels_tst] = classify_nn_pca(images, Y_trn, X_trn_mean, W_trn, labels_trn, m)''**. It takes the matrices obtained from the functions above plus the **''images''** (use ''tst_data.images'' directly here) and the number **''m''** of components used for the representation. It outputs the classification **''labels_tst''** (integers). \\ \\ **Tip:** Never use the test data mean - centre the test images with the training mean **''X_trn_mean''**!\\ \\
  - Experiment with the value of **''m''** (from 1 to 57), plot the classification error versus the number of eigenvectors, and save the graph as **''error_vs_reduction_graph.png''**. Use the labels in ''labels_tst_gt'' to evaluate the error. \\ {{ :courses:ae4b33rpz:labs:12_pca:error_vs_reduction_graph.png?500 |}}
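As referenced in task 1, below is a minimal, hedged sketch of the small-matrix trick for computing the PCA basis. It is only an illustration under the conventions above (centred images as the columns of ''X''), not the reference implementation.

<code matlab>
function [Y, lambdas] = pca_basis(X)
% Hedged sketch (not the reference solution). X is n x k and already centred.
% For n >> k, do not build the n x n covariance matrix X*X'. Its non-zero
% eigenvalues coincide with those of the small k x k matrix X'*X, and an
% eigenvector v of X'*X maps to the eigenvector X*v of X*X'.
[V, D] = eig(X' * X);
d = max(diag(D), 0);                % clip tiny negative values caused by round-off
[d, order] = sort(d, 'descend');    % largest eigenvalue first
V = V(:, order);

Y = X * V;                                        % back-project to R^n
Y = bsxfun(@rdivide, Y, sqrt(sum(Y .^ 2, 1)));    % normalise each column to unit length
% (the column belonging to the ~zero eigenvalue of centred data carries no information)

lambdas = d / sum(d);               % eigenvalues normalised so that they sum to one

% The sign convention (first non-zero component of each eigenface positive)
% is enforced by the template code and is therefore omitted here.
end
</code>

With such a basis, and under the same conventions, ''compact_representation'' essentially reduces to the projection ''W = Y(:, 1:m)' * X'' and ''reconstruct_images'' to ''Z = bsxfun(@plus, Y(:, 1:m) * W, x_mean)''.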
===== Assignment Questions =====

  - What would be a good indicator in a data set for applying PCA?
    * a) Symmetric covariance matrix
    * b) Correlation between the measurements
    * c) The data set was generated with respect to a given probability distribution
    * d) The measurements are of a very different nature
  - The dimensionality of the principal components is:
    * a) smaller than the original data dimensionality
    * b) the same as the original data dimensionality
    * c) larger than the original data dimensionality
  - An image as a 2D signal contains a range of spatial frequencies (edges, homogeneous intensity regions, gradients, etc.). Which frequency range is represented by the principal components with the highest eigenvalues?
    * a) High frequencies
    * b) Medium frequencies
    * c) Low frequencies

===== Bonus task =====

What to do next? With the knowledge gained this semester, you can
  * ... experiment with another (better?) classifier in the reduced-representation space of faces,
  * ... try to align the faces first, i.e. before computing the PCA (use another {{courses:ae4b33rpz:labs:13_pca:orl.tgz|data}} file, where the faces are not so tightly clipped, so you have some space to crop the images; for inspiration, see [[#references|[2]]]),
  * ... experiment with PCA on the letters used in the previous labs (e.g. with the data from the [[courses:ae4b33rpz:labs:10_adaboost:start|AdaBoost labs]]),
  * ... implement a similar dimensionality reduction technique called Independent Component Analysis (ICA),
  * ...

Any of the above problems is considered a bonus task. The formulations are intentionally vague; the precise task formulation and the implementation are part of the task.

===== References =====

[1] {{courses:ae4b33rpz:labs:13_pca:kimo.pdf|Eigenfaces for recognition talk}}

[2] [[http://www.stat.ucla.edu/%7Esczhu/Courses/UCLA/Stat_231/Face_modeling.html|Face alignment for increased PCA precision]]

[3] Turk, M. and Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1): 71-86.