Warning
This page is located in the archive.

2015 XEP33SAM -- Understanding State of the Art Methods, Algorithms, and Implementations

Meeting time: Tuesdays 16:15 · Location: G102A

First meeting 17/3 2015

Paper

Marius Muja and David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”, in International Conference on Computer Vision Theory and Applications (VISAPP'09), 2009 PDF software page

Task
  • Implement approximate k-means algorithm, use approximate NN instead of exact NN
  • Construct k-NN graph: a directed graph, vertex is a feature vector, from each vertex there are k edges to k nearest data vectors
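The exact k-NN graph from the second task can serve as a ground-truth baseline for the approximate version. A minimal brute-force sketch in Python (toy 2-D points for illustration; the real data are 128-D descriptors):

```python
from math import dist  # Euclidean distance, Python 3.8+

def knn_graph(points, k):
    """Brute-force k-NN graph: maps each vertex index to the indices of
    its k nearest other points (directed edges, no self-loops)."""
    graph = {}
    for i, p in enumerate(points):
        neighbours = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: dist(p, points[j]),
        )
        graph[i] = neighbours[:k]
    return graph

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(knn_graph(points, 2))  # {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [1, 2]}
```

This is O(n²) in the number of vertices, which is exactly what the approximate NN methods of the paper are meant to avoid.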

Oxford 5k dataset: image thumbnails, descriptors (128-D vectors, single precision, stored one after another), and corresponding image names (one name per line; the i-th name corresponds to the i-th descriptor).

The following lines will read the descriptors and the image names in MATLAB:

fid = fopen('imagedesc.dat', 'r');
X = fread(fid, [128,inf], 'single=>single');
fclose(fid);
Names = textread('imagenames.txt', '%s');
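Outside MATLAB, the same flat binary layout (fixed-dimension descriptors stored one after another) can be read with Python's standard `array` module. `read_descriptors` below is a hypothetical helper for illustration, not part of the distributed package:

```python
from array import array

def read_descriptors(path, dim=128, typecode='f'):
    """Read dim-D descriptors stored contiguously in native byte order.
    typecode 'f' = 32-bit float (Oxford 5k descriptors),
    typecode 'B' = unsigned byte (the SIFT dataset below).
    Returns a list of dim-tuples, one per descriptor."""
    a = array(typecode)
    with open(path, 'rb') as fh:
        a.frombytes(fh.read())
    assert len(a) % dim == 0, "file size is not a multiple of the dimension"
    return [tuple(a[i:i + dim]) for i in range(0, len(a), dim)]
```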

SIFT dataset: 2M SIFT descriptors are available here. The descriptors are 128-D, unsigned byte precision; the following MATLAB lines will read them:

fid = fopen('SIFT.dat', 'r');
X = fread(fid, [128,inf], 'uint8=>uint8');
fclose(fid);

Use the SIFT dataset for the approximate k-means. Use 32k cluster centers. Compare three different assignments to the nearest cluster (kd-forest, k-means tree, exact assignment). In all three cases, start from an identical initialization. Compare the final results (after up to, say, 30 iterations) in terms of the sum of squared distances Σ ‖x − f(x)‖², where f(x) is the cluster center assigned to x.
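A toy sketch of the evaluation loop, in Python for illustration. `assign` is the exact nearest-centre step that the kd-forest and k-means-tree variants would replace (e.g. via FLANN), and `sse` is the criterion Σ ‖x − f(x)‖²; the data and centres here are made up:

```python
def assign(X, C):
    """Exact NN assignment: index of the closest centre for each point.
    The approximate variants replace this step with a kd-forest or
    k-means-tree lookup."""
    labels = []
    for x in X:
        d = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in C]
        labels.append(d.index(min(d)))
    return labels

def sse(X, C, labels):
    """Sum of squared distances to the assigned centres."""
    return sum(
        sum((xi - ci) ** 2 for xi, ci in zip(x, C[l]))
        for x, l in zip(X, labels)
    )

def kmeans(X, C, iters=30):
    """Lloyd iterations starting from the given centres C."""
    for _ in range(iters):
        labels = assign(X, C)
        for j in range(len(C)):  # update: mean of the assigned points
            members = [x for x, l in zip(X, labels) if l == j]
            if members:
                C[j] = [sum(col) / len(members) for col in zip(*members)]
    return C, assign(X, C)

X = [[0.0], [1.0], [9.0], [10.0]]          # toy 1-D "descriptors"
C, labels = kmeans(X, [[0.5], [8.0]])      # identical init for all variants
print(sse(X, C, labels))                   # 1.0 on this toy example
```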

Looking forward to results on your own data too.

Second meeting 14/4 2015

Paper

Herve Jegou, Matthijs Douze, Cordelia Schmid: “Product quantization for nearest neighbor search”, PAMI 2011. PDF software page

(Do not get confused by the text on the page. The mex version is in the package.)

Task

Oxford 105k dataset: image thumbnails were distributed during the previous meeting; descriptors (128-D vectors, single precision, stored one after another) and corresponding image names (one name per line; the i-th name corresponds to the i-th descriptor). Oxford 5k image names are given without a directory; the remaining filenames contain a directory corresponding to the distributed directory structure.

  • For each image find its k-NN and visually verify the quality - e.g. a script that shows the k neighbours of a selected image.
  • Compare the quality and running time of product quantization and FLANN. Select 1000 images at random, find their exact k-NN, and for each of the algorithms compute an estimate of its precision.
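The precision estimate in the second task can be computed as the average overlap between the approximate and exact neighbour lists over the sampled queries. A sketch in Python (the neighbour lists below are hypothetical):

```python
def precision_at_k(approx, exact):
    """Average fraction of the true k nearest neighbours that the
    approximate method recovered, over all sampled queries."""
    total = 0.0
    for a, e in zip(approx, exact):
        total += len(set(a) & set(e)) / len(e)
    return total / len(approx)

# hypothetical neighbour lists for 3 sampled queries, k = 4
exact  = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
approx = [[1, 2, 3, 9], [5, 6, 7, 8], [9, 10, 2, 4]]
print(precision_at_k(approx, exact))  # (3/4 + 4/4 + 2/4) / 3 = 0.75
```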

Third meeting 19/5/2015

Paper

Wei Dong, Moses Charikar, Kai Li : “Efficient K-Nearest Neighbor Graph Construction for Generic Similarity Measures.” In Proceedings of the 20th international conference on World Wide Web (WWW). New York, NY. 2011. PDF software page

Use the new KGraph implementation.

Task

Create a k-NN graph on the Oxford 105k dataset and compare the precision and the run-time with the previous methods.
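A much-simplified sketch of the paper's NN-descent idea in Python: start from random neighbour lists and repeatedly try neighbours-of-neighbours as candidates. The real KGraph implementation additionally exploits reverse neighbours, candidate sampling, and an early-termination criterion:

```python
import random

def nn_descent(points, k, iters=10, seed=0):
    """Simplified NN-descent for a k-NN graph under squared Euclidean
    distance. Sketch only; not the full algorithm of Dong et al."""
    rng = random.Random(seed)
    n = len(points)

    def d(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j]))

    # random initial neighbour lists, no self-loops
    graph = {i: rng.sample([j for j in range(n) if j != i], k)
             for i in range(n)}
    for _ in range(iters):
        updated = False
        for i in range(n):
            # candidates: current neighbours and their neighbours
            cand = set(graph[i])
            for j in list(graph[i]):
                cand.update(graph[j])
            cand.discard(i)
            best = sorted(cand, key=lambda j: d(i, j))[:k]
            if best != graph[i]:
                graph[i] = best
                updated = True
        if not updated:  # converged: no list changed in this pass
            break
    return graph

points = [(float(i),) for i in range(12)]  # toy 1-D data
graph = nn_descent(points, k=3)
```

Each pass costs roughly O(n·k²) distance evaluations, which is where the speed-up over the brute-force O(n²) construction comes from.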

Fourth meeting 30/6/2015

Paper

A. Vedaldi and A. Zisserman : Efficient Additive Kernels via Explicit Feature Maps, Pattern Analysis and Machine Intelligence, 2011, PDF VLFeat software page

Task

Texture classification: compare the accuracy and speed of one-against-all SVM classification of textures. Use your favourite SVM implementation; the one in VLFeat is suitable for linear SVM. Compare a linear SVM on the raw descriptors, an SVM with the chi-square kernel, and a linear SVM over the chi-square feature map approximation.
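For intuition, a small Python sketch of the explicit feature map for the chi-square kernel k(x, y) = 2xy/(x + y) on positive scalars: following the paper, the kernel spectrum κ(ω) = sech(πω) is sampled at frequencies jL, so the linear inner product of the maps approximates the kernel. The values n = 8 and L = 0.4 here are illustrative choices, not the VLFeat defaults:

```python
from math import cos, cosh, log, pi, sin, sqrt

def chi2_feature_map(x, n=8, L=0.4):
    """Approximate explicit feature map for the chi-square kernel
    k(x, y) = 2xy / (x + y), x > 0, following Vedaldi & Zisserman:
    sample kappa(w) = sech(pi * w) at frequencies j * L.
    Returns a (2n + 1)-dimensional vector."""
    kappa = lambda w: 1.0 / cosh(pi * w)
    psi = [sqrt(x * L * kappa(0.0))]
    for j in range(1, n + 1):
        a = sqrt(2.0 * x * L * kappa(j * L))
        psi.append(a * cos(j * L * log(x)))
        psi.append(a * sin(j * L * log(x)))
    return psi

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = 0.3, 0.7
approx = dot(chi2_feature_map(x), chi2_feature_map(y))
exact = 2 * x * y / (x + y)
print(approx, exact)  # the linear inner product approximates the kernel
```

A linear SVM trained on such mapped descriptors approximates a chi-square kernel SVM at linear-SVM cost, which is the point of the comparison in the task.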

Data

Please download the data from here. The mat files contain the data and labels (X, y for training; X_test, y_test for testing) and the image names, respectively. Images can be downloaded here. Don't forget to try learning the full-kernel SVM and compare the results and timings.

courses/xep33sam/2015.txt · Last modified: 2018/03/01 14:28 by chumondr