Search
Meeting time: ?? Location: G102A
First and second paper: Partitioning vs Embedding
Marius Muja and David G. Lowe: “Scalable Nearest Neighbor Algorithms for High Dimensional Data”. Pattern Analysis and Machine Intelligence (PAMI), Vol. 36, 2014. PDF · GitHub page · PyFLANN Python bindings
Oxford 5k dataset: image thumbnails, descriptors (128-D vectors, single precision, stored one after another), and corresponding image names (one name per line; the i-th name corresponds to the i-th descriptor).
The following lines will read the descriptors and the image names in MATLAB:
fid = fopen('imagedesc.dat', 'r');
X = fread(fid, [128,inf], 'single=>single');
fclose(fid);
Names = textread('imagenames.txt', '%s');
SIFT dataset: 2M SIFT descriptors are available here. The descriptors are 128-D, unsigned byte precision; the following MATLAB lines will read them:
fid = fopen('SIFT.dat', 'r');
X = fread(fid, [128,inf], 'uint8=>uint8');
fclose(fid);
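For reference, the two MATLAB snippets above can be mirrored in Python/NumPy (a sketch, assuming NumPy is available; the file names follow the snippets). MATLAB's fread fills a [128, inf] matrix column by column, which in row-major NumPy corresponds to one 128-D descriptor per row:

```python
import numpy as np

def load_descriptors(path, dtype, dim=128):
    """Read descriptors stored one after another as raw binary.

    Matches the MATLAB fread calls above: np.float32 for the
    Oxford descriptors, np.uint8 for the SIFT descriptors.
    """
    data = np.fromfile(path, dtype=dtype)
    # one descriptor per row (same memory layout as MATLAB's
    # column-by-column [128, inf] matrix)
    return data.reshape(-1, dim)

def load_names(path):
    """One image name per line; i-th name for the i-th descriptor."""
    with open(path) as f:
        return [line.strip() for line in f]

# Usage (assuming the data files from the assignment):
# X = load_descriptors('imagedesc.dat', np.float32)   # Oxford 5k
# S = load_descriptors('SIFT.dat', np.uint8)          # 2M SIFT
# names = load_names('imagenames.txt')
```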
Use the SIFT dataset for the approximate k-means with 32k (2^15) cluster centers. Compare three different methods for assigning points to the nearest cluster center: kd-forest, k-means tree, and exact assignment. In all three cases, start from an identical initialization. Compare the final results (30 iterations of k-means) in terms of distortion, i.e. the sum of squared distances Σ ||x - f(x)||^2, where f(x) is the cluster center assigned to point x. That is, run the k-means once for each method / parameter setting, each for 30 iterations.
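The exact-assignment baseline and the distortion measure can be sketched in Python/NumPy as follows (a minimal Lloyd's k-means; function and variable names are illustrative, not part of the assignment). For the approximate variants, only the argmin step would be replaced by a kd-forest or k-means-tree lookup, e.g. via PyFLANN:

```python
import numpy as np

def kmeans_exact(X, k, iters=30, seed=0):
    """Lloyd's k-means with exact nearest-center assignment.

    Returns the centers and the distortion sum ||x - f(x)||^2
    recorded at each iteration, so runs with different assignment
    methods but the same initialization can be compared.
    """
    rng = np.random.default_rng(seed)
    X = X.astype(np.float64)
    # identical initialization for every method: k random points
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    distortions = []
    for _ in range(iters):
        # exact assignment: squared distance to every center
        # (for the 2M x 32k problem this would be done in batches)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        distortions.append(d2[np.arange(len(X)), assign].sum())
        # update step: each center moves to the mean of its points
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, distortions
```

Note that for the full 2M-point, 32k-center problem the pairwise-distance matrix above does not fit in memory; the exact assignment would be computed in batches, and the approximate runs would swap in a FLANN index for the assignment step.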
You are expected to deliver a short PDF report at least 2 days before the meeting. The report should include, but is not limited to: a graph of the evolution of the distortion over iterations for all methods, timings, the parameters used, the reasoning behind the choice of parameters, observations on the differences between the methods, timings for the k-NN graph construction (k = 16), and your observations and conclusions.
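For the k-NN graph timings, a brute-force NumPy construction can serve as the exact baseline (a sketch under the assignment's k = 16; the approximate graph would again come from a FLANN kd-forest or k-means-tree index):

```python
import numpy as np

def knn_graph(X, k=16, batch=1024):
    """Exact k-NN graph: for each point, the indices of its k
    nearest neighbors (excluding the point itself), computed in
    batches to keep the distance matrix small."""
    X = X.astype(np.float64)
    n = len(X)
    sq = (X ** 2).sum(axis=1)
    nn = np.empty((n, k), dtype=np.int64)
    for s in range(0, n, batch):
        e = min(s + batch, n)
        # squared distances of the batch to all points
        d2 = sq[s:e, None] - 2.0 * X[s:e] @ X.T + sq[None, :]
        d2[np.arange(e - s), np.arange(s, e)] = np.inf  # drop self
        # k smallest per row, then sort those k by distance
        idx = np.argpartition(d2, k, axis=1)[:, :k]
        row = np.arange(e - s)[:, None]
        order = np.argsort(d2[row, idx], axis=1)
        nn[s:e] = idx[row, order]
    return nn
```

Timing this against the approximate construction on the same data gives the numbers requested in the report.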
Looking forward to results on your own data too.