Search
A widely used image retrieval approach is based on the so-called Bag-of-Words (BoW) model. Each image is represented by a set of visual words from a visual codebook. In the previous labs, images were represented by a set of local descriptors extracted from image patches. We now create visual words in this space of local descriptors. The visual codebook is obtained by quantizing the local descriptor space; in particular, K-means is run on a large number of local descriptors extracted from many images. Each centroid is a visual word, which can be seen as the representative of a frequently occurring image patch. Given the visual codebook, each local descriptor of an image is represented by the id of its nearest centroid. The BoW approach represents an image by the set of visual words that appear in it. There is an analogy with natural language, where text documents are represented by the set of words appearing in them.
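As a toy illustration of the quantization step, the sketch below assigns descriptors to their nearest centroids; the function name assign_visual_words and the array shapes are our own assumptions, not part of the provided code:

```python
import numpy as np

def assign_visual_words(descriptors, centroids):
    # descriptors: (n, d) local descriptors of one image
    # centroids: (k, d) codebook centroids obtained by K-means
    # squared Euclidean distance between every descriptor/centroid pair
    # (fine as a toy; too memory-hungry for a 50,000-word codebook)
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    # the visual word of a descriptor is the id of its nearest centroid
    return dists.argmin(axis=1)
```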
For the purposes of this lab, we have already created the visual codebook and obtained the visual words for the images. We provide you with the image files, the set of visual words in each image, and the geometry (shape) of the local feature each visual word was obtained from.
In this lab we will implement a system to perform image retrieval.
In real images, visual words appear with different frequencies. As with words in the text domain, some words are more frequent than others. To handle this, we use the TF-IDF (term frequency–inverse document frequency) weighting scheme. The BoW image vector is a $k$-dimensional vector per image, where $k$ is the number of visual words. The value in the $i$-th dimension is equal to $\mathrm{tf}_{i} \cdot \mathrm{idf}_{i}$, where $\mathrm{tf}_{i}$ is the number of occurrences of visual word $i$ in the image, and $\mathrm{idf}_{i}$ is the importance of visual word $i$ according to the inverse document frequency, given by
$$ \mathrm{idf_{i}} = \log \frac{N}{n_i},$$
where $N$ is the total number of images and $n_i$ is the number of images containing visual word $i$. Visual words that are not present in any image are assigned zero weight, $\mathrm{idf}_i = 0$.
Implement the function get_idf(image_visual_words, number_of_visual_words) that computes the IDF weight of every visual word.
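A minimal sketch of such a function, assuming image_visual_words is a list with one array of visual-word ids per image (the exact input format in the provided code may differ):

```python
import numpy as np

def get_idf(image_visual_words, number_of_visual_words):
    N = len(image_visual_words)            # total number of images
    n = np.zeros(number_of_visual_words)
    for words in image_visual_words:
        n[np.unique(words)] += 1           # count each word once per image
    idf = np.zeros(number_of_visual_words)
    present = n > 0
    idf[present] = np.log(N / n[present])  # words in no image keep idf = 0
    return idf
```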
Each image is represented by its BoW TF-IDF vector. The similarity between two images is computed as the cosine similarity of their BoW vectors:
$$\displaystyle\mathrm{score}(\mathbf{x},\mathbf{y}) = \cos{\varphi} = \frac{\mathbf{x}\cdot \mathbf{y}}{\|\mathbf{x}\|\|\mathbf{y}\|} = \frac{1}{\|\mathbf{x}\|\|\mathbf{y}\|}\sum_{i=1}^k x_i y_i = \sum_{i=1}^k \hat{x}_i \hat{y}_i, $$
where we take advantage of the fact that part of the computation can be done ahead of time: we L2 normalize the vectors (so that their L2 norm equals 1), obtaining $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$. After that, the similarity is a simple dot product of the two normalized vectors.
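As a quick numerical check of this identity (the vectors here are made up):

```python
import numpy as np

x = np.array([0.0, 2.0, 1.0, 0.0])
y = np.array([1.0, 1.0, 0.0, 0.0])

x_hat = x / np.linalg.norm(x)   # normalization can be done once, offline
y_hat = y / np.linalg.norm(y)

score = x_hat @ y_hat           # cosine similarity as a plain dot product
print(score)                    # 2 / sqrt(5 * 2) ~ 0.6325
```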
The visual vocabulary can be very big, often with a few thousand or even millions of visual words. To make the similarity estimation between such long and sparse vectors efficient, the so-called inverted file is used. An inverted file is a structure that, for each visual word (e.g., A, B, C, D), stores the list of images (e.g., $\alpha, \beta, \gamma$) in which the visual word appears, together with the corresponding TF-IDF weight. For simplicity, such an example omits the IDF weighting and the L2 normalization.
We will represent the database as a sparse matrix. A sparse matrix is a structure for the memory-efficient representation of huge matrices in which most entries are zeros; it is therefore a natural implementation of an inverted file. We will use the scipy implementation (see the scipy documentation for details). Implement the function create_db(image_visual_words, number_of_visual_words, idf) that builds the inverted file as a 2D sparse csr_matrix of shape number_of_visual_words x number_of_images, where rows are visual words and columns are images, and each column is L2 normalized.
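A possible sketch of create_db under the same assumed input format as above:

```python
import numpy as np
from scipy.sparse import csr_matrix

def create_db(image_visual_words, number_of_visual_words, idf):
    rows, cols, vals = [], [], []
    for img_id, words in enumerate(image_visual_words):
        word_ids, tf = np.unique(words, return_counts=True)
        vec = tf * idf[word_ids]              # TF-IDF weights of this image
        norm = np.linalg.norm(vec)
        if norm > 0:
            vec = vec / norm                  # L2 normalize the column
        rows.extend(word_ids)
        cols.extend([img_id] * len(word_ids))
        vals.extend(vec)
    return csr_matrix((vals, (rows, cols)),
                      shape=(number_of_visual_words, len(image_visual_words)))
```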
Given a query image and its BoW TF-IDF vector, we can use the inverted file to estimate the similarity to all other images. The similarities are then sorted to perform the retrieval. We will use one of the images stored in the inverted file as a query. As a sanity check, the query image should be ranked first with similarity 1.0, due to the L2 normalization.
Implement the function retrieve(inverted_file, query_visual_words, idf) that builds the query TF-IDF vector using idf, L2 normalizes it, and returns the database images ranked by their similarity to the query.
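A minimal sketch of the retrieval step; thanks to the normalized columns, all similarities are obtained with a single sparse matrix–vector product:

```python
import numpy as np

def retrieve(inverted_file, query_visual_words, idf):
    # build the query TF-IDF vector and L2 normalize it
    word_ids, tf = np.unique(query_visual_words, return_counts=True)
    q = np.zeros(inverted_file.shape[0])
    q[word_ids] = tf * idf[word_ids]
    q /= np.linalg.norm(q)
    # dot product of the query with every (already normalized) database column
    scores = inverted_file.T @ q
    ranking = np.argsort(-scores)             # most similar image first
    return ranking, scores[ranking]
```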
We provide the overall pipeline in the files retrieval.py and utils.py (found here), with some missing parts to be written by you (marked by “your code” in the comments). Upload this script after adding your modifications. It performs retrieval from 1499 images using a visual codebook of 50,000 visual words. The data and images needed to run this script are found here and there.
An initial ranking of the database images is obtained according to the BoW similarity (first part of the lab). In this part, we will apply spatial verification to a short-list of the shortlist_size top-ranked images and re-rank them according to the number of inliers. These images are also called candidate images in the following.
Spatial verification takes tentative correspondences between local features of the two images as input. Two local features form a tentative correspondence if they are assigned the same visual word. A tentative correspondence is defined as a pair of indices indicating which local feature of the query image and which local feature of the database image the correspondence is formed between.
Implement the function get_tentative_correspondencies(query_visual_words, image_visual_words) that returns the tentative correspondences between a query and a database image.
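A naive quadratic sketch (the provided code may expect a different output format):

```python
import numpy as np

def get_tentative_correspondencies(query_visual_words, image_visual_words):
    correspondences = []
    for i, q_word in enumerate(query_visual_words):
        for j, db_word in enumerate(image_visual_words):
            if q_word == db_word:        # same visual word -> correspondence
                correspondences.append((i, j))
    return np.array(correspondences, dtype=int).reshape(-1, 2)
```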
Our local features are represented by affine frames. For this reason, we choose an affine transformation as the geometric model for spatial verification, which allows us to generate a transformation hypothesis from a single tentative correspondence: the affine transform that maps one affine frame onto the other. There is no need for random sampling anymore; we simply go through all correspondences, verify the hypothesis generated by each of them, and keep the best one. The algorithm thus becomes deterministic, in contrast to RANSAC. The number of inliers of the best hypothesis becomes the new score, which we use for the re-ranking of the short-list.
Implement the function ransac_affine(query_geometry, candidatelist_geometry, correspondences, inlier_threshold) that performs this verification.
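A sketch of the verification for a single candidate image, assuming each geometry row is (x, y, a11, a12, a21, a22), i.e., the position and 2x2 affine shape of a frame; this format and the helpers frame_to_matrix and verify_candidate are assumptions, so adapt them to the provided data:

```python
import numpy as np

def frame_to_matrix(g):
    # 3x3 affine matrix of one frame, assuming g = (x, y, a11, a12, a21, a22)
    x, y, a11, a12, a21, a22 = g
    return np.array([[a11, a12, x],
                     [a21, a22, y],
                     [0.0, 0.0, 1.0]])

def verify_candidate(query_geometry, candidate_geometry, correspondences,
                     inlier_threshold):
    # correspondences: (m, 2) integer array of (query, candidate) indices
    q_pts = query_geometry[correspondences[:, 0], :2]
    c_pts = candidate_geometry[correspondences[:, 1], :2]
    best_inliers, best_A = 0, None
    for qi, ci in correspondences:
        # hypothesis from a single correspondence: the affine transform
        # mapping the query frame onto the candidate frame
        A = frame_to_matrix(candidate_geometry[ci]) @ \
            np.linalg.inv(frame_to_matrix(query_geometry[qi]))
        # transform all query points and measure the distance to their matches
        proj = q_pts @ A[:2, :2].T + A[:2, 2]
        err = np.linalg.norm(proj - c_pts, axis=1)
        inliers = int((err < inlier_threshold).sum())
        if inliers > best_inliers:
            best_inliers, best_A = inliers, A
    return best_inliers, best_A
```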