Warning
This page is located in the archive.

Example questions for the A4M33MPV course

  1. Harris interest points - definition, algorithm for detection, parameters. Explain the motivation behind the definition. Describe the effects of the parameters on the number of detected points. To which transformations (geometric/photometric) is this detector invariant?
  2. Describe the algorithm for selection of interest point (region) scale using the Laplacian.
  3. Describe steps to generalize Harris detector to become affine invariant.
  4. Hessian and Difference of Gaussian interest points. Definition, properties.
  5. Define Maximally Stable Extremal Regions (MSER). Describe the algorithm for their detection. Properties of extremal regions and the maximally stable subset.
  6. The FAST interest point detector
  7. The SIFT descriptor. Describe the algorithm and its properties.
  8. Describe “Local Binary Patterns”-like descriptors.
  9. How are local affine frames used for invariant description?
  10. Wide-baseline matching. Describe the steps for obtaining correspondences between a pair of images, which are taken from different viewpoints.
  11. How to find similar descriptors in sub-linear time?
  12. How does the “bag-of-words” method work?
  13. What is an “inverted file” and how is it used for image retrieval?
  14. Define the tf-idf reweighting for visual words.
  15. Describe the “query expansion” mechanism for improving the recall of image retrieval.
  16. Describe how the min-Hash method represents images. What properties does it have?
  17. Describe the RANSAC algorithm, its properties, advantages and disadvantages. What parameters does it have?
  18. Describe the steps for object detection using “sliding windows” (“scanning windows”). How is reasonable speed achieved?
  19. Describe how to use an integral image for computing the sum of the intensity and the intensity variance for a rectangular region.
  20. Why is the Adaboost algorithm often used for the “sliding window” methods? Give more than one reason.
  21. Describe the Hough transformation algorithm for detection of a parametrized structure (line, circle, …). Discuss the properties of the algorithm (time and memory requirements, parameters).
  22. Compare the Hough transformation with a brute-force search algorithm.
  23. Compare the Hough transformation with RANSAC.
  24. Consider a static scene viewed by a camera with only horizontal movement. Draw an image patch that would be useful for tracking with a gradient method (KLT tracker). Which properties should such an image patch have to be suitable for tracking?
  25. Which image patches are suitable for tracking? Why? Which patches are not suitable?
  26. Mean-shift algorithm. Describe the principle and simulate the calculation on a 1D example.
  27. Mean-shift algorithm. Color pixels [R,G,B] are represented in 3D space. How can you reduce the color space to 256 colors?
  28. DCT - discriminative (kernel) correlation tracking. The algorithm, representation of the object, the search method.
  29. DCT tracking in the presence of rotation and scale change.
  30. Deep Neural Nets for image classification. Structure - convolutional, pooling and fully connected layers. Non-linearities.
  31. Deep Neural Nets for image classification. Learning - the cost function, SGD (stochastic gradient descent), drop-out, batch normalization. SGD parameters.
  32. Deep Neural Nets for detection. Proposal-based and end-to-end methods. Class label and bounding box prediction.
  33. Deep Neural Nets - applications in computer vision.
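As an illustration for question 1, the following is a minimal NumPy sketch of the Harris cornerness response R = det(M) - k * trace(M)^2, where M accumulates gradient products over a window. The function name, the box window (instead of the usual Gaussian weighting), and the parameter defaults are my own choices for illustration, not part of the course materials:

```python
import numpy as np

def harris_response(img, k=0.04, win=2):
    """Harris cornerness R = det(M) - k * trace(M)^2, where M sums
    gradient products over a (2*win+1)^2 box window."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)          # derivatives along rows (y) and cols (x)

    def box(a):
        # naive box-filter sum over the neighborhood (wraps at borders)
        out = np.zeros_like(a)
        for dr in range(-win, win + 1):
            for dc in range(-win, win + 1):
                out += np.roll(np.roll(a, dr, axis=0), dc, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2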
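For question 13, an inverted file maps each visual word to the set of images containing it, so a query touches only the lists of its own words rather than every database image. A minimal dictionary-based sketch (names are mine, for illustration only):

```python
from collections import defaultdict

def build_inverted_file(image_words):
    """image_words: {image_id: iterable of visual-word ids}.
    Returns {word_id: set of image_ids that contain the word}."""
    inv = defaultdict(set)
    for img, words in image_words.items():
        for w in words:
            inv[w].add(img)
    return inv

def query(inv, query_words):
    """Score images by the number of visual words shared with the query,
    visiting only the posting lists of the query words."""
    scores = defaultdict(int)
    for w in set(query_words):
        for img in inv.get(w, ()):
            scores[img] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])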
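For question 14, the tf-idf weight of visual word w in image d is tf(w, d) * idf(w), with tf the relative frequency of the word in the image and idf = log(N / n_w), where n_w is the number of images containing w. A hedged NumPy sketch (function name and input convention are my own):

```python
import numpy as np

def tfidf(counts):
    """tf-idf weighting of visual-word histograms.
    counts: (n_images, n_words) matrix of word occurrence counts.
    tf = count / words-in-image, idf = log(N / n_w)."""
    counts = np.asarray(counts, dtype=np.float64)
    n_images = counts.shape[0]
    tf = counts / counts.sum(axis=1, keepdims=True)
    n_w = (counts > 0).sum(axis=0)           # images containing each word
    idf = np.log(n_images / np.maximum(n_w, 1))
    return tf * idf
```

Words that occur in every image get idf = log(1) = 0, so they contribute nothing to retrieval scores, which is the point of the reweighting.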
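For question 17, RANSAC repeats: draw a minimal sample, hypothesize a model, count inliers, and keep the model with the largest support. A self-contained sketch for 2D line fitting (all names and defaults are my own illustration, not the course's reference implementation):

```python
import numpy as np

def ransac_line(points, n_iters=1000, inlier_thresh=1.0, rng=None):
    """Fit a line a*x + b*y + c = 0 to points (N, 2) by RANSAC.
    Returns the normalized (a, b, c) with largest support and the inlier mask."""
    rng = np.random.default_rng(rng)
    best_model, best_mask, best_count = None, None, -1
    for _ in range(n_iters):
        # 1. minimal sample: two distinct points define a line
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        a, b = y2 - y1, x1 - x2
        norm = np.hypot(a, b)
        if norm == 0:                      # degenerate sample
            continue
        c = -(a * x1 + b * y1)
        # 2. consensus set: points closer than inlier_thresh to the line
        dist = np.abs(points @ np.array([a, b]) + c) / norm
        mask = dist < inlier_thresh
        # 3. keep the hypothesis with the largest support
        if mask.sum() > best_count:
            best_model = (a / norm, b / norm, c / norm)
            best_mask, best_count = mask, mask.sum()
    return best_model, best_mask
```

The parameters the question asks about appear directly: the number of iterations (tied to the expected inlier ratio and confidence) and the inlier threshold (tied to the noise level).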
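For question 19, the sum over any rectangle costs four lookups in the integral image, and the variance follows from a second integral image of squared intensities via var = E[I^2] - (E[I])^2. A minimal NumPy sketch (function names and the zero-padding convention are my own):

```python
import numpy as np

def integral_images(img):
    """Integral image of intensities and of squared intensities,
    padded with a leading zero row/column so rectangles touching
    (0, 0) need no special casing."""
    img = img.astype(np.float64)
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    ii2 = np.cumsum(np.cumsum(img ** 2, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0))), np.pad(ii2, ((1, 0), (1, 0)))

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in rows r0..r1-1, cols c0..c1-1: four lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def rect_mean_var(ii, ii2, r0, c0, r1, c1):
    """Mean and variance of a rectangle from the two integral images."""
    n = (r1 - r0) * (c1 - c0)
    mean = rect_sum(ii, r0, c0, r1, c1) / n
    var = rect_sum(ii2, r0, c0, r1, c1) / n - mean ** 2
    return mean, var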
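For question 26, the 1D mean-shift iteration with a flat kernel simply replaces the current position by the mean of the samples inside the current window, which converges to a mode of the sample density. A small sketch one could use to simulate the 1D example (my own naming and defaults):

```python
import numpy as np

def mean_shift_1d(data, x0, bandwidth=1.0, n_iters=100, tol=1e-6):
    """Mode seeking on 1D samples with a flat (uniform) kernel:
    repeatedly move x to the mean of samples within `bandwidth` of x."""
    data = np.asarray(data, dtype=np.float64)
    x = float(x0)
    for _ in range(n_iters):
        window = data[np.abs(data - x) <= bandwidth]
        if window.size == 0:             # empty window: nowhere to shift
            break
        new_x = window.mean()
        if abs(new_x - x) < tol:         # converged to a mode
            break
        x = new_x
    return x
```

Starting points in different basins of attraction converge to different modes, which is exactly the clustering behaviour question 27 exploits for color-space reduction.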
courses/ae4m33mpv/labs/exam_questions.txt · Last modified: 2017/05/24 09:31 by matas