
Computer Vision Methods

Course Description

This course focuses on the following computer vision problems: finding correspondences between images using image features and their robust invariant descriptors, image retrieval, object detection and recognition, and visual tracking.

Pre-requisites

The course has no formal prerequisites. However, certain skills and knowledge are assumed, and it is the student's responsibility to reach the required level.

The assignments are implemented in Python with numpy and pytorch, mostly in the form of Jupyter notebooks, so familiarity with this environment will help. The programming assignments, which involve implementing, modifying, or testing computer vision methods, are a substantial part of the labs.

Knowledge of the basics of digital image processing, such as convolution, filtering, intensity transformations, interpolation of the image function, and basic geometric transformations of the image (see the first lab), is assumed. Knowledge of linear algebra and probability theory is needed to understand the presented computer vision methods.
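For illustration, a minimal numpy/scipy sketch of such prerequisite operations (the image, kernel, and parameter values are made up for this example; the actual lab assignments may use different tools):

```python
# A hypothetical example, not part of the course materials: convolution,
# an intensity transformation, and an affine geometric warp with bilinear
# interpolation, i.e. the kind of image-processing basics assumed above.
import numpy as np
from scipy.ndimage import convolve, affine_transform

img = np.random.rand(128, 128).astype(np.float32)  # placeholder grayscale image

# Convolution with a 3x3 averaging (box) kernel
kernel = np.ones((3, 3), dtype=np.float32) / 9.0
smoothed = convolve(img, kernel, mode='reflect')

# Intensity transformation: gamma correction with gamma = 0.5
gamma_corrected = np.clip(smoothed, 0.0, 1.0) ** 0.5

# Geometric transformation: rotate the sampling grid by 30 degrees about the
# image centre; affine_transform maps output to input coordinates, and
# order=1 selects (bi)linear interpolation of the image function.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
centre = np.array(img.shape) / 2.0
offset = centre - R @ centre
rotated = affine_transform(gamma_corrected, R, offset=offset, order=1)
```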

Lectures: Monday 9:15 - 10:45, KN:E-107

Lecturers: JM Jiří Matas, JC Jan Čech, DM Dmytro Mishkin, GT Giorgos Tolias, OD Ondřej Drbohlav, MS Milan Šulc

Note: some of the lectures may change, but the 2021 recordings mostly give a good idea of the content.
Lectures will be streamed on YouTube, link: https://www.youtube.com/playlist?list=PLQL6z4JeTTQnv27IWAY6NLafP6xiflmHe, and the recordings will be available in a playlist at https://www.youtube.com/playlist?list=PLQL6z4JeTTQl_HfTuIkuCltZ97inYDQT5

For online feedback, connect via zoom link: https://feectu.zoom.us/j/97922104602
according to the schedule listed below (starting at 9:15).

Week | Date  | Lecturer | Slides / Recordings | Topic
1  | 14.2. | JC     | Deep learning; recording 2021, recording 2022 | A shallow introduction to deep machine learning. Convolutional Neural Networks: principles, layers, architectures for image recognition.
2  | 21.2. | JC     | Deep learning II; recording 2020, recording 2022 | Deep architectures for object detection and semantic segmentation. Further insights into deep nets. Generative models (GANs).
3  | 28.2. | JM, DM | Correspondence, 1st lecture slides; recording 2021, recording 2022 | Correspondences and wide-baseline stereo. Motivation and applications. Interest point and distinguished region detection: Harris operator (corner detection).
4  | 7.3.  | DM     | Correspondence, 2nd lecture slides; recording 2021, recording 2022 | Laplace operator and its approximation by difference of Gaussians, Hessian detector, affine covariant version, Maximally Stable Extremal Regions (MSER). Descriptors of measurement regions: SIFT (scale-invariant feature transform), RootSIFT, shape context, LBP (local binary patterns). Matching.
5  | 14.3. | DM     | Correspondence, 3rd lecture slides; recording 2021 | Deep learned features: HardNet, R2D2, SuperPoint, AffNet.
6  | 21.3. | DM, JM | RANSAC; recording 2021 | RANSAC.
7  | 28.3. | MS     | slides | Computer Vision Applications: From Species Recognition to Business Documents.
8  | 4.4.  | GT     | Retrieval, part 1; recording 2022 | Retrieval: task formulation, evaluation metrics, Bag-of-Words, VLAD, ASMK, spatial verification.
9  | 11.4. | GT     | Retrieval, part 2; Deep retrieval, part 1; recording 2022 | Retrieval: query expansion, special retrieval objectives (zoom in/out, details). Deep retrieval: FCN representation, global pooling methods, DELF.
10 | 18.4. | -      | - | Easter Monday (no lecture).
11 | 25.4. | GT     | Deep retrieval, part 2; recording 2022 | Deep retrieval: loss function, training labels, other tasks, descriptor whitening.
12 | 2.5.  | JM     | KLT, Mean Shift; recording 2021 | Tracking I. Introduction. Kanade-Lucas-Tomasi tracker. Mean Shift.
13 | 9.5.  | JM     | KCF Tracking, TLD, Tracking by Segmentation; recording 2021 | Tracking II. KCF (Kernelized Correlation Filter). Long-term tracking, TLD (Tracking-Learning-Detection), tracking by segmentation. Introduction to the KCF lab task.
14 | 16.5. | JM     | Viola-Jones face detector, WaldBoost, Hough Transform; recording 2021 | Viola-Jones face detector. WaldBoost. Hough Transform.


Evaluation

Work during the semester: 50%, written part of the exam: 40%, oral part of the exam: 10%. For this semester, the “normalization factor” for points gained during the semester is 68. That means the points that contribute to your exam result are (your total number of points from the semester, including bonus points) / 68.0 * 50.
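As a concrete illustration of this formula (the constants 68 and 50 come from the text above; the semester point total is a made-up example):

```python
# Hypothetical example of the semester-points normalization described above.
def semester_contribution(points, normalization=68.0, weight=50.0):
    """Exam points carried over from semester work (including bonus points)."""
    return points / normalization * weight

print(round(semester_contribution(61), 1))  # 61 semester points -> 44.9 exam points
```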

Exam

Examples of exam questions. The written part of the exam will contain 4-5 similar questions. The oral part takes place after the written part and will focus on a discussion of your answers.

[New] Please find the assignment of students to exam time slots here. The order in the lists is the order in which students will be examined.

Literature

Lecture slides constitute the main source of study literature in this course.

Further Info

Further information is available in the next sections of this page. We would appreciate your feedback on the content and organization in the discussion forum of the course.



Good luck to all participants of the course.

Lecturers
Jiří Matas Jan Čech Giorgos Tolias Dmytro Mishkin Ondřej Drbohlav

Consultations are possible upon request.
