[[https://www.fel.cvut.cz/cz/education/rozvrhy-ng.B171/public/html/predmety/43/58/p4358506.html|RPZ Schedule]]
[[https://www.fel.cvut.cz/cz/education/rozvrhy-ng.B171/public/html/paralelky/P43/58/par4358506.1.html|RPZ students]]
[[https://cw.felk.cvut.cz/forum/forum-1446.html|Discussion forum]]
====== Jan 23rd exam test results ======
{{:courses:a4b33rpz:rpz_2018_exam.pdf|Results}}.
**The exam will proceed in the indicated order. You can assume about 12 people being processed per hour. Everybody with 6 or fewer points is automatically graded F. Everybody else continues to the oral exam.**
===== Summary =====
This course introduces statistical decision theory and surveys canonical and advanced classifiers such as perceptrons, AdaBoost, support vector machines, and neural nets.
===== Basic info =====
** Winter semester 2017/2018 **
** Where and when:** KN:G-205 at [[ http://cyber.felk.cvut.cz/contact/#maps | Building G, Karlovo namesti]], Monday 14:30-16:00
** Teaching: **
[[http://cmp.felk.cvut.cz/~matas | Jiří Matas]] (JM),
[[http://cmp.felk.cvut.cz/~drbohlav| Ondřej Drbohlav]] (OD),
[[http://cmp.felk.cvut.cz/~xfrancv | Vojtěch Franc]] (VF),
[[http://cmp.felk.cvut.cz/~flachbor| Boris Flach]] (BF).
===== Lecture plan 2017/2018 =====
^ Week ^ Date ^ Lect. ^ Slides ^ Topic ^ Wiki ^ Extra ^
| 1 | 2.10.|JM| {{:courses:a4b33rpz:pr_01_intro_and_bayes_2017_10_06.pdf|pdf}} | Introduction. Basic notions. The Bayesian recognition problem |[[https://en.wikipedia.org/wiki/Machine_learning|Machine_learning]] [[https://en.wikipedia.org/wiki/Naive_Bayes_classifier|Naive_Bayes_classifier]]| {{:courses:a4b33rpz:pr_01_extra.pdf|solved problems}} |
| 2 | 9.10. |JM| {{:courses:ae4b33rpz:lectures:pr_02_non_bayes_2016_10_10.pdf|pdf}} | Non-Bayesian tasks |[[https://en.wikipedia.org/wiki/Minimax|Minimax]] | |
| 3 | 16.10.|JM| {{:courses:ae4b33rpz:lectures:pr_03_parameter_estimation_2016_10_17.pdf|pdf}} | Parameter estimation of probabilistic models. Maximum likelihood method |[[http://en.wikipedia.org/wiki/Maximum_likelihood|Maximum_likelihood]] | |
| 4 | 23.10.|OD| {{:courses:ae4b33rpz:lectures:pr_04_nonparametric_methods_knn_2016_11_03.pdf|pdf}} | Nearest neighbour method. Non-parametric density estimation. | [[http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm|K-nearest_neighbor_algorithm]] | |
| 5 | 30.10.|JM| {{:courses:a4b33rpz:pr_05_logistic_regression_2017.pdf|pdf}} | Logistic regression |[[https://en.wikipedia.org/wiki/Logistic_regression|Logistic_regression]] | |
| 6 | 6.11.|JM| {{:courses:ae4b33rpz:lectures:pr_06_perceptron_2017.pdf|pdf}} | Classifier training. Linear classifier. Perceptron. |[[https://en.wikipedia.org/wiki/Linear_classifier|Linear_classifier]] [[https://en.wikipedia.org/wiki/Perceptron|Perceptron]] | |
| 7 | 13.11. |JM| {{:courses:a4b33rpz:pr_07_svm_2017.pdf|pdf}} | SVM classifier |[[https://en.wikipedia.org/wiki/Support_vector_machine|Support_vector_machine]] | {{:courses:a4b33rpz:svm_extra.pdf|pdf}} |
| 8 | 20.11. |JM| {{:courses:ae4b33rpz:lectures:pr_08_adaboost_2017.pdf|pdf}} | Adaboost learning |[[https://en.wikipedia.org/wiki/AdaBoost|Adaboost]] | |
| 9 | 27.11. |JM| {{:courses:a4b33rpz:pr_09_nn_2015_11_27.pdf|pdf}} | Neural networks. Backpropagation |[[https://en.wikipedia.org/wiki/Artificial_neural_network|Artificial_neural_network]] | {{flach-2013-neural_nets.pdf|Flach}}, {{:courses:a4b33rpz:2010.11.19-neural_nets.pdf|ver1}} |
| 10 | 4.12. |JM| {{:courses:a4b33rpz:pr_10_k_means_2015_12_04.pdf|pdf}} | Cluster analysis, k-means method |[[https://en.wikipedia.org/wiki/K-means_clustering|K-means_clustering]] [[https://en.wikipedia.org/wiki/K-means%2B%2B|K-means++]] | |
| 11 | 11.12. |JM| {{:courses:ae4b33rpz:lectures:pr_11_em2017b.pdf|pdf}} | Unsupervised learning. EM (Expectation Maximization) algorithm. |[[https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm|Expectation_maximization_algorithm]] |{{:courses:a4b33rpz:2010.12.10-em-hoffmann.pdf|Hoffmann}},{{:courses:a4b33rpz:2010.12.10-em-bishop.pdf|Bishop}}, {{:courses:ae4b33rpz:lectures:flach-2013.12.02-em_algorithm.pdf|Flach}}|
| 12 | 18.12. |JM| {{:courses:ae4b33rpz:lectures:pr_12_pca_2017_01_02.pdf|pdf}} | Feature selection and extraction. PCA, LDA. |[[https://en.wikipedia.org/wiki/Principal_component_analysis|Principal_component_analysis]] [[https://en.wikipedia.org/wiki/Linear_discriminant_analysis|Linear_discriminant_analysis]]| [[http://www.csd.uwo.ca/~olga/Courses/CS434a_541a/Lecture8.pdf|Veksler]], {{:courses:ae4b33rpz:lectures:pca-2016.01.15-franc.pdf|Franc}}, {{:courses:ae4b33rpz:lectures:lda_2014_06_08.pdf |ver1}} |
| 13 | 1.1. | -- | | (holiday, no lecture) | | |
| 14 | 8.1. |JM| {{:courses:ae4b33rpz:lectures:pr_13_dec_trees_2017_01_09.pdf|pdf}} | Decision trees. |[[https://en.wikipedia.org/wiki/Decision_tree|Decision_tree]] [[https://en.wikipedia.org/wiki/Decision_tree_learning|Decision_tree_learning]] |[[http://ocw.mit.edu/courses/sloan-school-of-management/15-097-prediction-machine-learning-and-statistics-spring-2012/lecture-notes/MIT15_097S12_lec08.pdf|Rudin@MIT]] |
/*
| 14 | 15.1. | JM | Optional/irregular. Friday 11:00 KN:E-301, Basic notions recapitulation, links between methods, answers to exam questions )|
*/
===== Recommended literature =====
  * Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, John Wiley and Sons, 2nd edition, New York, 2001
* Schlesinger M.I., Hlaváč V.: Ten Lectures on Statistical and Structural Pattern Recognition, Springer, 2002
* Bishop, C.: Pattern Recognition and Machine Learning, Springer, 2011
* Goodfellow, I., Bengio, Y. and Courville, A.: Deep Learning, MIT Press, 2016. [[http://www.deeplearningbook.org/|www]]
/*
** Further resources **
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/ROD-UvodRozpozn.pdf|Introduction to recognition]], V. Hlaváč (Czech only)
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/PravdepStatistikaOpakov.ppt|Probability and statistics overview]] (Czech only)
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/ReceiverOperCharact.pdf|ROC curve]]
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/LP-Theory.ppt|Linear programming]]
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/DualTaskLinearProgramming.pdf|Dual tasks in linear programming]]
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/P33ROD-2StatModels.pdf|Two statistical models]]
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/UceniBezUcitele.pdf|Unsupervised learning]]
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/Bishop-ECCV-04-tutorial-B.ppt|Unsupervised learning]] (lecture by Ch. Bishop)
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/UmeleNN.ppt|Artificial neural networks]]
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/MarkovianPR.pdf|Markov sequences]]
* [[http://cmp.felk.cvut.cz/%7Ehlavac/Public/TeachingLectures/HMMalaRabiner.pdf|Markov sequences]] (tutorial by Rabiner)
*/
===== Exam =====
  * Only students who have received all credits from the lab work and obtained the assessment ("zápočet") can take the exam. The labs contribute 50% to your final evaluation. Any extra credit beyond 50% is taken into account in the final evaluation and may improve your mark.
  * The exam consists of two parts: a written test and an oral exam.
* The written test lasts 60-90 minutes and contributes 40% to the final evaluation.
  * The questions used in the test are available [[http://cmp.felk.cvut.cz/cmp/courses/recognition/Exam-questions|here]]; if you can solve them, you will likely do well on the exam.
  * The oral part starts approximately 2 hours after the end of the written test (the interim time is used for grading). It contributes 10% to the final evaluation.
  * To receive an "A" for the course, an "A" or "B" result on the written exam is required.
* Oral exam questions are available [[http://cmp.felk.cvut.cz/cmp/courses/recognition/Exam-questions/exam-questions-eng.pdf|here]].
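The weighting above (labs 50%, written test 40%, oral 10%) can be sketched as a simple weighted sum. This is only an illustration of the stated percentages, not the official grading formula; the function name and the 0–100 per-component scale are assumptions:

```python
def final_evaluation(labs: float, written: float, oral: float) -> float:
    """Combine the three evaluation components into one score.

    Each argument is assumed to be a score on a 0-100 scale;
    the weights are the percentages stated in the Exam section
    (labs 50%, written test 40%, oral exam 10%).
    """
    return 0.50 * labs + 0.40 * written + 0.10 * oral

# Example: 80/100 in labs, 70/100 on the written test, 90/100 in the oral
# gives 0.50*80 + 0.40*70 + 0.10*90 = 77 points out of 100.
score = final_evaluation(80, 70, 90)
```

Note that the automatic F for 6 or fewer test points and the "A or B on the written test required for an A" rule act as additional constraints on top of this weighted sum.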