===== Syllabus =====

^Lecture ^Date ^Topic ^Lecturer ^PDF ^Notes ^
|1.|20. 9.| **Introduction** | BF | {{:courses:be4m33ssu:stat_mach_learn_l01.pdf| }} | |
|2.|27. 9.| **Predictor evaluation and learning via empirical risk** | VF | {{ :courses:be4m33ssu:er_ws2022.pdf | }}, to print: {{ :courses:be4m33ssu:er_ws2022_print.pdf | }} | [1] Chap 2, [2] Chap 7 |
|3.|4. 10.| **Empirical risk minimization** | VF | {{ :courses:be4m33ssu:erm_ws2022.pdf | }}, to print: {{ :courses:be4m33ssu:erm_ws2022_print.pdf | }} | [1] Chap 2, [2] Chap 7 |
|4.|11. 10.| **Empirical risk minimization II** | VF | {{ :courses:be4m33ssu:erm2_ws2022.pdf | }}, to print: {{ :courses:be4m33ssu:erm2_ws2022_print.pdf | }} | [1] Chap 4, [2] Chap 12 |
|5.|18. 10.| **Structured Output Support Vector Machines** | VF | {{ :courses:be4m33ssu:sosvm_ws2022.pdf | }}, to print: {{ :courses:be4m33ssu:sosvm_ws2022_print.pdf | }} | [1] Chap 5, [2] Chap 12 |
|6.|25. 10.| **Supervised learning for deep networks** | JD | {{ :courses:be4m33ssu:anns_ws2022.pdf | }} | |
|7.|1. 11.| **SGD, Deep (convolutional) networks** | JD | {{ :courses:be4m33ssu:sgd_ws2022.pdf |SGD}} {{ :courses:be4m33ssu:deep_anns_ws2022.pdf | Deep ANNs}} | |
|8.|8. 11.| **Generative learning, Maximum Likelihood estimator** | BF | {{ :courses:be4m33ssu:gener_ml.pdf | }} | |
|9.|15. 11.| **EM algorithm, Bayesian learning** | BF | {{ :courses:be4m33ssu:em_bayesian-ws2022.pdf | }} | |
|10.|22. 11.| **Hidden Markov Models I** | BF | {{ :courses:be4m33ssu:hmms.pdf | }} | This lecture is held at the Dejvice campus, room T2_C3-340 |
|11.|29. 11.| **Hidden Markov Models II** | BF | {{ :courses:be4m33ssu:hmms2.pdf | }} | |
|12.|6. 12.| **Ensembling I** | JD | {{ :courses:be4m33ssu:ensembling_ws2022.pdf | }} | [4] |
|13.|13. 12.| **Ensembling II** | JD | | [2] Chap 10 |
|14.|10. 1.| **Q&A** | All | | |