===== Syllabus =====

^ Lecture ^ Date ^ Topic ^ Lecturer ^ PDF ^ Notes ^
| 1. | 26. 9. | **Introduction** | BF | {{:courses:be4m33ssu:stat_mach_learn_l01.pdf| }} | |
| 2. | 3. 10. | **Predictor evaluation** | VF | {{ :courses:be4m33ssu:er_ws2023.pdf | }}, print: {{ :courses:be4m33ssu:er_ws2023_print.pdf | }} | [1] Chap 2, [2] Chap 7 |
| 3. | 10. 10. | **Empirical risk minimization** | VF | {{ :courses:be4m33ssu:erm_ws2023.pdf | }}, print: {{ :courses:be4m33ssu:erm_ws2023_print.pdf | }} | [1] Chap 2, [2] Chap 7 |
| 4. | 17. 10. | **Probably Approximately Correct Learning** | VF | {{ :courses:be4m33ssu:pac_ws2023.pdf | }}, print: {{ :courses:be4m33ssu:pac_ws2023_print.pdf | }} | [1] Chap 4, [2] Chap 12 |
| 5. | 24. 10. | **Structured Output Support Vector Machines** | VF | {{ :courses:be4m33ssu:sosvm_ws2023.pdf | }}, print: {{ :courses:be4m33ssu:sosvm_ws2023_print.pdf | }} | [1] Chap 5, [2] Chap 12 |
| 6. | 31. 10. | **Supervised learning for deep networks** | JD | {{ :courses:be4m33ssu:anns_ws2023.pdf | }} | |
| 7. | 7. 11. | **SGD, Deep (convolutional) networks** | JD | {{ :courses:be4m33ssu:sgd_ws2023.pdf | }} {{ :courses:be4m33ssu:deep_anns_ws2023.pdf | }} | |
| 8. | 14. 11. | **Generative learning, Maximum Likelihood estimator** | BF | {{ :courses:be4m33ssu:gener_ml.pdf | }} | [[https://www.stat.cmu.edu/~larry/=stat705/Lecture12a.pdf|L. Wasserman, Exp. Fam.]] |
| 9. | 21. 11. | **EM algorithm, Bayesian learning** | BF | {{ :courses:be4m33ssu:em_bayesian.pdf | }} | Will be held in KN:A-320 |
| 10. | 28. 11. | **Hidden Markov Models I** | BF | {{ :courses:be4m33ssu:hmms.pdf | }} | |
| 11. | 5. 12. | **Hidden Markov Models II** | BF | {{ :courses:be4m33ssu:hmms2.pdf | }} | |
| 12. | 12. 12. | **Ensembling I** | JD | {{ :courses:be4m33ssu:ensembling_ws2023.pdf | }} | [4] |
| 13. | 19. 12. | **Ensembling II** | JD | | [2] Chap 10 |
| 14. | 9. 1. | **Q&A** | All | | |