===== Syllabus =====

^ Week ^ Date ^ Topic ^ Lecturer ^ PDF ^ Notes ^
| 1. | 24. 9. | **Introduction** | VF | {{ :courses:be4m33ssu:intro_ws2024.pdf | }} | |
| 2. | 1. 10. | **Supervised learning for deep networks** | JD | | |
| 3. | 8. 10. | **Predictor evaluation** | VF | | [1] Chap. 2, [2] Chap. 7 |
| 4. | 15. 10. | **Empirical risk minimization** | VF | | [1] Chap. 2, [2] Chap. 7 |
| 5. | 22. 10. | **Probably Approximately Correct Learning** | VF | | [1] Chap. 4, [2] Chap. 12 |
| 6. | 29. 10. | //Dean's day// | | | |
| 7. | 5. 11. | **Support Vector Machines** | VF | | |
| 8. | 12. 11. | **SGD, deep (convolutional) networks** | JD | | |
| 9. | 19. 11. | **Generative learning, maximum likelihood estimator** | VF | | [[https://www.stat.cmu.edu/~larry/=stat705/Lecture12a.pdf|L. Wasserman, Exponential Families]] |
| 10. | 26. 11. | **EM algorithm, Bayesian learning** | VF | | Will be held in KN:A-320 |
| 11. | 3. 12. | **Hidden Markov Models I** | JD | | |
| 12. | 10. 12. | **Hidden Markov Models II** | JD | | |
| 13. | 17. 12. | **Ensembling I** | JD | | [4] |
| 14. | 7. 1. | **Ensembling II** | JD | | [2] Chap. 10 |