===== Syllabus =====

^Lecture ^Date ^Topic ^Lecturer ^Pdf ^Notes ^
|1.|24. 9.| Introduction | BF | {{:courses:be4m33ssu:stat_mach_learn_l01_ws19.pdf| }} | |
|2.|1. 10.| Empirical risk | VF | {{ :courses:be4m33ssu:erm1_ws2019_full.pdf | }} (printable: {{ :courses:be4m33ssu:erm1_ws2019.pdf | }}) | ch 2 in [1] |
|3.|8. 10.| Empirical risk minimization | VF | {{ :courses:be4m33ssu:erm2_ws2019_full.pdf | }} (printable: {{ :courses:be4m33ssu:erm2_ws2019.pdf | }}) | ch 2 and ch 3 in [1] |
|4.|15. 10.| Support Vector Machines I | VF | {{ :courses:be4m33ssu:svm1_ws2019_full.pdf | }} (printable: {{ :courses:be4m33ssu:svm1_ws2019.pdf | }}) | ch 4 in [1], ch 12 in [2] |
|5.|22. 10.| Support Vector Machines II | VF | {{ :courses:be4m33ssu:svm2_ws2019_full.pdf | }} (printable: {{ :courses:be4m33ssu:svm2_ws2019.pdf | }}) | ch 5 in [1] |
|6.|29. 10.| Supervised learning for deep networks | JD | {{ :courses:be4m33ssu:anns_ws19.pdf | }} | |
|7.|5. 11.| Deep (convolutional) networks | JD | {{ :courses:be4m33ssu:deep_anns_ws19.pdf | }}, {{ :courses:be4m33ssu:sgd_ws19.pdf |SGD}} | |
|8.|12. 11.| Unsupervised learning, EM algorithm, mixture models | BF | {{ :courses:be4m33ssu:emalg_ws2019.pdf | }} | |
|9.|19. 11.| Bayesian learning | BF | {{ :courses:be4m33ssu:bayes-learn-ws2019.pdf | }} | |
|10.|26. 11.| Hidden Markov Models | BF | {{ :courses:be4m33ssu:hmms-ws2019.pdf | }} | |
|11.|3. 12.| Markov Random Fields | BF | {{ :courses:be4m33ssu:mrfs-ws2019.pdf | }} | additional reading (not part of the exam) |
|12.|10. 12.| Ensembling I | JD | {{ :courses:be4m33ssu:ensembling-ws2019.pdf | }} | |
|13.|17. 12.| Ensembling II | JD | | |
|14.|7. 1.| Reserve | | | |