===== Syllabus =====

^Lecture ^Date ^Topic ^Lecturer ^PDF ^Notes ^
|1. |2.10. | Introduction | BF | {{:courses:be4m33ssu:stat_mach_learn_l01_ws18.pdf| }} | |
|2. |9.10. | Empirical risk minimization I | VF | {{ :courses:be4m33ssu:erm1_ws18.pdf | }} | chap. 2 in [1] |
|3. |16.10. | Empirical risk minimization II | VF | {{ :courses:be4m33ssu:erm2_ws2018.pdf | }} | chap. 3 in [1] |
|4. |23.10. | Support Vector Machines | VF | {{ :courses:be4m33ssu:svm_ws18.pdf | }} | chap. 4, 5 in [1] |
|5. |30.10. | Supervised learning for deep networks | JD | {{ :courses:be4m33ssu:anns_ws18.pdf | }} | |
|6. |6.11. | Deep (convolutional) networks | JD | {{ :courses:be4m33ssu:deep_anns_ws18.pdf | }} | |
|7. |13.11. | Unsupervised learning, EM algorithm, mixture models | BF | {{ :courses:be4m33ssu:emalg_ws2018.pdf | }} | |
|8. |20.11. | Bayesian learning | BF | {{ :courses:be4m33ssu:bayes-learn-ws2018.pdf | }} | |
|9. |27.11. | Hidden Markov Models | BF | {{ :courses:be4m33ssu:hmms-ws2017.pdf | }} | |
|10. |4.12. | Structured output SVMs | VF | {{ :courses:be4m33ssu:sosvm_ws2018.pdf | }} | |
|11. |11.12. | Markov Random Fields | BF | {{ :courses:be4m33ssu:mrfs-ws2018.pdf | }} | |
|12. |18.12. | Ensembling I | JD | {{ :courses:be4m33ssu:ensembling_ws2018.pdf | }} | |
|13. |8.1. | Ensembling II | JD | | |