=== Syllabus ===

^ Lect. ^ Topic ^ Pdf ^
| 01 | Markov chains, equivalent representations, ergodicity, convergence theorem for homogeneous Markov chains. | {{:courses:xep33gmm:materials:lecture_01.pdf| }} |
| 02 | Hidden Markov Models on chains for speech recognition: pre-processing, dynamic time warping, HMMs. | {{:courses:xep33gmm:materials:lecture_02.pdf| }} |
| 03 | Inference tasks for Hidden Markov Models. | {{:courses:xep33gmm:materials:lecture_03.pdf| }} |
| 04 | HMMs as exponential families; supervised learning: maximum likelihood estimator. | {{:courses:xep33gmm:materials:lecture_04.pdf| }} |
| 05 | Supervised learning: empirical risk minimisation for HMMs; unsupervised learning: EM algorithm for HMMs. | {{:courses:xep33gmm:materials:lecture_05.pdf| }} |
| 06 | Supervised learning: empirical risk minimisation for HMMs; unsupervised learning: EM algorithm for HMMs (cont'd). | |
| 07 | Extensions of Markov models and HMMs: acyclic graphs, uncountable feature and state spaces. | {{:courses:xep33gmm:materials:lecture_06.pdf| }} |
| 08 | Markov random fields: Markov models on general graphs; equivalence to Gibbs models. | {{:courses:xep33gmm:materials:lecture_07.pdf| }} |
| 09 | Searching for the most probable state configuration: transforming the task into a MinCut problem in the submodular case. | {{:courses:xep33gmm:materials:lecture_08.pdf| }} |
| 10 | Searching for the most probable state configuration: approximation algorithms for the general case. | {{:courses:xep33gmm:materials:lecture_09.pdf| }} |
| 11 | Searching for the most probable state configuration: approximation algorithms for the general case (cont'd). | |
| 12 | The partition function and marginal probabilities: approximation algorithms for their estimation. | {{:courses:xep33gmm:materials:lecture_10.pdf| }} |
| 13 | Parameter learning for Gibbs random fields. | {{:courses:xep33gmm:materials:lecture_11.pdf| }} |
| 14 | Reserve | |