
Syllabus

Lect. Topic
01 Markov chains, equivalent representations, ergodicity, convergence theorem for homogeneous Markov chains.
02 Hidden Markov Models on chains for speech recognition: pre-processing, dynamic time warping, HMMs.
03 Inference tasks for Hidden Markov Models
04 HMMs as exponential families, supervised learning: maximum likelihood estimator
05 Supervised learning: Empirical risk minimisation for HMMs
06 Unsupervised learning: EM algorithm for HMMs
07 Extensions of Markov models and HMMs: acyclic graphs, uncountable feature and state spaces
08 Extensions of Markov models and HMMs: acyclic graphs, uncountable feature and state spaces (cont'd)
09 Markov Random Fields - Markov models on general graphs. Equivalence to Gibbs models
10 Searching the most probable state configuration: transforming the task into a MinCut problem in the submodular case.
11 Searching the most probable state configuration: approximation algorithms for the general case.
12 Searching the most probable state configuration: approximation algorithms for the general case. (cont'd)
13 The partition function and marginal probabilities: approximation algorithms for their estimation.
14 Parameter learning for Gibbs random fields
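
As an illustration of the inference tasks listed for lecture 03, the following is a minimal Python sketch of the forward algorithm for a discrete HMM. It assumes a model given by an initial distribution pi, a transition matrix A with A[i, j] = P(s_{t+1}=j | s_t=i), and an emission matrix B with B[i, k] = P(x_t=k | s_t=i); the function name and the toy parameters are illustrative and not taken from the course materials.

import numpy as np

def forward_log_likelihood(pi, A, B, observations):
    """Return log P(x_1, ..., x_T) via the scaled forward recursion."""
    alpha = pi * B[:, observations[0]]        # alpha_1(i) = pi_i * b_i(x_1)
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()                      # rescale to avoid numerical underflow
    for x in observations[1:]:
        alpha = (alpha @ A) * B[:, x]         # alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(x_t)
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# Toy example: a 2-state HMM with binary observations.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])
print(forward_log_likelihood(pi, A, B, [0, 1, 1, 0]))

The same recursion, run backwards as well, yields the marginal state probabilities needed for the EM algorithm of lecture 06.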