
Syllabus

Lect. Topic Pdf
01 Markov chains, equivalent representations, ergodicity, convergence theorem for homogeneous Markov chains.
02 Hidden Markov Models on chains for speech recognition: pre-processing, dynamic time warping, HMMs.
03 Inference tasks for Hidden Markov Models (see the illustrative sketch below the syllabus)
04 HMMs as exponential families, supervised learning: maximum likelihood estimator
05 Supervised learning: Empirical risk minimisation for HMMs; Unsupervised learning: EM algorithm for HMMs
06 Supervised learning: Empirical risk minimisation for HMMs; Unsupervised learning: EM algorithm for HMMs (cont'd)
07 Extensions of Markov models and HMMs: acyclic graphs, uncountable feature and state spaces
08 Markov Random Fields - Markov models on general graphs. Equivalence to Gibbs models
09 Searching the most probable state configuration: transforming the task into a MinCut-problem for the submodular case.
10 Searching the most probable state configuration: approximation algorithms for the general case.
11 Searching the most probable state configuration: approximation algorithms for the general case. (cont'd)
12 The partition function and marginal probabilities: approximation algorithms for their estimation.
13 Parameter learning for Gibbs random fields
14 Reserve
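
For orientation on the inference tasks of lecture 03, the following is a minimal sketch (not part of the course materials) of Viterbi decoding, the standard dynamic-programming algorithm for finding the most probable state sequence of a chain HMM. The notation used here (initial distribution pi, transition matrix A, emission matrix B) is an assumption for this example, not taken from the lecture slides.

<code python>
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable state sequence of an HMM for a given observation sequence.

    obs : sequence of observation indices, length T
    pi  : (N,) initial state distribution
    A   : (N, N) transition matrix, A[i, j] = P(next state j | state i)
    B   : (N, M) emission matrix,  B[i, k] = P(observation k | state i)
    """
    N, T = len(pi), len(obs)
    # work in the log domain to avoid underflow on long sequences
    log_delta = np.log(pi) + np.log(B[:, obs[0]])
    backptr = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(A)        # (from-state, to-state)
        backptr[t] = np.argmax(scores, axis=0)         # best predecessor per state
        log_delta = scores[backptr[t], np.arange(N)] + np.log(B[:, obs[t]])
    # backtrack from the best final state
    states = [int(np.argmax(log_delta))]
    for t in range(T - 1, 0, -1):
        states.append(int(backptr[t, states[-1]]))
    return list(reversed(states))

# toy example with 2 states and 3 observation symbols
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))
</code>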