Warning: This page is located in the archive.

Syllabus

Lect.  Topic
01 Markov chains, equivalent representations, ergodicity, convergence theorem for homogeneous Markov chains (a small numerical sketch follows the syllabus).
02 Hidden Markov Models on chains for speech recognition: pre-processing, dynamic time warping, HMMs.
03 Inference tasks for Hidden Markov Models (see the decoding sketch after this list).
04 HMMs as exponential families; supervised learning: maximum likelihood estimator.
05 Supervised learning: empirical risk minimisation for HMMs; unsupervised learning: EM algorithm for HMMs.
06 Extensions of Markov models and HMMs: acyclic graphs, uncountable feature and state spaces (additional reading).
07 Markov Random Fields: Markov models on general graphs, and their equivalence to Gibbs models.
08 Searching the most probable state configuration: transforming the task into a MinCut problem in the submodular case.
09 Searching the most probable state configuration: approximation algorithms for the general case.
10 The partition function and marginal probabilities: approximation algorithms for their estimation.
11 Parameter learning for Gibbs random fields.
12 Q&A
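
As a companion to lecture 01, here is a minimal numerical sketch (Python/NumPy; the 3-state transition matrix is a made-up example, not course material) of the convergence theorem for homogeneous Markov chains: for an ergodic chain, repeated application of the transition matrix drives any initial distribution towards the unique stationary distribution.

```python
import numpy as np

# Hypothetical 3-state transition matrix of an ergodic chain (rows sum to 1).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

mu = np.array([1.0, 0.0, 0.0])   # arbitrary initial distribution
for _ in range(200):             # mu_{t+1} = mu_t P, i.e. running the chain forward
    mu = mu @ P

# For an ergodic chain, mu approaches the unique stationary distribution pi
# with pi = pi P, independently of the starting distribution.
print(mu)
print(np.allclose(mu, mu @ P))   # True once convergence has been reached
```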
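For the inference task of lectures 03 and 08 (finding the most probable hidden-state sequence), the following is a hedged sketch of Viterbi decoding in log space; the interface, variable names, and the toy parameters below are assumptions for illustration, not the course's notation.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable state sequence of a discrete HMM (dynamic programming in log space).

    obs : list of observation indices, length T
    pi  : (N,) initial state distribution
    A   : (N, N) transition matrix, A[i, j] = P(next state j | current state i)
    B   : (N, M) emission matrix,   B[i, k] = P(observation k | state i)
    """
    N, T = len(pi), len(obs)
    log_delta = np.log(pi) + np.log(B[:, obs[0]])   # best log-probability ending in each state
    backptr = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(A)     # scores[i, j]: best path via state i into state j
        backptr[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    # Backtrack from the best final state.
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy usage with 2 hidden states and 3 observation symbols (all numbers made up).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))
```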