
Schedule

Lectures are given every Monday, 14:30-16:00, in KN:E-127.

Syllabus

Lect. Topic
01 Markov chains, equivalent representations, ergodicity, convergence theorem for homogeneous Markov chains
02 Hidden Markov Models on chains for speech recognition: pre-processing, dynamic time warping, HMMs.
03 Recognising the generating model: calculating the emission probability for a measured signal sequence (a forward-algorithm sketch follows the syllabus).
04 Recognising the most probable sequence of hidden states, and the sequence of most probable states (a Viterbi sketch follows the syllabus).
05 Possible formulations for supervised and unsupervised learning tasks (parameter estimation).
06 Supervised and unsupervised learning according to the Maximum-Likelihood principle; the Expectation Maximisation algorithm.
07 Hidden Markov models on acyclic graphs (trees). Estimating the graph structure.
08 Hidden Markov models with continuous state spaces. Kalman filters and particle filters (a scalar Kalman step is sketched below).
09 Markov Random Fields: Markov models on general graphs. Equivalence to Gibbs models; examples from Computer Vision.
10 Relations to Constraint Satisfaction Problems and Energy Minimisation tasks, unified formulation, semi-rings.
11 Searching for the most probable state configuration: transforming the task into a MinCut problem in the submodular case.
12 Searching for the most probable state configuration: approximate algorithms for the general case.
13 The partition function and marginal probabilities: approximate algorithms for their estimation.
14 Duality between marginal probabilities and Gibbs potentials. The Expectation Maximisation algorithm for parameter learning.
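
To make the chain-structured topics concrete, here is a minimal Python/NumPy sketch of the forward algorithm from lecture 03, which computes the probability that an HMM emits a measured observation sequence. The function name and the toy transition/emission matrices are illustrative assumptions, not part of the course materials.

  import numpy as np

  def forward_likelihood(A, B, pi, obs):
      # Forward algorithm: P(obs | model) for an HMM with transition
      # matrix A (N x N), emission matrix B (N x M) and initial
      # distribution pi (N,). All names here are illustrative.
      alpha = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * b_i(o_1)
      for o in obs[1:]:                  # alpha_{t+1}(j) = sum_i alpha_t(i) a_ij b_j(o)
          alpha = (alpha @ A) * B[:, o]
      return alpha.sum()                 # P(o_1 .. o_T) = sum_i alpha_T(i)

  # Toy model: 2 hidden states, 3 observation symbols.
  A  = np.array([[0.7, 0.3], [0.4, 0.6]])
  B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
  pi = np.array([0.6, 0.4])
  print(forward_likelihood(A, B, pi, [0, 1, 2]))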
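Lecture 04 distinguishes the most probable sequence of hidden states from the sequence of most probable states; the former is found by the Viterbi algorithm, sketched below in the log-domain to avoid numerical underflow. Again, the function name and interface are our own illustrative choices.

  import numpy as np

  def viterbi(A, B, pi, obs):
      # Most probable hidden state sequence of an HMM (log-domain).
      T, N = len(obs), len(pi)
      logA = np.log(A)
      delta = np.log(pi) + np.log(B[:, obs[0]])
      back = np.zeros((T, N), dtype=int)        # back-pointers
      for t in range(1, T):
          scores = delta[:, None] + logA        # scores[i, j] = delta(i) + log a_ij
          back[t] = scores.argmax(axis=0)       # best predecessor of each state j
          delta = scores.max(axis=0) + np.log(B[:, obs[t]])
      path = [int(delta.argmax())]              # best final state
      for t in range(T - 1, 0, -1):             # follow back-pointers
          path.append(int(back[t][path[-1]]))
      return path[::-1]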
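For lecture 08, a single predict/update step of a scalar Kalman filter fits in a few lines. This sketch assumes the standard linear-Gaussian model x_t = a*x_{t-1} + N(0, q), y_t = c*x_t + N(0, r); all parameter names are illustrative.

  def kalman_step(mu, var, a, q, c, r, y):
      # One predict/update step of a scalar Kalman filter for the
      # model x_t = a*x_{t-1} + N(0, q), y_t = c*x_t + N(0, r).
      mu_pred, var_pred = a * mu, a * a * var + q   # predict
      k = var_pred * c / (c * c * var_pred + r)     # Kalman gain
      mu_new = mu_pred + k * (y - c * mu_pred)      # correct with measurement y
      var_new = (1.0 - k * c) * var_pred
      return mu_new, var_new

  # e.g. posterior after observing y = 0.8 from the prior N(0, 1):
  print(kalman_step(0.0, 1.0, a=1.0, q=0.1, c=1.0, r=0.5, y=0.8))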