Warning

This page is located in the archive.
Go to the latest version of the course pages.
Go to the latest version of this page.

Lectures are given in person. Videos will be provided, but not necessarily directly from the lectures (the form is at the discretion of the individual lecturers).

In the lecture descriptions below, we refer to this **supplementary course material**:

**Relevant**

- RL: *R. S. Sutton, A. G. Barto: Reinforcement Learning: An Introduction. MIT Press, 2018.*
- NLP: *D. Jurafsky & J. H. Martin: Speech and Language Processing, 3rd edition draft.*
- COLT: *M. J. Kearns, U. Vazirani: An Introduction to Computational Learning Theory. MIT Press, 1994.*

The RL and NLP books are available online.

You are strongly discouraged from using this course's materials from previous years, as you would run into confusion.

The RL part of the course is heavily based on Prof. Emma Brunskill's RL course. The relevant lectures from her course are: Lecture 1, Lecture 2, Lecture 3, Lecture 4, Lecture 5, Lecture 6, and Lecture 11.

There are nice materials by Volodymyr Kuleshov and Stefano Ermon on probabilistic graphical models (for the Bayesian networks part of the course): https://ermongroup.github.io/cs228-notes/. The relevant chapters are: https://ermongroup.github.io/cs228-notes/representation/directed/, https://ermongroup.github.io/cs228-notes/inference/ve/, https://ermongroup.github.io/cs228-notes/inference/sampling/.

The NLP part of the course is heavily based on the NLP course(s) by Dan Jurafsky (Stanford), following his book Speech and Language Processing (see NLP above), particularly its 3rd-edition draft (the 2nd edition is insufficient!). The relevant chapters for us are 3, 6, 7, and 9. There are also some nice related materials and videos.

For the COLT part: besides the monograph by Kearns et al. linked above, the Wikipedia page has pointers to two COLT survey papers (by Angluin and Haussler) that are relevant to the PAC part. There are also external courses with lecture material available; for example, 8803 Machine Learning Theory at Georgia Tech covers all COLT topics of SMU (with subtle differences in the algorithms and proofs). Video footage of the lectures is available here.

**Slides:** lecture_1.pdf

**Videos:** Markov Processes, Markov Reward Processes, Markov Decision Processes. **Coming soon:** Proofs
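As a small illustration of the Markov Reward Process topic above: the state values of a finite MRP satisfy the Bellman equation v = R + γPv, which can be solved in closed form. The two-state chain below is a made-up toy example, not one from the lectures.

```python
import numpy as np

# Hypothetical 2-state Markov Reward Process (illustrative only):
# P[s, s'] is the transition probability, R[s] the expected reward in s.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
R = np.array([1.0, -1.0])
gamma = 0.9  # discount factor

# Bellman equation for an MRP: v = R + gamma * P v,
# i.e. (I - gamma * P) v = R, solved as a linear system.
v = np.linalg.solve(np.eye(2) - gamma * P, R)
print(v)
```

The same fixed point can also be reached by iterating v ← R + γPv, which is how the lecture-style iterative methods approach it for larger state spaces.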

**Slides:** lecture_2.pdf

**Videos:** Introduction, Statistical Properties of Estimators, Monte Carlo Value Evaluation. **Coming soon:** Temporal Difference Learning
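A minimal sketch of the Monte Carlo value evaluation topic listed above: first-visit Monte Carlo estimates V(s) as the average of the discounted returns observed after the first visit to s in each episode. The function name and the toy episodes are illustrative assumptions, not course code.

```python
from collections import defaultdict

def mc_evaluate(episodes, gamma=0.9):
    """First-visit Monte Carlo policy evaluation.

    episodes: list of trajectories, each a list of (state, reward) pairs.
    Returns a dict mapping state -> estimated value.
    """
    returns = defaultdict(list)
    for episode in episodes:
        # Compute the discounted return G_t at every step, backwards.
        G = 0.0
        G_at = [0.0] * len(episode)
        for t in reversed(range(len(episode))):
            _, reward = episode[t]
            G = reward + gamma * G
            G_at[t] = G
        # First-visit rule: record the return only at a state's
        # first occurrence within the episode.
        seen = set()
        for t, (state, _) in enumerate(episode):
            if state not in seen:
                seen.add(state)
                returns[state].append(G_at[t])
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}

# Two tiny made-up episodes over states "A" and "B":
episodes = [[("A", 1.0), ("B", 0.0), ("A", 2.0)],
            [("B", -1.0), ("A", 1.0)]]
print(mc_evaluate(episodes))
```

Unlike the temporal difference methods covered later, this estimator waits for complete episodes before updating, averaging full returns rather than bootstrapping from current value estimates.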

**Slides:** lecture_4.pdf

**Slides:** lecture_5.pdf

courses/smu/lectures.txt · Last modified: 2022/06/05 19:51 by zelezny