Search
Consider the following MDP. Assume that the reward is of the form $r(e,y)$, i.e., $r: E \times Y \to \mathbb{R}$. Set $\gamma = \frac{1}{2}$. Suppose that you have observed the following sequence of states, actions, and rewards: $$ e_1, \mathrm{switch}, e_2, \mathrm{stay}, +1, e_2, \mathrm{stay}, +1, e_2, \mathrm{switch}, e_1, \mathrm{stay}, e_1, \mathrm{switch}, e_1, \mathrm{switch}, e_1, \mathrm{stay}, e_1, \mathrm{switch}, e_2, \mathrm{stay}, +1, {\color{lightgray} e_2} $$
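For reference, the discounted return from time step $t$ (written here in the standard notation $G_t$) is the discounted sum of subsequent rewards; rewards not shown in the sequence above are presumably zero, which is an assumption: $$ G_t = r_{t+1} + \gamma\, r_{t+2} + \gamma^2 r_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}, \qquad \gamma = \tfrac{1}{2}. $$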
Decide whether the following statement is true or false: If a policy $\pi$ is greedy with respect to its own value function $U^{\pi}$, then this policy is an optimal policy.
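As a reminder of the terminology (using the section's notation with states $e$ and actions $y$, writing $P(e' \mid e, y)$ for the transition model and keeping the reward form $r(e,y)$ from Problem 1, both of which are assumptions here): a policy $\pi$ is greedy with respect to a value function $U$ if, for every state $e$, $$ \pi(e) \in \arg\max_{y \in Y} \Bigl[\, r(e,y) + \gamma \sum_{e'} P(e' \mid e, y)\, U(e') \,\Bigr]. $$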
Do the following exploration/exploitation schemes fulfill the 'infinite exploration' and 'greedy in the limit' conditions? Which of them lead to convergence of the $Q$-values in $Q$-learning, and which to convergence of the $Q$-values in SARSA? Does anything change if we are interested in the convergence of the policy instead? Here $N_{e,y}$ denotes the number of times action $y$ has been taken in state $e$, and $N_e$ is defined analogously.
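One standard scheme of this kind (a generic illustration, not necessarily one of the schemes intended in this problem) is $\varepsilon$-greedy selection with $\varepsilon = 1/N_e$. A minimal Python sketch, assuming a tabular $Q$ stored as a 2D array indexed by (state, action) and a visit count $N_e$ maintained by the caller:

```python
import numpy as np

def glie_epsilon_greedy(Q, state, n_visits, rng=None):
    """Epsilon-greedy action selection with epsilon = 1 / N_e.

    Q        -- 2D array of Q-values, Q[state, action]
    state    -- current state index
    n_visits -- number of visits to `state` so far (N_e in the text), >= 1
    """
    rng = rng if rng is not None else np.random.default_rng()
    epsilon = 1.0 / n_visits                 # exploration probability decays with visits
    n_actions = Q.shape[1]
    if rng.random() < epsilon:               # explore: pick an action uniformly at random
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))          # exploit: pick a greedy action
```

With this choice every action is still selected with positive probability at every visit, while the behaviour becomes greedy as $N_e \to \infty$.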
Consider the following MDP with $\gamma=0.8$, $r(5) = 100$, and $r(\cdot) = 0$ otherwise. The initial matrix of $Q$-values is $$ Q(e,y) = \begin{bmatrix} - & - & - & - & 0 & - \\ - & - & - & 0 & - & 0 \\ - & - & - & 0 & - & - \\ - & 0 & 0 & - & 0 & - \\ 0 & - & - & 0 & - & 0 \\ - & 0 & - & - & 0 & 0 \end{bmatrix}. $$ Consider the path $1 \to 5 \to 1 \to 3$ and a constant learning rate $\alpha = 0.1$. Show how the $Q$-values change after this agent-environment interaction.
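For reference, the $Q$-learning update applied at each step of the path is $Q(e,y) \leftarrow Q(e,y) + \alpha\,[\,r + \gamma \max_{y'} Q(e',y') - Q(e,y)\,]$. A minimal Python sketch of a single update, assuming 0-based state/action indices and that the '$-$' entries above are encoded as np.nan (both assumptions):

```python
import numpy as np

def q_learning_step(Q, e, y, reward, e_next, alpha=0.1, gamma=0.8):
    """One tabular Q-learning update on Q[e, y].

    Q       -- 2D float array of Q-values; np.nan encodes unavailable actions ('-' above)
    e, y    -- current state and the action taken
    reward  -- immediate reward observed for this transition
    e_next  -- resulting state
    """
    best_next = np.nanmax(Q[e_next])                          # max over available actions only
    Q[e, y] += alpha * (reward + gamma * best_next - Q[e, y])
    return Q
```

Using np.nanmax keeps the unavailable actions out of the bootstrapped maximum.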
Problems 1, 2, and 4 were adapted from Richard Sutton's CMPUT 609 course. You may find related materials at http://www.incompleteideas.net/book/the-book-2nd.html.