Download the updated kuimaze package kuimaze.zip (Updated 2023.03.21).
Your task will be to implement the functions find_policy_via_value_iteration(…) and find_policy_via_policy_iteration(…) with the following signatures:

find_policy_via_value_iteration(problem, discount_factor, epsilon)
find_policy_via_policy_iteration(problem, discount_factor)

where:

problem is an instance of kuimaze.MDPMaze,
discount_factor is a number from the interval (0,1),
epsilon is the maximum permitted error in the computed value of a state (the stopping criterion of value iteration).
The expected output is a dictionary whose keys are tuples (x,y) and whose values are the optimal actions (only for the accessible states; when the key is a terminal state, it is enough to return None).
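For illustration, the returned dictionary might look like this (the particular actions here are made up, only the shape matters):

{(0, 0): ACTION.RIGHT, (1, 0): ACTION.RIGHT, ..., (4, 2): None}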
You implement the methods in the file mdp_agent.py and upload it to the Upload system. There is no need to alter any other file.
You can look at mdp_sandbox.py in the downloaded kuimaze.zip package. It shows the basic work with MDPMaze and you can use it for inspiration.
Timeout: each run of value/policy iteration on a given problem instance is limited to at most 30 s.
The deadline is again in the Upload system.
Evaluation is divided into:

Automatic evaluation.

Code quality (1 point): you can follow PEP8, although we do not check all PEP8 demands. Most IDEs (certainly PyCharm) point out violations of PEP8. You can also read some other sources for inspiration about clean code (e.g., here) or about idiomatic Python (e.g., medium, python.net).
Description of the variable state: state is an instance of the kuimaze.State class; its coordinates are accessible as state.x and state.y.
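For instance, using the State class from the examples further below (a tiny illustration, assuming the coordinates behave as shown there):

>>> from kuimaze import State
>>> s = State(1, 0)
>>> s.x, s.y
(1, 0)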
For the communication with the MDPMaze environment you can use the following methods:
get_all_states(): returns all the accessible states (i.e., the tiles not blocked by walls). These states are instances of the State class.
is_terminal_state(state): returns True if the given state is a terminal state. (It does not differentiate between positively and negatively valued terminal states; it only tells whether the state is terminal at all.)
get_reward(state): returns the reward for the given state. The reward is obtained only when leaving the state, not when it is reached.
get_actions(state): for a given state, returns an enumeration of all possible actions; see the example in mdp_sandbox.py, or in the examples below. To get a list of possible actions, you can use list(get_actions(state)).
get_next_states_and_probs(state, action): for a given state and action, returns a list of (State, probability) pairs describing the possible successor states and the probability of ending up in each of them; e.g., [(State(x=1, y=0), 0.8), (State(x=2, y=0), 0.1), (State(x=0, y=0), 0.1)]. A backup built from these methods is sketched after this list.
visualise(dictlist=None): without a parameter it visualizes the plain maze. Otherwise it expects a list of dictionaries in the form {'x': x_coord, 'y': y_coord, 'value': val}. The value val can be either a scalar or a list/tuple with four elements. You can specifically visualize, e.g., the computed utilities or the policy:

env.visualise(get_visualisation_values(utils))
env.visualise(get_visualisation_values(policy))
The previously encountered render() and reset() methods are also available.
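To see how these methods compose, here is a sketch of the expected discounted value of one action (a Q-value), the basic building block of both value and policy iteration. The helper name q_value is ours, not part of kuimaze, and values is assumed to be a dictionary mapping each State to its current utility estimate:

def q_value(problem, state, action, values, discount_factor):
    # Expected discounted value of taking `action` in `state`.
    # Rewards are collected on leaving a state, so get_reward(state)
    # is added outside the expectation over successor states.
    expected = sum(prob * values[next_state]
                   for next_state, prob in problem.get_next_states_and_probs(state, action))
    return problem.get_reward(state) + discount_factor * expected

Value iteration then repeatedly sets the value of each non-terminal state to the maximum of q_value over get_actions(state); policy iteration evaluates a fixed policy with the same quantity and improves it greedily.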
Once again, you can look at mdp_sandbox.py to see how to use these methods, but write your code into mdp_agent.py.
Create a simple maze map:
>>> EMPTY = (255, 255, 255)
>>> WALL = (0, 0, 0)
>>> GOAL = (255, 0, 0)
>>> START = (0, 0, 255)
>>> MAP1 = ((EMPTY, START, EMPTY, EMPTY, EMPTY),
...         (EMPTY, WALL, WALL, WALL, EMPTY),
...         (EMPTY, EMPTY, EMPTY, WALL, GOAL))
Import the kuimaze package and the State class:
>>> import kuimaze
>>> from kuimaze import State
Creating an environment, deterministic at first:
>>> env = kuimaze.MDPMaze(MAP1)
If we want to create a non-deterministic environment (and in the case of an MDP we usually do), we need to specify the transition probabilities:

>>> env2 = kuimaze.MDPMaze(MAP1, probs=(0.8, 0.1, 0.1, 0.0))

(Judging from the example output of get_next_states_and_probs below, the four numbers are the probabilities of moving in the intended direction, deflecting to either side, and going backwards.)
List of all valid states in the environment:
>>> env.get_all_states()
[(x=0, y=0), (x=0, y=1), (x=0, y=2), (x=1, y=0), (x=1, y=2), (x=2, y=0), (x=2, y=2), (x=3, y=0), (x=4, y=0), (x=4, y=1), (x=4, y=2)]
Determining if a state is terminal:
>>> env.is_terminal_state(State(0, 0)), env.is_terminal_state(State(4, 2))
(False, True)
What rewards are associated with each state? Note that rewards are obtained when leaving the state.
>>> env.get_reward(State(1,0)), env.get_reward(State(4,2))
(-0.04, 1.0)
What actions are allowed in the state? In our environment, all 4 actions are always allowed, but if the agent hits a wall, it stays in the current state.
>>> actions = tuple(env.get_actions(State(1,0)))
>>> actions
(<ACTION.UP: 0>, <ACTION.RIGHT: 1>, <ACTION.DOWN: 2>, <ACTION.LEFT: 3>)
To which states, and with which probabilities, can I get if I perform the given action in the current state?
>>> env.get_next_states_and_probs(State(1,0), actions[0])
[((x=1, y=0), 1), ((x=2, y=0), 0), ((x=0, y=0), 0)]

>>> env2.get_next_states_and_probs(State(1,0), actions[0])
[((x=1, y=0), 0.8), ((x=2, y=0), 0.1), ((x=0, y=0), 0.1)]
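Putting it all together, a minimal value-iteration sketch against this interface could look as follows. This is only an illustration under the assumptions stated above (rewards collected on leaving a state, terminal states mapped to None, State instances usable as dictionary keys), not the reference solution, and it uses the simplest possible stopping test:

def find_policy_via_value_iteration(problem, discount_factor, epsilon):
    # Sketch: repeat Bellman backups until the largest change drops below epsilon.
    states = problem.get_all_states()
    values = {s: 0.0 for s in states}              # initial utility estimates
    while True:
        delta = 0.0
        for s in states:
            if problem.is_terminal_state(s):
                new_value = problem.get_reward(s)  # no action leaves a terminal state
            else:
                new_value = problem.get_reward(s) + discount_factor * max(
                    sum(p * values[s2]
                        for s2, p in problem.get_next_states_and_probs(s, a))
                    for a in problem.get_actions(s))
            delta = max(delta, abs(new_value - values[s]))
            values[s] = new_value
        if delta < epsilon:                        # simplest possible stopping rule
            break
    policy = {}
    for s in states:
        if problem.is_terminal_state(s):
            policy[(s.x, s.y)] = None              # terminal states map to None
        else:
            policy[(s.x, s.y)] = max(
                problem.get_actions(s),
                key=lambda a: sum(p * values[s2]
                                  for s2, p in problem.get_next_states_and_probs(s, a)))
    return policy

find_policy_via_policy_iteration(problem, discount_factor) follows the same pattern: evaluate the current policy by iterating its fixed-action backups, improve it greedily with the same expected-value computation, and stop once the policy no longer changes.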