====== Markov decision process ======

Download the updated kuimaze package {{ :courses:b3b33kui:cviceni:sekvencni_rozhodovani:kuimaze_mdp.zip |}}.

Your task is to implement the functions ''find_policy_via_value_iteration(...)'' and ''find_policy_via_policy_iteration(...)'' with the following **inputs**:

  * ''find_policy_via_value_iteration(problem, discount_factor, epsilon)''
  * ''find_policy_via_policy_iteration(problem, discount_factor)''

where:

  * ''problem'' is the environment, an object of the type ''kuimaze.MDPMaze''
  * ''discount_factor'' is a number from the range ''(0,1)''
  * ''epsilon'' is the maximum permitted error of the value of each state (used only by value iteration)

The expected **output** is a dictionary whose keys are tuples ''(x,y)'' and whose values are the optimal actions (only for the accessible states; if the key is a terminal state, it is enough to return ''None'').

Implement the methods in the file ''mdp_agent.py'' and upload it to the [[https://cw.felk.cvut.cz/upload/|Upload system]]. **There is no need to alter any other file.**

You can look at ''mdp_sandbox.py'' in the downloaded kuimaze package. It shows basic work with ''MDPMaze'' and you can use it for inspiration.

Timeout: each run of value/policy iteration on a given problem instance is limited to at most 30 s.

====== Points and deadline ======

The deadline is again in the [[https://cw.felk.cvut.cz/upload/|Upload system]].

Evaluation is divided into:

  * Automatic evaluation that checks the correctness of the policy (matching the correct actions against yours over all states) on a given sample of environments, possibly checking different discount factors.
  * Manual evaluation of clean code.

^ Evaluated criteria ^ min ^ max ^ note ^
| Algorithm quality - value iteration | 0 | 3 | Evaluated by the AE. |
| Algorithm quality - policy iteration | 0 | 3 | Evaluated by the AE. |
| Code quality | 0 | 2 | Comments, structure, cleanliness, ... |
| Pass bonus | 0 | 2 | To compensate for missed exercises. |

Automatic evaluation:

  * match with policy 95% and more (average over n tested mazes): 3 points
  * match with policy 90%-95%: 2.5 points
  * match with policy 85%-90%: 2 points
  * match with policy 80%-85%: 1.5 points
  * match with policy 75%-80%: 1 point
  * match with policy 70%-75%: 0.5 points
  * less than 70% match: 0 points

Code quality (2 points):

  * suitable comments, or code clear enough to understand without them
  * reasonable length of methods/functions
  * properly named functions (verbs) and variables (nouns)
  * no repeated or copy-pasted code
  * reasonable processing space/time
  * consistent naming convention used for all methods and variables
  * consistent and easily readable structure
  * ...

You can follow [[https://www.python.org/dev/peps/pep-0008/|PEP8]], although we do not check all PEP8 demands. Most IDEs (certainly PyCharm) point out violations of PEP8. You can also read other sources for inspiration about clean code (e.g., [[https://github.com/zedr/clean-code-python|here]]) or about idiomatic python (e.g., [[https://medium.com/the-andela-way/idiomatic-python-coding-the-smart-way-cc560fa5f1d6|medium]], [[http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html|python.net]]).

===== Basic methods =====

For communication with the ''MDPMaze'' environment you can use the following methods:

''get_all_states()'': returns all the accessible states (i.e. without wall-separated tiles). These states carry rewards (''state.x'', ''state.y'', ''state.reward'').

''is_terminal_state(state)'': ''True'' if the given state is a terminal state. (It does not differentiate between positively and negatively valued terminal states; it only says whether the state is terminal at all.)

''get_actions(state)'': for a given state it returns a list of possible actions.
The result is an [[https://docs.python.org/3.4/library/enum.html|enum]]; see an example in ''mdp_sandbox.py''.

''get_state_reward(state)'': usually not necessary; the input is the named tuple ''(state.x, state.y)''.

''get_next_states_and_probs(state, action)'': for a given state and action it returns a list of tuples ''((state.x, state.y), probability)''; e.g. ''[((2,3), 0.3), ((3,3), 0.2)]''.

''state'' is a [[https://docs.python.org/3/library/collections.html#collections.namedtuple|named tuple]], either ''(state.x, state.y, state.reward)'' or ''(state.x, state.y)''.

''visualise(dictlist=None)'': without a parameter it visualizes the usual maze. Otherwise it expects a list of dictionaries in the form ''{'x': x_coord, 'y': y_coord, 'value': val}''. The value ''val'' can be either a scalar value or a list/tuple with four members.

The previously encountered ''render()'' and ''reset()'' methods are also available.

Once again, you can look at ''mdp_sandbox.py'' to see how to use these methods, but write your code into ''mdp_agent.py''.
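To illustrate how these methods fit together, here is a minimal value-iteration sketch against a toy stand-in for the interface described above. The ''ToyMaze'' class is an assumption made for this example only (it is **not** part of kuimaze): a deterministic 1x3 corridor with string actions instead of an enum. Treat it as a shape of the solution, not the solution itself — the real environment has stochastic transitions and a different state layout.

```python
from collections import namedtuple

State = namedtuple('State', ['x', 'y', 'reward'])


class ToyMaze:
    """Hypothetical stand-in mimicking the MDPMaze methods above (NOT the
    real kuimaze class): a 1x3 corridor with a terminal goal at (2, 0)."""
    def __init__(self):
        self._states = [State(0, 0, -0.04), State(1, 0, -0.04), State(2, 0, 1.0)]

    def get_all_states(self):
        return self._states

    def is_terminal_state(self, state):
        return (state.x, state.y) == (2, 0)

    def get_actions(self, state):
        return ['LEFT', 'RIGHT']  # the real environment returns an enum

    def get_next_states_and_probs(self, state, action):
        # Deterministic move, clipped to the corridor; the real maze is stochastic.
        dx = -1 if action == 'LEFT' else 1
        nx = min(max(state.x + dx, 0), 2)
        return [((nx, state.y), 1.0)]


def find_policy_via_value_iteration(problem, discount_factor, epsilon):
    """Value-iteration sketch; returns {(x, y): best_action or None}."""
    states = problem.get_all_states()
    rewards = {(s.x, s.y): s.reward for s in states}
    values = {(s.x, s.y): 0.0 for s in states}

    while True:
        delta = 0.0  # largest value change in this sweep
        for s in states:
            if problem.is_terminal_state(s):
                values[(s.x, s.y)] = rewards[(s.x, s.y)]
                continue
            # Bellman backup: best expected value over all actions.
            best = max(
                sum(p * values[ns] for ns, p in problem.get_next_states_and_probs(s, a))
                for a in problem.get_actions(s)
            )
            new_v = rewards[(s.x, s.y)] + discount_factor * best
            delta = max(delta, abs(new_v - values[(s.x, s.y)]))
            values[(s.x, s.y)] = new_v
        if delta < epsilon:
            break

    # Extract the greedy policy; terminal states map to None as required.
    policy = {}
    for s in states:
        if problem.is_terminal_state(s):
            policy[(s.x, s.y)] = None
            continue
        policy[(s.x, s.y)] = max(
            problem.get_actions(s),
            key=lambda a: sum(p * values[ns]
                              for ns, p in problem.get_next_states_and_probs(s, a)),
        )
    return policy


policy = find_policy_via_value_iteration(ToyMaze(), 0.9, 1e-4)
print(policy)  # {(0, 0): 'RIGHT', (1, 0): 'RIGHT', (2, 0): None}
```

Both states walk toward the goal, and the terminal state maps to ''None'', matching the required output format. Note that the convergence test here is a simple ''delta < epsilon'' check; depending on how strictly the per-state error bound is interpreted, you may want a tighter threshold derived from the discount factor.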