# FF-Replan Tutorial: Robot Emil

Show your successful implementation to your TA in class for bonus points.

• Implement an agent that controls robot Emil (visualized as a blue filled circle) in an environment with obstacles (visualized as gray boxes) and stochastic action execution, so that it reaches the cell with the gold (visualized as a yellow filled circle).
• Use the FF-replan strategy with the most-likely-effect determinization: plan the robot's path assuming that every action yields its most likely outcome, then execute the plan and monitor whether the action execution behaves as planned. If an action results in a different outcome than planned, replan from the current state to the goal. Repeat until the goal is reached.
• Instead of using FF, you can implement Dijkstra's algorithm or the A* algorithm to find a plan in the determinized environment.
• Implement your code in RobotEmilAgent.java, in the method RobotEmilAgent.nextStep(x, y, map).
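The replan loop above can be sketched in a few dozen lines. This is a minimal illustrative sketch, not the assignment's API: the class name, the `boolean[][]` obstacle map, and the explicit goal coordinates are stand-ins for the provided `map` and `CellContent` types, and the real logic belongs in `RobotEmilAgent.nextStep(x, y, map)`.

```java
import java.util.*;

/**
 * Minimal sketch of FF-replan with most-likely-effect determinization.
 * All names here are illustrative; the real agent goes into
 * RobotEmilAgent.nextStep(x, y, map) and uses the provided classes.
 */
public class FFReplanSketch {
    // Determinized actions: assume each action always yields its 80% outcome.
    // Order: NORTH, SOUTH, EAST, WEST (same deltas as in the action list).
    static final int[][] MOVES = {{0, -1}, {0, 1}, {1, 0}, {-1, 0}};

    /** BFS on the determinized grid (uniform costs, so equivalent to Dijkstra). */
    public static List<int[]> plan(boolean[][] obstacle, int sx, int sy, int gx, int gy) {
        int w = obstacle.length, h = obstacle[0].length;
        int[][] enteredBy = new int[w][h];          // move used to enter each cell
        for (int[] row : enteredBy) Arrays.fill(row, -1);
        enteredBy[sx][sy] = 4;                      // sentinel for the start cell
        Deque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{sx, sy});
        while (!queue.isEmpty()) {
            int[] c = queue.poll();
            if (c[0] == gx && c[1] == gy) break;
            for (int m = 0; m < 4; m++) {
                int nx = c[0] + MOVES[m][0], ny = c[1] + MOVES[m][1];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (obstacle[nx][ny] || enteredBy[nx][ny] != -1) continue;
                enteredBy[nx][ny] = m;
                queue.add(new int[]{nx, ny});
            }
        }
        if (enteredBy[gx][gy] == -1) return null;   // goal unreachable
        LinkedList<int[]> moves = new LinkedList<>();
        for (int x = gx, y = gy; !(x == sx && y == sy); ) {
            int m = enteredBy[x][y];
            moves.addFirst(MOVES[m]);
            x -= MOVES[m][0];
            y -= MOVES[m][1];
        }
        return moves;
    }

    // FF-replan loop: replan whenever the observed position deviates from the
    // position the current plan predicted (assumes the goal stays reachable).
    static List<int[]> currentPlan = null;
    static int expX, expY;

    public static int[] nextStep(int x, int y, boolean[][] obstacle, int gx, int gy) {
        if (currentPlan == null || currentPlan.isEmpty() || x != expX || y != expY) {
            currentPlan = plan(obstacle, x, y, gx, gy); // replan from current state
        }
        int[] move = currentPlan.remove(0);
        expX = x + move[0];
        expY = y + move[1];
        return move;
    }
}
```

The deviation check is the whole monitoring step: after each action the agent compares its observed position with the one the plan predicted, and only replans on a mismatch, so in the common 80% case the precomputed plan is simply consumed step by step.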

## Evaluation

• Run RobotEmilCreator to simulate the execution of the robot. It runs 10 simulations with different random seeds; you need to successfully reach the goal in all of them to pass.
• The simulation finishes as unsuccessful after 200 steps.

## Environment

• Robot starts at (0, 0)
• Robot can execute following actions with stochastic effects (class Action):
• NORTH (0, -1) – Actual effect: 80% NORTH, 10% EAST, 10% WEST
• SOUTH (0, +1) – Actual effect: 80% SOUTH, 10% EAST, 10% WEST
• EAST (+1, 0) – Actual effect: 80% EAST, 10% NORTH, 10% SOUTH
• WEST (-1, 0) – Actual effect: 80% WEST, 10% NORTH, 10% SOUTH
• The environment is a matrix 20×20, where the first index represents columns (x-coordinate) and the second index represents rows (y-coordinate). The columns (rows) are indexed starting from 0, i.e. we have columns (rows) 0,1,…,19.
• Each cell can contain (class CellContent):
• EMPTY
• OBSTACLE
• GOLD
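The action table above can be modeled directly: each action has an 80% chance of its intended effect and a 10% chance of each perpendicular slip. A minimal sketch follows; the enum `Act` and the method names are illustrative stand-ins for the provided `Action` class, not its real API.

```java
import java.util.*;

/** Stochastic action model: 80% intended effect, 10% each perpendicular slip. */
public class EmilActions {
    public enum Act {
        NORTH(0, -1), SOUTH(0, 1), EAST(1, 0), WEST(-1, 0);
        public final int dx, dy;
        Act(int dx, int dy) { this.dx = dx; this.dy = dy; }
    }

    /** The two slip directions are perpendicular to the intended one. */
    static Act[] slips(Act a) {
        return (a == Act.NORTH || a == Act.SOUTH)
                ? new Act[]{Act.EAST, Act.WEST}
                : new Act[]{Act.NORTH, Act.SOUTH};
    }

    /** Sample the actual effect of executing {@code intended}. */
    public static Act sample(Act intended, Random rng) {
        double r = rng.nextDouble();
        if (r < 0.8) return intended;   // most likely outcome
        Act[] s = slips(intended);
        return r < 0.9 ? s[0] : s[1];   // 10% + 10% perpendicular slips
    }
}
```

Under most-likely-effect determinization, the planner simply replaces `sample` with the identity: every action is assumed to yield its intended `(dx, dy)` effect.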

## Robust replanning

Download the slightly modified environment. In this environment, using the planning algorithm from the previous task, implement iterative planning from plan failure nodes, the basic step of Robust FF:

1. Run the planner to find a plan in the determinized environment. Save this plan as a partial policy.
2. Find the failure nodes of the current partial policy.
3. Create plans from the failure nodes found in the previous step.
4. Add the plans from the previous step to the partial policy. Go back to step 2.
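One pass of steps 2-4 can be sketched on a toy grid as follows. Everything here is illustrative: the policy is a plain map from grid cell to move index, a failure node is a slip-reachable state the policy does not yet cover, and a slip that would leave the grid or enter an obstacle is assumed to leave the robot in place (and so needs no new entry) — check the simulator for the actual rule. The assignment's PlanFailuere/resetAgent() hooks are not modeled.

```java
import java.util.*;

/** Toy sketch of one Robust-FF expansion pass (illustrative names throughout). */
public class RobustFFSketch {
    static final int[][] MOVES = {{0, -1}, {0, 1}, {1, 0}, {-1, 0}};  // N, S, E, W

    static long key(int x, int y) { return x * 1000L + y; }

    /** Shortest path on the determinized grid as a list of cells, start..goal. */
    static List<int[]> bfs(boolean[][] obs, int sx, int sy, int gx, int gy) {
        int w = obs.length, h = obs[0].length;
        int[][] from = new int[w][h];
        for (int[] r : from) Arrays.fill(r, -1);
        from[sx][sy] = 4;                        // sentinel for the start cell
        Deque<int[]> q = new ArrayDeque<>();
        q.add(new int[]{sx, sy});
        while (!q.isEmpty()) {
            int[] c = q.poll();
            for (int m = 0; m < 4; m++) {
                int nx = c[0] + MOVES[m][0], ny = c[1] + MOVES[m][1];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (obs[nx][ny] || from[nx][ny] != -1) continue;
                from[nx][ny] = m;
                q.add(new int[]{nx, ny});
            }
        }
        if (from[gx][gy] == -1) return null;     // goal unreachable
        LinkedList<int[]> path = new LinkedList<>();
        for (int x = gx, y = gy; ; ) {
            path.addFirst(new int[]{x, y});
            if (x == sx && y == sy) break;
            int m = from[x][y];
            x -= MOVES[m][0];
            y -= MOVES[m][1];
        }
        return path;
    }

    /** Steps 1/3: plan from (sx,sy) and merge state->move entries into the policy. */
    static void addPlan(Map<Long, Integer> policy, boolean[][] obs,
                        int sx, int sy, int gx, int gy) {
        List<int[]> path = bfs(obs, sx, sy, gx, gy);
        if (path == null) return;
        for (int i = 0; i + 1 < path.size(); i++) {
            int[] a = path.get(i), b = path.get(i + 1);
            int move = 0;
            for (int m = 0; m < 4; m++)
                if (b[0] - a[0] == MOVES[m][0] && b[1] - a[1] == MOVES[m][1]) move = m;
            policy.putIfAbsent(key(a[0], a[1]), move);  // keep existing entries
        }
    }

    /** Step 2: states reachable by a slip that the policy does not cover. */
    static Set<Long> failureNodes(Map<Long, Integer> policy, boolean[][] obs,
                                  int gx, int gy) {
        Set<Long> fails = new HashSet<>();
        int w = obs.length, h = obs[0].length;
        for (Map.Entry<Long, Integer> e : policy.entrySet()) {
            int x = (int) (e.getKey() / 1000), y = (int) (e.getKey() % 1000);
            int[] slips = (e.getValue() < 2) ? new int[]{2, 3} : new int[]{0, 1};
            for (int s : slips) {
                int nx = x + MOVES[s][0], ny = y + MOVES[s][1];
                // Blocked slip: assume the robot stays put, so nothing new to cover.
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || obs[nx][ny]) continue;
                long k = key(nx, ny);
                if (!policy.containsKey(k) && !(nx == gx && ny == gy)) fails.add(k);
            }
        }
        return fails;
    }

    /** One pass of steps 2-4: plan from every failure node, merge into the policy. */
    static int expandOnce(Map<Long, Integer> policy, boolean[][] obs, int gx, int gy) {
        Set<Long> fails = failureNodes(policy, obs, gx, gy);
        for (long k : fails)
            addPlan(policy, obs, (int) (k / 1000), (int) (k % 1000), gx, gy);
        return fails.size();
    }
}
```

Each pass strictly grows the policy (every failure node gains an entry), so repeated passes reach a fixpoint with no uncovered slip states; the failure rate you are asked to report should drop accordingly as the number of passes increases.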

Report the plan failure rate as a function of the number of passes over steps 2-3, using 1000 runs in the environment.

Use the class PlanFailuere to report a plan failure to the simulator. Use the agent method resetAgent() to restore the agent to its original state. You may retain the plan from previous simulation runs.

## Tips & Tricks

• You can use javax.vecmath.Point2i class to represent a pair of integers.
• You can speed up or slow down the simulation by changing the constant SIMULATION_STEP_DELAY on line 25 of RobotEmilCreator.java; it sets the delay between two actions in milliseconds.