## T1b-react - Reactive obstacle avoidance

The main task is to implement a function that will steer the simulated robot towards a given goal while reactively avoiding obstacles.

- **Deadline:** 11. October 2020, 23:59 PST
- **Points:** 3
- **Label in BRUTE:** t1b-react
- **Files to submit:** archive with the HexapodController.py file
- **Resources:** B4M36UIR_t1_resource_pack [Updated 05.10.2020]

#### Assignment

Implement the goto_reactive function in HexapodController.py. The purpose of the function is to produce a steering command that drives the simulated hexapod robot towards the goal position, based on the robot's current odometry, its current collision state, and the laser scan it perceives. The input parameters of the function are:

1. goal - Pose message - only position.x and position.y parameters are used as the goal position
2. odometry - Odometry message - pose.position.x, pose.position.y and pose.orientation.quaternion encode the current absolute pose of the robot in the environment
3. collision - boolean - True if the robot collides with some obstacle, False otherwise
4. laser_scan - LaserScan message - contains the current laser scan data perceived by the robot.

The function returns:

1. Zero command cmd_msg = Twist() when any of the input data is invalid.
2. None when the robot is currently colliding with an obstacle.
3. None when the robot has reached the given goal (note that in the real world the exact goal position is impossible to reach, hence the goal is considered reached when the robot is within the DELTA_DISTANCE vicinity of the goal position).
4. Otherwise, the steering command in the form of a velocity command (Twist message) consisting of linear and angular parts that steers the robot towards the goal position while avoiding obstacles; see the sketch after the function prescription below.

The goto_reactive function has the following prescription:

    def goto_reactive(self, goal, odometry, collision, laser_scan):
        """Method to steer the robot towards the goal position while avoiding
        contact with the obstacles given its current odometry, collision
        status and laser scan data
        Args:
            goal: Pose of the robot goal
            odometry: Perceived odometry of the robot
            collision: bool of the robot collision status
            laser_scan: LaserScan data perceived by the robot
        Returns:
            cmd: Twist steering command
        """

You may change anything else in HexapodController.py; however, the given interfaces and the evaluation script are fixed during evaluation.

#### Approach

The recommended approach for reactive obstacle avoidance uses the simple cognitive model of Braitenberg vehicles described in Lab02 - Exteroceptive sensing, Mapping and Reactive-based Obstacle Avoidance.

The direct sensory-motor mapping has to combine:

1. continuous navigation towards the goal location, and
2. reactive obstacle avoidance.

While the continuous navigation towards the goal location can be represented by love (Vehicle 3a) or aggression (Vehicle 2b), the reactive obstacle avoidance is best modelled as the robot's fear (Vehicle 2a) of the obstacles perceived by the laser scanner, with nonlinear activation functions.

Hence the reactive obstacle avoidance can be achieved using the following continuous navigation function (pseudocode).

    while not goal_reached:
        dphi = the difference between the current heading and the heading towards the target

        scan_left = distance to the closest obstacle to the left of the robot
        scan_right = distance to the closest obstacle to the right of the robot
        repulsive_force = 1/scan_left - 1/scan_right

        linear_speed = distance towards target
        angular_speed_navigation_component = dphi*C_TURNING_SPEED
        angular_speed_avoidance_component = repulsive_force*C_AVOID_SPEED
        angular_speed = angular_speed_navigation_component + angular_speed_avoidance_component

where C_AVOID_SPEED is a constant that defines the robot's fear of the obstacles and C_TURNING_SPEED is a constant that defines the aggression with which the robot turns towards the desired heading.
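
For concreteness, here is a minimal Python sketch of this controller. It is only an illustration: the gain values, the yaw extraction from the quaternion, and the ROS-style LaserScan field names (ranges ordered right-to-left) are all assumptions that must be adapted to the message classes in the resource pack.

    import math
    from messages import *  # message classes from the resource pack

    C_TURNING_SPEED = 1.0  #assumed gain: aggression of turning towards the goal
    C_AVOID_SPEED = 0.4    #assumed gain: fear of the obstacles

    def reactive_twist(goal, odometry, laser_scan):
        #difference between the current heading and the heading towards the target
        dx = goal.position.x - odometry.pose.position.x
        dy = goal.position.y - odometry.pose.position.y
        q = odometry.pose.orientation
        yaw = math.atan2(2*(q.w*q.z + q.x*q.y), 1 - 2*(q.y**2 + q.z**2))
        dphi = math.atan2(math.sin(math.atan2(dy, dx) - yaw),
                          math.cos(math.atan2(dy, dx) - yaw))  #wrapped to [-pi, pi]

        #distance to the closest obstacle on each side of the robot
        ranges = [max(r, 1e-6) for r in laser_scan.ranges]  #guard against zero readings
        mid = len(ranges) // 2
        scan_right = min(ranges[:mid])  #assumes beams ordered right-to-left
        scan_left = min(ranges[mid:])
        repulsive_force = 1.0/scan_left - 1.0/scan_right

        #combine goal attraction with the nonlinear obstacle fear
        cmd = Twist()
        cmd.linear.x = math.hypot(dx, dy)  #distance towards target
        cmd.angular.z = dphi*C_TURNING_SPEED + repulsive_force*C_AVOID_SPEED
        return cmd

The sign of the repulsive term and the split into left and right halves depend on the ordering and angular convention of the scan, so both may need flipping; the two gains are best tuned experimentally in the simulator.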

An example operation of the controller can be seen in the following video (4x speed-up):

#### Evaluation

The evaluation focuses on the ability of the robot to reach the given goal locations without crashing into the obstacles. It is meant to introduce you to working with the robot's exteroceptive sensing.

The code can be evaluated using the following script (also attached as t1b-react.py).

    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-

    import matplotlib.pyplot as plt

    import sys
    import os
    import math
    import time
    import numpy as np

    sys.path.append('hexapod_robot')

    #import hexapod robot
    import HexapodRobot as hexapod

    #import communication messages
    from messages import *

    if __name__ == "__main__":

        robot = hexapod.HexapodRobot(0)

        #turn on the robot
        robot.turn_on()

        goals = [
            Pose(Vector3(3.5, 3.5, 0), Quaternion(1, 0, 0, 0)),
            Pose(Vector3(0, 0, 0), Quaternion(1, 0, 0, 0)),
        ]

        path = Path()
        #go from goal to goal
        for goal in goals:
            robot.goto_reactive(goal)
            #wait until the navigation thread clears the goal
            while robot.navigation_goal is not None:
                #sample the current odometry
                if robot.odometry_ is not None:
                    path.poses.append(robot.odometry_.pose)
                #wait
                time.sleep(0.1)

            #check the robot distance to goal
            odom = robot.odometry_.pose
            #compensate for the height of the robot as we are interested only in the achieved planar distance
            odom.position.z = 0
            #calculate the distance
            dist = goal.dist(odom)
            print("[t1b_eval] distance to goal: %.2f" % dist)

        #plot the traversed path
        plt.plot([pose.position.x for pose in path.poses],
                 [pose.position.y for pose in path.poses])
        plt.axis('equal')
        plt.show()
The expected output is a printout of distance readings below or equal to the DELTA_DISTANCE threshold, and a plot of the robot path between the individual goals similar to the following figures. Note that the resulting path may be completely different; the main focus of this task is to avoid collisions with the obstacles, hence any collision-free trajectory is valid.