The main task is to implement a function to steer the simulated robot toward a given goal while reactively avoiding obstacles.
Files:

- ControllerReactive.py
- t1b-react.py
- plain.ttt, blocks.ttt (simulation scenes)

Methods:

- Controller.goto_reactive()
- Controller.goto()
In the Controller class in ControllerReactive.py, implement the goto_reactive function. The function is to produce the steering command that drives the simulated hexapod robot toward the goal position based on the current odometry of the robot, its current collision state, and the laser scan perceived by the robot. The input parameters of the function are:
- goal – the goal position, given by the position.x and position.y coordinates,
- odometry – the current odometry of the robot, given by pose.position.x, pose.position.y, and pose.orientation.quaternion,
- collision – the current collision state of the robot: True if the robot is in collision with an obstacle, False otherwise,
- laser_scan – the laser scan data currently perceived by the robot.
The function returns:

- cmd_msg = Twist() – the steering command with the desired velocities, or
- None – when the goal is reached, i.e., the robot is closer to the goal than DELTA_DISTANCE.
The goto_reactive function has the following prescription:

def goto_reactive(self, goal, odometry, collision, laser_scan):
    """Method to steer the robot toward the goal position while avoiding
       contact with the obstacles given its current odometry, collision
       status and laser scan data
    Args:
        goal: Pose of the robot goal
        odometry: Perceived odometry of the robot
        collision: bool of the robot collision status
        laser_scan: LaserScan data perceived by the robot
    Returns:
        cmd: Twist steering command
    """
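For illustration, the heading error dphi used in the control law described below can be computed from the odometry quaternion as in the following sketch. The quaternion components x, y, z, w and the yaw-extraction formula are assumptions about the redcp data types; adapt them to the actual API.

import math

def heading_error(goal, odometry):
    """Signed angle from the current robot heading to the goal bearing."""
    q = odometry.pose.orientation  # quaternion with x, y, z, w components (assumed)
    yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                     1.0 - 2.0 * (q.y**2 + q.z**2))
    bearing = math.atan2(goal.position.y - odometry.pose.position.y,
                         goal.position.x - odometry.pose.position.x)
    dphi = bearing - yaw
    return math.atan2(math.sin(dphi), math.cos(dphi))  # wrap to [-pi, pi]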
The recommended approach to reactive obstacle avoidance uses the simple cognitive model of Braitenberg vehicles described in Exteroceptive sensing, Mapping, and Reactive-based Obstacle Avoidance.
The direct sensory-motor mapping has to combine two behaviors: continuous navigation toward the goal position and reactive avoidance of the perceived obstacles. While the continuous navigation toward the goal location can be represented by love (Vehicle 3a) or aggression (Vehicle 2b), the reactive obstacle avoidance is best modeled as the robot's fear (Vehicle 2a) of the obstacles perceived by the laser scanner, with nonlinear activation functions.
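As a minimal illustration of such a nonlinear activation, the fear response can be modeled as a repulsion that grows hyperbolically as an obstacle gets closer (a sketch; the clamping constant is an arbitrary choice to avoid division by zero):

def fear(distance):
    """Nonlinear fear activation: the repulsion grows as 1/d as the obstacle nears."""
    return 1.0 / max(distance, 1e-3)  # clamp very small readings to avoid division by zero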
Hence, reactive obstacle avoidance can be achieved using the following continuous navigation function (pseudocode):

while not goal_reached:
    dphi = the difference between the current heading and the heading toward the target
    scan_left = distance to the closest obstacle to the left of the robot
    scan_right = distance to the closest obstacle to the right of the robot
    repulsive_force = 1/scan_left - 1/scan_right
    linear_speed = distance toward the target
    angular_speed_navigation_component = dphi * C_TURNING_SPEED
    angular_speed_avoidance_component = repulsive_force * C_AVOID_SPEED
    angular_speed = angular_speed_navigation_component + angular_speed_avoidance_component
Here, C_TURNING_SPEED and C_AVOID_SPEED are constants that weight the navigation and the obstacle-avoidance components of the angular speed, respectively, and have to be tuned.
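Putting the pieces together, one possible sketch of goto_reactive following this pseudocode is shown below, reusing the heading_error helper sketched earlier. This is only a sketch under assumptions: the laser-scan field laser_scan.distances ordered right-to-left, the Twist layout (linear.x, angular.z), and the constant values are all illustrative and have to be adapted and tuned against the actual redcp API.

import math

C_TURNING_SPEED = 5    # illustrative navigation gain, to be tuned
C_AVOID_SPEED = 5      # illustrative avoidance gain, to be tuned
DELTA_DISTANCE = 0.12  # illustrative goal-reached threshold [m]

def goto_reactive(self, goal, odometry, collision, laser_scan):
    """Sketch: steer toward the goal while reactively avoiding obstacles."""
    if collision:
        return None  # stop the navigation on collision (one possible policy)
    if odometry is None or laser_scan is None:
        return Twist()  # no valid sensing yet; stand still

    # planar distance to the goal
    dx = goal.position.x - odometry.pose.position.x
    dy = goal.position.y - odometry.pose.position.y
    dist = math.hypot(dx, dy)
    if dist < DELTA_DISTANCE:
        return None  # goal reached

    # heading error toward the goal (see the heading_error sketch above)
    dphi = heading_error(goal, odometry)

    # closest valid obstacle on each side of the robot; the split of
    # laser_scan.distances into right/left halves is an assumption
    mid = len(laser_scan.distances) // 2
    valid = [d if d > 0 and math.isfinite(d) else float('inf')
             for d in laser_scan.distances]
    scan_right = min(valid[:mid])
    scan_left = min(valid[mid:])

    # Braitenberg-style fear: the repulsion grows as the obstacles get closer
    repulsive_force = 1.0 / scan_left - 1.0 / scan_right

    cmd_msg = Twist()
    cmd_msg.linear.x = dist  # slow down while approaching the goal
    cmd_msg.angular.z = (dphi * C_TURNING_SPEED
                         + repulsive_force * C_AVOID_SPEED)
    return cmd_msg

Note that with this formulation the sign of the avoidance component depends on the scan ordering and on the robot's angular-speed convention; if the robot turns toward obstacles instead of away from them, flip the sign of repulsive_force.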
An example operation of the controller can be seen in the following video (4x speed-up):
The evaluation focuses on the ability of the robot to reach the given goal locations without crashing into the obstacles. The task is aimed at introducing you to working with the robot's exteroceptive sensing.
The code can be evaluated using the following script (also attached as t1b-react.py).
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import matplotlib.pyplot as plt

import sys
import os
import math
import time

import numpy as np

# We assume the file is placed in the t1a-ctrl directory at the same
# level as the supporting library redcp, where the hexapod_robot library is.
sys.path.append('../')
sys.path.append('../redcp/hexapod_robot')

from redcp import *  # import data types

import HexapodRobot as hexapod  # import hexapod robot
from ControllerReactive import Controller  # import reactive controller


def plot_robot_path(path):
    fig, ax = plt.subplots()
    path.plot(ax, 30)
    plt.xlabel('x[m]')
    plt.ylabel('y[m]')
    plt.axis('equal')
    plt.show()


if __name__ == "__main__":
    robot = hexapod.HexapodRobot(Controller())

    robot.turn_on()           # turn on the robot
    robot.start_navigation()  # start the navigation thread
    time.sleep(3)             # wait for a while

    goals = [  # assign the goals for navigation
        Pose(Vector3(3.5, 3.5, 0), Quaternion(1, 0, 0, 0)),
        Pose(Vector3(0, 0, 0), Quaternion(1, 0, 0, 0)),
    ]

    path = Path()
    for goal in goals:  # go from goal to goal
        robot.goto_reactive(goal)
        dist = float('inf')
        while robot.navigation_goal is not None:
            if robot.odometry_ is not None:  # sample the current odometry
                path.poses.append(robot.odometry_.pose)
                odom = robot.odometry_.pose
                # compensate for the height of the robot, as only the
                # achieved planar distance to the goal is of interest
                odom.position.z = 0
                dist = goal.dist(odom)  # calculate the distance
                sys.stdout.write("\r[t1b-react] actual distance to goal: %.2f" % dist)
            time.sleep(0.1)  # wait for a while
        print("\n[t1b-react] distance to goal: %.2f" % dist)

    robot.stop_navigation()
    robot.turn_off()
    plot_robot_path(path)
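To try the evaluation, one would typically first open one of the provided scenes (plain.ttt or blocks.ttt) in the CoppeliaSim simulator and then run the script, e.g., as python3 t1b-react.py. The robot should visit both goals, and in the blocks.ttt scene the plotted path should show smooth detours around the obstacles.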