t1b-react - Reactive obstacle avoidance

The main task is to implement a function to steer the simulated robot toward a given goal while reactively avoiding obstacles.

Files
  * ControllerReactive.py - File to be implemented
  * t1b-react.py - Evaluation script

Resources
  * Introduction to CoppeliaSim/V-REP and Open-Loop Robot Locomotion Control
  * Exteroceptive sensing and Reactive-based Obstacle Avoidance
  * redcp.zip, scenes.zip
  * t1b-react.zip - Initial files and evaluation script

Assignment

In the class Controller in ControllerReactive.py, implement the goto_reactive function. The function produces the steering command that drives the simulated hexapod robot toward the goal position based on the current odometry of the robot, its current collision state, and the laser scan perceived by the robot. The input parameters of the function are:

  1. goal - Pose - only position.x and position.y parameters are used as the goal position
  2. odometry - Odometry - pose.position.x, pose.position.y and pose.orientation.quaternion encode the current robot absolute position in the environment
  3. collision - boolean - True if the robot collides with some obstacle, False otherwise
  4. laser_scan - LaserScan - contains the current laser scan data perceived by the robot.
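The current heading needed for steering has to be extracted from the odometry quaternion. The redcp data types may already provide such a conversion; as a reference, a minimal sketch using the standard quaternion-to-yaw formula (the helper name is hypothetical):

```python
import math

def heading_from_quaternion(x, y, z, w):
    """Yaw (rotation about the z-axis) of a unit quaternion (x, y, z, w).

    Standard conversion: yaw = atan2(2(wz + xy), 1 - 2(y^2 + z^2)).
    """
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
```

For a planar robot, this yaw angle is the only orientation component needed to compute the heading error toward the goal.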

The function returns:

  1. Zero command cmd_msg = Twist() when any of the input data are invalid
  2. None when the robot collides with any obstacle.
  3. None when the robot has reached the given goal (note that in the real world, the exact position is impossible to reach by the robot; hence the goal is considered reached when the robot is in the DELTA_DISTANCE vicinity of the goal position).
  4. Otherwise, the function returns the steering command in the form of a velocity command (Twist) consisting of linear and angular parts that steer the robot toward the goal position while avoiding obstacles.

The goto_reactive function has the following prescription

    def goto_reactive(self, goal, odometry, collision, laser_scan):
        """Method to steer the robot toward the goal position while avoiding 
           contact with the obstacles given its current odometry, collision 
           status and laser scan data
        Args:
            goal: Pose of the robot goal
            odometry: Perceived odometry of the robot
            collision: bool of the robot collision status
            laser_scan: LaserScan data perceived by the robot
        Returns:
            cmd: Twist steering command
        """

You can change whatever you want in ControllerReactive.py; however, the given interfaces and the evaluation script are fixed and will be used as-is in the evaluation.

Approach

The recommended approach for reactive obstacle avoidance uses a simple AI cognitive model of Braitenberg vehicles described in Exteroceptive sensing, Mapping, and Reactive-based Obstacle Avoidance.

The direct sensory-motor mapping has to combine two behaviors: continuous navigation toward the goal and reactive avoidance of nearby obstacles.

While the continuous navigation function toward the goal location can be represented by love (Vehicle 3a) or aggression (Vehicle 2b), the reactive obstacle avoidance is best modeled as the robot's fear (Vehicle 2a) of the obstacles perceived by the laser scanner, with nonlinear activation functions.

Hence, reactive obstacle avoidance can be achieved using the following continuous navigation function (pseudocode).

while not goal_reached:
    dphi = the difference between the current heading and the heading toward the target
 
    scan_left = distance to the closest obstacle to the left of the robot
    scan_right = distance to the closest obstacle to the right of the robot
    repulsive_force = 1/scan_left - 1/scan_right
 
    linear_speed = distance toward the target
    angular_speed_navigation_component = dphi*C_TURNING_SPEED  
    angular_speed_avoidance_component = repulsive_force*C_AVOID_SPEED
    angular_speed = angular_speed_navigation_component + angular_speed_avoidance_component
Here, C_AVOID_SPEED is a constant that defines the robot's fear of the obstacles, and C_TURNING_SPEED is a constant that defines the aggression with which the robot turns toward the desired heading.
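The pseudocode above can be sketched as a standalone Python function. This is only an illustration, not the required implementation: the gain values are arbitrary placeholders to be tuned, and the assumption that the first half of the laser ranges corresponds to the robot's left side depends on the actual scanner ordering and must be verified against the sensor:

```python
import math

C_TURNING_SPEED = 5.0   # assumed gain: aggression of turning toward the goal
C_AVOID_SPEED = 10.0    # assumed gain: the robot's fear of obstacles

def steering(dx, dy, heading, ranges):
    """Braitenberg-style steering returning (linear, angular) speed.

    dx, dy  - goal offset in the world frame
    heading - current robot heading (yaw) in radians
    ranges  - laser distances, assumed ordered left half first
    """
    # navigation component: heading error wrapped to (-pi, pi]
    dphi = math.atan2(dy, dx) - heading
    dphi = math.atan2(math.sin(dphi), math.cos(dphi))
    # avoidance component: closest obstacle on each half of the scan
    half = len(ranges) // 2
    scan_left = min(ranges[:half])
    scan_right = min(ranges[half:])
    repulsive_force = 1.0 / scan_left - 1.0 / scan_right
    # combine: linear speed decreases as the goal gets closer
    linear_speed = math.hypot(dx, dy)
    angular_speed = dphi * C_TURNING_SPEED + repulsive_force * C_AVOID_SPEED
    return linear_speed, angular_speed
```

Note the explicit wrapping of dphi: without it, a heading error of, e.g., 350 degrees would command a nearly full turn instead of the short -10 degree correction.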

An example operation of the controller can be seen in the following video (4x speed up):


Evaluation

The evaluation focuses on the ability of the robot to reach the given goal locations without crashing into the obstacles. It aims to introduce you to working with the robot's exteroceptive sensing.

The code can be evaluated using the following script (also attached as t1b-react.py).

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
 
import matplotlib.pyplot as plt
 
import sys
import os
import math
import time
import numpy as np
 
# We assume the file is placed in the t1b-react directory at the same 
# level as the supporting library redcp, where the hexapod_robot library is.
sys.path.append('../') 
sys.path.append('../redcp/hexapod_robot')
 
from redcp import * # import data types
import HexapodRobot as hexapod #import hexapod robot 
 
from ControllerReactive import Controller #import reactive controller
 
def plot_robot_path(path):
    fig, ax = plt.subplots()
    path.plot(ax, 30)
    plt.xlabel('x[m]')
    plt.ylabel('y[m]')
    plt.axis('equal')
    plt.show()
 
if __name__=="__main__":
 
    robot = hexapod.HexapodRobot(Controller())
    robot.turn_on() #turn on the robot 
    robot.start_navigation() #start navigation thread
    time.sleep(3) # wait for a while
 
    goals = [ #assign goal for navigation
        Pose(Vector3(3.5, 3.5, 0), Quaternion(1, 0, 0, 0)), 
        Pose(Vector3(0, 0, 0), Quaternion(1, 0, 0, 0)), 
    ]
 
    path = Path()
    for goal in goals: #go from goal to goal
        robot.goto_reactive(goal)
        while robot.navigation_goal is not None: #sample the current odometry
            if robot.odometry_ is not None:
                path.poses.append(robot.odometry_.pose)
                odom = robot.odometry_.pose #check the robot distance to goal
                odom.position.z = 0 #compensate for the height of the robot as we are interested only in achieved planar distance 
                dist = goal.dist(odom) #calculate the distance
                sys.stdout.write("\r[t1b-react] actual distance to goal: %.2f" % dist)
            time.sleep(0.1) #wait for a while
        print("[t1b-react] distance to goal: %.2f" % dist)
    robot.stop_navigation()
    robot.turn_off()
    plot_robot_path(path)
The expected output is the print of distance readings below or equal to the DELTA_DISTANCE threshold and a plot of the robot path between the individual goals similar to the following figures. Note that the resulting path may be completely different; the main focus of this task is not to collide with the obstacles, and hence any collision-free trajectory is valid.

Note that the reactive controller might eventually get stuck, depending on the sensing; try it when the robot is placed in a corner. The behavior might be improved; however, the basic functionality is sufficient for the course and its usage in the [[courses:crl-courses:redcp:tasks:t1f-exploration|]].
courses/crl-courses/redcp/tasks/t1b-react.txt · Last modified: 2022/11/25 07:39 by faiglj