The main task is to implement a function that fuses the laser scan data into the occupancy grid map. You will learn the Bayesian update of the grid map and the practical limitations of its usage. Furthermore, you will learn how to create a dataset of sensor data to speed up your workflow.
HexapodExplorer.py
In the HexapodExplorer.py class, implement the fuse_laser_scan function. The purpose of the function is to fuse new data into the occupancy grid map as the robot traverses the environment. The laser scan measurements shall be fused into the grid using the Bayesian update described in Lab04 - Grid-based Path Planning. The input parameters of the function are:

* grid_map - OccupancyGrid - the current occupancy grid map to fuse the laser scan into
* laser_scan - LaserScan - the laser scan perceived by the robot
* odometry - Odometry - the perceived odometry of the robot, with pose.position.x, pose.position.y, and pose.orientation.quaternion holding the current robot pose

The function returns:

* grid_map_update - OccupancyGrid - the occupancy grid map updated with the laser scan data
The fuse_laser_scan function in the HexapodExplorer.py class has the following prescription:
```python
def fuse_laser_scan(self, grid_map, laser_scan, odometry):
    """ Method to fuse the laser scan data sampled by the robot with a given
        odometry into the probabilistic occupancy grid map
    Args:
        grid_map: OccupancyGrid - gridmap to fuse the laser scan to
        laser_scan: LaserScan - laser scan perceived by the robot
        odometry: Odometry - perceived odometry of the robot
    Returns:
        grid_map_update: OccupancyGrid - gridmap updated with the laser scan data
    """
```
The recommended approach to occupancy map building using the Bayesian approach is described in Lab04 - Grid-based Path Planning.
The OccupancyGrid represents a probabilistic representation of the world. In particular, the variable “OccupancyGrid.data” holds the probabilities of the individual cells being occupied. The map in the OccupancyGrid message is further parameterized by the following metadata:
* resolution - the size of a single cell in meters
* width - the width of the map (number of cells)
* height - the height of the map (number of cells)
* origin - the pose of the map origin, i.e., the world coordinates of the cell (0, 0)
* data - the occupancy probabilities of the individual cells, stored in a flat array
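Since data is a flat, row-major array, a particular cell is addressed by its x, y indices as follows (a minimal illustration; the row-major layout is consistent with the reshape/flatten usage shown later in this section):

```python
# probability that cell (x, y) is occupied, given row-major storage
P_occupied = grid_map.data[y * grid_map.width + x]
```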
Hence, the recommended approach to fusing the laser scan data into the occupancy grid map is as follows:
* The laser scan data are represented as an array of distance measurements, where each distance measurement $d_i$ corresponds to a scan angle $\theta_i$ with respect to the robot heading. Hence, the projection $\mathbf{p}_i$ of each scanned point is given as: $$\mathbf{p}_i = (x_i,y_i)^T = (d_i \cos(\theta_i), d_i \sin(\theta_i))^T$$
* The scan angles $\theta_i$ can be derived from the fields of the LaserScan message (the minimum scan angle and the angular increment between consecutive measurements). For a quick sanity check, you can visualize the projected points with matplotlib, e.g., using plt.scatter(x,y). The projected points then have to be compensated for the robot pose given by the odometry (rotated by the robot heading and translated by the robot position) to obtain their world coordinates; see the sketch below.
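A minimal sketch of the projection and odometry compensation, assuming the LaserScan message provides angle_min, angle_increment, range_min, range_max, and distances, and that the odometry orientation is a quaternion from which the robot heading (yaw) can be extracted; the helper name project_scan_to_world is hypothetical:

```python
import numpy as np

def project_scan_to_world(laser_scan, odometry):
    # scan angles relative to the robot heading (assumed field names)
    angles = laser_scan.angle_min + np.arange(len(laser_scan.distances)) * laser_scan.angle_increment
    dists = np.asarray(laser_scan.distances)
    # keep only measurements within the sensor limits
    mask = (dists >= laser_scan.range_min) & (dists <= laser_scan.range_max)
    angles, dists = angles[mask], dists[mask]
    # polar -> Cartesian in the robot frame
    x_r = dists * np.cos(angles)
    y_r = dists * np.sin(angles)
    # compensate for the robot pose: rotate by the heading (yaw), translate by the position
    q = odometry.pose.orientation
    yaw = np.arctan2(2.0 * (q.w * q.z + q.x * q.y), 1.0 - 2.0 * (q.y**2 + q.z**2))
    c, s = np.cos(yaw), np.sin(yaw)
    x_w = odometry.pose.position.x + c * x_r - s * y_r
    y_w = odometry.pose.position.y + s * x_r + c * y_r
    return np.stack([x_w, y_w], axis=1)
```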
* Transfer the scanned points from the world coordinates to the map coordinates using grid_map.origin and grid_map.resolution (one possible conversion is sketched after this list item).
* Raytrace the cells lying between the robot position and each scanned point, e.g., using Bresenham's line algorithm below; collect the interlying cells into free_points and the scan endpoints into occupied_points.
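One possible shape of the world-to-map conversion, strictly as a sketch (the name world_to_map matches the snippet used below, but the signature with the explicit grid_map argument is an assumption; the grid origin is taken as the world position of cell (0, 0)):

```python
import numpy as np

def world_to_map(points_world, grid_map):
    # world (x, y) -> integer cell indices, measured from the grid origin
    origin = np.array([grid_map.origin.position.x, grid_map.origin.position.y])
    pts = np.asarray(points_world, dtype=float).reshape(-1, 2)
    cells = np.floor((pts - origin) / grid_map.resolution).astype(int)
    return [tuple(cell) for cell in cells]
```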
```python
def bresenham_line(self, start, goal):
    """Bresenham's line algorithm
    Args:
        start: (int, int) - start cell coordinate
        goal: (int, int) - goal cell coordinate
    Returns:
        list of the interlying cell coordinates between the start and
        goal coordinate (the goal cell itself is not included)
    """
    (x0, y0) = start
    (x1, y1) = goal
    line = []
    dx = abs(x1 - x0)
    dy = abs(y1 - y0)
    x, y = x0, y0
    sx = -1 if x0 > x1 else 1
    sy = -1 if y0 > y1 else 1
    if dx > dy:
        err = dx / 2.0
        while x != x1:
            line.append((x, y))
            err -= dy
            if err < 0:
                y += sy
                err += dx
            x += sx
    else:
        err = dy / 2.0
        while y != y1:
            line.append((x, y))
            err -= dx
            if err < 0:
                x += sx
                err += dy
            y += sy
    return line
```

where start and goal are the map coordinates of the robot cell and the scanned-point cell, respectively. Note that the algorithm requires integer coordinates to terminate.
```python
#get the position of the robot in the map coordinates
odom_map = world_to_map(odometry.position)

#get the laser scan points in the map coordinates
laser_scan_points_map = world_to_map(laser_scan_points_world)

free_points = []
occupied_points = []
for pt in laser_scan_points_map:
    #raytrace the points
    pts = bresenham_line(odom_map, pt)
    #save the coordinates of the free-space cells
    free_points.extend(pts)
    #save the coordinate of the occupied cell
    occupied_points.append(pt)
```
$$ P(m_i = occupied \vert z) = \dfrac{p(z \vert m_i = occupied)P(m_i = occupied)}{p(z \vert m_i = occupied)P(m_i = occupied) + p(z \vert m_i = free)P(m_i = free)}, $$ where $P(m_i = occupied \vert z)$ is the probability of cell $m_i$ being occupied after the fusion of the sensory data; $P(m_i = occupied)$ is the previous probability of the cell being occupied and $p(z \vert m_i = occupied)$ is the model of the sensor which is usually modeled as: $$ p(z \vert m_i = occupied) = \dfrac{1 + S^z_{occupied} - S^z_{free}}{2}, $$ $$ p(z \vert m_i = free) = \dfrac{1 - S^z_{occupied} + S^z_{free}}{2} = 1 - p(z \vert m_i = occupied), $$ where $S^z_{occupied}$ and $S^z_{free}$ are the sensory models for the occupied and free space respectively.
For the simulated LIDAR sensor, we will use the model stated below: $$ S^z_{occupied} = \begin{cases} 0.95 \qquad\text{for}\qquad d = r\\ 0 \qquad\text{otherwise} \end{cases}, \qquad\qquad S^z_{free} = \begin{cases} 0.95 \qquad\text{for}\qquad d \in [0,r)\\ 0 \qquad\text{otherwise} \end{cases}, $$ where $r$ is the distance of the measured point. I.e., it is assumed that the space between the sensor and the measured point is free, and the cell of the measured point itself is occupied. For example, an occupied measurement gives $p(z \vert m_i = occupied) = (1 + 0.95 - 0)/2 = 0.975$, so a cell with the prior $P(m_i = occupied) = 0.5$ updates to $0.975 \cdot 0.5 / (0.975 \cdot 0.5 + 0.025 \cdot 0.5) = 0.975$.

* The recommended approach is to define two helper functions for the Bayesian update:
```python
def update_free(P_mi):
    """method to calculate the Bayesian update of the free cell with the
       current occupancy probability value P_mi
    Args:
        P_mi: float64 - current probability of the cell being occupied
    Returns:
        p_mi: float64 - updated probability of the cell being occupied
    """
    p_mi = #TODO
    #never let p_mi get to 0
    return p_mi

def update_occupied(P_mi):
    """method to calculate the Bayesian update of the occupied cell with the
       current occupancy probability value P_mi
    Args:
        P_mi: float64 - current probability of the cell being occupied
    Returns:
        p_mi: float64 - updated probability of the cell being occupied
    """
    p_mi = #TODO
    #never let p_mi get to 1
    return p_mi
```
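One possible way to fill in the TODOs, strictly as a sketch derived from the sensor model above (the clamping bounds 0.05 and 0.95 are an assumption, chosen only to keep the probabilities away from the degenerate values 0 and 1):

```python
def update_free(P_mi):
    #free-space measurement: S_occupied = 0, S_free = 0.95
    p_z_occ = (1 + 0.0 - 0.95) / 2.0       #p(z | m_i = occupied) = 0.025
    p_z_free = 1.0 - p_z_occ               #p(z | m_i = free) = 0.975
    p_mi = (p_z_occ * P_mi) / (p_z_occ * P_mi + p_z_free * (1.0 - P_mi))
    return max(p_mi, 0.05)  #never let p_mi get to 0 (assumed bound)

def update_occupied(P_mi):
    #occupied measurement: S_occupied = 0.95, S_free = 0
    p_z_occ = (1 + 0.95 - 0.0) / 2.0       #p(z | m_i = occupied) = 0.975
    p_z_free = 1.0 - p_z_occ               #p(z | m_i = free) = 0.025
    p_mi = (p_z_occ * P_mi) / (p_z_occ * P_mi + p_z_free * (1.0 - P_mi))
    return min(p_mi, 0.95)  #never let p_mi get to 1 (assumed bound)
```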
```python
#construct the 2D grid from the flat data array
data = grid_map.data.reshape(grid_map.height, grid_map.width)

#fill in the cell probability values, e.g.:
#data[y,x] = update_free(data[y,x])

#serialize the data back (watch for the correct width and height
#settings if you are doing the harder variant of the assignment)
grid_map.data = data.flatten()
```
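Putting the steps together, a minimal end-to-end sketch of fuse_laser_scan might look as follows. It relies on the hypothetical helpers project_scan_to_world and world_to_map introduced above and assumes the simple variant with a fixed-size gridmap:

```python
def fuse_laser_scan(self, grid_map, laser_scan, odometry):
    if laser_scan is None or odometry is None:
        return grid_map

    #project the scan into world coordinates and then into map cells
    points_world = project_scan_to_world(laser_scan, odometry)
    points_map = world_to_map(points_world, grid_map)
    robot_xy = (odometry.pose.position.x, odometry.pose.position.y)
    odom_map = world_to_map([robot_xy], grid_map)[0]

    data = grid_map.data.reshape(grid_map.height, grid_map.width)
    for pt in points_map:
        #cells between the robot and the endpoint are observed free
        for (x, y) in self.bresenham_line(odom_map, pt):
            if 0 <= x < grid_map.width and 0 <= y < grid_map.height:
                data[y, x] = update_free(data[y, x])
        #the endpoint cell itself is observed occupied
        (x, y) = pt
        if 0 <= x < grid_map.width and 0 <= y < grid_map.height:
            data[y, x] = update_occupied(data[y, x])

    grid_map.data = data.flatten()
    return grid_map
```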
The evaluation focuses on the ability of the robot to perform the correct probabilistic update. You can choose between two variants: a simple one, which initializes the gridmap to a sufficient size at the beginning of the evaluation, and a more advanced one, which starts from an empty gridmap and makes it your responsibility to correctly handle the changes of the map size. The hard variant has no effect on the scoring of the task T1d-map - Map building; however, it will be beneficial in the forthcoming Semestral project on mobile robot exploration.
The code can be evaluated using the following script (also attached as t1d-map.py).
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import matplotlib.pyplot as plt

import sys
import os
import math
import numpy as np

sys.path.append('hexapod_robot')
sys.path.append('hexapod_explorer')

#import hexapod robot and explorer
import HexapodRobot as hexapod
import HexapodExplorer as explorer

#import communication messages
from messages import *

#switch to select between the simple variant or the hard variant
SIMPLE_VARIANT = True

if __name__ == "__main__":
    robot = hexapod.HexapodRobot(0)
    explor = explorer.HexapodExplorer()

    #turn on the robot
    robot.turn_on()

    #start navigation thread
    robot.start_navigation()

    #assign goals for navigation
    goals = [
        Pose(Vector3(3.5, 3.5, 0), Quaternion(1, 0, 0, 0)),
        Pose(Vector3(0.0, 0.0, 0), Quaternion(1, 0, 0, 0)),
    ]

    #prepare the online plot
    plt.ion()
    fig, ax = plt.subplots()

    #prepare the gridmap
    gridmap = OccupancyGrid()
    gridmap.resolution = 0.1
    if SIMPLE_VARIANT:
        gridmap.width = 100
        gridmap.height = 100
        gridmap.origin = Pose(Vector3(-5.0, -5.0, 0.0), Quaternion(1, 0, 0, 0))
        gridmap.data = 0.5 * np.ones((gridmap.height * gridmap.width))

    #go from goal to goal
    for goal in goals:
        robot.goto_reactive(goal)
        while robot.navigation_goal is not None:
            plt.cla()
            #get the current laser scan and odometry and fuse them to the map
            gridmap = explor.fuse_laser_scan(gridmap, robot.laser_scan_, robot.odometry_)
            #plot the map
            gridmap.plot(ax)
            plt.xlabel('x[m]')
            plt.ylabel('y[m]')
            #plt.axis('square')
            plt.show()
            plt.pause(0.1)

    robot.stop_navigation()
    robot.turn_off()
```