The main task is to implement a function that fuses the laser scan data into the occupancy grid map. You will learn the Bayesian update of the grid map and the practical limitations of its usage. Further, you will work with a recorded dataset of sensory data that speeds up your workflow.
Provided files and resources:

* Mapper.py - mapping module where the fusion is to be implemented
* bresenham.py - Bresenham's line algorithm
* utils.py - helper functions
* data - folder with the recorded dataset and the reference map
* t1d-map.py - evaluation script using the recorded dataset
* t1d-map-navsim.py - evaluation script using the simulation environment
* ControllerReactive.py - the ControllerReactive controller from t1b-react used for navigation in the simulation

Functions to be implemented:

* Mapper.fuse_laser_scan()
* Mapper.update_free()
* Mapper.update_occupied()
In Mapper.py, implement the fuse_laser_scan function to fuse new data into the occupancy grid map as the robot traverses the environment. The laser scan measurements shall be fused into the grid using the Bayesian update described in Map Building. The input parameters of the function are:

* grid_map - the OccupancyGrid map to fuse the laser scan into,
* laser_scan - the LaserScan perceived by the robot,
* odometry - the Odometry of the robot, with the robot pose given by pose.position.x, pose.position.y, and pose.orientation.quaternion.

The function returns:

* the OccupancyGrid map updated with the laser scan data.
The fuse_laser_scan function of the Mapper class in Mapper.py has the following prescription.
def fuse_laser_scan(self, grid_map: OccupancyGrid, laser_scan: LaserScan, odometry: Odometry) -> OccupancyGrid:
    """ Method to fuse the laser scan data sampled by the robot with a given
        odometry into the probabilistic occupancy grid map
    Args:
        grid_map: OccupancyGrid - grid map to fuse the laser scan to
        laser_scan: LaserScan - laser scan perceived by the robot
        odometry: Odometry - perceived odometry of the robot
    Returns:
        grid_map_update: OccupancyGrid - grid map updated with the laser scan data
    """
The recommended approach to the occupancy grid map building is to follow the Bayesian update described in Map Building. The OccupancyGrid is a probabilistic representation of the world. In particular, the variable "OccupancyGrid.data" holds the probabilities of the individual cells being occupied. The map is further parameterized by the following metadata of the OccupancyGrid:
* resolution - the size of a single map cell (in meters),
* width - the number of cells along the x-axis of the map,
* height - the number of cells along the y-axis of the map,
* origin - the pose of the map origin in the world coordinates.
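For illustration, the following minimal sketch shows how the metadata relate the flat data array and the individual cells to world coordinates. The row-major data[y, x] layout follows the serialization snippet further below, and the axis-aligned map origin is an assumption, not a prescribed interface.

def cell_probability(grid_map, x, y):
    """Return the occupancy probability of the cell (x, y), assuming the data are
    stored row-major so that a reshape to (height, width) gives data[y, x]."""
    data = grid_map.data.reshape(grid_map.height, grid_map.width)
    return data[y, x]

def cell_to_world(grid_map, x, y):
    """Return the world coordinates of the center of the cell (x, y), assuming an
    axis-aligned map origin (its orientation is ignored)."""
    wx = grid_map.origin.position.x + (x + 0.5) * grid_map.resolution
    wy = grid_map.origin.position.y + (y + 0.5) * grid_map.resolution
    return (wx, wy)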
Hence, the approach to fusing the laser scan data into the occupancy grid map can be as follows.
* The laser scan data are represented as an array of distance measurements, where each distance measurement $d_i$ corresponds to a scan angle $\theta_i$ with respect to the robot heading; both are part of the LaserScan data. Hence, the projection $\mathbf{p}_i$ of each scanned point into the robot coordinate frame is given as $$\mathbf{p}_i = (x_i,y_i)^T = (d_i \cos(\theta_i), d_i \sin(\theta_i))^T.$$ The projected points can be quickly inspected, e.g., using plt.scatter(x, y) from matplotlib; a minimal sketch of the projection is given below.
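A minimal sketch of the projection and its visualization. The attribute names laser_scan.distances, laser_scan.angle_min, and laser_scan.angle_increment are illustrative assumptions and have to be adapted to the actual LaserScan message.

import numpy as np
import matplotlib.pyplot as plt

def project_scan(laser_scan):
    """Project the scan distances into (x, y) points in the robot coordinate frame.
    The attribute names are illustrative assumptions, not the actual interface."""
    d = np.asarray(laser_scan.distances)
    angles = laser_scan.angle_min + np.arange(len(d)) * laser_scan.angle_increment
    return d * np.cos(angles), d * np.sin(angles)

# quick visual check of a single projected scan
x, y = project_scan(laser_scan)
plt.scatter(x, y)
plt.axis('equal')
plt.show()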
* The projected points have to be compensated for the robot's odometry, i.e., rotated by the robot heading given by pose.orientation.quaternion and translated by the robot position, to obtain their world coordinates.
* The points in the world coordinates are then transformed into the map coordinates using grid_map.origin and grid_map.resolution. A minimal sketch of both transforms is given below.
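A minimal sketch of the odometry compensation and of the world-to-map transform. It assumes the odometry pose is accessible via odometry.pose as listed above, a quaternion with x, y, z, w components, and an axis-aligned map origin; the helper names are illustrative only (here the map is passed to world_to_map explicitly, unlike in the snippet further below).

import numpy as np

def robot_heading(quaternion):
    """Extract the robot heading (yaw) from a quaternion with x, y, z, w components
    (an assumption - adapt to the actual quaternion representation)."""
    q = quaternion
    return np.arctan2(2.0 * (q.w * q.z + q.x * q.y), 1.0 - 2.0 * (q.y**2 + q.z**2))

def to_world(x_r, y_r, odometry):
    """Rotate the projected points by the robot heading and translate them by the
    robot position to obtain their world coordinates."""
    yaw = robot_heading(odometry.pose.orientation.quaternion)
    x_w = odometry.pose.position.x + x_r * np.cos(yaw) - y_r * np.sin(yaw)
    y_w = odometry.pose.position.y + x_r * np.sin(yaw) + y_r * np.cos(yaw)
    return list(zip(x_w, y_w))

def world_to_map(point, grid_map):
    """Transform a world-coordinate point (x, y) into integer map cell coordinates,
    assuming an axis-aligned map origin (its orientation is ignored)."""
    mx = int(np.floor((point[0] - grid_map.origin.position.x) / grid_map.resolution))
    my = int(np.floor((point[1] - grid_map.origin.position.y) / grid_map.resolution))
    return (mx, my)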
* Raytrace the individual scanned points from the robot position in the map coordinates to collect the coordinates of the free cells along each ray (free_points) and of the occupied cells at the ray ends (occupied_points), e.g., using the Bresenham line algorithm:
def bresenham_line(self, start: (int, int), goal: (int, int)) -> (int, int):
    """Bresenham's line algorithm
    Args:
        start: (int, int) - start coordinate
        goal: (int, int) - goal coordinate
    Returns:
        Points on the line between the start and goal coordinates
    """
    (x0, y0) = start
    (x1, y1) = goal
    line = []
    dx = abs(x1 - x0)
    dy = abs(y1 - y0)
    x, y = x0, y0
    sx = -1 if x0 > x1 else 1
    sy = -1 if y0 > y1 else 1
    if dx > dy:
        err = dx / 2.0
        while x != x1:
            line.append((x, y))
            err -= dy
            if err < 0:
                y += sy
                err += dx
            x += sx
    else:
        err = dy / 2.0
        while y != y1:
            line.append((x, y))
            err -= dx
            if err < 0:
                x += sx
                err += dy
            y += sy
    x = goal[0]
    y = goal[1]
    return line
where start is the position of the robot and goal is the scanned point, both in the map coordinates, e.g.:
#get the position of the robot in the map coordinates
odom_map = world_to_map(odometry.position)

#get the laser scan points in the map coordinates
laser_scan_points_map = world_to_map(laser_scan_points_world)

free_points = []
occupied_points = []

for pt in laser_scan_points_map:
    #raytrace the points
    pts = bresenham_line(odom_map, pt)
    #save the coordinates of the free space cells
    free_points.extend(pts)
    #save the coordinate of the occupied cell
    occupied_points.append(pt)
* The occupancy probabilities of the free and occupied cells are updated using the Bayesian update $$ P(m_i = occupied \vert z) = \dfrac{p(z \vert m_i = occupied)P(m_i = occupied)}{p(z \vert m_i = occupied)P(m_i = occupied) + p(z \vert m_i = free)P(m_i = free)}, $$ where $P(m_i = occupied \vert z)$ is the probability of the cell $m_i$ being occupied after the fusion of the sensory data, $P(m_i = occupied)$ is the previous probability of the cell being occupied, and $p(z \vert m_i = occupied)$ is the sensor model, which is usually modeled as $$ p(z \vert m_i = occupied) = \dfrac{1 + S^z_{occupied} - S^z_{free}}{2}, $$ $$ p(z \vert m_i = free) = \dfrac{1 - S^z_{occupied} + S^z_{free}}{2} = 1 - p(z \vert m_i = occupied), $$ where $S^z_{occupied}$ and $S^z_{free}$ are the sensory models for the occupied and free space, respectively.
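For illustration, assume a sensor model that yields $p(z \vert m_i = occupied) = 0.95$ for a cell measured as occupied (the value is illustrative only). A cell with the prior $P(m_i = occupied) = 0.3$ is then updated to $$P(m_i = occupied \vert z) = \dfrac{0.95 \cdot 0.3}{0.95 \cdot 0.3 + 0.05 \cdot 0.7} \approx 0.89,$$ and repeated consistent measurements push the value further towards (but never exactly to) one.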
* The recommended approach is to define two functions for the Bayesian update.
def update_free(p_mi):
    """method to calculate the Bayesian update of the free cell with the current
       occupancy probability value p_mi
    Args:
        p_mi: float64 - the current probability of the cell being occupied
    Returns:
        p_mi: float64 - the updated probability of the cell being occupied
    """
    p_mi = #TODO
    #never let p_mi get to 0
    return p_mi

def update_occupied(p_mi):
    """method to calculate the Bayesian update of the occupied cell with the current
       occupancy probability value p_mi
    Args:
        p_mi: float64 - the current probability of the cell being occupied
    Returns:
        p_mi: float64 - the updated probability of the cell being occupied
    """
    p_mi = #TODO
    #never let p_mi get to 1
    return p_mi
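A minimal sketch of one possible implementation of the two functions, directly applying the Bayesian update above; the sensor-model probabilities and the clipping constant are illustrative assumptions, not prescribed values.

# illustrative sensor-model values: p(z | m_i = occupied) for the two measurement types
P_Z_OCCUPIED = 0.95    # the cell was measured as occupied
P_Z_FREE = 0.05        # the cell was measured as free
EPS = 1e-3             # keep the probabilities away from 0 and 1

def bayes_update(p_z_occupied, p_mi):
    """Bayesian update of the occupancy probability p_mi given p(z | m_i = occupied)."""
    return (p_z_occupied * p_mi) / (p_z_occupied * p_mi + (1.0 - p_z_occupied) * (1.0 - p_mi))

def update_free(p_mi):
    """Bayesian update of a cell observed as free."""
    return max(EPS, bayes_update(P_Z_FREE, p_mi))            #never let p_mi get to 0

def update_occupied(p_mi):
    """Bayesian update of a cell observed as occupied."""
    return min(1.0 - EPS, bayes_update(P_Z_OCCUPIED, p_mi))  #never let p_mi get to 1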
* Finally, fill in the updated cell probabilities and serialize the data back into the grid map:

#construct the 2D grid
data = grid_map.data.reshape(grid_map.height, grid_map.width)

#fill in the cell probability values
#e.g.: data[y,x] = update_free(data[y,x])

#serialize the data back (!watch for the correct width and height settings if you are doing the harder assignment)
grid_map.data = data.flatten()
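Putting the pieces together, one possible high-level structure of fuse_laser_scan might look as follows. It is only a sketch that reuses the illustrative helpers sketched above (project_scan, to_world, world_to_map, update_free, update_occupied) and omits the bounds checking of the cell indices for brevity.

def fuse_laser_scan(self, grid_map, laser_scan, odometry):
    """Sketch of one possible structure of the fusion, not a prescribed implementation."""
    # project the scan into the robot frame and compensate for the odometry
    x_r, y_r = project_scan(laser_scan)
    points_world = to_world(x_r, y_r, odometry)

    # transform the robot position and the scanned points into the map coordinates
    odom_map = world_to_map((odometry.pose.position.x, odometry.pose.position.y), grid_map)
    points_map = [world_to_map(p, grid_map) for p in points_world]

    # raytrace the scanned points to collect the free and occupied cells
    free_points, occupied_points = [], []
    for pt in points_map:
        free_points.extend(self.bresenham_line(odom_map, pt))
        occupied_points.append(pt)

    # Bayesian update of the affected cells (bounds checking omitted)
    data = grid_map.data.reshape(grid_map.height, grid_map.width)
    for (x, y) in free_points:
        data[y, x] = update_free(data[y, x])
    for (x, y) in occupied_points:
        data[y, x] = update_occupied(data[y, x])

    # serialize the data back into the grid map
    grid_map.data = data.flatten()
    return grid_map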
The probability values shall never reach 0 or 1! Letting that happen makes the map unable to dynamically add or remove obstacles, as demonstrated in the following example.
(Example animation: dynamic adding and removing of obstacles in t1d-map.)
The evaluation focuses on the ability of the robot to perform the correct probabilistic update. Two evaluation scripts are provided. The first, t1d-map.py, uses the recorded dataset data/scan_dataset.pkl and the reference map data/grid_map.pkl. The second script, t1d-map-navsim.py, utilizes the ControllerReactive from t1b-react for navigation of the robot in the simulation environment.
The code can be evaluated using the following script (also attached as t1d-map.py).
dataset = pickle.load(open("data/scan_dataset.pkl", "rb"))
reference = pickle.load(open("data/gridmap.pkl", "rb"))
print('Loaded dataset and reference')

scan_n = 0
for scan, odometry in dataset:  #process the dataset
    sys.stdout.write('\rFusing scan %3d' % scan_n)
    scan_n += 1
    #get the current laser scan and odometry and fuse them to the map
    gridmap = mapper.fuse_laser_scan(gridmap, scan, odometry)

print('\nAll scans fused, compare the map with the reference')
fig = plot_map(gridmap, 'Map - Created map')
image_file = RESULT_DIR + '/map-fused.png'
save_image(fig, 'Save created map to ' + image_file, image_file)
Furthermore, the evaluation using the simulation in t1d-map-navsim.py is as follows.
robot = hexapod.HexapodRobot(Controller())
mapper = Mapper()

robot.turn_on()            #turn on the robot
robot.start_navigation()   #start navigation thread

goals = [                  #assign goals for navigation
    Pose(Vector3(3.5, 3.5, 0), Quaternion(1, 0, 0, 0)),
    Pose(Vector3(0.0, 0.0, 0), Quaternion(1, 0, 0, 0)),
]

plt.ion()                  #prepare the online plot
fig, ax = plt.subplots()

gridmap = OccupancyGrid()  #prepare the gridmap
gridmap.resolution = 0.1
gridmap.width = 100
gridmap.height = 100
gridmap.origin = Pose(Vector3(-5.0, -5.0, 0.0), Quaternion(1, 0, 0, 0))
gridmap.data = 0.5*np.ones((gridmap.height*gridmap.width))

for goal in goals:         #go from goal to goal
    robot.goto_reactive(goal)
    while robot.navigation_goal is not None:
        plt.cla()
        #get the current laser scan and odometry and fuse them to the map
        gridmap = mapper.fuse_laser_scan(gridmap, robot.laser_scan_, robot.odometry_)
        redraw_map(ax, gridmap)  #plot the map

robot.stop_navigation()
robot.turn_off()
The expected output is a visually correct map of the environment, even in the presence of dynamic events.