Warning
This page is located in the archive.

T1c-map - Map building

The main task is to implement a function that fuses the laser scan data into the occupancy grid map. You will learn the Bayesian update of the grid map and the practical limitations of its usage. Furthermore, you will learn how to create a dataset of sensor data to speed up your workflow.

Deadline 25. October 2020, 23:59 PST
Points 2
Label in BRUTE t1c-map
Files to submit archive with HexapodExplorer.py file
Resources B4M36UIR_t1_resource_pack [Updated 12.10.2020]
OccupancyGrid.py [Updated 12.10.2020] - fixed error with switched x-y coordinates in plotting
t1c_map.py [Updated 12.10.2020] - added option for advanced assignment

Assignment

In the file HexapodExplorer.py, implement the fuse_laser_scan function. The purpose of the function is to fuse new data into the occupancy grid map as the robot traverses the environment. The laser scan measurements shall be fused into the grid using the Bayesian update described in Lab03 - Grid-based Path Planning. The input parameters of the function are:

  1. grid_map - OccupancyGrid message - grid map representation
  2. laser_scan - LaserScan message - contains the current laser scan data perceived by the robot.
  3. odometry - Odometry message - pose.position.x, pose.position.y, and pose.orientation.quaternion encode the current absolute pose of the robot in the environment

The function returns:

  1. the OccupancyGrid message with the grid map fused with the new laser scan data; when the laser scan or the odometry data is invalid, the function simply mirrors the input grid map on the output

The fuse_laser_scan function in the HexapodExplorer.py class has the following prescription:

    def fuse_laser_scan(self, grid_map, laser_scan, odometry):
        """ Method to fuse the laser scan data sampled by the robot with a given 
            odometry into the probabilistic occupancy grid map
        Args:
            grid_map: OccupancyGrid - gridmap to fuse the laser scan to
            laser_scan: LaserScan - laser scan perceived by the robot
            odometry: Odometry - perceived odometry of the robot
        Returns:
            grid_map_update: OccupancyGrid - gridmap updated with the laser scan data
        """

In the class HexapodExplorer.py you can change whatever you want. During evaluation, however, the given interfaces and the evaluation script are fixed.

Approach

The recommended approach for the occupancy map building using Bayesian approach is described in Lab03 - Grid-based Path Planning.

The OccupancyGrid is a probabilistic representation of the world. In particular, the variable “OccupancyGrid.data” holds the probabilities of the individual cells being occupied. The map in the OccupancyGrid message is further parameterized by the metadata:

  • resolution - the size of a single cell in the real (simulated) world
  • width - width of the map
  • height - height of the map
  • origin - the real-world pose of the cell (0,0) in the map.
  • data - the actual probabilistic map data (in row-major order)
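Given this metadata, a world coordinate can be converted to a grid index using the origin and resolution. The following is a minimal sketch of such a conversion; the helper name world_to_map is illustrative and not part of the required interface:

```python
import numpy as np

def world_to_map(points, origin, resolution):
    """Convert world-frame (x, y) points [m] to integer grid indices.

    points: Nx2 array of world coordinates
    origin: (x, y) world position of grid cell (0, 0)
    resolution: cell edge length in meters
    """
    pts = np.asarray(points, dtype=float)
    return np.floor((pts - np.asarray(origin)) / resolution).astype(int)

# With the map origin at (-5, -5) and 0.1 m cells, the world origin (0, 0)
# falls into cell (50, 50).
```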

Hence, the correct approach to the fusion of the laser scan data into the occupancy grid map is as follows:

  • Project the laser scan points to x,y plane with respect to the robot heading


  • Compensate for the robot odometry
    • Compensate for the heading of the robot by rotating the scanned points based on the current odometry
    • Compensate for the position of the robot by offsetting the scanned points to the robot's coordinates


  • Transfer the points from the world coordinates to the map coordinates (compensate for the map offset and resolution)


  • Raytrace the individual scanned points to obtain the coordinates of the cells whose occupancy probability should be updated


  • Update the occupancy grid using the Bayesian update and the simplified laser scan sensor model with $\epsilon = 0$, i.e., update all the cells at distances $d \in [0,r)$ lying between the position of the robot (given by its odometry) and each reflection point as being free, and only the reflection point itself $(d = r)$ as being occupied

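The projection, transformation, and raytracing steps above can be sketched as follows. This is a minimal illustration, not the required implementation; it assumes a planar motion with a z-axis-only rotation (yaw), and the function and variable names are illustrative:

```python
import numpy as np

def bresenham(start, goal):
    """Integer grid cells on the line from start to goal (both inclusive)."""
    (x0, y0), (x1, y1) = start, goal
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

def scan_to_cells(ranges, angle_min, angle_increment, robot_xy, yaw,
                  origin, resolution):
    """Project a laser scan to lists of grid cells, one list per beam."""
    r = np.asarray(ranges, dtype=float)
    angles = angle_min + angle_increment * np.arange(len(r))
    # 1) polar -> Cartesian in the robot frame, 2) rotate by the robot
    # heading, 3) translate by the robot position (steps combined)
    x = robot_xy[0] + r * np.cos(angles + yaw)
    y = robot_xy[1] + r * np.sin(angles + yaw)
    # 4) world -> map coordinates (compensate for origin and resolution)
    hits = np.floor((np.c_[x, y] - origin) / resolution).astype(int)
    start = tuple(np.floor((np.asarray(robot_xy) - origin)
                           / resolution).astype(int))
    # 5) raytrace each beam; all cells but the last lie in free space,
    # the last cell is the reflection point
    return [bresenham(start, tuple(h)) for h in hits]
```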

Note that when using the sensor model as it is described in Lab03 - Grid-based Path Planning, once a grid cell probability is set to 0, it stays 0; hence, all the nice features of probabilistic mapping vanish. Therefore it is recommended to use a different sensor model with: $$ S^z_{occupied} = \begin{cases} 0.95 \qquad\text{for}\qquad d = r\\ 0 \qquad\text{otherwise} \end{cases}, $$ $$ S^z_{free} = \begin{cases} 0.95 \qquad\text{for}\qquad d \in [0,r)\\ 0 \qquad\text{otherwise} \end{cases}, $$
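A per-cell binary Bayesian update with this sensor model could be sketched as follows. The function name is illustrative, and the final clamping of the posterior is an optional extra assumption (it keeps the map responsive to dynamic changes), not part of the prescribed model:

```python
def bayes_update(p_prior, is_hit, s_occ=0.95, s_free=0.95):
    """Binary Bayesian update of a single cell's occupancy probability.

    is_hit: True for the reflection point (d == r),
            False for cells along the beam (d in [0, r)).
    s_occ / s_free: sensor model values, kept below 1 so that a single
    measurement never drives the probability to exactly 0 or 1.
    """
    if is_hit:
        p_z_occ, p_z_free = s_occ, 1.0 - s_occ
    else:
        p_z_occ, p_z_free = 1.0 - s_free, s_free
    # standard binary Bayes filter for occupancy grids
    p = p_z_occ * p_prior / (p_z_occ * p_prior
                             + p_z_free * (1.0 - p_prior))
    # optional: clamp so repeated updates cannot saturate the cell
    return min(max(p, 0.05), 0.95)
```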
Note that in the evaluation script of the t1c-map task the grid map is predefined with the size $100\times100$ points, the correct map origin of $(-5,-5, 0)^T$, and orientation $(0,0,0,1)$, which is prepared exactly for seamless finishing of the t1c-map assignment; however, for the increased difficulty that will pay off during the work on the Project assignment, you are encouraged to generalize your solution to allow for a dynamically growing map.
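For the growing-map variant, one possible approach when a measurement falls outside the current bounds is to pad the data array with the unknown value 0.5 and shift the origin accordingly. A minimal sketch (the helper name grow_grid is illustrative):

```python
import numpy as np

def grow_grid(data, origin, resolution,
              pad_left, pad_bottom, pad_right, pad_top):
    """Enlarge a 2D occupancy grid, filling new cells with 0.5 (unknown).

    origin is the world position of cell (0, 0); padding on the left or
    bottom shifts it by the corresponding number of cells.
    """
    grown = np.pad(data, ((pad_bottom, pad_top), (pad_left, pad_right)),
                   constant_values=0.5)
    new_origin = (origin[0] - pad_left * resolution,
                  origin[1] - pad_bottom * resolution)
    return grown, new_origin
```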

Example behavior


Evaluation

The evaluation focuses on the ability of the robot to do the correct probabilistic update. You can choose between two variants: a simple one, which initializes the grid map to a sufficient size at the beginning of the evaluation, and a more advanced one, which initializes an empty grid map and makes it your responsibility to correctly handle the size changes of the map. The hard variant has no effect on the scoring of the task T1c-map - Map building; however, it will be beneficial in the forthcoming Semestral project on mobile robot exploration.

The code can be evaluated using the following script (also attached as t1c_map.py).

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
 
import matplotlib.pyplot as plt
 
import sys
import os
import math
import numpy as np
 
sys.path.append('hexapod_robot')
sys.path.append('hexapod_explorer')
 
#import hexapod robot and explorer
import HexapodRobot as hexapod
import HexapodExplorer as explorer
 
#import communication messages
from messages import *
 
#switch to select between the simple variant or the hard variant
SIMPLE_VARIANT = True
 
if __name__=="__main__":
 
    robot = hexapod.HexapodRobot(0)
    explor = explorer.HexapodExplorer()
 
    #turn on the robot 
    robot.turn_on()
 
    #start navigation thread
    robot.start_navigation()
 
    #assign goal for navigation
    goals = [
        Pose(Vector3(3.5,3.5,0),Quaternion(1,0,0,0)),
        Pose(Vector3(0.0,0.0,0),Quaternion(1,0,0,0)),
    ]
 
    #prepare the online plot
    plt.ion()
    fig, ax = plt.subplots()
 
    #prepare the gridmap
    gridmap = OccupancyGrid()
    gridmap.resolution = 0.1
 
    if SIMPLE_VARIANT:
        gridmap.width = 100
        gridmap.height = 100
        gridmap.origin = Pose(Vector3(-5.0,-5.0,0.0), Quaternion(1,0,0,0))
        gridmap.data = 0.5*np.ones((gridmap.height*gridmap.width))
 
    #go from goal to goal
    for goal in goals:
        robot.goto_reactive(goal)
        while robot.navigation_goal is not None:
            plt.cla()
 
            #get the current laser scan and odometry and fuse them to the map
            gridmap = explor.fuse_laser_scan(gridmap, robot.laser_scan_, robot.odometry_)
 
            #plot the map
            gridmap.plot(ax)
            plt.xlabel('x[m]')
            plt.ylabel('y[m]')
            #plt.axis('square')
            plt.show()
 
            plt.pause(0.1)
 
    robot.stop_navigation()
    robot.turn_off()

The expected output is a visually correct map of the environment, even in the presence of dynamic events.

courses/b4m36uir/hw/t1c-map.txt · Last modified: 2020/10/12 13:37 by cizekpe6