{{indexmenu_n>4}}

====== t1d-map - Map building ======

The main task is to implement a function that fuses the laser scan data into the occupancy grid map. You will learn the Bayesian update of the grid map and the practical limitations of its usage. Further, you will work with a dataset of recorded sensory data to speed up your workflow.

|**Files** | ''Mapper.py'' - File to be implemented \\ ''bresenham.py'' - Implementation of Bresenham's line algorithm \\ ''utils.py'' - Helpers for evaluation, drawing, and saving figures \\ ''data'' - Scenarios and reference solutions in pkl files \\ ''t1d-map.py'' - Evaluation script \\ ''t1d-map-navsim.py'' - Evaluation script for simulation |
|**Resources** | [[courses:crl-courses:redcp:tutorials:simloc|Introduction to CoppeliaSim/V-REP and Open-Loop Robot Locomotion Control]] \\ [[courses:crl-courses:redcp:tutorials:reactive|Exteroceptive sensing and Reactive-based Obstacle Avoidance]] \\ {{ :courses:crl-courses:redcp:tasks:redcp.zip | redcp.zip}}, \\ {{ :courses:crl-courses:redcp:tasks:t1d-map.zip |}} - **Initial files and evaluation script** |

Implement and revise
  * ''Mapper.fuse_laser_scan()'' in ''Mapper.py'',
  * ''Mapper.update_free()'' in ''Mapper.py'',
  * ''Mapper.update_occupied()'' in ''Mapper.py''.
  * You can use ''ControllerReactive'' from [[:courses:crl-courses:redcp:tasks:t1b-react|]] or the provided ''ControllerReactive.py''.

----

=== Assignment ===

In ''Mapper.py'', implement the ''fuse_laser_scan'' function to fuse new data into the occupancy grid map as the robot traverses the environment. The laser scan measurements shall be fused to the grid using the Bayesian update described in [[:courses:crl-courses:redcp:tutorials:mapping|Map Building]].

The input parameters of the function are:
  - ''grid_map'' - [[:courses:crl-courses:redcp:tasks:data_types|OccupancyGrid]] - the grid map representation;
  - ''laser_scan'' - [[:courses:crl-courses:redcp:tasks:data_types|LaserScan]] - the current laser scan data perceived by the robot;
  - ''odometry'' - [[:courses:crl-courses:redcp:tasks:data_types|Odometry]] - ''pose.position.x'', ''pose.position.y'', and ''pose.orientation.quaternion'' encode the current absolute position of the robot in the environment.

The function returns:
  - the [[:courses:crl-courses:redcp:tasks:data_types|OccupancyGrid]] with the grid map fused with the new laser scan data; when the laser scan or the odometry data is invalid, the function returns a copy of the given grid map.

The ''fuse_laser_scan'' function of the ''Mapper'' class has the following prescription.

  def fuse_laser_scan(self, grid_map: OccupancyGrid, laser_scan: LaserScan, odometry: Odometry) -> OccupancyGrid:
      """ Method to fuse the laser scan data sampled by the robot with a given
          odometry into the probabilistic occupancy grid map
      Args:
          grid_map: OccupancyGrid - grid map to fuse the laser scan to
          laser_scan: LaserScan - laser scan perceived by the robot
          odometry: Odometry - perceived odometry of the robot
      Returns:
          grid_map_update: OccupancyGrid - grid map updated with the laser scan data
      """
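The individual steps of the fusion are detailed in the Approach section below. For orientation, a possible high-level structure of the method is sketched here; the helpers ''world_to_map'' and ''scan_to_grid_indices'' are hypothetical names for the steps described below, not functions from the provided template.

  import copy
  
  def fuse_laser_scan(self, grid_map, laser_scan, odometry):
      # a sketch only - the helpers correspond to the steps in the Approach section
      grid_map_update = copy.deepcopy(grid_map)
      if laser_scan is None or odometry is None:
          # invalid input - return a copy of the current grid map
          return grid_map_update
  
      # steps 1-3: project the scan and transform it into map coordinates
      odom_map = self.world_to_map(odometry.pose.position, grid_map)            # hypothetical helper
      occupied_points = self.scan_to_grid_indices(laser_scan, odometry, grid_map)  # hypothetical helper
  
      # step 4: raytrace each beam to collect the free cells
      free_points = []
      for pt in occupied_points:
          free_points.extend(self.bresenham_line(odom_map, pt))
  
      # step 5: Bayesian update of the collected cells
      data = grid_map_update.data.reshape(grid_map_update.height, grid_map_update.width)
      for (x, y) in free_points:
          data[y, x] = self.update_free(data[y, x])
      for (x, y) in occupied_points:
          data[y, x] = self.update_occupied(data[y, x])
      grid_map_update.data = data.flatten()
      return grid_map_update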
----

=== Approach ===

The recommended approach for building the occupancy map using the Bayesian approach is to follow [[:courses:crl-courses:redcp:tutorials:mapping|Map Building]]. The ''OccupancyGrid'' is a probabilistic representation of the world. In particular, the variable ''OccupancyGrid.data'' holds the probabilities of the individual cells being occupied. The metadata further parameterizes the map in the ''OccupancyGrid'':
  * ''resolution'' - the size of a single cell in the real (simulated) world;
  * ''width'' - the width of the map;
  * ''height'' - the height of the map;
  * ''origin'' - the real-world pose of the cell (0,0) in the map;
  * ''data'' - the actual probabilistic map data (in row-major order).

Hence, the approach to fusing the laser scan data into the occupancy grid map can be as follows.

  * **Project the laser scan points to the $x,y$ plane with respect to (w.r.t.) the robot heading.**
++++ $\qquad\bullet\,$ Detailed description and guidance for the step (click to view)|
  * The laser scan data are represented as an array of distance measurements, where each distance measurement $d_i$ is associated with a scan angle $\theta_i$ relative to the robot heading. Hence, the projection $\mathbf{p}_i$ of each scanned point is given as $$\mathbf{p}_i = (x_i,y_i)^T = (d_i \cos(\theta_i), d_i \sin(\theta_i))^T.$$
  * The expected output of the step should be similar to the following video (8x speed-up)\\ {{::courses:crl-courses:redcp:tasks:map1.gif?600|}}
  * The laser scan is in relative coordinates w.r.t. the robot base, where the $x$-axis corresponds to the robot's heading and the $y$-axis is perpendicular to it. You can use plotting functions such as ''plt.scatter(x,y)'' from the ''matplotlib'' library to debug the mathematical operations and visualize the projected point cloud, similarly to the sample code in the ''LaserScan'' class.
++++
  * **Compensate for the robot odometry.**
    * Compensate for the robot's heading by rotating the scanned points based on the current odometry.
    * Compensate for the robot's position by offsetting the scanned points to the robot's coordinates.
++++ $\qquad\bullet\,$ Detailed description and guidance for the step (click to view) |
  * For each projected scan point $\mathbf{p}_i$, the odometry-compensated point $\mathbf{o}_i$ is calculated according to the equation $$\mathbf{o}_i = \mathbf{R}\cdot \mathbf{p}_i + \mathbf{T},$$ where $\mathbf{R}$ is the odometry orientation component represented by the rotation matrix (the orientation of the robot) and $\mathbf{T}$ is the odometry translational component (the position of the robot).
  * The expected output of the step should be similar to the following video (8x speed-up)\\ {{::courses:crl-courses:redcp:tasks:map2.gif?600|}}
++++
  * **Transfer the points from the world coordinates to the map coordinates** (compensate for the map offset and resolution).
++++ $\qquad\bullet\,$ Detailed description and guidance for the step (click to view) |
  * The map coordinates are given by the map origin $\mathbf{x}_\mathrm{origin}$ (encoded in the Pose ''grid_map.origin'') and the map resolution $\Delta_\mathrm{res}$ (''grid_map.resolution''), where each point in the map represents an area of $\Delta_\mathrm{res}\times\Delta_\mathrm{res}~\mathrm{m}$. Therefore, we apply an approach similar to the previous step, divide the real-world coordinates by the resolution $\Delta_\mathrm{res}$, and round to the closest integer to obtain the grid map indices $\mathbf{m}_i$: $$\mathbf{m}_i = \mathrm{round}((\mathbf{o}_i - \mathbf{x}_\mathrm{origin})/\Delta_\mathrm{res}).$$
  * You should also watch for any points that fall outside the map and act accordingly: either extend the map and move its origin (more demanding) or trim the outlying points (sufficient for the course tasks).
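  * The first three steps can be combined as sketched below. The field names ''laser_scan.angle_min'', ''laser_scan.angle_increment'', and ''laser_scan.distances'', as well as the helper ''quaternion_to_yaw'', are assumptions for illustration; adapt them to the actual ''LaserScan'' and ''Odometry'' classes.

  import numpy as np
  
  def scan_to_grid_indices(laser_scan, odometry, grid_map):
      # step 1: polar -> Cartesian projection in the robot frame
      d = np.asarray(laser_scan.distances)  # assumed field names
      angles = laser_scan.angle_min + np.arange(len(d)) * laser_scan.angle_increment
      pts = np.vstack((d * np.cos(angles), d * np.sin(angles)))  # shape (2, n)
  
      # step 2: compensate the odometry, o_i = R * p_i + T
      yaw = quaternion_to_yaw(odometry.pose.orientation)  # hypothetical helper
      R = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
      T = np.array([[odometry.pose.position.x],
                    [odometry.pose.position.y]])
      pts_world = R @ pts + T
  
      # step 3: world -> map indices, m_i = round((o_i - origin) / resolution)
      origin = np.array([[grid_map.origin.position.x],
                         [grid_map.origin.position.y]])
      m = np.round((pts_world - origin) / grid_map.resolution).astype(int)
  
      # trim the points that fall outside the map
      keep = (m[0] >= 0) & (m[0] < grid_map.width) & (m[1] >= 0) & (m[1] < grid_map.height)
      return [tuple(pt) for pt in m[:, keep].T]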
  * The expected output of the step should be similar to the following video (8x speed-up)\\ {{::courses:crl-courses:redcp:tasks:map3.gif?600|}}
++++
  * **Raytrace the individual scanned points** to obtain the coordinates of the cells whose occupancy probability should be updated.
++++ $\qquad\bullet\,$ Detailed description and guidance for the step (click to view) |
  * You need to obtain the list of all the grid map points to be updated (both ''free_points'' and ''occupied_points''). It can be done using raycasting. The straightforward approach to raytracing on a 2D grid is Bresenham's line algorithm:

  def bresenham_line(self, start: (int, int), goal: (int, int)) -> list:
      """Bresenham's line algorithm
      Args:
          start: (int, int) - start coordinate
          goal: (int, int) - goal coordinate
      Returns:
          list of (int, int) - interlining points between the start and goal coordinate
      """
      (x0, y0) = start
      (x1, y1) = goal
      line = []
      dx = abs(x1 - x0)
      dy = abs(y1 - y0)
      x, y = x0, y0
      sx = -1 if x0 > x1 else 1
      sy = -1 if y0 > y1 else 1
      if dx > dy:
          err = dx / 2.0
          while x != x1:
              line.append((x, y))
              err -= dy
              if err < 0:
                  y += sy
                  err += dx
              x += sx
      else:
          err = dy / 2.0
          while y != y1:
              line.append((x, y))
              err -= dx
              if err < 0:
                  x += sx
                  err += dy
              y += sy
      return line

  * The algorithm gives you a list of all the points between the ''start'' and ''goal'' coordinates, excluding the ''goal'' point itself. The ''start'' coordinate is the robot's position in the map coordinates (given by its odometry), and the ''goal'' coordinate is each projected scan point $\mathbf{m}_i$. It is enough to store the appropriate points in lists, such as

  # get the position of the robot in the map coordinates
  odom_map = world_to_map(odometry.position)
  # get the laser scan points in the map coordinates
  laser_scan_points_map = world_to_map(laser_scan_points_world)
  
  free_points = []
  occupied_points = []
  for pt in laser_scan_points_map:
      # raytrace the points
      pts = bresenham_line(odom_map, pt)
      # save the coordinates of the free-space cells
      free_points.extend(pts)
      # save the coordinate of the occupied cell
      occupied_points.append(pt)

  * The expected output of the step should be similar to the video (8x speed-up)\\ {{::courses:crl-courses:redcp:tasks:map4.gif?600|}}
++++
  * **Update the occupancy grid using the Bayesian update and the simplified laser scan sensor model** with $\epsilon = 0$, i.e., update all the points $d$ lying between the position of the robot (given by its ''odometry'') and each of the reflection points ($d \in [0,r)$) as being free, and only the reflection point itself ($d = r$) as being occupied.
++++ $\qquad\bullet\,$ Detailed description and guidance for the step (click to view) |
  * In this step, the Bayesian update of the map is performed according to the description in [[:courses:crl-courses:redcp:tutorials:mapping|Map Building]].
  * The Bayesian occupancy grid update can be defined as: $$ P(m_i = occupied \vert z) = \dfrac{p(z \vert m_i = occupied)P(m_i = occupied)}{p(z \vert m_i = occupied)P(m_i = occupied) + p(z \vert m_i = free)P(m_i = free)}, $$ where $P(m_i = occupied \vert z)$ is the probability of the cell $m_i$ being occupied after the fusion of the sensory data; $P(m_i = occupied)$ is the previous probability of the cell being occupied, and $p(z \vert m_i = occupied)$ is the sensor model, which is usually modeled as: $$ p(z \vert m_i = occupied) = \dfrac{1 + S^z_{occupied} - S^z_{free}}{2}, $$ $$ p(z \vert m_i = free) = \dfrac{1 - S^z_{occupied} + S^z_{free}}{2} = 1 - p(z \vert m_i = occupied), $$ where $S^z_{occupied}$ and $S^z_{free}$ are the sensory models for the occupied and free space, respectively.
  * The recommended approach is to define two functions for the Bayesian update.

  def update_free(p_mi):
      """ Method to calculate the Bayesian update of the free cell with the current
          occupancy probability value p_mi
      Args:
          p_mi: float64 - the current probability of the cell being occupied
      Returns:
          p_mi: float64 - the updated probability of the cell being occupied
      """
      p_mi = # TODO
      # never let p_mi get to 0
      return p_mi
  
  def update_occupied(p_mi):
      """ Method to calculate the Bayesian update of the occupied cell with the current
          occupancy probability value p_mi
      Args:
          p_mi: float64 - the current probability of the cell being occupied
      Returns:
          p_mi: float64 - the updated probability of the cell being occupied
      """
      p_mi = # TODO
      # never let p_mi get to 1
      return p_mi

  * The following needs to be carefully watched: it is necessary to set the cell values correctly given the row-major order of the ''OccupancyGrid'' representation. HINT code:

  # construct the 2D grid
  data = grid_map_update.data.reshape(grid_map_update.height, grid_map_update.width)
  # fill in the cell probability values
  # e.g.: data[y,x] = update_free(data[y,x])
  # serialize the data back (watch for the correct width and height settings
  # if you are doing the harder, dynamically growing map)
  grid_map_update.data = data.flatten()
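  * For illustration, a possible instantiation of the two update functions is sketched below. The sensor-model constants and the clamping limits are illustrative assumptions, not values prescribed by the assignment.

  S_OCCUPIED = 0.95  # illustrative S^z_occupied for a reflection (an assumption)
  S_FREE = 0.95      # illustrative S^z_free for a traversed cell (an assumption)
  
  def bayes_update(p_mi, p_z_occupied):
      # Bayesian update of p_mi given the sensor model value p(z | m_i = occupied)
      p_z_free = 1.0 - p_z_occupied
      return (p_z_occupied * p_mi) / (p_z_occupied * p_mi + p_z_free * (1.0 - p_mi))
  
  def update_free(p_mi):
      # measurement "free": S^z_occupied = 0, S^z_free = S_FREE
      p_mi = bayes_update(p_mi, (1.0 - S_FREE) / 2.0)
      return max(p_mi, 0.05)  # never let p_mi get to 0
  
  def update_occupied(p_mi):
      # measurement "occupied": S^z_occupied = S_OCCUPIED, S^z_free = 0
      p_mi = bayes_update(p_mi, (1.0 + S_OCCUPIED) / 2.0)
      return min(p_mi, 0.95)  # never let p_mi get to 1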
++++

**The probability values shall never reach 0 or 1!!!** The effect of letting that happen is the inability of the map to dynamically add/delete obstacles, as demonstrated in the following example.

| {{courses:crl-courses:redcp:tasks:map_ok.gif?600|}} | {{courses:crl-courses:redcp:tasks:map_bad.gif?600|}} |
| Correct behavior | Wrong behavior |

In the evaluation script ''t1d-map.py'', the grid map is predefined with the size of $100\times100$ cells, the map origin of $(-5,-5,0)^T$, and the orientation $(0,0,0,1)$, which is prepared exactly for seamless finishing of the ''t1d-map'' assignment. However, consider generalizing your solution to allow for a dynamically growing map.

=== Example behavior ===

{{::courses:crl-courses:redcp:tasks:map5.gif?960|}}

----

=== Evaluation ===

The evaluation focuses on the ability of the robot to perform the correct probabilistic update. Two evaluation scripts are provided: ''t1d-map.py'' uses the recorded dataset ''data/scan_dataset.pkl'' and the reference map ''data/gridmap.pkl''; the second script, ''t1d-map-navsim.py'', utilizes ''ControllerReactive'' from [[:courses:crl-courses:redcp:tasks:t1b-react|]] for the navigation of the robot in the simulation environment.

The code can be evaluated using the following script (also attached as ''t1d-map.py'').

  dataset = pickle.load(open("data/scan_dataset.pkl", "rb"))
  reference = pickle.load(open("data/gridmap.pkl", "rb"))
  print('Loaded dataset and reference')
  
  scan_n = 0
  for scan, odometry in dataset:  # process the dataset
      sys.stdout.write('\rFusing scan %3d' % scan_n)
      scan_n += 1
      # get the current laser scan and odometry and fuse them to the map
      gridmap = mapper.fuse_laser_scan(gridmap, scan, odometry)
  
  print('\nAll scans fused; compare the map with the reference')
  fig = plot_map(gridmap, 'Map - Created map')
  image_file = RESULT_DIR + '/map-fused.png'
  save_image(fig, 'Save created map to ' + image_file, image_file)
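The excerpt loads the reference map but leaves the comparison itself to the full evaluation script. As a quick sanity check, you might compare the fused map with the reference cell by cell; the sketch below assumes both maps share the same dimensions and that ''numpy'' is imported as ''np'' (the actual script may compare differently).

  # compare the fused probabilities with the reference map (a sanity check only)
  diff = np.abs(gridmap.data - reference.data)
  print('Maximum cell probability difference: %.3f' % diff.max())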
Furthermore, the evaluation using the simulation, ''t1d-map-navsim.py'', is as follows.

  robot = hexapod.HexapodRobot(Controller())
  mapper = Mapper()
  
  robot.turn_on()           # turn on the robot
  robot.start_navigation()  # start the navigation thread
  
  goals = [                 # assign the goals for navigation
      Pose(Vector3(3.5, 3.5, 0), Quaternion(1, 0, 0, 0)),
      Pose(Vector3(0.0, 0.0, 0), Quaternion(1, 0, 0, 0)),
  ]
  
  plt.ion()                 # prepare the online plot
  fig, ax = plt.subplots()
  
  gridmap = OccupancyGrid() # prepare the gridmap
  gridmap.resolution = 0.1
  gridmap.width = 100
  gridmap.height = 100
  gridmap.origin = Pose(Vector3(-5.0, -5.0, 0.0), Quaternion(1, 0, 0, 0))
  gridmap.data = 0.5*np.ones((gridmap.height*gridmap.width))
  
  for goal in goals:        # go from goal to goal
      robot.goto_reactive(goal)
      while robot.navigation_goal is not None:
          plt.cla()
          # get the current laser scan and odometry and fuse them to the map
          gridmap = mapper.fuse_laser_scan(gridmap, robot.laser_scan_, robot.odometry_)
          redraw_map(ax, gridmap)  # plot the map
  
  robot.stop_navigation()
  robot.turn_off()

The expected output is a visually correct map of the environment, even in the presence of dynamic events.