In the class HexapodExplorer.py, implement the fuse_laser_scan function. The purpose of the function is to fuse new data into the occupancy grid map as the robot traverses the environment. The laser scan measurements shall be fused into the grid using the Bayesian update described in Lab04 - Grid-based Path Planning.
The input parameters of the function are:
- laser_scan - the LaserScan message - contains the current laser scan data perceived by the robot.
- odometry - the Odometry message - pose.position.x, pose.position.y and pose.orientation.quaternion encode the current absolute position of the robot in the environment.

The function returns:
- the OccupancyGrid message with the grid map fused with the new laser scan data; when the laser scan or the odometry data is invalid, the function just mirrors the input grid map on the output.
 
The fuse_laser_scan function in the HexapodExplorer.py class has the following prescription:
    def fuse_laser_scan(self, grid_map, laser_scan, odometry):
        """ Method to fuse the laser scan data sampled by the robot with a given 
            odometry into the probabilistic occupancy grid map
        Args:
            grid_map: OccupancyGrid - gridmap to fuse the laser scan to
            laser_scan: LaserScan - laser scan perceived by the robot
            odometry: Odometry - perceived odometry of the robot
        Returns:
            grid_map_update: OccupancyGrid - gridmap updated with the laser scan data
        """
In the HexapodExplorer.py class you can change whatever you want; however, the given interfaces and the evaluation script are fixed during the evaluation.
 
The recommended approach to occupancy map building using the Bayesian update is described in Lab03 - Grid-based Path Planning.
The OccupancyGrid represents a probabilistic representation of the world. In particular, the variable "OccupancyGrid.data" holds the probabilities of the individual cells being occupied. The map in the OccupancyGrid message is further parameterized by the metadata:
- resolution - the size of a single cell in the real (simulated) world
- width - the width of the map
- height - the height of the map
- origin - the real-world pose of the cell (0,0) in the map
- data - the actual probabilistic map data (in row-major order)
Hence, the correct approach to the fusion of the laser scan data into the occupancy grid map is as follows.
Step 1: Project the laser scan points from polar to Cartesian coordinates.
The laser scan data are represented as an array of distance measurements, where each distance measurement $d_i$ corresponds to a scan angle $\theta_i$ with respect to the robot heading. Hence, the projection $\mathbf{p}_i$ of each scanned point is given as: $$\mathbf{p}_i = (x_i,y_i)^T = (d_i \cos(\theta_i), d_i \sin(\theta_i))^T$$
The expected output of this step should be similar to the following video (8x speed-up).
The laser scan is in relative coordinates w.r.t. the hexapod base, where the x axis corresponds to the heading of the robot and the y axis is perpendicular to it.
You can use a plotting function, e.g., plt.scatter(x,y) from the matplotlib library, to help you debug the mathematical operations and visualize the projected point cloud, similar to the sample codes in the LaserScan class; a minimal sketch of this step follows.
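A minimal sketch of the projection step, assuming the LaserScan message provides angle_min, angle_increment, range_min, range_max and distances fields (check the messages module for the actual names):
import numpy as np

def project_laser_scan(laser_scan):
    """Project the laser scan distances into 2D points in the robot frame."""
    points = []
    for i, d in enumerate(laser_scan.distances):
        # drop the measurements outside of the valid sensor range
        if d < laser_scan.range_min or d > laser_scan.range_max:
            continue
        # the angle of the i-th beam w.r.t. the robot heading
        theta = laser_scan.angle_min + i * laser_scan.angle_increment
        points.append((d * np.cos(theta), d * np.sin(theta)))
    return points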
 
Step 2: Compensate for the odometry of the robot.
For each projected scan point $\mathbf{p}_i$, the odometry-compensated point $\mathbf{o}_i$ is calculated according to the equation: $$\mathbf{o}_i = \mathbf{R}\cdot \mathbf{p}_i + \mathbf{T},$$ where $\mathbf{R}$ is the odometry orientation component represented by the rotation matrix (i.e., the orientation of the robot) and $\mathbf{T}$ is the odometry translational component (i.e., the position of the robot); a sketch of this step is given below.
The expected output of this step should be similar to the following video (8x speed-up).
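A sketch of the odometry compensation, assuming the quaternion components are accessible as x, y, z, w (adapt the access to the actual Odometry message layout); for a planar robot only the rotation about the z axis (yaw) matters:
import numpy as np

def compensate_odometry(points, odometry):
    """Rotate and translate the projected scan points by the robot odometry."""
    q = odometry.pose.orientation
    # heading (yaw) extracted from the quaternion
    yaw = np.arctan2(2.0 * (q.w * q.z + q.x * q.y),
                     1.0 - 2.0 * (q.y**2 + q.z**2))
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    T = np.array([odometry.pose.position.x, odometry.pose.position.y])
    # o_i = R * p_i + T for each projected point
    return [R @ np.asarray(p) + T for p in points]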
Step 3: Transform the points into the map coordinates.
The map is in map coordinates that are given by the map origin $\mathbf{x}_\mathrm{origin}$ (encoded in the Pose message - grid_map.origin) and the map resolution $\Delta_\mathrm{res}$ (grid_map.resolution), i.e., each point in the map represents an area of $\Delta_\mathrm{res}\times\Delta_\mathrm{res}~\mathrm{m}$. Therefore, we apply a similar approach as in the previous step and additionally divide the real-world coordinates by the resolution $\Delta_\mathrm{res}$ and round them to the closest integer to obtain the grid map indices $\mathbf{m}_i$ as: $$\mathbf{m}_i = \mathrm{round}((\mathbf{o}_i - \mathbf{x}_\mathrm{origin})/\Delta_\mathrm{res})$$

You should also watch for any points that fall outside of the map and act accordingly: either extend the map and possibly move its origin (the harder version of this task), or simply trim the outlying points (sufficient for passing this task); see the sketch after this step.
The expected output of this step should be similar to the following video (8x speed-up).
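A sketch of the world-to-map transform with simple trimming of the outlying points; the world_to_map helper used in the raytracing example below follows the same idea (there the grid map is implicit, here it is passed explicitly):
import numpy as np

def world_to_map(points, grid_map):
    """Transform real-world coordinates into integer grid indices."""
    origin = np.array([grid_map.origin.position.x, grid_map.origin.position.y])
    indices = []
    for p in points:
        # m_i = round((o_i - x_origin) / delta_res)
        m = np.round((np.asarray(p) - origin) / grid_map.resolution).astype(int)
        # trim the points that fall outside of the map (simple variant)
        if 0 <= m[0] < grid_map.width and 0 <= m[1] < grid_map.height:
            indices.append((m[0], m[1]))
    return indices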
Step 4: Raytrace the scanned points to obtain the cells to be updated.
You need to obtain the list of all the grid map points to be updated (both free_points and occupied_points). This should be done using raytracing. The simplest approach to raytracing on a 2D grid is Bresenham's line algorithm:
    def bresenham_line(self, start, goal):
        """Bresenham's line algorithm
        Args:
            start: (int, int) - start grid coordinate
            goal: (int, int) - goal grid coordinate
        Returns:
            line: the interlying grid points between the start and the goal coordinate (the goal point itself is not included)
        """
        (x0, y0) = start
        (x1, y1) = goal
        line = []
        dx = abs(x1 - x0)
        dy = abs(y1 - y0)
        x, y = x0, y0
        sx = -1 if x0 > x1 else 1
        sy = -1 if y0 > y1 else 1
        if dx > dy:
            err = dx / 2.0
            while x != x1:
                line.append((x,y))
                err -= dy
                if err < 0:
                    y += sy
                    err += dx
                x += sx
        else:
            err = dy / 2.0
            while y != y1:
                line.append((x,y))
                err -= dx
                if err < 0:
                    x += sx
                    err += dy
                y += sy
        return line
The algorithm gives you a list of all points between the start coordinate and the goal coordinate, where the start coordinate is, for all the scan points, the map position of the robot given by its odometry, and the goal coordinate is each projected scan point $\mathbf{m}_i$. It is enough to store the appropriate points in lists, e.g.:
#get the position of the robot in the map coordinates
odom_map = world_to_map(odometry.position)
#get the laser scan points in the map coordinates
laser_scan_points_map = world_to_map(laser_scan_points_world)
 
free_points = []
occupied_points = []
 
for pt in laser_scan_points_map:
  #raytrace the points
  pts = bresenham_line(odom_map, pt)
 
  #save the coordinate of free space cells
  free_points.extend(pts)
  #save the coordinate of occupied cell 
  occupied_points.append(pt) 
The expected output of this step should be similar to the following video (8x speed-up).
Step 5: Update the occupancy grid using the Bayesian update and the simplified laser scan sensor model with $\epsilon = 0$, i.e., update all the points $d$ lying between the position of the robot (given by its odometry) and each of the reflection points, $d \in [0,r)$, as being free, and only the reflection point itself, $d = r$, as being occupied.
 
The Bayesian update of the occupancy grid is given by:
$$
P(m_i = occupied \vert z) = \dfrac{p(z \vert m_i = occupied)P(m_i = occupied)}{p(z \vert m_i = occupied)P(m_i = occupied) + p(z \vert m_i = free)P(m_i = free)},
$$
where $P(m_i = occupied \vert z)$ is the probability of the cell $m_i$ being occupied after the fusion of the sensory data; $P(m_i = occupied)$ is the previous probability of the cell being occupied; and $p(z \vert m_i = occupied)$ is the sensor model, which is usually modeled as:
$$
p(z \vert m_i = occupied) = \dfrac{1 + S^z_{occupied} - S^z_{free}}{2},
$$
$$
p(z \vert m_i = free) =  \dfrac{1 - S^z_{occupied} + S^z_{free}}{2} = 1 - p(z \vert m_i = occupied),
$$
where $S^z_{occupied}$ and $S^z_{free}$ are the sensory models for the occupied and free space respectively.
For the simulated LIDAR sensor, we will use the model stated below:
$$
  S^z_{occupied} = \begin{cases} 0.95 \qquad\text{for}\qquad d = r\\ 0 \qquad\text{otherwise} \end{cases}, \qquad\qquad S^z_{free} = \begin{cases} 0.95 \qquad\text{for}\qquad d \in [0,r)\\ 0 \qquad\text{otherwise} \end{cases},
$$
where $r$ is the distance of the measured point and $\epsilon$ is the precision of the sensor, i.e., it is assumed that the space between the sensor and the measured point is free, and the close vicinity of the measured point, given by the sensor precision, is occupied.
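For example, fusing a reflection ($d = r$, i.e., $S^z_{occupied} = 0.95$, $S^z_{free} = 0$) into a previously unseen cell with the prior $P(m_i = occupied) = 0.5$ gives $p(z \vert m_i = occupied) = (1 + 0.95 - 0)/2 = 0.975$, and hence
$$
P(m_i = occupied \vert z) = \dfrac{0.975 \cdot 0.5}{0.975 \cdot 0.5 + 0.025 \cdot 0.5} = 0.975.
$$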
* The recommended approach is to define two functions to help you with the Bayesian update:
        def update_free(P_mi):
            """method to calculate the Bayesian update of the free cell with the current occupancy probability value P_mi 
            Args:
                P_mi: float64 - current probability of the cell being occupied
            Returns:
                p_mi: float64 - updated probability of the cell being occupied
            """
            p_mi = #TODO
            #never let p_mi get to 0
            return p_mi
 
        def update_occupied(P_mi):
            """method to calculate the Bayesian update of the occupied cell with the current occupancy probability value P_mi
            Args:
                P_mi: float64 - current probability of the cell being occupied
            Returns:
                p_mi: float64 - updated probability of the cell being occupied
            """
            p_mi = #TODO
            #never let p_mi get to 1
            return p_mi
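One possible implementation of these helpers, following the sensor model above; note that the clamping bounds (0.05 and 0.95 here) are a design choice, not values prescribed by the assignment:
def update_free(P_mi):
    """Bayesian update of a cell observed as free (d in [0, r))."""
    p_z_occ = (1 + 0 - 0.95) / 2.0      # p(z | m_i = occupied) = 0.025
    p_z_free = 1 - p_z_occ              # p(z | m_i = free) = 0.975
    p_mi = (p_z_occ * P_mi) / (p_z_occ * P_mi + p_z_free * (1 - P_mi))
    return max(0.05, p_mi)              # never let p_mi get to 0 (clamp is a design choice)

def update_occupied(P_mi):
    """Bayesian update of a cell observed as occupied (d = r)."""
    p_z_occ = (1 + 0.95 - 0) / 2.0      # p(z | m_i = occupied) = 0.975
    p_z_free = 1 - p_z_occ              # p(z | m_i = free) = 0.025
    p_mi = (p_z_occ * P_mi) / (p_z_occ * P_mi + p_z_free * (1 - P_mi))
    return min(0.95, p_mi)              # never let p_mi get to 1 (clamp is a design choice)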
 * There are two things to watch for during this step.
First, it is necessary to correctly set the cell values given the row-major order of the OccupancyGrid representation. HINT code:
        #construct the 2D grid
        data = grid_map.data.reshape(grid_map.height, grid_map.width)

        #fill in the cell probability values
        #e.g.: data[y,x] = update_free(data[y,x])

        #serialize the data back (!watch for the correct width and height settings if you are doing the harder assignment)
        grid_map.data = data.flatten()
 
 
Second, note that when using the sensory model as it is described in Lab03 - Grid-based Path Planning, once the grid cell probability is set to 0, it stays 0, and hence all the nice features of the probabilistic mapping vanish. Therefore, it is recommended to use the following sensory model instead:
$$
  S^z_{occupied} = \begin{cases} 0.95 \qquad\text{for}\qquad d = r\\ 0 \qquad\text{otherwise} \end{cases},        
$$
$$
   S^z_{free} = \begin{cases} 0.95 \qquad\text{for}\qquad d \in [0,r)\\ 0 \qquad\text{otherwise} \end{cases},
$$
Note that in the evaluation script of the t1d-map task, the grid map is predefined with the size of $100\times100$ points, the map origin of $(-5,-5,0)^T$, and the orientation $(0,0,0,1)$, which is prepared exactly for seamless finishing of the t1f-map assignment; however, for the increased difficulty that will pay off during the work on the Project assignment, you are encouraged to generalize your solution to allow for a dynamically growing map.
 
The evaluation focuses on the ability of the robot to perform the correct probabilistic update. You can choose between two variants: a simple one that initializes the grid map to a sufficient size at the beginning of the evaluation, and a more advanced one that initializes an empty grid map, making it your responsibility to correctly handle the size changes of the map. The hard variant has no effect on the scoring of the T1d-map - Map building task; however, it will be beneficial in the forthcoming Semestral project on mobile robot exploration.
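For the harder variant, the sketch below shows one way to grow the map; grow_map, min_idx and max_idx are hypothetical names, and the code assumes the OccupancyGrid fields described above with data stored as a numpy array:
import numpy as np

def grow_map(grid_map, min_idx, max_idx):
    """Grow grid_map so that the (x, y) index range [min_idx, max_idx] fits in.
    min_idx/max_idx are hypothetical (x, y) integer arrays holding the extreme
    indices of the current scan; the added cells are unknown (0.5)."""
    pad_lo = np.maximum(-np.asarray(min_idx), 0)   # cells to prepend in (x, y)
    pad_hi = np.maximum(np.asarray(max_idx) - np.array([grid_map.width, grid_map.height]) + 1, 0)
    data = grid_map.data.reshape(grid_map.height, grid_map.width)
    # pad rows (y) and columns (x) with the unknown-cell probability 0.5
    data = np.pad(data, ((pad_lo[1], pad_hi[1]), (pad_lo[0], pad_hi[0])),
                  constant_values=0.5)
    grid_map.height, grid_map.width = data.shape
    grid_map.data = data.flatten()
    # shift the origin so the old cells keep their real-world positions
    grid_map.origin.position.x -= pad_lo[0] * grid_map.resolution
    grid_map.origin.position.y -= pad_lo[1] * grid_map.resolution
    return grid_map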
The code can be evaluated using the following script (also attached as t1d-map.py).
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
 
import matplotlib.pyplot as plt
 
import sys
import os
import math
import numpy as np
 
sys.path.append('hexapod_robot')
sys.path.append('hexapod_explorer')
 
#import hexapod robot and explorer
import HexapodRobot as hexapod
import HexapodExplorer as explorer
 
#import communication messages
from messages import *
 
#switch to select between the simple variant or the hard variant
SIMPLE_VARIANT = True
 
if __name__=="__main__":
 
    robot = hexapod.HexapodRobot(0)
    explor = explorer.HexapodExplorer()
 
    #turn on the robot 
    robot.turn_on()
 
    #start navigation thread
    robot.start_navigation()
 
    #assign goal for navigation
    goals = [
        Pose(Vector3(3.5,3.5,0),Quaternion(1,0,0,0)),
        Pose(Vector3(0.0,0.0,0),Quaternion(1,0,0,0)),
    ]
 
    #prepare the online plot
    plt.ion()
    fig, ax = plt.subplots()
 
    #prepare the gridmap
    gridmap = OccupancyGrid()
    gridmap.resolution = 0.1
 
    if SIMPLE_VARIANT:
        gridmap.width = 100
        gridmap.height = 100
        gridmap.origin = Pose(Vector3(-5.0,-5.0,0.0), Quaternion(1,0,0,0))
        gridmap.data = 0.5*np.ones((gridmap.height*gridmap.width))
 
    #go from goal to goal
    for goal in goals:
        robot.goto_reactive(goal)
        while robot.navigation_goal is not None:
            plt.cla()
 
            #get the current laser scan and odometry and fuse them to the map
            gridmap = explor.fuse_laser_scan(gridmap, robot.laser_scan_, robot.odometry_)
 
            #plot the map
            gridmap.plot(ax)
            plt.xlabel('x[m]')
            plt.ylabel('y[m]')
            #plt.axis('square')
            plt.show()
 
            plt.pause(0.1)
 
    robot.stop_navigation()
    robot.turn_off()
The expected output is a visually correct map of the environment, even in the presence of dynamic events.