Semestral project on mobile robot exploration.
The aim of the project is to leverage the steps implemented in T1a-ctrl, T1c-map, T1d-plan, and T1e-expl and to create an integrated solution for autonomous robotic exploration. The exploration strategy is implemented in a runnable Python script Explorer.py (and the other files described below).
Explorer.py
The solution implemented in the file Explorer.py is expected to connect to the simulated robot in the CoppeliaSim simulator and autonomously navigate the robot (or a team of robots) through the environment while avoiding collisions and collecting the metric map of the environment. The success metric of the mobile (multi-)robot exploration is the environment coverage (percentage of the explored area) over time. Hence, the task is to collect as much spatial information about the environment as possible in the shortest time. For that purpose, intelligent decision making in the problem of navigation towards frontier goals is necessary.
The supporting files HexapodController.py and HexapodExplorer.py carry over the functionality of the T1 tasks: the robot control and navigation, and the mapping, frontier detection, and planning routines, respectively.
An example behavior of the Explorer is shown in the following videos.
The proposed scoring presents possible extensions of the base implementation, which consolidates the T1a-ctrl, T1c-map, T1d-plan, and T1e-expl tasks. You may combine the following features in pursuing your own exploration system, or come up with your own features (in that case, please consult them with your lab teacher). The implementation features include, but are not limited to: scalable mapping, different methods of frontier detection and navigation goal selection, and multi-robot exploration with simple or advanced task allocation, including full or only limited communication and known or unknown base robot odometry.
- report – the written report summarizing the approach (mandatory)
- code_style – the quality and documentation of the submitted code
- m1 – mapping with a fixed-size OccupancyGrid map
- m2 – scalable mapping with a dynamically growing map
- f1 – free-edge frontier detection
- f2 – frontiers with multiple representatives obtained by clustering
- f3 – information-gain-based utility of frontiers
- p1 – planning towards the closest frontier
- p2 – planning towards the frontier with the highest utility
- p3 – TSP-like planning over all frontiers
- a1 – centralized multi-robot exploration in a single script
- a2 – decentralized task allocation using the MinPos strategy
- a3 – multi-robot baseline running as a single process
- a4 – multi-robot exploration with one process per robot
- a5 – multi-robot exploration with unknown mutual odometry frames
The list above summarizes the possible extension ideas. The detailed description of the individual features follows in the section Approach.
The Explorer.py file is expected to fit into the current project files with the following directory structure (also check the Project resource pack):
```
Project
├── Explorer.py
├── hexapod_explorer
│   └── HexapodExplorer.py
└── hexapod_robot
    ├── HexapodRobot.py
    ├── cpg
    └── hexapod_sim
```
The projects are evaluated by the lab teachers. The submitted files are expected to be copied into the provided resource pack. Therefore, besides the report, only the Explorer.py file is mandatory for submission, and you can leave out unmodified files, e.g., the simulator library files. However, to ensure that the project performs as intended, it is recommended to submit the whole project folder.
The project files are submitted to the BRUTE system. After the submission deadline, a demonstration of the working robotic exploration in the CoppeliaSim simulator is evaluated by the lab instructors.
The implementation of the mobile robot exploration consists of building blocks that correspond to a large extent to the assignments in the T1a-ctrl, T1c-map, T1d-plan, and T1e-expl tasks. In general, the system has the following inputs given by the environment sensing:
- odometry – the current robot odometry estimate with the robot pose given by pose.position.x, pose.position.y, and pose.orientation.quaternion,
- collision – the indicator of the robot collision with an obstacle (True/False),
- laser_scan – the current laser scan measurements.
And the following outputs:

- cmd_msg = Twist() – the velocity command steering the robot.
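For illustration, a minimal sketch of accessing these inputs and constructing the output command follows; it assumes an already connected HexapodRobot instance robot, the collision_ attribute name is an assumption, and in the actual pipeline the velocity command is produced inside the robot's navigation thread:

```python
#read the inputs provided by the simulated robot
odometry = robot.odometry_                      #current odometry estimate
x = odometry.pose.position.x                    #position along the x-axis
y = odometry.pose.position.y                    #position along the y-axis
quat = odometry.pose.orientation.quaternion     #orientation quaternion
collision = robot.collision_                    #True/False collision indicator (assumed name)
laser_scan = robot.laser_scan_                  #current laser scan message

#construct the output velocity command
cmd_msg = Twist()
cmd_msg.linear.x = 0.2      #forward velocity
cmd_msg.angular.z = 0.1     #turning rate
```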
Hence, the robot has to be able to collect the laser scan measurements, fuse them into the map, select navigation goals in this map and navigate towards these temporary goals to discover the unknown parts of the environment.
Approach

Here you will find the recommendations and guidelines for the implementation of the basic exploration pipeline and for each of the scored items described above.
To kickstart your awesome multi-robot information-theoretic exploration project, a few guidelines on how to organize your code and connect the T1 tasks into a working exploration pipeline follow.
As stated above, the robot has to perform mapping, frontier discovery and goal selection as a part of planning, and trajectory following. The structure of a basic implementation that covers all of these steps may look as follows.
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import sys
import time
import threading as thread

import numpy as np
import matplotlib.pyplot as plt

sys.path.append('../')
sys.path.append('hexapod_robot')
sys.path.append('hexapod_explorer')

#import hexapod robot and explorer
import HexapodRobot
import HexapodExplorer

#import communication messages
from messages import *

class Explorer:
    """ Class to represent an exploration agent
    """
    def __init__(self, robotID = 0):

        """ VARIABLES
        """
        #occupancy grid map of the robot ... possibly extended initialization needed in case of 'm1' assignment
        self.gridmap = OccupancyGrid()
        self.gridmap.resolution = 0.1

        #current frontiers
        self.frontiers = None

        #current path
        self.path = None

        #stopping condition
        self.stop = False

        """ Connecting the simulator
        """
        #instantiate the robot
        self.robot = HexapodRobot.HexapodRobot(robotID)
        #...and the explorer used in task t1c-t1e
        self.explor = HexapodExplorer.HexapodExplorer()

    def start(self):
        """ method to connect to the simulated robot and start the navigation, localization, mapping and planning
        """
        #turn on the robot
        self.robot.turn_on()

        #start navigation thread
        self.robot.start_navigation()

        #start the mapping thread
        try:
            mapping_thread = thread.Thread(target=self.mapping)
            mapping_thread.start()
        except:
            print("Error: unable to start mapping thread")
            sys.exit(1)

        #start the planning thread
        try:
            planning_thread = thread.Thread(target=self.planning)
            planning_thread.start()
        except:
            print("Error: unable to start planning thread")
            sys.exit(1)

        #start the trajectory following thread
        try:
            traj_follow_thread = thread.Thread(target=self.trajectory_following)
            traj_follow_thread.start()
        except:
            print("Error: unable to start trajectory following thread")
            sys.exit(1)

    def __del__(self):
        #turn off the robot
        self.robot.stop_navigation()
        self.robot.turn_off()

    def mapping(self):
        """ Mapping thread for fusing the laser scans into the grid map
        """
        while not self.stop:
            #fuse the laser scan
            laser_scan = self.robot.laser_scan_
            #...

    def planning(self):
        """ Planning thread that takes the constructed gridmap, finds frontiers, and selects the next goal with the navigation path
        """
        while not self.stop:
            #obstacle growing
            #...
            #frontier calculation
            #...
            #path planning and goal selection
            odometry = self.robot.odometry_
            #...
            self.path = Path()

    def trajectory_following(self):
        """ Trajectory following thread that assigns new goals to the robot navigation thread
        """
        while not self.stop:
            #...
            if self.robot.navigation_goal is None:
                #fetch the next navigation goal from the current path (assuming the Path.poses waypoint list)
                nav_goal = self.path.poses.pop(0)
                #give it to the robot
                self.robot.goto(nav_goal)
            #...

if __name__ == "__main__":
    #instantiate the robot
    ex0 = Explorer()
    #start the locomotion
    ex0.start()

    #continuously plot the map, targets and plan (once per second)
    fig, ax = plt.subplots()
    plt.ion()
    while(1):
        plt.cla()
        #plot the gridmap
        if ex0.gridmap.data is not None:
            ex0.gridmap.plot(ax)
        #plot the navigation path
        if ex0.path is not None:
            ex0.path.plot(ax)

        plt.xlabel('x[m]')
        plt.ylabel('y[m]')
        ax.set_aspect('equal', 'box')
        plt.show()
        #to throttle the plotting pause for 1s
        plt.pause(1)
```
report

The purpose of the report is to briefly summarize the student's approach to the assigned problem. There are no specific requirements on the formatting of the report. The report should describe the implemented features and the reasoning behind the chosen approach.
code_style

Coding style is an important part of software development. Obfuscated, hard-to-read, and undocumented code is hard to reuse and more prone to syntactic and semantic errors. Therefore, the code quality is assessed as a part of the evaluation. We are not going to enforce any specific style guide, e.g., the Google Python style guide; however, consider knowing the style guides of the respective programming languages a good programming practice. Good code quality also helps a potential employer in assessing your programming skills. In your project code, you should mainly adhere to common recommendations on code readability and documentation.
m1

Straightforward usage of T1c-map. The OccupancyGrid map passed to the laser scan fusion is initialized with given dimensions and resolution, e.g., for the blocks or blocks-multirobot scenes, the map may be initialized as follows:
```python
gridmap = OccupancyGrid()
gridmap.resolution = 0.1
gridmap.width = 100
gridmap.height = 100
gridmap.origin = Pose(Vector3(-5.0, -5.0, 0.0), Quaternion(1, 0, 0, 0))
gridmap.data = 0.5*np.ones((gridmap.height*gridmap.width))
```
m2

Extended version of the T1c-map. The map is initialized only with its resolution:

```python
gridmap = OccupancyGrid()
gridmap.resolution = 0.1
```

During the laser scan fusion, whenever a projected scan point falls outside the current map, the map has to be dynamically enlarged. With x_coordinates and y_coordinates being the grid coordinates of the projected points:

- gridmap.width and gridmap.height are enlarged to fit all the projected points, e.g., new_width = max(gridmap.width, x_coordinates) - min(0, x_coordinates) + 1,
- gridmap.origin is shifted accordingly, i.e., by (min(0, x_coordinates), min(0, y_coordinates))*gridmap.resolution,
- gridmap.data is reallocated as an array of unknown cells, data = 0.5*np.ones((new_height, new_width)), into which the previous map data are copied with the proper offset, e.g., data[-y_shift:-y_shift+previous_height, -x_shift:-x_shift+previous_width] = grid_map.data.reshape(h, w).
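A minimal sketch of such a map-growing step follows; the helper name grow_map, the row-major storage of gridmap.data, and the numpy arrays x_coordinates, y_coordinates of the projected grid coordinates are assumptions:

```python
import numpy as np

def grow_map(gridmap, x_coordinates, y_coordinates):
    """Enlarge the gridmap so that all projected scan points fit inside it."""
    #new bounds of the map in the old grid coordinates
    min_x = min(0, int(x_coordinates.min()))
    min_y = min(0, int(y_coordinates.min()))
    max_x = max(gridmap.width - 1, int(x_coordinates.max()))
    max_y = max(gridmap.height - 1, int(y_coordinates.max()))

    new_width = max_x - min_x + 1
    new_height = max_y - min_y + 1
    if new_width == gridmap.width and new_height == gridmap.height:
        return gridmap      #all points already fit, nothing to grow

    #offset of the old map inside the new one (non-positive shifts)
    x_shift, y_shift = min_x, min_y

    #allocate the enlarged map as unknown (p = 0.5) and copy the old data
    data = 0.5*np.ones((new_height, new_width))
    data[-y_shift:-y_shift + gridmap.height,
         -x_shift:-x_shift + gridmap.width] = gridmap.data.reshape(gridmap.height, gridmap.width)

    #shift the map origin and store the new map
    gridmap.origin.position.x += x_shift*gridmap.resolution
    gridmap.origin.position.y += y_shift*gridmap.resolution
    gridmap.width, gridmap.height = new_width, new_height
    gridmap.data = data.flatten()
    return gridmap
```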
The correct behavior of the dynamic map update can be seen in the following video.
f1

Direct usage of the T1e-expl. No adjustments are necessary.
f2

The free-edge frontiers of the f1 assignment may contain large segments of free-edge cells clustered into a single frontier. As a result, the frontier representative may lie outside the free area, and the robot may not be able to observe the whole free edge from a single spot given the maximal sensor range. Therefore, it is beneficial to split large free-edge clusters into smaller ones and select multiple representatives as follows.
The number of representatives of a single frontier component may be computed as 1)
$$n_r = 1 + \left\lfloor \frac{f}{D} + 0.5 \right\rfloor,$$
where $f$ is the number of frontier cells and $D$ is the sensor range expressed in grid cells. Then, K-means spatial clustering can be used to distribute the representatives within the connected component. For the K-means clustering, you may use the scikit-learn library and its K-means implementation:

```python
from sklearn.cluster import KMeans
```
The difference between the frontiers found according to the f1 assignment and the f2 assignment can be seen in the following figures for $D=100$.
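A sketch of the splitting may look as follows; frontier_cells is assumed to be an N×2 numpy array with the grid coordinates of one connected free-edge component, and D is the sensor range in grid cells:

```python
import numpy as np
from sklearn.cluster import KMeans

def frontier_representatives(frontier_cells, D):
    """Split one free-edge component into n_r representatives using K-means."""
    f = len(frontier_cells)             #number of frontier cells in the component
    n_r = 1 + int(f/D + 0.5)            #n_r = 1 + floor(f/D + 0.5)
    n_r = min(n_r, f)                   #cannot have more clusters than cells
    kmeans = KMeans(n_clusters=n_r, n_init=10).fit(frontier_cells)
    return kmeans.cluster_centers_      #one representative per cluster
```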
f3

On an occupancy grid, each cell $m_i$ carries the probability of being an obstacle $p( m_i \vert z_{1:t} , x_{1:t} )$ based on the so-far obtained measurements $z_{1:t}$, see Lab 03 for details. Therefore, the utility of observing an arbitrary cell can be understood as the information gained by this observation, and this mutual information is bounded by the entropy of the binary distribution of the cell's occupancy. Hence, the exploration locations are determined as the areas from where the robot gains the most information about the occupancy.
Approximating the Mutual Information
The mutual information gained by observing cell $m_i$ is bounded by the entropy of its occupancy distribution, and thus the information gain is considered as $$I(m_i) \approx H(m_i) ,$$ where the entropy of the binary occupancy distribution at the cell $m_i$ is $$H(m_i) = - p \log p - (1-p) \log (1-p) ,$$ and $p = p( m_i \vert z_{1:t} , x_{1:t} )$ is the probability that the cell $m_i$ is occupied based on the so-far obtained measurements.
Furthermore, since the occupancy of the individual cells is assumed to be independent, the mutual information gained by a robot observing its surroundings is defined as the sum of the respective mutual information of the cells visible from its position as follows.
Mutual Information and Robot Actions
The mutual information gained by observing the robot's surroundings from the cell $m_o$ is the sum of the information gained by observing the individual cells as 2) $$I_{action}(a(m_o)) = \sum_{m_i \in R(a(m_o))} I (m_i),$$ where $a(m_o)$ is the action of observing from cell $m_o$, $R(a)$ is the set of cells observed as the result of action $a$, and $I(m_i)$ is the mutual information regarding the occupancy of cell $m_i$.
The exact shape of the observed area depends on the properties of the used sensor. In the case of the robot used in the project assignment, a precise approximation needs to consider the robot's minimal and maximal sensor range, the robot's orientation, and the angular range of the sensor. On the other hand, a simpler approach exploits an upper bound on the observed area: all the cells within the circular area around the robot with the radius of the robot's maximal sensor range. In practice, such a simplified model best fits robots equipped with omnidirectional sensors, or robots that can turn around while staying at the sampling location.
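As an illustration, the per-cell entropy and the simplified (circular upper-bound) information gain of an observation action might be computed as follows; P is assumed to be the H×W numpy array of the occupancy probabilities:

```python
import numpy as np

def grid_entropy(P, eps=1e-12):
    """Binary entropy H(m_i) for every cell of the occupancy grid P."""
    P = np.clip(P, eps, 1.0 - eps)      #avoid log(0) for cells with p in {0, 1}
    return -P*np.log(P) - (1.0 - P)*np.log(1.0 - P)

def action_information_gain(P, x, y, radius):
    """I_action of observing from cell (x, y): the sum of the cell entropies
    within the circular area given by the maximal sensor range (in cells)."""
    H = grid_entropy(P)
    yy, xx = np.indices(H.shape)
    mask = (xx - x)**2 + (yy - y)**2 <= radius**2
    return H[mask].sum()
```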
p1

Planning towards the closest frontier is realized by planning the shortest path towards every frontier and selecting the closest one, i.e., reading the current robot pose from robot.odometry_, planning a path from this pose towards each frontier, and selecting the frontier with the shortest path as the navigation goal.
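A sketch of the selection follows; the planner interface plan_path(gridmap, start, goal) returning a Path message is an assumption, and the number of path waypoints is used as a proxy for the path length:

```python
def closest_frontier_path(explor, gridmap, odometry, frontiers):
    """Plan towards every frontier and return the path to the closest one."""
    start = odometry.pose
    best_path = None
    for frontier in frontiers:
        path = explor.plan_path(gridmap, start, frontier)   #assumed planner interface
        if path is None or len(path.poses) == 0:
            continue                                        #frontier is not reachable
        if best_path is None or len(path.poses) < len(best_path.poses):
            best_path = path
    return best_path
```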
p2

This is an extension of f3 where the route is planned towards the goal with the highest utility. In case of a draw, the closest frontier according to p1 is selected. Alternatively, a weighting between close frontiers and more informative frontiers can be tuned based on the performance.
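For instance, a simple weighted score combining the information gain and the path length might look as follows; the weights are tuning parameters:

```python
def frontier_score(info_gain, path_length, w_info=1.0, w_dist=0.5):
    """Higher is better: prefer informative frontiers, penalize distant ones."""
    return w_info*info_gain - w_dist*path_length
```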
p3

There are usually multiple frontiers during the exploration. The TSP-like formulation of the exploration considers that the robot will visit all of the frontiers, which may reduce unwanted oscillations of the exploration. In the TSP-like formulation 3), the distance matrix between the robot and all the frontiers (and between the frontier pairs) is calculated, the corresponding TSP is solved to obtain the visiting order, and the robot navigates towards the first frontier of the tour.
This task is only slightly more complicated than the p1 assignment, as it requires calculating the whole distance matrix and solving the TSP. You should leverage what you have learned in the T2b-dtspn task and use the LKH solver to calculate the shortest tour.
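A sketch of the TSP-like goal selection follows; the lkh_interface module with the solve_TSP function stands for your own wrapper of the LKH solver from T2b-dtspn and is hypothetical here, as is the path_length helper:

```python
import numpy as np
from lkh_interface import solve_TSP     #hypothetical wrapper of the LKH solver

def tsp_next_frontier(path_length, robot_pose, frontiers):
    """Pick the next goal as the first frontier on the TSP tour.

    path_length(a, b) is assumed to return the grid path length between two
    poses; the robot is node 0 and the frontiers are nodes 1..n.
    """
    nodes = [robot_pose] + list(frontiers)
    n = len(nodes)
    distances = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                distances[i, j] = path_length(nodes[i], nodes[j])
    tour = solve_TSP(distances)          #visiting order as a list of node indices
    start = tour.index(0)                #rotate the tour to start at the robot
    tour = tour[start:] + tour[:start]
    return nodes[tour[1]]                #the first frontier visited after the robot
```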
a1

A basic implementation of the multi-robot system in which two or more robots explore the unknown environment. When implemented in a single script, the robots may only serve the purpose of mobile sensors, and their behavior is governed centrally. There are also no problems with communication and knowledge sharing between the robots. The basic multi-robot setup can be achieved as follows (note that with the CoppeliaSim/V-REP simulation, connecting to multiple robots strongly depends on the order of operations):
```python
...
class Explorer:
    ...

if __name__ == "__main__":
    #instantiate the robots
    ex0 = Explorer(0)
    ex1 = Explorer(1)

    #connect the robots and start their locomotion
    ex0.start()
    ex1.start()

    #main loop to plot the map of the environment and the current plan of the robots
    ...
```
a2

This assignment focuses on decentralized decision making. The main idea of the MinPos strategy is task allocation between the robots to effectively assign frontiers for exploration. The assignment follows the description of MinPos in Lecture 4: Robotic information gathering - Mobile robot exploration.
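A minimal sketch of the MinPos frontier assignment for one robot follows, assuming the matrix distances[i, f] of the path distances of all robots to all frontiers is available:

```python
import numpy as np

def minpos_assign(distances, robot_idx):
    """MinPos: pick the frontier where the robot has the minimal rank.

    The rank of a robot at a frontier is the number of other robots that
    are closer to that frontier; ties are broken by the robot's own distance.
    """
    d = distances[robot_idx]                            #this robot's distances
    ranks = (distances < d[np.newaxis, :]).sum(axis=0)  #number of closer robots per frontier
    return int(np.lexsort((d, ranks))[0])               #min rank first, then min distance
```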
a3

This is a baseline multi-robot implementation similar to a1. It is supposed that there is only one running process (script) at a time.
a4

Typically, individual robots represent individual autonomous entities, both in software and hardware. Such a situation can be modeled by each robot being a single process, i.e., to run three robots simultaneously, the Explorer.py script has to be run three times (presumably with different input parameters indicating the robot number). This assignment focuses on solving the issues related to interprocess communication and the subsequent event synchronization between the agents.
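Each robot can then be started as a separate process, e.g., by passing the robot ID on the command line; a sketch of the corresponding __main__ section of the skeleton above follows (the actual map sharing between the processes, e.g., via sockets or shared files, is left to your design):

```python
import sys

if __name__ == "__main__":
    #robot ID given as a command line argument, e.g.:
    #  python3 Explorer.py 0 &
    #  python3 Explorer.py 1 &
    robot_id = int(sys.argv[1]) if len(sys.argv) > 1 else 0
    ex = Explorer(robot_id)
    ex.start()
```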
a5

This is an advanced assignment that closely corresponds to the real-world deployment of the robots. In real-world applications, it is common that the robots do not have the precise localization available that we take for granted in this course. Moreover, each robot estimates its odometry within its own reference frame, whose origin usually coincides with the place where the robot has been turned on. Hence, each robot has its own coordinate frame, and the robots first have to synchronize their coordinate frames to be able to merge their maps, i.e., find the SE(3) transformation between their respective odometry frames. This may be achieved, e.g., using a particle filter or the iterative closest point (ICP) algorithm. If you want to try this assignment, please contact the lab teacher for guidelines.