Project
Semestral project on mobile robot exploration.
| Deadline | Submit at least 24 hours before your exam |
| | January 13, 2025 |
| | January 20, 2025 |
| | February 04, 2025 |
| | February 11, 2025 |
| Points | Up to 30 |
| Required minimum points | 10 |
| Label in BRUTE | project |
| Files to submit | The .zip archive of the project files |
| | A short report on the used solution in pdf |
| Resources | B4M36UIR_project_resource_pack |
The aim of the project is to leverage the steps implemented in T1a-ctrl, T1c-plan, T1d-map, and T1e-expl and create an integrated solution for autonomous robotic exploration.
The exploration strategy is implemented in a runnable python script file Explorer.py (and other files described below). The solution implemented in Explorer.py is expected to connect to the simulated robot in the CoppeliaSim simulator and autonomously navigate the robot (or a team of robots) through the environment while avoiding collisions and collecting the metric map of the environment.
The success metric of the mobile (multi-)robot exploration is the environment coverage (percentage of the explored area) over time. Hence, the task is to collect as much spatial information about the environment as possible in the shortest time. For that purpose, intelligent decision making in the problem of navigation towards frontier goals is necessary.
Assignment requirements
The solution is implemented in a runnable file Explorer.py; additional files with implementation are allowed and encouraged (e.g., the implementation of navigation and exploration-related functions in the HexapodController.py and HexapodExplorer.py files). The expected directory structure of the submitted solution is detailed in section Submission requirements.
The correct operation of the submitted solution is demonstrated in the realistic robotic simulator CoppeliaSim (formerly V-REP).
During runtime, the implementation shall continuously plot the current occupancy grid map of the environment and the current navigation goal of the robot together with the path towards the goal.
The script finishes correctly when there are no more accessible frontier goals.
The submission is accompanied by a brief report of the used solution in pdf. There are no specific requirements on the formatting of the report. Its purpose is to briefly summarize the student's approach to the assigned problem.
The scoring is based on the quality and complexity of the submitted solution. Expected scoring of the implementation features is provided in section Scoring. Implementation features beyond the listed ones are encouraged; however, consult your approach with the lab teachers before spending a lot of time implementing something that might not be worthwhile.
An example behavior of the Explorer is shown in the following videos.
Scoring
The proposed scoring presents possible extensions of the base implementation that consolidates the implementation of the T1a-ctrl, T1c-plan, T1d-map, and T1e-expl tasks. You may combine the following features in pursuing your own exploration system or come up with your own features (in that case, please consult the features with your lab teacher). The implementation features include, but are not limited to: scalable mapping, different methods of frontier detection and navigation goal selection, and multi-robot exploration with simple or advanced task allocation, including full or limited communication and known or unknown base robot odometry.
Designation | Feature | Description | Points | Notes |
report | Report | Short pdf summary of the used solution. | 1 Point | |
code_style | Coding style | The submitted code quality | 2 Points | |
Mapping |
m1 | Map static size | The OccupancyGrid map container has a fixed size from the beginning of the run | 1 Point | |
m2 | Map dynamic size | The OccupancyGrid map container dynamically grows with the explored area | +2 Points | |
Frontier detection |
f1 | Free-edge cluster frontiers | Similar to T1e-expl - frontiers are selected as centroids of segmented free-edge cells. | 1 Point | |
f2 | Multiple-representative free-edge cluster frontiers | Possible extension of f1. Divide free-edges into multiple clusters and select their representatives when appropriate. | +2 Points | |
f3 | Mutual information at frontiers | Possible extension of f1 and f2 . Compute the mutual information which can be gained by observing the environment from the frontier (computed in f1 , or f2 ), thus assigning each frontier a utility value based on the information which can be gained about occupancy. | +7 Points | |
Planning |
p1 | Planning towards the closest frontier | The closest frontier is selected as a goal for the exploration. | 1 Point | |
p2 | Planning with highest utility | The goals are selected as places with the highest mutual information gain. | 2 Points | mutually exclusive to p1 and p3 |
p3 | Planning considering visitation of all goals | The goals are selected as a solution of the TSP. | 4 Points | mutually exclusive to p1 and p2 |
Additional extensions |
a1 | Multi-robot simple task-allocation | Multiple robots explore the environment and create a fused map. Greedy goal assignment - fully centralized system. | +5 Points | |
a2 | Multi-robot advanced task-allocation | For example, Min-pos decentralized multi-robot exploration strategy. | +8 Points | |
a3 | Multi-robot without interprocess communication | All robots are initialized in a single script. (baseline) | +0 Points | mutually exclusive to a4 |
a4 | Multi-robot with interprocess communication | Each robot is run as a single instance of the script. It is necessary to transfer data between the robots using interprocess communication. | +4 Points | mutually exclusive to a3 |
a5 | Multi-robot without common coordinate frame | The robots have different coordinate frames given by their initial positions. They need to first synchronize their coordinate frames before they are able to merge their maps of the environment. | +10 Points | You can read a bit more about this assignment in the guidelines description below. If you want to pursue this assignment, please contact the lab teacher to get precise guidelines. |
The table lists the possible extension ideas. The detailed description of the individual considered features follows in section Approach.
Submission requirements
The exploration strategy is implemented in a runnable python script file Explorer.py.
The Explorer.py file is expected to fit into the current project files with the following directory structure (also check the Project resource pack):
The projects are evaluated by the lab teachers. The submitted files are expected to be copied into the provided resource pack. Therefore, besides the report, only the Explorer.py file is mandatory for submission, and you can leave out unmodified files, e.g., the simulator library files. However, to ensure that the project performs as intended, it is recommended to submit the whole project folder.
Submission evaluation
The project files are submitted to the BRUTE system before the student's exam.
During the exam, the student demonstrates the working robotic exploration in the CoppeliaSim simulator to the lab instructors, who evaluate the project.
Approach
Implementation of the mobile robot exploration consists of building blocks that correspond, to a large extent, to the assignments of the T1a-ctrl, T1c-plan, T1d-map, and T1e-expl tasks.
In general, the system has the following inputs given by the environment sensing:

- odometry - Odometry message - pose.position.x, pose.position.y, and pose.orientation.quaternion encode the current robot absolute position in the environment
- collision - boolean - True if the robot collides with some obstacle, False otherwise
- laser_scan - LaserScan message - contains the current laser scan data perceived by the robot.

And the following outputs:

- velocity command cmd_msg = Twist() for steering of the robot
Hence, the robot has to be able to collect the laser scan measurements, fuse them into the map, select navigation goals in this map and navigate towards these temporary goals to discover the unknown parts of the environment.
Guidelines
Here you will find the recommendations and guidelines for implementation of the basic exploration pipeline and for each of the scored items described above.
Basic implementation
To kickstart your awesome multi-robot, information-theoretic exploration project, a few guidelines on how to organize your code and connect the t1 tasks into a working exploration pipeline follow.
Read More
As stated above, the robot has to perform mapping, frontier discovery and goal selection as a part of planning, and trajectory following. The structure of the basic implementation that conforms to all of these steps may look like this.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import sys
import time
import threading as thread

import numpy as np
import matplotlib.pyplot as plt

sys.path.append('../')
sys.path.append('hexapod_robot')
sys.path.append('hexapod_explorer')

#import hexapod robot and explorer
import HexapodRobot
import HexapodExplorer

#import communication messages
from messages import *


class Explorer:
    """ Class to represent an exploration agent
    """
    def __init__(self, robotID = 0):
        """ VARIABLES
        """
        #occupancy grid map of the robot ... possibly extended initialization needed in case of 'm1' assignment
        self.gridmap = OccupancyGrid()
        self.gridmap.resolution = 0.1

        #current frontiers
        self.frontiers = None

        #current path
        self.path = None

        #stopping condition
        self.stop = False

        """Connecting the simulator
        """
        #instantiate the robot
        self.robot = HexapodRobot.HexapodRobot(robotID)
        #...and the explorer used in task t1c-t1e
        self.explor = HexapodExplorer.HexapodExplorer()

    def start(self):
        """ method to connect to the simulated robot and start the navigation, localization, mapping and planning
        """
        #turn on the robot
        self.robot.turn_on()

        #start navigation thread
        self.robot.start_navigation()

        #start the mapping thread
        try:
            mapping_thread = thread.Thread(target=self.mapping)
            mapping_thread.start()
        except:
            print("Error: unable to start mapping thread")
            sys.exit(1)

        #start planning thread
        try:
            planning_thread = thread.Thread(target=self.planning)
            planning_thread.start()
        except:
            print("Error: unable to start planning thread")
            sys.exit(1)

        #start trajectory following
        try:
            traj_follow_thread = thread.Thread(target=self.trajectory_following)
            traj_follow_thread.start()
        except:
            print("Error: unable to start trajectory following thread")
            sys.exit(1)

    def __del__(self):
        #turn off the robot
        self.robot.stop_navigation()
        self.robot.turn_off()

    def mapping(self):
        """ Mapping thread for fusing the laser scans into the grid map
        """
        while not self.stop:
            #fuse the laser scan
            laser_scan = self.robot.laser_scan_
            #...

    def planning(self):
        """ Planning thread that takes the constructed gridmap, finds frontiers, and selects the next goal with the navigation path
        """
        while not self.stop:
            #obstacle growing
            #...
            #frontier calculation
            #...
            #path planning and goal selection
            odometry = self.robot.odometry_
            #...
            self.path = Path()

    def trajectory_following(self):
        """trajectory following thread that assigns new goals to the robot navigation thread
        """
        while not self.stop:
            #...
            if self.robot.navigation_goal is None and self.path is not None:
                #fetch the new navigation goal from the current path
                #(assuming the Path message stores its waypoints in 'poses')
                nav_goal = self.path.poses.pop(0)
                #give it to the robot
                self.robot.goto(nav_goal)
            #...


if __name__ == "__main__":
    #instantiate the robot
    ex0 = Explorer()
    #start the locomotion
    ex0.start()

    #continuously plot the map, targets and plan (once per second)
    fig, ax = plt.subplots()
    plt.ion()
    while True:
        plt.cla()
        #plot the gridmap
        if ex0.gridmap.data is not None:
            ex0.gridmap.plot(ax)
        #plot the navigation path
        if ex0.path is not None:
            ex0.path.plot(ax)

        plt.xlabel('x[m]')
        plt.ylabel('y[m]')
        ax.set_aspect('equal', 'box')
        plt.show()
        #to throttle the plotting pause for 1s
        plt.pause(1)
Read more (report)
The purpose of the report is to briefly summarize the student's approach to the assigned problem.
There are no specific requirements on the formatting of the report.
The report should contain the following information:
- Short description of the used solution
- Description of how to run the project
- Description of (possible) parameters and how to change them
Read more (code_style)
Coding style is an important part of software development. Obfuscated, hard-to-read, and undocumented code is very hard to reuse and more prone to syntactic and semantic errors.
Therefore, the code quality will be assessed as a part of the evaluation. We are not going to enforce any specific style guide, e.g., the Google Python Style Guide; however, consider knowing the style guides for the respective programming languages a good programming practice. Good code quality also helps a potential employer assess your programming skills.
In your project code you should mainly adhere to the following recommendations.

- Use descriptive names of variables, functions, and classes.
- Use adequate breaking of the code into functions and modules. Prefer small and focused functions. The code should be readable and easy to understand.
- Provide a sufficient level of comments in your code.
- Consistency. Be consistent in naming, usage of quotes, usage of parentheses, spaces, print formatting, etc.
- Thread-safe code. When you are using multi-threading, you should ensure safe passing of variables between threads and/or processes.
- Avoid magic numbers and magic constants. If parametrization is necessary, make it easy to change, e.g., setting the map size for the m1 assignment. When appropriate, this can be done, e.g., using input arguments of the Explorer.py file.
Mapping
Read more (m1)
Straightforward usage of T1d-map. The OccupancyGrid map passed to the laser scan fusion is initialized with given dimensions and resolution, e.g., for the scene blocks or blocks-multirobot the map may be initialized as follows:
gridmap = OccupancyGrid()
gridmap.resolution = 0.1
gridmap.width = 100
gridmap.height = 100
gridmap.origin = Pose(Vector3(-5.0,-5.0,0.0), Quaternion(1,0,0,0))
gridmap.data = 0.5*np.ones((gridmap.height*gridmap.width))
Note, the code should allow for easy change of the map dimensions either using input argument, or meaningful parametrization.
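One possible way to make the map dimensions easy to change is to expose them as input arguments of the script; the following sketch uses argparse with illustrative parameter names (the names and defaults are assumptions, not part of the resource pack):

```python
import argparse

def parse_map_args(argv=None):
    """Parse map parametrization from the command line (names are illustrative)."""
    parser = argparse.ArgumentParser(description="Explorer map parametrization")
    parser.add_argument("--map-width", type=int, default=100,
                        help="map width in cells")
    parser.add_argument("--map-height", type=int, default=100,
                        help="map height in cells")
    parser.add_argument("--resolution", type=float, default=0.1,
                        help="cell size in meters")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_map_args()
    # the parsed values can then be used to initialize the OccupancyGrid
    print(args.map_width, args.map_height, args.resolution)
```

The parsed values would then replace the hard-coded constants in the map initialization shown above.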
Read more (m2)
Extended version of the T1d-map. The map is initialized only using the following code:
gridmap = OccupancyGrid()
gridmap.resolution = 0.1
The rest has to be achieved by the map fusing code. The recommended approach for dynamic scaling of the map consists of the following steps.

1. Project the laser scan points to the x,y plane with respect to the robot heading (similar to the approach described in T1d-map).
2. Compensate for the robot odometry (similar to the approach described in T1d-map).
3. Transfer the points from the world coordinates to the map coordinates (compensate for the map offset and resolution).
4. Adjust the map dimension and offset. If any of the laser scan points obtained in the previous step falls outside the map range, i.e., its $x$ or $y$ map coordinate is lower than 0 or higher than gridmap.width or gridmap.height, respectively, it is necessary to enlarge the map. Hence, you need to:
   - Increase gridmap.width accordingly. Hint: new_width = max(gridmap.width, x_coordinates) - min(0, x_coordinates) + 1, i.e., the span of the data including the current size of the map.
   - Increase gridmap.height accordingly. Hint: similar to the previous point.
   - Offset gridmap.origin accordingly. Hint: it is necessary to offset the lower-left corner of the map by (min(0, x_coordinates), min(0, y_coordinates))*gridmap.resolution.
   - Offset the map data in gridmap.data accordingly. Hint: this is a sole problem of array indexing. You need to know the shift of the origin as described above. Then you just need to initialize the new data as unknown, data = 0.5*np.ones((new_height, new_width)), and correctly index the previous data, data[-y_shift:-y_shift+previous_height, -x_shift:-x_shift+previous_width] = grid_map.data.reshape(h, w).
5. If there was a map size update, go back to step 3; otherwise, ray trace the individual scanned points and do the Bayesian update of the map. If the map size has been updated, the origin has (most likely) changed, and with it the projection of the laser scan points to the map; therefore it is necessary to recalculate their positions before fusing them into the map. The ray tracing and the fusion using the Bayesian update are similar to the approach described in T1d-map.

The correct behavior of the dynamic map update can be seen in the following video.
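The enlargement and re-indexing steps above can be sketched with plain numpy arrays. This is only a sketch under the assumption of a row-major grid stored as a flat array; the function and variable names are illustrative, not part of the resource pack:

```python
import numpy as np

def grow_grid(data, width, height, xs, ys, unknown=0.5):
    """Grow a row-major occupancy grid so all (x, y) map coordinates fit.

    data: flat array of length width*height; xs, ys: integer map coordinates
    of the projected scan points. Returns (new_data, new_width, new_height,
    x_shift, y_shift), where the non-positive shifts tell how far the map
    origin moved (in cells).
    """
    min_x, max_x = min(0, int(np.min(xs))), max(width - 1, int(np.max(xs)))
    min_y, max_y = min(0, int(np.min(ys))), max(height - 1, int(np.max(ys)))
    new_w, new_h = max_x - min_x + 1, max_y - min_y + 1
    x_shift, y_shift = min_x, min_y
    if (new_w, new_h) == (width, height):
        return data, width, height, 0, 0  # all points already fit
    # initialize the enlarged grid as unknown
    new_data = unknown * np.ones((new_h, new_w))
    # place the previous map into the enlarged grid (array-indexing step above)
    new_data[-y_shift:-y_shift + height, -x_shift:-x_shift + width] = \
        data.reshape(height, width)
    return new_data.flatten(), new_w, new_h, x_shift, y_shift
```

The returned shifts can be used both to offset gridmap.origin and to recompute the map coordinates of the scan points before the Bayesian fusion.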
Frontier detection
Read more (f1)
The direct usage of T1e-expl. No adjustments necessary.
Read more (f2)
The free-edge frontiers in assignment f1 may contain large segments of free-edge cells clustered into a single frontier. This may result in the frontier lying outside of the free area, and the robot may not be able to observe the whole free-edge from a single spot considering the maximal sensor range. Therefore, it is beneficial to split large free-edge clusters into smaller ones and select multiple representatives as follows.
The number of representatives of a single frontier component may be computed as 1)
$$n_r = 1 + \lfloor \frac{f}{D} + 0.5 \rfloor,$$
where $f$ is the number of frontier cells and $D$ is the sensor range in grid cell size.
Then, K-means spatial clustering can be used to distribute the representatives in the connected component.
Note that for K-means clustering you may use the scikit-learn library and its K-means implementation:
from sklearn.cluster import KMeans
The difference between the frontiers found according to the f1 assignment and the f2 assignment can be seen in the following figures for $D=100$.
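A minimal sketch of the representative selection, assuming the cells of one connected free-edge component are given as an array of grid coordinates (the helper name and input layout are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def frontier_representatives(cells, D):
    """Split one free-edge component into multiple representatives.

    cells: (f, 2) array of grid coordinates of a single connected
    free-edge component; D: sensor range expressed in grid cells.
    Returns the K-means cluster centers, one per sub-cluster.
    """
    cells = np.asarray(cells, dtype=float)
    f = len(cells)
    # n_r = 1 + floor(f/D + 0.5), as in the formula above
    n_r = 1 + int(f / D + 0.5)
    n_r = min(n_r, f)  # cannot have more clusters than cells
    kmeans = KMeans(n_clusters=n_r, n_init=10, random_state=0).fit(cells)
    return kmeans.cluster_centers_
```

For $D$ larger than the component size this degenerates to a single representative, i.e., the centroid used in f1.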
Read more (f3)
On an occupancy grid, each cell $m_i$ carries its probability of being an obstacle $p( m_i \vert z_{1:t} , x_{1:t} )$ based on the so-far obtained measurements $z_{1:t}$; see Lab 04 for details.
Therefore, the utility of observing an arbitrary cell can be understood as the information gained by this observation, and this mutual information is bounded by the entropy of the binary distribution of the cell's occupancy.
Hence, the exploration locations are determined as the areas from where the robot gains the most information about occupancy.
Approximating the Mutual Information
The mutual information gained by observing cell $m_i$ is bounded by the entropy of its occupancy distribution, and thus the information gain is considered as
$$I(m_i) \approx H(m_i) ,$$
where the entropy of the binary occupancy distribution at the cell $m_i$ is
$$H(m_i) = - p \log p - (1-p) \log (1-p) ,$$
and $p = p( m_i \vert z_{1:t} , x_{1:t} )$ is the probability that the cell $m_i$ is occupied, based on the so-far sampled measurements.
Furthermore, since the occupancy of the individual cells is assumed to be independent, the mutual information gained by a robot observing its surroundings is defined as the sum of the respective mutual information of the cells visible from its position as follows.
Mutual Information and Robot Actions
The mutual information gained by observing the robot surroundings from the cell $m_o$ is the sum of the information gained by observing the individual cells as2)
$$I_{action}(a(m_o)) = \sum_{m_i \in R(a(m_o))} I (m_i),$$
where $a(m_o)$ is the action of observing from cell $m_o$, $R(a)$ is the set of cells observed as a result of action $a$, and $I(m_i)$ is the mutual information regarding the occupancy of cell $m_i$.
The exact shape of the observed area depends on the properties of the used sensor.
In the case of the robot used in the project assignment, a precise approximation needs to consider the robot's minimal and maximal sensor range, and the robot's orientation and angular range of the sensor.
On the other hand, a simpler approach exploits an upper bound: the observed area is taken as all the cells within the circular area around the robot with a radius equal to the robot's maximum sensor range.
In practice, such a simplified model is best suited for robots equipped with omnidirectional sensors, or robots that can turn around while staying at the sampling location.
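The binary entropy and the simplified circular observation model can be sketched as follows; this is a sketch only, and the grid layout and footprint model are assumptions rather than the course API:

```python
import numpy as np

def cell_entropy(p):
    """Binary entropy H(m_i) of occupancy probability p (approaches 0 for p near 0 or 1)."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)  # avoid log(0)
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

def action_information(grid, x0, y0, radius):
    """I_action for observing from cell (x0, y0): sum of cell entropies
    within the circular sensor footprint (simplified omnidirectional
    model with maximum range `radius`, expressed in cells)."""
    h, w = grid.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = (xs - x0) ** 2 + (ys - y0) ** 2 <= radius ** 2
    return float(np.sum(cell_entropy(grid[mask])))
```

Evaluating action_information at each frontier candidate assigns it the utility value used by the p2 planning variant.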
Planning
Read more (p1)
Planning towards the closest frontier is realized by planning the shortest path towards every frontier and selecting the closest one, i.e., it consists of the following steps:

1. Plan the path from the robot position given by robot.odometry_ towards each frontier.
2. Select the shortest of the paths.
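The two steps above may be sketched as follows, assuming a hypothetical plan_path(start, goal) planner that returns a list of waypoints, or None when the goal is unreachable:

```python
def select_closest_frontier(plan_path, robot_pose, frontiers):
    """Pick the planned path to the closest reachable frontier.

    plan_path(start, goal) is an assumed planner interface returning a
    list of waypoints or None; path length is used as the distance proxy.
    """
    best_path, best_len = None, float("inf")
    for frontier in frontiers:
        path = plan_path(robot_pose, frontier)
        if path is not None and len(path) < best_len:
            best_path, best_len = path, len(path)
    return best_path  # None when no frontier is reachable
```

With a grid planner, the number of waypoints is proportional to the path length; a metric length could be used instead without changing the structure.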
Read more (p2)
It is an extension of f3 where the route is planned towards the goal with the highest utility. In case of a draw, the closest frontier according to p1 is selected. Alternatively, weighting between close frontiers and more informative frontiers can be tuned based on the performance.
Read more (p3)
There are usually multiple frontiers during the exploration. The TSP-like formulation of exploration considers that the robot will visit all of the frontiers; thus it may reduce unwanted oscillations of the exploration. The following steps are considered in the TSP-like formulation 3):

1. Construct the distance matrix among the robot position given by robot.odometry_ and all the frontiers.
2. Find the shortest feasible tour by solving the TSP characterized by the distance matrix.
3. Assign the first frontier of the tour as the next exploration goal.

This task is only slightly more complicated than the p1 assignment, by calculating the whole distance matrix and solving the TSP. You should leverage what you have learned in the T2b-dtspn task and use the LKH solver to calculate the shortest tour.
Additional extensions
With a multi-robot system, you should distinguish between decentralized and distributed systems.
Read more: While a distributed system may still come to a centralized decision, e.g., based on consensus, decentralized means that there is no single point where the decision is made. Every node makes a decision for its own behaviour, and the resulting system behaviour is the aggregate response.
Read more (a1)
Basic implementation of the multi-robot system where two or more robots explore the unknown environment. When implemented in a single script, the robots may only serve the purpose of mobile sensors, and their behavior is governed centrally. There are also no problems with communication and knowledge sharing between the robots. The basic multi-robot setup can be achieved as follows (note: with the CoppeliaSim/V-REP simulation, it strongly depends on the order of operations to connect to multiple robots):
...
class Explorer:
    ...

if __name__ == "__main__":
    #instantiate the robots
    ex0 = Explorer(0)
    ex1 = Explorer(1)

    #connect the robots and start their locomotion
    ex0.start()
    ex1.start()

    #main loop to plot the map of the environment and the current plan of the robots
    ...
Note that you can approach the multi-robot setup in different ways based on where the maps are merged.
You may either use the robots merely as remote walking laser scanners and coordinate everything from a single master node, or you may distribute the intelligence among the robots, which then have to communicate to merge the maps together.
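The centralized greedy goal assignment of a1 can be sketched as follows, assuming a precomputed cost matrix of path costs from each robot to each candidate goal (names and matrix layout are illustrative):

```python
def greedy_assignment(cost):
    """Greedy goal assignment for centralized multi-robot exploration.

    cost[r][g] is the (assumed precomputed) path cost of robot r
    reaching goal g. Repeatedly pick the globally cheapest unassigned
    (robot, goal) pair until every robot has a goal or goals run out.
    Returns a dict {robot_index: goal_index}.
    """
    pairs = sorted(
        (cost[r][g], r, g)
        for r in range(len(cost))
        for g in range(len(cost[0]))
    )
    assigned, used_goals = {}, set()
    for c, r, g in pairs:
        if r not in assigned and g not in used_goals:
            assigned[r] = g
            used_goals.add(g)
    return assigned
```

This greedy scheme is a natural baseline; the Min-Pos strategy mentioned under a2 refines the assignment by ranking robots per goal instead of using raw costs.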
Read more (a2)
Read more (a3)
This is a baseline multi-robot implementation similar to a1. It is assumed that there is only one running process (script) at a time.
Read more (a4)
Typically, individual robots represent individual autonomous entities, both in software and hardware. Such a situation can be modeled by each robot being a single process, i.e., to run three robots simultaneously, the Explorer.py script has to be run three times (presumably with different input parameters indicating the robot number). This assignment focuses on solving the issues related to interprocess communication and the subsequent event synchronization between the agents.
Since connecting multiple robots to the simulator API can cause issues, it is recommended to split the Explorer.py into a standalone API wrapper process and a script used for the individual robot instances. Hence, there is only one process connected to the API. The communication between the API wrapper and the robots can be solved similarly to the communication between the individual robots; see the following figure.
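One possible way to realize the interprocess communication is Python's multiprocessing.connection module. The following sketch shows the API-wrapper side answering a single robot request; the address, authkey, and fusion callback are illustrative assumptions, not part of the resource pack:

```python
from multiprocessing.connection import Listener, Client

# hypothetical endpoint shared by the API wrapper and the robot processes
ADDRESS, AUTHKEY = ("localhost", 49217), b"uir-project"

def serve_one_request(fuse):
    """API-wrapper side: accept one robot connection, fuse its map,
    and reply with the fused map (fuse is an assumed fusion callback)."""
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        with listener.accept() as conn:
            local_map = conn.recv()     # robot's local map update
            conn.send(fuse(local_map))  # reply with the fused map

def send_map(local_map):
    """Robot side: push the local map and receive the fused map back."""
    with Client(ADDRESS, authkey=AUTHKEY) as conn:
        conn.send(local_map)
        return conn.recv()
```

In a real setup the wrapper would loop over accept() and serve each robot repeatedly; raw sockets or other IPC mechanisms work equally well.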
Read more (a5)
This is an advanced assignment that closely corresponds to the real-world deployment of robots. In real-world applications it is common that the robots do not have precise localization available, which we take for granted in this course. Moreover, each robot estimates its odometry within its own reference frame, whose origin usually coincides with the place where the robot has been turned on. Hence, each robot has its own coordinate frame, and the robots first have to synchronize their coordinate frames to be able to merge the maps. In this assignment, the task is for the robots to synchronize their coordinate frames, i.e., find the SE(3) transformation between their respective odometry frames. This may be achieved, e.g., using a particle filter or the iterative closest point algorithm. If you want to try this assignment, please contact the lab teacher to give you guidelines.