Semestral work - Frontier-based Exploration

The main goal of the semestral work is to implement a high-level exploration node, explorer, which autonomously explores an unknown environment and localizes a barbie doll. The node should regularly publish the occupancy grid to the topic /occupancy and the position of the detected barbie to the topic /final_barbie_point. These messages will be used for the final evaluation. The exploration node will employ the nodes implemented during the regular labs (SLAM, planning, object detection). Due to the current situation, the semestral work will be carried out individually (no teams), using a simulator instead of the real robots in the labs.

How to start?

When working locally (i.e., using the ARO Singularity image), remember to always work with the latest image from Run ROS locally. You can check the upload date (written on that page) to make sure you have the latest version.

The template consists of five packages:

You can start by replacing icp_slam_2d, frontier.py, planner.py, path_follower.py, detector.py, and network.py with the nodes you have implemented during the regular labs. Then you can start implementing your high-level exploration node in explorer.

Before you start coding:
  • Please, read the section Evaluation details to get familiar with the requirements and testing procedure of the semestral work.
  • There is a FAQ section at the end of this document that can help you solve some common issues.
  • To launch and debug your system (until sufficiently developed), use the aro_sim/launch/turtlebot3.launch file.

High-level exploration node

The node, an executable Python script, should be located at exploration/scripts/explorer. An empty template is provided in the student package. The algorithm below provides an overview of a possible implementation, and a code sketch follows it. An exploration experiment with such a node is shown in the video above.

Exploration node: overview of a possible implementation

  1. Pick a frontier, if any, or a random position as a goal. ◃ frontier.py
  2. Plan a path to the goal, or go to line 1 if there is no path. ◃ planner.py
  3. Delegate path execution to low-level control. ◃ path_follower.py
  4. Monitor execution of the plan.
  5. Invoke a recovery behavior if needed. ◃ path_follower.py
  6. Repeat from line 1.
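
For illustration, a minimal sketch of such a loop is below. Every helper function here is a hypothetical stand-in; replace it with the actual interface of your frontier.py, planner.py, and path_follower.py nodes (topics, services, or actions, depending on your lab implementations).

    #!/usr/bin/env python
    """Sketch of the explorer main loop; all helpers are hypothetical stubs."""
    import rospy

    def pick_frontier():        return None        # goal pose or None (frontier.py)
    def random_free_position(): return (0.0, 0.0)  # fallback goal in known free space
    def plan_path(goal):        return None        # list of poses or None (planner.py)
    def follow_path(path):      pass               # hand the path to path_follower.py
    def path_finished():        return True        # poll the follower's feedback
    def robot_stuck():          return False       # e.g., no progress for a few seconds
    def recover():              pass               # e.g., back off and rotate in place

    def explore():
        rate = rospy.Rate(1)                       # re-evaluate about once per second
        while not rospy.is_shutdown():
            goal = pick_frontier() or random_free_position()  # step 1
            path = plan_path(goal)                            # step 2
            if path is None:
                rate.sleep()
                continue                           # no path -> pick another goal
            follow_path(path)                      # step 3: delegate to low-level control
            while not path_finished():             # step 4: monitor execution
                if robot_stuck():
                    recover()                      # step 5: recovery behavior
                    break
                rate.sleep()
            # step 6: loop and repeat from step 1

    if __name__ == '__main__':
        rospy.init_node('explorer')
        explore()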

Deadlines, Milestones and Points

You can obtain up to 22 points from the following two milestones:

Milestone I (max 7 points): Upload a video demonstrating “reasonable” functionality of your exploration pipeline to BRUTE before 29 April. It is enough to upload only the video (no code is needed for this milestone). Reasonable functionality means that the robot autonomously explores the selected map (regardless of the exploration efficiency or the accuracy of barbie detections).

Milestone II (max 15 points): Upload all code to BRUTE before 2021-05-19 00:00:00. You should upload the whole pipeline to the BRUTE system before this deadline. The evaluation will be based on running the Evaluathor node on our own maps (see the “Evaluation details” section), so please make sure that your solution meets the requirements described below.

Evaluation details (Milestone II)

Evaluation will be performed by an automatic evaluation script, which will run the simulation for 180 seconds and then stop it. A simplified version of the script is provided in the aro_evaluathor package. The evaluation script launches the aro_sim/launch/turtlebot3.launch file and then listens to these two topics:
  • /occupancy (the published occupancy grid)
  • /final_barbie_point (the estimated barbie position)

Please make sure your solution publishes the appropriate data on these topics (more details below). Otherwise, your solution might not be accepted! Additionally, make sure your entire solution is started via the turtlebot3.launch file. Do not modify this file! The entire aro_sim package will be replaced with a “clean” one, so any modifications you make in the aro_sim package will be lost! If you need to launch additional nodes or otherwise modify the launch procedure, modify the exploration/launch/exploration.launch file, which is included by the turtlebot3.launch file.
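
For reference, a minimal publishing sketch is below. The /occupancy topic carries a nav_msgs/OccupancyGrid (consistent with the int8[] data field described under “Evaluation details”); the geometry_msgs/PointStamped type for /final_barbie_point is an assumption here, so verify both message types against the provided explorer template.

    #!/usr/bin/env python
    """Minimal sketch of the two required publishers; message types as assumed
    in the text above -- verify them against the student package template."""
    import rospy
    from nav_msgs.msg import OccupancyGrid
    from geometry_msgs.msg import PointStamped

    rospy.init_node('explorer')
    occ_pub = rospy.Publisher('/occupancy', OccupancyGrid, queue_size=1, latch=True)
    barbie_pub = rospy.Publisher('/final_barbie_point', PointStamped, queue_size=1)
    rospy.sleep(1.0)  # give subscribers time to connect

    grid = OccupancyGrid()
    grid.header.frame_id = 'map'
    grid.header.stamp = rospy.Time.now()
    grid.info.resolution = 0.05                     # 5x5 cm cells (see below)
    grid.info.width, grid.info.height = 120, 120
    grid.data = [-1] * (grid.info.width * grid.info.height)  # all "Unknown"
    occ_pub.publish(grid)

    point = PointStamped()
    point.header.frame_id = 'map'
    point.header.stamp = rospy.Time.now()
    point.point.x, point.point.y, point.point.z = 1.0, 2.0, 0.3  # made-up estimate
    barbie_pub.publish(point)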

To test your solution, you can run the aro_evaluathor/launch/eval_world.launch file. Use the argument world to change the map (e.g. “world:=aro_maze_8” or “world:=brick_maze_1”). The launch file starts the robot simulation and opens an OpenCV window where you can watch the evaluation process (the score and a comparison of your map to the ground truth). The results are also stored in the folder ~/aro_evaluation as a table and a video. Make sure your solution can be tested by the evaluation script without any errors before submission!

Example of how to start the evaluation: roslaunch aro_evaluathor eval_world.launch world:=aro_maze_8

You can get points for the published occupancy grids and barbie positions; see the paragraphs below for a detailed description of the point assignment procedure. In order to capture the temporal progress of your exploration, we will evaluate the published maps at uniformly spaced times (30 s, 60 s, 90 s, 120 s, 150 s, 180 s) and compute:
“final_points” = 1/6 * (map_points_30 + map_points_60 + map_points_90 + map_points_120 + map_points_150 + map_points_180) + barbie_points
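
For example, with made-up map scores at the six sampling times and a barbie localized to within 50 cm, the computation reads:

    # Made-up map scores sampled at t = 30, 60, ..., 180 s.
    map_points = [-9000, -6500, -4200, -2500, -1300, -600]
    barbie_points = 10000                  # localization error under 50 cm
    final_points = sum(map_points) / 6.0 + barbie_points
    print(final_points)                    # -> 5983.33...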

Points from semestral work

The number of points from the semestral work will be determined as a saturated linear function of the “final_points” summed over all evaluation maps and starting positions. The exact shape of this function will depend on the performance of all submitted solutions and the performance of the teacher's solution. In particular, we first evaluate the teacher's solution with a temporal handicap of 30 s (i.e., it starts running 30 seconds after the start) and subtract an additional 15,000 points. Everyone who submits a working solution that achieves a lower number of “final_points” than the Handicapped Teacher's Solution (HTS) will obtain 4 points. Solutions that achieve the same or a higher number of “final_points” than HTS will obtain 6 points. The remaining 9 points will be distributed based on the relative performance of the solutions that were better than HTS. In particular, we will order these solutions by achieved “final_points” and express the relative ordering as the probability that your solution is better than another solution chosen at random. For example, if there are 100 submitted solutions and your solution is 37th (in terms of achieved “final_points”), your relative performance is 63%. Points are assigned according to the following table:

Performance                  Points
worse than HTS but working   4
better than HTS              6
10%-20%                      7
20%-30%                      8
30%-40%                      9
40%-50%                      10
50%-60%                      11
60%-70%                      12
70%-80%                      13
80%-90%                      14
90%-100%                     15

The evaluation will be done on worlds similar to aro_maze_8. They will not contain open areas like those in the house world. The worlds will be maze-like, with long corridors and thick walls. Only the burger robot will be used in the evaluation.

Please use the turtlebot3.launch file to start your pipeline when developing your solution for the semestral project. This should be the main mode of starting your pipeline until it can produce reasonable results. The evaluation package is included in the template to give you a rough idea of what to expect during the evaluation process and whether your solution satisfies the requirements, before the actual evaluation. However, you are expected to try running your solution via the evaluation launch file only once it is sufficiently functional. That is, the evaluation package is provided to check whether the performance of your solution is adequate, not whether it is functional.
Additionally, the evaluation package is provided as a black box: it is not meant to be read or modified by students. There are also known issues when attempting to run the evaluation on a non-functioning solution (e.g., when the occupancy grid is not published yet).



Map_points from published occupancy grid

The occupancy grid consists of 5x5 cm cells. Each cell is classified into one of three classes based on its occupancy confidence value in the int8[] data field: “Empty” (occupancy confidence in <0,25)), “Occupied” (occupancy confidence in <25,100>), and “Unknown” (occupancy confidence -1). Any part of the scene not covered by the published occupancy grid is considered “Unknown”. Given a ground truth occupancy grid and a published occupancy grid, the number of map_points will be determined according to the following table:

                          Published “Empty” <0,25)   Published “Occupied” <25,100>   Published “Unknown” -1
Ground truth “Empty”      0                          -1                              -1
Ground truth “Occupied”   -1                         0                               -1
Ground truth “Unknown”    -1                         -1                              0

Consider the following example: a 6x6 m world where all cells are observable (i.e., their ground truth class is either “Empty” or “Occupied”). Since the resolution of the occupancy grid is 5 cm, the ground truth map consists of 14400 cells which are either “Empty” or “Occupied”; anything else is “Unknown”. The maximum number of map_points, obtained for publishing a map identical to the ground truth map, is zero. For publishing an empty map you obtain -14400 map_points, since the unpublished cells are assumed to belong to the class “Unknown”. Similarly, you obtain -14400 map_points for publishing an arbitrarily large map containing only “Unknown” cells. Nevertheless, you can obtain an arbitrarily low number of map_points (lower than -14400) for publishing sufficiently large maps which contain only “Empty” or “Occupied” cells.
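
The rule above amounts to “-1 map_point per cell whose published class differs from the ground truth class”. A sketch of this scoring is below, assuming both grids come as numpy arrays aligned to the same 5 cm raster (the actual evaluation script may differ in its internals):

    """Sketch of the map scoring rule; assumes both grids are numpy arrays
    aligned to the same 5 cm raster, with uncovered cells already set to -1."""
    import numpy as np

    def classify(grid):
        # 0 = "Empty" <0,25), 1 = "Occupied" <25,100>, 2 = "Unknown" (-1)
        cls = np.full(grid.shape, 2, dtype=int)
        cls[(grid >= 0) & (grid < 25)] = 0
        cls[grid >= 25] = 1
        return cls

    def map_points(published, ground_truth):
        # -1 for every cell whose published class differs from the ground truth
        return -int(np.count_nonzero(classify(published) != classify(ground_truth)))

    # The example from the text: a fully observable 6x6 m world (120x120 cells)
    # scored against an all-"Unknown" map gives -14400 map_points.
    gt = np.zeros((120, 120), dtype=np.int8)        # all "Empty"
    pub = np.full((120, 120), -1, dtype=np.int8)    # all "Unknown"
    print(map_points(pub, gt))                      # -> -14400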



Barbie_points from published barbie pose

You can (but do not have to) publish the position of the barbie doll. Only the last three messages on the topic /final_barbie_point, received by the end of the evaluation run, are accepted. You can publish more messages during the run, but only the last three are kept. Out of these three messages, the one closest to the actual barbie position is taken, and the difference between the estimated and actual position is used for the barbie_points assignment as follows. If the difference between the published position and the ground truth position (i.e., the localization error) is smaller than 50 cm, you obtain +10000 barbie_points. If the difference is between 50 and 100 cm, you obtain +5000 barbie_points. If the difference is bigger than 100 cm, you obtain -5000 barbie_points. If the position is not published or contains NaNs, you obtain 0 barbie_points.

error <0,50cm)   error <50cm,100cm>   error bigger than 100cm   position unpublished
+10000           +5000                -5000                     0
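
A sketch of this rule is below, assuming the positions come as (x, y, z) tuples in the map frame. How the evaluation script treats a NaN among otherwise valid messages is not specified, so this version simply skips invalid candidates:

    """Sketch of the barbie scoring rule from the table above."""
    import math

    def barbie_points(published, truth):
        # Only the last three published estimates count; skip any with NaNs.
        candidates = [p for p in published[-3:]
                      if not any(math.isnan(c) for c in p)]
        if not candidates:
            return 0                               # nothing valid published
        # Localization error of the candidate closest to the ground truth.
        error = min(math.sqrt(sum((a - b) ** 2 for a, b in zip(p, truth)))
                    for p in candidates)
        if error < 0.5:
            return 10000
        if error <= 1.0:
            return 5000
        return -5000

    print(barbie_points([(1.0, 2.0, 0.3)], (1.2, 2.1, 0.3)))  # error ~0.22 m -> 10000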

Possible improvements

Whole pipeline

Please note that we will evaluate the performance of the whole system in terms of the published occupancy grids and barbie positions, so the nodes must not only work individually but also work well together to fulfill their role in the whole system. Things to consider:


SLAM


Frontier detection


Path planning


Path following


Barbie detection

FAQ