Semestral work - Frontier-based Exploration

The main goal of the semestral work is to implement a high-level exploration node, explorer, in the aro_exploration package, which autonomously explores an unknown environment. The occupancy grid published on the /occupancy topic will be used for the final evaluation by comparison with ground-truth data. The exploration node will employ the nodes implemented during the regular labs (SLAM, frontier detection, planning, path following). The simulation part of the semestral work will be carried out individually (no teams); teams of up to 4 students will be allowed for the second part.

How to start?

Download aro_semestral.zip, which contains the exploration package with a pre-configured launch file, a simulation package with a minor fix, and an example evaluation package. Place it in a common workspace with the packages implemented for the previous homeworks. Your task is to implement the ROS communication between the packages in the exploration node.

Your workspace should consist of several packages: aro_exploration, aro_sim, aro_evaluathor, and the packages you implemented during the regular labs.

You can start by adding the implementations you have worked on during the regular labs to a single workspace. Then you can start implementing your high-level exploration node in aro_exploration.

Before you start coding:
  • Please, read the section Evaluation details to get familiar with the requirements and testing procedure of the semestral work.
  • There is a FAQ section at the end of this document that can help you solve some common issues.
  • To launch and debug your system (until sufficiently developed), use the aro_sim/launch/turtlebot3.launch file.

High-level exploration node

The node, an executable Python script, should be located at aro_exploration/scripts/explorer. An empty template is provided in the package. The algorithm below provides an overview of a possible implementation.

Exploration node—overview of a possible implementation

  1. Pick a frontier, if any, or a random position as a goal. ◃ frontier.py
  2. Plan a path to the goal, or go to line 1 if there is no path. ◃ planner.py
  3. Delegate path execution to low-level control. ◃ path_follower.py
  4. Monitor execution of the plan.
  5. Invoke a recovery behavior if needed. ◃ path_follower.py
  6. Repeat from line 1.
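Below is a minimal sketch of what such a main loop could look like in rospy. The helper functions are hypothetical placeholders for the interfaces you built in the labs (frontier.py, planner.py, path_follower.py); adapt them to your actual topics and services.

    #!/usr/bin/env python
    # Minimal sketch of the explorer main loop. The helpers below are
    # hypothetical placeholders for your lab interfaces; replace them with
    # real service/topic calls to frontier.py, planner.py, path_follower.py.
    import rospy

    def get_goal():
        # Placeholder: ask the frontier node for the best frontier; fall back
        # to a random reachable position when no frontier remains (step 1).
        return None

    def plan_path(goal):
        # Placeholder: call your planning service (step 2).
        return None

    def execute_path(path):
        # Placeholder: hand the path to low-level control, monitor execution,
        # and invoke a recovery behavior when it gets stuck (steps 3-5).
        pass

    def explore():
        rospy.init_node('explorer')
        rate = rospy.Rate(1.0)  # re-plan roughly once per second
        while not rospy.is_shutdown():  # step 6: repeat until shutdown
            goal = get_goal()
            if goal is None:
                rate.sleep()
                continue
            path = plan_path(goal)
            if path is None:
                continue  # no path -> pick another goal (back to step 1)
            execute_path(path)
            rate.sleep()

    if __name__ == '__main__':
        try:
            explore()
        except rospy.ROSInterruptException:
            pass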

Deadlines, Milestones and Points

You can obtain up to 20 points in total from the two following milestones:

Milestone I (max 10 points): Upload all code to BRUTE before 2022-05-02 00:00:00. You should upload your whole pipeline to the BRUTE system before this deadline, i.e., all packages from the workspace in a single zip file. The evaluation will be based on running the Evaluathor node on our own maps (see the “Evaluation details” section), so please make sure your solution meets the requirements described there.

Milestone II (max 10 points): Transfer and fine-tune your code on the real TurtleBots. Demonstrate the exploration functionality: the robot should successfully explore the testing area. Points will be assigned to working solutions.

Evaluation details (Milestone I)

Evaluation will be performed by an automatic evaluation script, which runs the simulation for 180 seconds and then stops it. A simplified version of the script is provided in the aro_evaluathor package. The evaluation script launches the aro_sim/launch/turtlebot3.launch file and then listens on the /occupancy topic for the occupancy map, with message type nav_msgs/OccupancyGrid (map frame = “map”, odom frame = “odom”; more details below).
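For reference, a minimal sketch of publishing an occupancy map on this topic is shown below; the grid size, origin, and latched publishing are illustrative assumptions, and your SLAM pipeline supplies the real data.

    #!/usr/bin/env python
    # Minimal sketch of publishing a map on /occupancy. Grid size, origin,
    # and the latched publisher are assumptions; your SLAM node provides
    # the real data and should update the map continuously.
    import rospy
    import numpy as np
    from nav_msgs.msg import OccupancyGrid

    rospy.init_node('occupancy_publisher')
    pub = rospy.Publisher('/occupancy', OccupancyGrid, queue_size=1, latch=True)

    msg = OccupancyGrid()
    msg.header.frame_id = 'map'        # required map frame
    msg.header.stamp = rospy.Time.now()
    msg.info.resolution = 0.05         # 5 cm cells, as used by the evaluation
    msg.info.width, msg.info.height = 120, 120
    msg.info.origin.position.x = -3.0  # example origin, adjust to your map
    msg.info.origin.position.y = -3.0
    msg.info.origin.orientation.w = 1.0
    # int8 values: -1 = unknown, [0, 25) = empty, [25, 100] = occupied
    grid = np.full((120, 120), -1, dtype=np.int8)
    msg.data = grid.ravel().tolist()   # row-major cell data

    pub.publish(msg)
    rospy.spin()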

Please, make sure your solution publishes the appropriate data on this topic (more details below). Otherwise, your solution might not be accepted! Additionally, make sure your entire solution is started via the turtlebot3.launch file. Do not modify this file! The entire aro_sim package will be replaced with a “clean” one. Any modifications you perform in the aro_sim package will be lost! If you need to launch additional nodes or otherwise modify the launch procedure, you should modify the aro_exploration/launch/exploration.launch file in your exploration package, which is then included from the turtlebot3.launch file.
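If you do need extra nodes, a minimal sketch of such an exploration.launch is shown below; the node entries are illustrative only, so use your actual node names and packages.

    <!-- aro_exploration/launch/exploration.launch (illustrative sketch) -->
    <launch>
      <!-- your high-level exploration node -->
      <node pkg="aro_exploration" type="explorer" name="explorer" output="screen"/>
      <!-- additional nodes from your lab packages would be started here -->
    </launch>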

To test your solution, you can run the aro_evaluathor/launch/eval_world.launch file. Use the world argument to change the map (e.g., “world:=aro_maze_8” or “world:=brick_maze_1”). The launch file starts the robot simulation and opens an OpenCV window where you can watch the evaluation progress (the score and a comparison of your map with the ground truth). The results are also stored in the ~/aro_evaluation folder as a table and a video. Make sure your solution can be tested by the evaluation script without any errors before submission!

Example of how to start the evaluation: roslaunch aro_evaluathor eval_world.launch world:=aro_maze_8

You can get map points for the published occupancy grids; see the paragraphs below for a detailed description of the point-assignment procedure. To capture the temporal progress of your exploration, we will evaluate the published maps at uniformly spaced times (30 s, 60 s, 90 s, 120 s, 150 s, 180 s) and compute:
“final_map_points” = 1/6 * (map_points_30 + map_points_60 + map_points_90 + map_points_120 + map_points_150 + map_points_180)
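For illustration, the averaging amounts to a plain mean over the six checkpoints (the sample values below are made up):

    # Hypothetical map_points at 30 s ... 180 s (made-up values)
    samples = [120, 340, 610, 890, 1050, 1180]
    final_map_points = sum(samples) / 6.0  # uniform average over 6 checkpoints
    print(final_map_points)  # 698.33...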

Points from semestral work

The number of points from the semestral work will be determined as a function of the “final_map_points” of each evaluation run. Points are assigned according to the following table, based on the percentage of the maximum ground-truth score achieved in each evaluation run. The final performance used to determine the points from the semestral work is the average performance over all evaluation runs:

Performance Points
0%-25% 0
25%-30% 1
30%-35% 2
35%-40% 3
40%-45% 4
45%-50% 5
50%-60% 6
60%-70% 7
70%-80% 8
80%-90% 9
90%-100% 10
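A sketch of the same table as a lookup function follows; the handling of the shared boundary values (lower bound inclusive) is our assumption, since the table rows share their endpoints:

    def semestral_points(performance):
        # Map performance (% of the maximum ground-truth score) to points.
        # Lower-bound-inclusive boundaries are an assumption; the table
        # rows share their endpoints.
        thresholds = [(90, 10), (80, 9), (70, 8), (60, 7), (50, 6),
                      (45, 5), (40, 4), (35, 3), (30, 2), (25, 1)]
        for lower, points in thresholds:
            if performance >= lower:
                return points
        return 0

    print(semestral_points(47.5))  # -> 5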

The evaluation will be done on worlds similar to aro_maze_8. They won’t contain large open areas such as those in the house world. The worlds will be maze-like, with long corridors and thick walls. Only the Burger robot will be used in the evaluation.

Please, when developing your solution for the semestral project, use the turtlebot3.launch file to start your pipeline. This should be the main way of starting your pipeline until it can produce reasonable results. The evaluation package is included to give you a rough idea of what to expect during the evaluation process and whether your solution satisfies the requirements, before the actual evaluation. However, you are expected to run your solution via the evaluation launch file only once it is sufficiently functional. That is, the evaluation package is provided to check whether the performance of your solution is adequate, not whether it works at all.
Additionally, the evaluation package is provided as a black box. It is not meant to be read or modified by students. There are also known issues when attempting to run the evaluation on a non-functioning solution (e.g., when the occupancy grid is not published yet).



Map_points from published occupancy grid

The occupancy grid consists of 5×5 cm cells. Each cell is classified into one of three classes based on its occupancy confidence value in the int8[] data field: “Empty” (occupancy confidence in [0, 25)), “Occupied” (occupancy confidence in [25, 100]), and “Unknown” (occupancy confidence -1). Any part of the scene not covered by the published occupancy grid is considered “Unknown”. Given a ground-truth occupancy grid and a published occupancy grid, the number of map_points is determined according to the following table:

                          Published “Empty”   Published “Occupied”   Published “Unknown”
                          [0, 25)             [25, 100]              -1
Ground truth “Empty”        1                  -1                     -1
Ground truth “Occupied”    -1                   1                     -1
Ground truth “Unknown”     -1                  -1                      0

Consider the following example: a 6×6 m world where all cells are observable (i.e., their ground-truth class is either “Empty” or “Occupied”). Since the resolution of the occupancy grid is 5 cm, the ground-truth map consists of 120×120 = 14400 cells that are either “Empty” or “Occupied”; anything else is “Unknown”. The maximum number of map_points, obtained by publishing a map identical to the ground-truth map, is given by the number of observable obstacle cells plus the size of the empty space (here 14400 in total). For publishing an empty map you obtain 0 map_points, since the unpublished cells are assumed to be of class “Unknown”. Similarly, you obtain 0 map_points for publishing an arbitrarily large map containing only cells of class “Unknown”.
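A sketch of this scoring in numpy is shown below; the classification thresholds follow the text, while the assumption that both grids are already aligned and of equal shape is ours (the real evaluation also has to handle grids of different extents):

    import numpy as np

    EMPTY, OCCUPIED, UNKNOWN = 0, 1, 2

    def classify(raw):
        # Map raw int8 occupancy values to the three classes from the text.
        classes = np.full(raw.shape, UNKNOWN, dtype=np.int8)
        classes[(raw >= 0) & (raw < 25)] = EMPTY
        classes[(raw >= 25) & (raw <= 100)] = OCCUPIED
        return classes

    # Payoff matrix from the table: rows = ground truth, cols = published.
    PAYOFF = np.array([[ 1, -1, -1],   # ground truth "Empty"
                       [-1,  1, -1],   # ground truth "Occupied"
                       [-1, -1,  0]])  # ground truth "Unknown"

    def map_points(gt_raw, pub_raw):
        # Assumes both grids are aligned and have the same shape.
        gt, pub = classify(gt_raw), classify(pub_raw)
        return int(PAYOFF[gt, pub].sum())

    # Publishing a map identical to the ground truth scores one point per
    # observable cell (14400 in the 6x6 m example above, if all empty):
    gt = np.zeros((120, 120), dtype=np.int8)
    print(map_points(gt, gt))  # -> 14400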

Possible improvements

Whole pipeline

Please note that we will evaluate the performance of the whole system in terms of the published occupancy grid, so each node must not only work individually but also work well with the other nodes to fulfill its role in the whole system. Things to consider:


SLAM


Frontier detection


Path planning


Path following

FAQ