The main goal of the semestral work is to implement a high-level exploration node explorer in the aro_exploration package, which autonomously explores an unknown environment.
The occupancy grid published on the /occupancy topic will be used for the final evaluation by comparison with ground truth data.
The exploration node will employ the nodes implemented during the regular labs (SLAM, frontier detection, planning, path following).
The simulation semestral work will be carried out individually (no teams). Teams of up to 4 will be allowed for the second part.
Download aro_semestral.zip, which contains the exploration package with a pre-configured launch file, the simulation package with a minor fix, and an example evaluation package. Place it in a common workspace with the packages implemented for the previous homework. Your task is to implement the ROS communication between the packages in the exploration node.
Your workspace should consist of several packages:
- aro_sim, which contains configuration and launch files of the simulated environment. This package should stay as provided; any solution tampering with the simulation environment during evaluation will be considered unacceptable. Note that the sensor configuration differs in a few ways from the defaults provided by the turtlebot3 packages: (i) the odometry source is the wheel encoders instead of the ground-truth pose, so the odometry must be corrected by the ICP SLAM node, otherwise the resulting localization of the robot would be completely inaccurate; (ii) the lidar noise level is increased and the lidar range is adjusted accordingly.
- evaluathor, which contains the evaluation node. This package should stay as provided.
- aro_slam, which contains the implementation of the ICP-based SLAM with configuration and launch files for testing. This should be replaced by your own implementation from the regular labs (HW 3: ICP function, possible improvements to the icp_slam node).
- aro_explorer, which contains configuration and launch files of the exploration nodes.
- aro_frontier, providing services related to finding frontiers.
- aro_planner, providing a path planning service.
- aro_control, responsible for path execution.
You can start by adding the implementations you have worked on during the regular labs to a single workspace. Then you can start implementing your high-level exploration node in aro_explorer. The whole pipeline is launched via the aro_sim/launch/turtlebot3.launch file.
The node, an executable Python script, should be located at aro_exploration/scripts/explorer. An empty template is provided in the package. The overview below outlines a possible implementation. An exploration experiment with such a node is shown in the video above.
Exploration node: overview of a possible implementation. The node builds on the components from the regular labs: frontier.py, planner.py, and path_follower.py.
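The overall control flow of such an exploration node can be sketched in plain Python. This is a minimal sketch: the callables get_frontiers, plan_path and follow_path are placeholders for the actual ROS service or action calls to the frontier, planner and follower nodes.

```python
import time

def explore(get_frontiers, plan_path, follow_path, timeout=180.0):
    """High-level exploration loop (sketch). The three arguments are
    placeholders for calls to the frontier, planner and follower nodes."""
    start = time.time()
    while time.time() - start < timeout:
        frontiers = get_frontiers()   # e.g. call the frontier service
        if not frontiers:             # no frontiers left: environment explored
            return "explored"
        goal = frontiers[0]           # pick a frontier (nearest, largest, ...)
        path = plan_path(goal)        # e.g. call the planning service
        if path is None:              # goal unreachable: try another frontier
            continue
        follow_path(path)             # e.g. send the path to the follower
    return "timeout"
```

In the real node, each placeholder would be a rospy service proxy or action client, and a robust loop would also remember frontiers for which planning failed so it does not retry them forever.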
You can obtain 20 points from the following two milestones:
Milestone I (max 10 points): Upload all code to BRUTE before 2022-05-02 00:00:00. You should upload your whole pipeline, i.e., all packages from the workspace in a single zip file, to the BRUTE system before this deadline. The evaluation will be based on running the Evaluathor node on our own maps (see the “Evaluation details” section for details), so please make sure that the requirements below are met.
Milestone II (max 10 points): Transfer and fine-tune your code on the real TurtleBots. Demonstrate the exploration functionality: the robot should successfully explore the testing area. Points will be assigned to working solutions.
Evaluation will also be performed by an automatic evaluation script, which will run the simulation for 180 seconds and then stop it. A simplified version of this script is provided in the aro_evaluathor package. The evaluation script launches the aro_sim/launch/turtlebot3.launch file and then listens to the /occupancy topic for the occupancy map, with message type nav_msgs/OccupancyGrid (map frame = “map”, odom frame = “odom”; more specification below).
Please make sure your solution publishes the appropriate data on this topic (more details below); otherwise, your solution might not be accepted!
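For illustration, the fields of the published message can be assembled as follows. This is a sketch without ROS dependencies: in the actual node you would copy these values into a nav_msgs/OccupancyGrid message and publish it with rospy; the helper name and the plain-dict output are illustrative only.

```python
def make_occupancy_msg_fields(grid, resolution=0.05, origin_xy=(0.0, 0.0)):
    """Return the field values one would copy into a nav_msgs/OccupancyGrid
    message (shown as a plain dict so the sketch runs without ROS installed)."""
    height, width = len(grid), len(grid[0])
    return {
        "header.frame_id": "map",            # the grid is expressed in the map frame
        "info.resolution": resolution,       # 5x5 cm cells
        "info.width": width,
        "info.height": height,
        "info.origin.position.x": origin_xy[0],
        "info.origin.position.y": origin_xy[1],
        # row-major int8 data: data[y * width + x]; -1 = unknown, 0..100 = confidence
        "data": [int(v) for row in grid for v in row],
    }
```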
Additionally, make sure your entire solution is started via the turtlebot3.launch
file. Do not modify this file! The entire aro_sim package will be replaced with a “clean” one. Any modifications you perform in the aro_sim package will be lost! If you need to launch additional nodes or otherwise modify the launch procedure, you should modify the aro_exploration/launch/exploration.launch
file in your exploration package, which is then included from the turtlebot3.launch file.
To test your solution, you can run the aro_evaluathor/launch/eval_world.launch file. Use the world argument to change the map (e.g. “world:=aro_maze_8” or “world:=brick_maze_1”). The launch file starts the robot simulation and opens an OpenCV window where you can watch the evaluation process (score and map-to-ground-truth comparison). The results are also stored in the ~/aro_evaluation folder as a table and a video. Make sure your solution can be run by the evaluation script without any errors before submission!
Example of how to start the evaluation:
roslaunch aro_evaluathor eval_world.launch world:=aro_maze_8
You can get map points for the published occupancy grids; see the paragraphs below for a detailed description of the point assignment procedure. In order to capture the temporal progress of your exploration, we will evaluate the published maps at uniformly spaced times (30 s, 60 s, 90 s, 120 s, 150 s, 180 s) and compute:
“final_map_points” = 1/6 * (map_points_30 + map_points_60 + map_points_90 + map_points_120 + map_points_150 + map_points_180)
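In code, this is a plain average of the six sampled scores (a sketch; the function name is illustrative):

```python
def final_map_points(map_points):
    """Average of the six map scores sampled at 30 s intervals (30..180 s)."""
    assert len(map_points) == 6, "expects one score per 30 s checkpoint"
    return sum(map_points) / 6.0
```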
The number of points from the semestral work will be determined as a function of the “final_map_points”
of each evaluation run. We assign points according to the following table, based on the percentage of the maximum ground truth score achieved in each evaluation run. The final performance used to determine the points from the semestral work is the average performance over all evaluation runs:
Performance | Points |
---|---|
0%-25% | 0 |
25%-30% | 1 |
30%-35% | 2 |
35%-40% | 3 |
40%-45% | 4 |
45%-50% | 5 |
50%-60% | 6 |
60%-70% | 7 |
70%-80% | 8 |
80%-90% | 9 |
90%-100% | 10 |
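A possible implementation of the table above (a sketch; it assumes the lower bound of each interval is inclusive, e.g. exactly 25% already yields 1 point — the boundary handling in the actual evaluation may differ):

```python
def performance_to_points(performance):
    """Map performance (% of the maximum ground truth score, 0-100)
    to semestral-work points according to the table above.
    Assumes lower bounds are inclusive."""
    thresholds = [(90, 10), (80, 9), (70, 8), (60, 7), (50, 6),
                  (45, 5), (40, 4), (35, 3), (30, 2), (25, 1)]
    for lower, points in thresholds:
        if performance >= lower:
            return points
    return 0   # below 25% yields no points
```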
The evaluation will be done on worlds similar to aro_maze_8. They won’t contain open areas such as those in the house world. The worlds will be maze-like with long corridors and thick walls. Only the burger robot will be used in the evaluation.
Use the turtlebot3.launch file to start your pipeline. This should be the main mode of starting your pipeline until it can produce reasonable results. The evaluation package is included so that you have a rough idea of what to expect during the evaluation process and whether your solution satisfies the requirements, before the actual evaluation. However, you are expected to run your solution via the evaluation launch file only once it is sufficiently functional. That is, the evaluation package is provided to check whether the functionality of your package is adequate, not whether it is functional at all.
The occupancy grid consists of 5x5 cm cells.
Each cell is classified into one of three classes based on its occupancy confidence value in the int8[] data field:
“Empty” (occupancy confidence in [0, 25)), “Occupied” (occupancy confidence in [25, 100]), and “Unknown” (occupancy confidence -1). Any part of the scene not covered by the published occupancy grid is considered “Unknown”.
Given a ground truth occupancy grid and a published occupancy grid, the number of map_points will be determined according to the following table:
 | Published “Empty” [0, 25) | Published “Occupied” [25, 100] | Published “Unknown” (-1)
---|---|---|---
Ground truth “Empty” | 1 | -1 | -1 |
Ground truth “Occupied” | -1 | 1 | -1 |
Ground truth “Unknown” | -1 | -1 | 0 |
Consider the following example: a 6x6 m world where all cells are observable (i.e., their ground truth class is either “Empty” or “Occupied”). Since the resolution of the occupancy grid is 5 cm, the ground truth map consists of 120 x 120 = 14400 cells that are either “Empty” or “Occupied”; anything else is “Unknown”. The maximum number of map_points, obtained by publishing a map identical to the ground truth, is given by the number of observable obstacles and the size of the empty space. For publishing an empty map you obtain 0 map_points, since the unpublished cells are assumed to be of class “Unknown”. Similarly, you obtain 0 map_points for publishing an arbitrarily large map containing only “Unknown” cells.
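The cell-wise scoring can be sketched as follows (assuming both grids are flat, row-major int8 lists of equal size; the score values follow the table above):

```python
def cell_class(value):
    """Classify an int8 occupancy value: 'E' = Empty [0, 25),
    'O' = Occupied [25, 100], 'U' = Unknown (-1)."""
    if value == -1:
        return "U"
    return "E" if value < 25 else "O"

# Score table: +1 for a correctly published Empty/Occupied cell,
# 0 for a correctly Unknown cell, -1 for every other combination.
SCORE = {("E", "E"): 1, ("O", "O"): 1, ("U", "U"): 0}

def map_points(ground_truth, published):
    """Sum the cell-wise scores of a published grid against the ground truth."""
    return sum(SCORE.get((cell_class(gt), cell_class(pub)), -1)
               for gt, pub in zip(ground_truth, published))
```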
Please note that we will evaluate the performance of the whole system in terms of the published occupancy grid, so the nodes must not only work individually but also work well with the other nodes to fulfill their role in the whole system. Things to consider: