Semestral work - Frontier-based Apriltag localization

On Sunday 18 May, 16:00 CEST, there was a change to the task organization in BRUTE. See the section Deadlines, Milestones and Points for details.
Deadline SW part: 18 May 2025, 23:59 CEST (Monday labs) / 21 May 2025, 23:59 CEST (Thursday labs).

Deadline HW part: Ideally show it during lab 12 or 13.

Penalty 5%/day of delay.

The goal of the semestral work is to implement a high-level exploration node, explorer.py, which uses the algorithms implemented during the semester to explore an unknown maze and localize a hidden Apriltag. The exploration node employs the nodes implemented during the regular labs (factorgraph localization, ICP SLAM, frontier detection, planning and path following). The simulation part of the semestral work is carried out individually (no teams); teams of up to 4 are allowed for the second (hardware) part.

The algorithm should first localize the robot relative to the given absolute marker with known position, then explore the world, find the relative marker, localize it, and return to the starting position.

Details about what Apriltags are and how they work are given in Lab 05.

How to start?

You can start implementing your high-level exploration node in aro_exploration/nodes/exploration/explorer.py.

Before you start coding:
  • Please read the section Evaluation details to get familiar with the requirements and testing procedure of the semestral work.
  • There is a FAQ section at the end of this document that can help you solve some common issues.
  • To launch and debug your exploration pipeline while it is under development, use the roslaunch aro_exploration aro_exploration_sim.launch command (inside Singularity).
  • To perform a test evaluation, use the roslaunch aro_exploration run_eval.launch command (inside Singularity).
  • The exact command line Brute uses for running your code is roslaunch aro_exploration run_eval.launch rviz:=false world:=$WORLD marker_config:=$MARKER_CONFIG ground_truth:=false, where $WORLD and $MARKER_CONFIG are the selected world and marker config.

High-level exploration node

A template script is provided in the package. The algorithm below gives an overview of a possible implementation; a minimal Python sketch of the corresponding main loop follows the list.

Exploration node explorer.py — overview of a possible implementation

  1. Pick a frontier, if any, or a random traversable position as a goal. ◃ frontier.py
  2. Plan a path to the goal, or go back to step 1 if there is no path. ◃ planner.py
  3. Delegate path execution to low-level control. ◃ path_follower.py
  4. Monitor execution of the plan.
  5. Invoke a recovery behavior if needed.
  6. Check for localization of the goal Apriltag. ◃ aro_localization.py (pick any tag ID ≠ 7)
  7. If the Apriltag has not been localized, repeat from step 1; if it has, return to the starting position.
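
A minimal, hypothetical sketch of such a main loop is shown below. The helper functions are stubs standing in for the interfaces you implemented in frontier.py, planner.py, path_follower.py and aro_localization.py; none of their names are prescribed by the template.

#!/usr/bin/env python3
# Sketch of the explorer main loop (not the provided template).
import rospy

def get_goal():
    """Return a frontier goal or a random traversable pose (stub)."""
    return None

def plan_path(goal):
    """Return a path to the goal, or None if unreachable (stub)."""
    return None

def follow_path(path):
    """Delegate execution to low-level control; return success (stub)."""
    return False

def recover():
    """Recovery behavior, e.g. back off and replan (stub)."""

def marker_pose():
    """Return the relative marker pose once localized, else None (stub)."""
    return None

def explore(start_goal):
    while not rospy.is_shutdown():
        goal = get_goal()                       # step 1
        path = plan_path(goal)                  # step 2
        if path is None:
            rospy.sleep(1.0)                    # avoid busy-waiting
            continue                            # no path: pick another goal
        if not follow_path(path):               # steps 3-4: execute, monitor
            recover()                           # step 5
        pose = marker_pose()                    # step 6 (any tag ID != 7)
        if pose is not None:                    # step 7
            follow_path(plan_path(start_goal))  # return to the start
            return pose

if __name__ == '__main__':
    rospy.init_node('explorer_sketch')
    explore(start_goal=None)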

Deadlines, Milestones and Points

You can obtain 15 points from the two following milestones:

Milestone I (max 10 points): Upload all your code to BRUTE before the deadline. Later uploads are penalized 1 pt/day. Use the provided submission script create_sw01_package.sh as with the previous homeworks. The evaluation is based on running the run_eval.py node (see the “Evaluation details” section for details) ⇒ please make sure that your solution runs under the evaluation script without errors and publishes the required /relative_marker_pose topic (see below).

CHANGE TO EVALUATION! Due to time limit restrictions, the task is divided into 5 Brute tasks: sw_slow_01, sw_slow_02, sw_slow_03, sw_slow_04 and sw_slow_05. Please upload the same tar.gz archive to all five. The subtasks are ordered by increasing complexity, so until you can pass sw_slow_01, it does not make much sense to upload to the other tasks. Each of the subtasks can run up to 15 minutes, so be patient. Either task sw01_1 or sw_slow_01 is mandatory and you have to get at least 0.1 points from it. The other tasks are not marked as mandatory, but they are an integral part of the semestral work solution, so please try to pass all 5 subtasks if possible.

Please note that you should not upload new solutions to the old tasks sw01_1, sw01_2 and sw01_3. Uploads to these tasks have been blocked and new uploads will only be accepted into the sw_slow tasks.

The final points for Milestone I will be the sum of the best scores you achieved on each of the 5 simulated worlds, regardless of whether they were achieved in the old or in the new tasks.

ANOTHER UPDATE: The points awarded by the new tasks are only the difference to the points you already got for the given world in the old tasks (if any). This means that by submitting to the new tasks, you will always have at least as many points as you had in the old tasks (or more). As an example, the total points for worlds aro_easy_1 and aro_easy_2 will be the sum of sw01_1 + sw_slow_01 + sw_slow_02.

(For reference, the original organization superseded by the change above:) Due to time limit restrictions, the task was divided into 3 Brute tasks: sw01_1, sw01_2 and sw01_3, and the same tar.gz archive was to be uploaded to all three. The subtasks were ordered by increasing complexity and each could run up to 15 minutes. Only task sw01_1 was mandatory, requiring at least 0.1 points.

Milestone II (max 5 points): Transfer and fine-tune your code on the real Turtlebots. Demonstrate the exploration and tag localization functionality: the robot should successfully localize the tag (2 pt, mandatory) and return to the starting position (2 pt) without colliding with the arena (1 pt). Time for the demonstrations will be during labs 12 and 13. You will be given access to the lab with the real robots earlier, so that you can tune your solution during the semester.

Bonus task (2 points): Once you successfully pass the base task of Milestone II, you can also request the bonus task. The bonus task is the same as the base task, but after some time the teacher will place a dynamic obstacle into the playground, at a location lying on the robot's return path. The robot thus needs to dynamically replan its path and avoid hitting the obstacle. If the robot manages to fulfil the whole Milestone II task without colliding with the obstacle, you will get the 2 bonus points.

Evaluation details (Milestone I)

Evaluation will be performed by an automatic evaluation script, which runs the simulation for a limited time (120 s on most worlds, see the table below) and then stops it. A simplified version with simple testing maps is provided for local testing. The evaluation script listens on the topic /relative_marker_pose for the position of the relative marker (message type geometry_msgs/PoseStamped). The evaluation node also monitors the ground-truth robot position to determine whether the robot returned to the starting position.

Please, make sure your solution publishes the appropriate data on this topic. Otherwise, your solution might not be accepted!
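
For illustration, a minimal publisher of this message could look as follows; the topic name and message type are given above, while the frame_id and pose values are purely illustrative. Latching the publisher ensures the evaluator receives the pose even if it subscribes late.

#!/usr/bin/env python3
# Minimal sketch of reporting the relative marker pose to the evaluator.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('relative_marker_reporter')
pub = rospy.Publisher('/relative_marker_pose', PoseStamped,
                      queue_size=1, latch=True)

msg = PoseStamped()
msg.header.stamp = rospy.Time.now()
msg.header.frame_id = 'map'   # assumption: pose expressed in the map frame
msg.pose.position.x = 1.2     # estimated marker x [m], illustrative value
msg.pose.position.y = -0.4    # estimated marker y [m], illustrative value
msg.pose.orientation.w = 1.0  # orientation is not evaluated; identity is fine
pub.publish(msg)
rospy.sleep(1.0)              # give the latched message time to be delivered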

To test your solution, you can run the run_eval.launch file. Use the argument world to change the map (e.g. “world:=aro_eval_1”) and the argument marker_config to change the placement of the markers. The launch file starts the robot simulation and outputs the awarded points. The results are also stored in the folder ~/aro_evaluation. Make sure your solution can be run by the evaluation script without any errors before submission!

Example of how to start the evaluation:

roslaunch aro_exploration run_eval.launch world:=aro_eval_1 marker_config:=2

This launch file internally starts a different launch file with these parameters:

roslaunch aro_exploration aro_exploration_sim.launch world:=$WORLD marker_config:=$MARKER_CONFIG ground_truth:=false mr_use_gt:=false tf_metrics:=false rviz:=false gui:=false localization_visualize:=false joy_teleop:=false run_mode:=eval

The latter might be more suitable for local debugging, but if you want to test your code under the same conditions as in Brute, run the command exactly as shown.

Points from semestral work

The number of points from the semestral work is determined by the number of successful tag localizations (up to 1 pt each) and returns of the robot to the starting position (up to 1 pt each). Fifteen simulation experiments (5 worlds × 3 runs) with randomized tag locations will be performed for each submission to determine the final score.

Successful localization and return to home are each awarded 1 pt if the reported pose is less than 0.25 m from the ground-truth position. The awarded points then decrease linearly, reaching 0 at a distance of 1 m. Only the x, y position of the robot/marker is evaluated, not its orientation. If the marker was not localized, no points are awarded for the robot position.
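
Assuming the decrease is linear between the two thresholds, the per-item score as a function of the position error d (in meters) would be as sketched below; the exact interpolation used by the evaluator may differ.

def score(d):
    """Per-item score for position error d [m]: 1 pt below 0.25 m,
    0 pt beyond 1 m, linear in between (thresholds from the text above)."""
    if d < 0.25:
        return 1.0
    if d >= 1.0:
        return 0.0
    return (1.0 - d) / 0.75  # 1.0 at d = 0.25, 0.0 at d = 1.0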

The evaluation will be done on worlds similar to aro_easy_* or aro_medium_*. The worlds will be simple maze-like corridors.

The evaluation is run multiple times on each world (see the # runs columns in the table) and the result is averaged per world:

BRUTE task | World         | Marker config | Time limit [s] | # runs | # runs (slow) | Max points
sw01_1     | aro_easy_1    | 1             | 120            | 3      | 2             | 2
sw01_1     | aro_easy_2    | 1             | 120            | 3      | 2             | 2
sw01_2     | aro_medium_1  | 2             | 120            | 3      | 2             | 2
sw01_2     | aro_medium_3* | 1             | 120            | 3      | 2             | 2
sw01_3     | aro_hard_1*   | 1             | 240            | 3      | 1             | 2

* These worlds are not publicly available.

Evaluation details (Milestone II)

The last 2 labs of the semester are reserved for demonstration of your code on real robots. You can work in teams of up to 4.

You should not directly use any 'aro_sim' configuration/script/launch file on the robot. Your pipeline on the real robot should be launched from aro_exploration/launch/exploration/aro_exploration_real.launch.

DO NOT launch any _sim.launch file on the real robot or on a notebook connected to it.

See TurtleBot Lab Guide for details describing how to work with the real robots.

Your team will be awarded points according to the rules listed under Milestone II above (tag localization, return to the starting position, no collision with the arena).

You have multiple tries when showing your solution to a teacher, but each teacher has a (possibly unknown) limit of tries per team, so try not to overshoot this limit.

The position of the absolute marker in world coordinates is given by the ROS parameters abs_marker_x, abs_marker_y and abs_marker_yaw. These parameters default to the values (1.0, 0.07, 0.2) in localization.py. If you don't change them, the pose of the relative marker will be reported in this coordinate frame.
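
As an illustration, reading these parameters in Python might look like the following; the parameter names and defaults come from the text above, while the namespace (private vs. global) depends on your launch setup.

import rospy

rospy.init_node('marker_params_example')
# Absolute marker pose in world coordinates (defaults as quoted above).
abs_marker_x = rospy.get_param('abs_marker_x', 1.0)      # [m]
abs_marker_y = rospy.get_param('abs_marker_y', 0.07)     # [m]
abs_marker_yaw = rospy.get_param('abs_marker_yaw', 0.2)  # [rad]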

Possible improvements

Whole pipeline

Please note that we will evaluate the performance of the whole system in terms of localization accuracy, so the nodes must not only work individually, but must also work well with the other nodes to fulfill their role in the whole system. Things to consider:


Factorgraph localization


SLAM


Frontier detection


Path planning


Path following

Troubleshooting