The main goal of the semestral work is to implement a high-level exploration node, explorer, in the aro_exploration package, which autonomously explores an unknown environment to find and localize a hidden Apriltag. The exploration node will employ the nodes implemented during the regular labs (factorgraph localization, ICP SLAM, frontier detection, planning, and path following). The simulation part of the semestral work is carried out individually (no teams); teams of up to 4 are allowed for the second part.
Download semestral_packages.zip, which contains the exploration package with a pre-configured launch file and the simulation package. Place it in a common workspace with the packages implemented for the previous homeworks. Your task is to implement the ROS communication between the packages in the exploration node.
Your workspace should consist of several packages:
- aro_sim
- aro_localization
- aro_slam
- icp_slam
- aro_explorer
- aro_frontier
- aro_planner
- aro_control
You can start by adding the implementations you have worked on during the regular labs to a single workspace. Then you can start implementing your high-level exploration node in aro_exploration.
The node (an executable Python script) should be located at aro_exploration/scripts/explorer. An empty template is provided in the package. The algorithm below gives an overview of a possible implementation; an exploration experiment with such a node is shown in the accompanying video.
Algorithm: Exploration node, overview of a possible implementation (building on frontier.py, planner.py, path_follower.py, and aro_localization.py).
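The high-level loop of such a node can be sketched as a simple state machine. The sketch below is pure Python and purely illustrative: the state names and the step interface are our assumptions, and a real node would drive these transitions from rospy callbacks and the frontier/planning/path-following interfaces implemented in the labs.

```python
class ExplorerSketch:
    """Illustrative state machine for the exploration node (not the
    required interface). States: EXPLORE -> RETURN_HOME -> DONE."""

    def __init__(self):
        self.state = "EXPLORE"

    def step(self, frontier_available, marker_detected):
        if self.state == "EXPLORE":
            if marker_detected:
                # Marker found and localized: head back to the start pose.
                self.state = "RETURN_HOME"
            elif not frontier_available:
                # Nothing left to explore; give up the search.
                self.state = "DONE"
            # Otherwise: request a frontier, plan a path, follow it.
        elif self.state == "RETURN_HOME":
            # Plan a path to the recorded starting pose and follow it.
            self.state = "DONE"
        return self.state
```

In the real node, each `step` would be triggered by a timer or by path-following feedback rather than called in a loop.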
You can obtain up to 20 points from the following two milestones:
Milestone I (max 10 points): Upload all code to BRUTE before 2023-05-28 00:00:00. You should upload your whole pipeline to the BRUTE system before this deadline, i.e. all packages from the workspace in a single zip file. This means the content of your aro_ws/src, not the folder itself. All required packages (aro_slam, aro_planning, aro_msgs, aro_localization, aro_frontier, aro_control, aro_exploration) have to be accessible directly from the root of the zip file (not nested in another folder). The evaluation will be based on running the Evaluathor node (see the “Evaluation details” section), so please make sure your submission meets the requirements described below.
Milestone II (max 10 points): Transfer and fine-tune your code on the real Turtlebots. Demonstrate the exploration and tag-localization functionality: the robot should successfully localize the tag and return to the starting position. Time for the demonstrations will be allotted during semester weeks 13 and 14. You will be given access to the lab with the real robots earlier so that you can tune your solution during the semester.
Evaluation will also be performed by an automatic evaluation script, which will run the simulation for 180 seconds and then stop it. A simplified version of it will be provided in the aro_evaluathor package. The evaluation script launches the aro_sim/launch/turtlebot3.launch file and then listens on the topic /relative_marker_pose for the position of the relative marker (message type geometry_msgs/PoseStamped). The evaluation node also monitors the ground-truth robot position to determine whether it returned to the starting position.
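One possible way to produce a stable estimate for /relative_marker_pose is to aggregate repeated Apriltag detections before publishing. The helper below is a pure-Python sketch and an assumption on our part, not part of the assignment; a real node would wrap the result in a geometry_msgs/PoseStamped (with a proper header and frame) and publish it with rospy.

```python
def aggregate_marker_xy(detections):
    """Average repeated (x, y) tag detections into one estimate.

    `detections` is a list of (x, y) tuples in the map frame.
    Returns the mean position, or None if the tag was never seen.
    Only x, y matter for scoring; orientation is not evaluated.
    """
    if not detections:
        return None
    xs = [d[0] for d in detections]
    ys = [d[1] for d in detections]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Averaging is just one choice; e.g. keeping only detections taken close to the tag may be more robust.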
Please make sure your solution publishes the appropriate data on this topic; otherwise, your solution might not be accepted! Additionally, make sure your entire solution is started via the turtlebot3.launch file. Do not modify this file! The entire aro_sim package will be replaced with a “clean” one, so any modifications you make in the aro_sim package will be lost! If you need to launch additional nodes or otherwise modify the launch procedure, modify the aro_exploration/launch/exploration.launch file in your exploration package, which is included from the turtlebot3.launch file. Only the packages implemented during the homeworks, aro_exploration, and aro_msgs will be kept.
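Adding your own nodes from exploration.launch might look like the fragment below. This is only a sketch: the node name and extra entries are placeholders, and the actual structure depends on your packages.

```xml
<!-- aro_exploration/launch/exploration.launch (sketch; names are placeholders) -->
<launch>
  <!-- high-level exploration node from aro_exploration/scripts/explorer -->
  <node name="explorer" pkg="aro_exploration" type="explorer" output="screen"/>
  <!-- any additional nodes or includes belong here, NOT in aro_sim -->
</launch>
```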
To test your solution, you can run the aro_evaluathor/launch/eval_world.launch file. Use the argument world to change the map (e.g. “world:=aro_maze_8” or “world:=stage_1”) and the argument marker_config to change the placement of the markers. Additional marker poses can be found in aro_sim/config/init_poses. The launch file starts the robot simulation and outputs the awarded points. The results are also stored in the folder ~/aro_evaluation. Make sure your solution can be run by the evaluation script without any errors before submission!
Example of how to start the evaluation:
roslaunch aro_evaluathor eval_world.launch world:=aro_maze_8 marker_config:=1
The number of points from the semestral work will be determined by the number of successful tag localizations (up to 1 pt each) and returns of the robot to the starting position (up to 1 pt each). Five simulation experiments with randomized tag locations will be performed for each submission to determine the final score.
A successful localization or return home is awarded 1 pt if the estimated pose is less than 0.25 m from the ground-truth position. The awarded points then decrease linearly, reaching zero at a distance of 1 m. Only the x, y position of the robot/marker is evaluated, not its orientation. If the marker was not localized, no points are awarded for the robot position.
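The scoring rule reads as a linear falloff between 0.25 m and 1 m; a small sketch of it (the actual evaluator internals may differ):

```python
def distance_points(dist_m):
    """Points for one localization or return-home, per the stated rule:
    full 1 pt within 0.25 m of ground truth, decreasing linearly to
    0 pt at 1 m. Only the x, y distance is considered."""
    if dist_m <= 0.25:
        return 1.0
    if dist_m >= 1.0:
        return 0.0
    # Linear interpolation between (0.25 m, 1 pt) and (1 m, 0 pt).
    return (1.0 - dist_m) / 0.75
```

So, for example, an estimate 0.625 m off would earn roughly half a point.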
The evaluation will be done on worlds similar to aro_maze_8. The worlds will be maze-like with long corridors and thick walls. Only the burger robot will be used in evaluation.
The maximum time limit of a single test instance (out of 5) is 120 s.
The last two labs of the semester are reserved for demonstrating your code on the real robots. You can work in teams of up to 4. Several changes to your scripts have to be made to successfully transfer from simulation:
- cmd_vel vs. cmd_vel_mux/safety_controller
- base_footprint vs. base_link
- infra
- aro_loc.launch vs. aro_loc_sim.launch
- exploration.launch
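Based on the identifiers listed above, part of the transfer likely amounts to remapping topics in your launch file. The fragment below is only a hedged sketch: the node name and package are placeholders, and the exact remappings depend on the real-robot setup.

```xml
<!-- Sketch only: placeholder node; check the actual topic names on the robot. -->
<node name="path_follower" pkg="aro_exploration" type="path_follower.py" output="screen">
  <!-- on the real robot, velocity commands may need to go through the safety mux -->
  <remap from="cmd_vel" to="cmd_vel_mux/safety_controller"/>
</node>
```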
You should also not use any aro_sim configuration/script/launch file on the robot. Your pipeline on the real robot should be launched from aro_exploration/launch/exploration.launch.
Please note that we will evaluate the performance of the whole system in terms of localization accuracy, so the nodes must not only work individually but also work well together to fulfill their role in the whole system. Things to consider:
- real_time_factor