
Semestral work - Frontier-based April-tag localization

The main goal of the semestral work is to implement the high-level exploration node explorer in the aro_exploration package, which autonomously explores an unknown environment to find and localize a hidden Apriltag. The exploration node will employ the nodes implemented during the regular labs (factorgraph localization, ICP SLAM, frontier detection, planning and path following). The simulation part of the semestral work will be carried out individually (no teams); teams of up to 4 will be allowed for the second part.

How to start?

Download semestral_packages.zip, which contains the exploration package with a pre-configured launch file and the simulation package. Place it in a common workspace with the packages implemented for the previous homeworks. Your task is to implement the ROS communication between the packages in the exploration node.

Your workspace should consist of several packages:

  • Simulator (aro_sim), which contains configuration and launch files of the simulated environment. This package should stay as provided; any solution tampering with the simulation environment during evaluation will be considered unacceptable. Note that there are a few changes in the sensor configuration compared to the defaults provided by the turtlebot3 packages: (i) the source of the odometry is the wheel encoders instead of the ground-truth pose, so the odometry must be corrected by the ICP SLAM node, otherwise the resulting localization of the robot would be completely inaccurate; (ii) the lidar noise level is increased and the lidar range is adjusted accordingly.
  • Factorgraph localization (aro_localization), which contains the factorgraph-based localization algorithm and Apriltag detector from HW3.
  • Localization and mapping (aro_slam), which contains an implementation of ICP-based SLAM with configuration and launch files for testing. This should be replaced by your own implementation from the regular labs (HW 4: ICP function, possible improvements to the icp_slam node).
  • Exploration (aro_exploration), which contains configuration and launch files of the exploration nodes.
  • Frontier detection (aro_frontier) providing services related to finding frontiers for exploration.
  • Path planning (aro_planning) providing a path planning service.
  • Path following (aro_control) responsible for the path execution action server.

You can start by adding the implementations you have worked on during the regular labs to a single workspace. Then you can start implementing your high-level exploration node in aro_exploration.

Before you start coding:
  • Please, read the section Evaluation details to get familiar with the requirements and testing procedure of the semestral work.
  • There is a FAQ section at the end of this document that can help you solve some common issues.
  • To launch and debug your system, use the aro_sim/launch/turtlebot3.launch file (at least until your solution is sufficiently developed to be run via the evaluation launch file).

High-level exploration node

The node (an executable Python script) should be located at aro_exploration/scripts/explorer. An empty template is provided in the package. The algorithm below provides an overview of a possible implementation.

Exploration node—overview of a possible implementation

  1. Pick a frontier, if any, or a random traversable position as a goal. ◃ frontier.py
  2. Plan a path to the goal, or go to line 1 if there is no path. ◃ planner.py
  3. Delegate path execution to low-level control. ◃ path_follower.py
  4. Monitor execution of the plan.
  5. Invoke a recovery behavior if needed. ◃ path_follower.py
  6. Check whether the Apriltag has been localized. ◃ aro_localization.py
  7. Repeat from line 1 if the Apriltag has not been localized; once it has, return to the starting position.
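
A minimal sketch of such a loop is shown below. It is only an illustration: the callables (get_goal, plan_path, follow_path, marker_pose) are placeholders for your own service proxies and action clients from the frontier, planning and control packages; their names and message types are not prescribed by the assignment.

  #!/usr/bin/env python
  """Hypothetical skeleton of the explorer node's main loop.
  All the callables are placeholders -- wire them to your own
  frontier/planning/control interfaces."""
  import rospy


  def explore(get_goal, plan_path, follow_path, marker_pose, start_pose):
      rate = rospy.Rate(1.0)  # re-plan roughly once per second
      while not rospy.is_shutdown():
          if marker_pose() is not None:      # steps 6-7: Apriltag localized,
              path = plan_path(start_pose)   # so plan back to the start
              if path:
                  follow_path(path)
              return
          goal = get_goal()                  # step 1: frontier or random free pose
          if goal is None:
              rate.sleep()
              continue
          path = plan_path(goal)             # step 2: call the planning service
          if not path:
              continue                       # no path found, pick another goal
          follow_path(path)                  # steps 3-5: execute, monitor, recover
          rate.sleep()


  if __name__ == '__main__':
      rospy.init_node('explorer')
      # explore(get_goal=..., plan_path=..., follow_path=..., marker_pose=..., start_pose=...)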

Deadlines, Milestones and Points

You can obtain 20 points from the two following milestones:

Milestone I (max 10 points): Upload all code to BRUTE before 2023-05-28 00:00:00. You should upload your whole pipeline to the BRUTE system before this deadline, i.e. all packages from the workspace in a single zip file. This means the content of your aro_ws/src, not the folder itself. All required packages (aro_slam, aro_planning, aro_msgs, aro_localization, aro_frontier, aro_control, aro_exploration) have to be accessible directly from the root of the zip file (not nested in another folder). The evaluation will be based on running the Evaluathor node (see the “Evaluation details” section for details) ⇒ please make sure that:

  • running the evaluathor launch file runs everything that is needed on your side,
  • you focus on implementing functionality that generalizes well rather than tuning for a single map,
  • you avoid sharing your code with your colleagues, since all uploaded code will be checked by a global plagiarism detection system. The system compares your code with any other code that has ever been uploaded to BRUTE (including other courses and years).

Milestone II (max 10 points): Transfer and fine-tune your code on the real Turtlebots. Demonstrate the exploration and tag localization functionality: the robot should successfully localize the tag and return to the starting position. Demonstrations will take place during weeks 13 and 14 of the semester. You will be given access to the lab with the real robots earlier, so that you can tune your solution during the semester.

Evaluation details (Milestone I)

Evaluation will be performed by an automatic evaluation script, which will run the simulation for 180 seconds and then stop it. A simplified version of the script will be provided in the aro_evaluathor package. The evaluation script launches the aro_sim/launch/turtlebot3.launch file and then listens to the topic /relative_marker_pose for the position of the relative marker (message type geometry_msgs/PoseStamped). The evaluation node also monitors the ground-truth robot position to determine whether the robot returned to the starting position.

Please make sure your solution publishes the appropriate data on this topic; otherwise, your solution might not be accepted! Additionally, make sure your entire solution is started via the turtlebot3.launch file. Do not modify this file! The entire aro_sim package will be replaced with a “clean” one, so any modifications you make in the aro_sim package will be lost. If you need to launch additional nodes or otherwise modify the launch procedure, modify the aro_exploration/launch/exploration.launch file in your exploration package, which is included from the turtlebot3.launch file. Only the packages implemented during the homeworks, plus aro_exploration and aro_msgs, will be kept.
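
For illustration, publishing the required message could look like the sketch below. Only the topic name and the message type are given by the assignment; the frame_id and the hard-coded coordinates are assumptions standing in for your factorgraph estimate of the marker pose.

  #!/usr/bin/env python
  """Sketch of publishing the marker estimate for the evaluation node."""
  import rospy
  from geometry_msgs.msg import PoseStamped

  rospy.init_node('relative_marker_publisher')
  pub = rospy.Publisher('/relative_marker_pose', PoseStamped, queue_size=1, latch=True)

  msg = PoseStamped()
  msg.header.stamp = rospy.Time.now()
  msg.header.frame_id = 'map'   # assumption: estimate expressed in the map frame
  msg.pose.position.x = 1.2     # replace with your factorgraph estimate of the marker
  msg.pose.position.y = -0.4
  msg.pose.orientation.w = 1.0  # orientation is not evaluated
  pub.publish(msg)
  rospy.sleep(1.0)              # keep the node alive so the latched message is delivered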

To test your solution, you can run the aro_evaluathor/launch/eval_world.launch file. Use the argument world to change the map (e.g. “world:=aro_maze_8” or “world:=stage_1”) and the argument marker_config to change the placement of the markers. Additional marker poses can be found in aro_sim/config/init_poses. The launch file starts the robot simulation and outputs the awarded points. The results are also stored in the folder ~/aro_evaluation. Make sure your solution can be tested by the evaluation script without any errors before submission!

Example of how to start the evaluation:

roslaunch aro_evaluathor eval_world.launch world:=aro_maze_8 marker_config:=1

Points from semestral work

The number of points from the semestral work will be determined by the number of successful tag localizations (up to 1 pt) and returns of the robot to the starting position (up to 1 pt). Five simulation experiments with randomized tag locations will be performed for each submission to determine the final score.

A successful localization or return home is awarded 1 pt if the estimated pose is less than 0.25 m from the ground-truth position. The awarded points then decrease linearly, reaching zero at a distance of 1 m. Only the x, y position of the robot/marker is evaluated, not its orientation. If the marker was not localized, no points will be awarded for the robot position.
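
Read this way, the score for a single pose estimate can be sketched as the following piecewise-linear function of the planar error (assuming the ramp goes from 1 pt at 0.25 m down to 0 pt at 1 m):

  def localization_score(error_m):
      """Points for one estimate given its x-y distance to ground truth.
      Assumes 1 pt below 0.25 m and a linear drop to 0 pt at 1 m."""
      if error_m <= 0.25:
          return 1.0
      if error_m >= 1.0:
          return 0.0
      return (1.0 - error_m) / 0.75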

The evaluation will be done on worlds similar to aro_maze_8. The worlds will be maze-like with long corridors and thick walls. Only the burger robot will be used in evaluation.

The maximum time limit for a single test instance (out of the 5) is 120 s.


When developing your solution for the semestral project, please use the turtlebot3.launch file to start your pipeline. This should be the main way of starting your pipeline until it can produce reasonable results. The evaluation package is included so that you have a rough idea of what to expect during the evaluation process and whether your solution satisfies the requirements, before the actual evaluation. However, you are expected to run your solution via the evaluation launch file only once it is sufficiently functional; the evaluation package is provided to check whether the performance of your solution is adequate, not whether it works at all.
Additionally, the evaluation package is provided as a black box. It is not meant to be readable or modifiable by students. The robot position ground truth will not be available in the BRUTE evaluation.

Evaluation details (Milestone II)

The last two labs of the semester are reserved for demonstrating your code on the real robots. You can work in teams of up to 4. Several changes to your scripts have to be made to transfer successfully from simulation:

  1. Remap cmd_vel to cmd_vel_mux/safety_controller in your control node. The '<remap>' has to be inside the '<node></node>' tags.
  2. Change the robot frame from base_footprint to base_link
  3. Change the marker camera to the infra camera, or use aro_loc.launch instead of aro_loc_sim.launch in your main exploration.launch file.

You should not use any aro_sim configuration, script, or launch file on the robot. Your pipeline on the real robot should be started from aro_exploration/launch/exploration.launch.

Possible improvements

Whole pipeline

Please note that we will evaluate the performance of the whole system in terms of the localization accuracy, so the nodes must not only work individually but also work well with the other nodes to fulfill their role in the whole system. Things to consider:

  • Inaccurate localization will result in distorted maps and wrong localization of the markers and the robot.
  • Slow localization will have a negative impact on low-level motion control. Low-level motion control can be adjusted as well if needed.
  • As all experiments are run in the simulator, possible recovery behaviors can be quite aggressive. Nevertheless, if the maneuvers are too aggressive and the robot hits obstacles, this will adversely affect the factorgraph odometry and the initial pose estimates for ICP.
  • Choosing inappropriate goals and visiting already covered areas repeatedly will slow down exploration.
  • Having no recovery or fallback behaviors can lead the system to halt in the very beginning.
  • Consider selecting important parameters, such as the maximum robot speed or the obstacle margin, and tuning the pipeline as a black box (e.g. random search, grid search, or CMA-ES); a random-search sketch follows this list.
  • A general piece of advice is to focus on performance bottlenecks.
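
A sketch of such black-box tuning using plain random search is shown below. The parameter names, ranges, and the run_experiment() function are illustrative placeholders; in practice the experiment would launch one simulated run with the chosen parameters and return the achieved score.

  import random

  # Hypothetical tunable parameters and ranges -- substitute your own.
  PARAM_RANGES = {
      'max_speed': (0.10, 0.25),        # m/s
      'obstacle_margin': (0.15, 0.35),  # m
  }

  def run_experiment(params):
      """Placeholder: run one simulated trial with the given parameters
      and return the achieved score (tag localization + return home)."""
      raise NotImplementedError

  best_params, best_score = None, float('-inf')
  for _ in range(20):  # a small budget of random trials
      params = {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
      score = run_experiment(params)
      if score > best_score:
          best_params, best_score = params, score
  print(best_params, best_score)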


Factorgraph localization

  • Tune the costs of the individual measurement sources to play nicely together. Please note that exaggerating some of the costs will have very bad effects on the optimization.
  • Tune the optimizer parameters, such as number of iterations and loss function.
  • Try to detect adverse control (high velocity or acceleration) or bumping into obstacles, and decrease the costs of the related measurements accordingly.
  • Try to avoid the adverse control or bumping into obstacles.
  • If the optimization is too slow, you can test with lower real_time_factor to see whether the problem is in the localization itself or just its speed.
  • Look at the Apriltag detections and implement a better filter for false positives (a sketch of one such filter follows this list).
  • If you find the relative marker, return to the start, and there is still a lot of time left, try to plan a direct path to the relative marker and drive there again; this will help make the localization more accurate.
  • If you struggle too much with integrating the factorgraph localization with the rest of the pipeline, use the ICP SLAM odometry for frontier exploration and control and only use the factorgraph as a means to localize the marker.
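
A sketch of one such false-positive filter is shown below: the marker is accepted only after several mutually consistent detections. The class name and thresholds are illustrative, not prescribed by the assignment.

  class MarkerDetectionFilter(object):
      """Accept the Apriltag only after min_count detections that agree
      within max_spread metres (illustrative thresholds)."""

      def __init__(self, min_count=3, max_spread=0.3):
          self.detections = []  # list of (x, y) positions in the map frame
          self.min_count = min_count
          self.max_spread = max_spread

      def add(self, x, y):
          self.detections.append((x, y))

      def accepted(self):
          """Return the averaged (x, y) once the detections agree, else None."""
          if len(self.detections) < self.min_count:
              return None
          xs = [p[0] for p in self.detections]
          ys = [p[1] for p in self.detections]
          if max(xs) - min(xs) > self.max_spread or max(ys) - min(ys) > self.max_spread:
              self.detections = self.detections[-self.min_count:]  # drop stale outliers
              return None
          return sum(xs) / len(xs), sum(ys) / len(ys)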


SLAM

  • Due to different noise characteristics in the virtual and real environment, choose optimal configuration separately for each. The parameters found in the evaluation done as a part of homework 3 should give you a good starting point. You can try experimenting with the following parameters:
    • alignment: frame-to-frame / frame-to-map,
    • loss: point-to-point / point-to-plane,
    • descriptor: position / position and normal,
    • odometry: with / without (odometry should be used on the real robot).
  • High accelerations and fast maneuvers may reduce localization accuracy, especially on the real robots. Try limiting the acceleration or the maximum velocity in the control node if that seems to be the problem.


Frontier detection

  • Utilize the 'visible_occupancy' topic to include unseen wall segments in the frontier search space. Utilize the robot orientation to cover wall segments during exploration.
  • Consider different heuristics for frontier selection such as the number of unknown voxels in frontier's neighborhood, or the physical distance which has to be traveled.
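
Such heuristics can be combined into a single frontier score, for example as in the sketch below. The weights and the use of straight-line distance (instead of the planned path length) are simplifying assumptions.

  import math

  def frontier_score(frontier_xy, robot_xy, unknown_neighbours, w_dist=1.0, w_info=0.2):
      """Lower is better: prefer nearby frontiers with many unknown cells around them.
      The weights are illustrative."""
      dist = math.hypot(frontier_xy[0] - robot_xy[0], frontier_xy[1] - robot_xy[1])
      return w_dist * dist - w_info * unknown_neighbours

  def pick_frontier(frontiers, robot_xy):
      """frontiers: iterable of (x, y, unknown_neighbours) tuples."""
      return min(frontiers, key=lambda f: frontier_score((f[0], f[1]), robot_xy, f[2]))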


Path planning

  • Apply path straightening for obtaining shorter paths with a lower number of turns.
  • Be aware that, due to imprecision in path following, localization and mapping, the robot can appear to be in occupied space even though the path leads exclusively through unoccupied space. Make use of the fact that there is usually no un-traversable obstacle at the robot's current position.
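
One simple way to exploit this is to clear a small window around the robot's current cell before planning, as in the sketch below (the grid representation and the window radius are illustrative assumptions):

  import numpy as np

  def clear_robot_cell(grid, robot_row, robot_col, radius=2):
      """Mark a small window around the robot's current cell as free,
      since the robot itself cannot be standing inside an obstacle.
      grid: 2D numpy array of occupancy values; radius is in cells."""
      r0 = max(robot_row - radius, 0)
      c0 = max(robot_col - radius, 0)
      grid[r0:robot_row + radius + 1, c0:robot_col + radius + 1] = 0
      return grid

  # example: free a window around the robot's cell in a small all-occupied grid
  grid = np.full((10, 10), 100, dtype=np.int8)
  clear_robot_cell(grid, robot_row=5, robot_col=5)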


Path following

  • Consider dynamical adjustment of the forward velocity based on turning angle.
  • Consider dynamical adjustment of the forward velocity based on the distance to obstacles.
  • Improve path following by providing paths with a lower number of turns.
  • Adjust the limits on control inputs to avoid negative effects on the precision of localization and mapping.
  • Try turning on the spot with zero forward velocity if the difference between the current and desired orientation is too high.
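
A sketch combining the speed adjustment and turning on the spot might look as follows; the gain and thresholds are illustrative values, not prescribed ones.

  import math

  def velocity_command(heading_error, max_speed=0.2,
                       turn_in_place_thresh=math.pi / 4, angular_gain=1.0):
      """Return (linear, angular) velocity: slow down as the heading error grows
      and rotate in place when it exceeds the threshold (illustrative constants)."""
      angular = angular_gain * heading_error
      if abs(heading_error) > turn_in_place_thresh:
          return 0.0, angular  # turn at a spot, no forward motion
      linear = max_speed * (1.0 - abs(heading_error) / turn_in_place_thresh)
      return linear, angular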

FAQ

  • No map is displayed in rviz - Check whether the icp_slam and aro_localization nodes work correctly and publish tf transforms; publishing the map-to-odom transform is their job. In the process, icp_slam also calls your icp function, so make sure it works too. If not, try to debug and fix it.
  • changing a parameter value in the code has no effect (e.g. robotDiameter = rospy.get_param('~robot_diameter', 0.8)) - This is likely because the parameter is already defined in a launchfile. The second argument of the get_param function is used only if the parameter has not already been defined. Try looking up the parameter in the turtlebot3.launch or exploration.launch files (or other launch files, if you use them). Use set_param if you want to change a parameter value from the code.
  • frame error(s) - In RViz, you can get a “Frame does not exist” error. This happens because the requested frame (e.g., the one set as the global fixed frame) has not been published yet. For example, the node publishing the occupancy grid may not have published the necessary transform yet. In case of other frame errors, check that the frame names in both the messages (i.e. <some_msg>/header/frame_id) and the transforms are correct.
  • spawn error during launch Sometimes, when starting the evaluation pipeline, the URDF model spawning node can die. Simply try restarting it again.
  • occupancy map changes size and causes crash - The size of the occupancy map can (and most likely will) change during mapping. Always read the map metadata contained within the occupancy grid message upon reception of every message (see the sketch after this list).
  • The “numpy has no attribute float” error from the ros_numpy package - Clone the https://github.com/qboticslabs/ros_numpy repository into your src directory or downgrade numpy to a version older than 1.24.
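
For the occupancy-map item above, a sketch of re-reading the metadata on every received message is shown below; the subscribed topic name is an assumption and should be replaced by the one your mapping node actually publishes.

  import numpy as np
  import rospy
  from nav_msgs.msg import OccupancyGrid

  def grid_cb(msg):
      # Re-read the metadata every time: size, resolution and origin can change.
      info = msg.info
      grid = np.asarray(msg.data, dtype=np.int8).reshape(info.height, info.width)
      origin = (info.origin.position.x, info.origin.position.y)
      rospy.loginfo('map %dx%d, resolution %.3f m, origin %s',
                    info.width, info.height, info.resolution, origin)

  rospy.init_node('grid_listener')
  rospy.Subscriber('occupancy', OccupancyGrid, grid_cb)  # topic name is an assumption
  rospy.spin()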