====== Semestral work - Frontier-based Exploration ======

The main goal of the semestral work is to implement a high-level exploration node ''explorer'', which autonomously explores an unknown environment and localizes a barbie doll. The node should regularly publish the {{http://docs.ros.org/en/melodic/api/nav_msgs/html/msg/OccupancyGrid.html | occupancy grid}} to the topic ''/occupancy'' and the {{http://docs.ros.org/en/melodic/api/geometry_msgs/html/msg/PointStamped.html | position}} of the detected barbie to the topic ''/final_barbie_point''. These messages will be used for the final evaluation. The exploration node will employ the nodes implemented during the regular labs (SLAM, planning, object detection). Due to the current situation, the semestral work will be carried out individually (no teams), using a simulator instead of the real robots in the labs.

{{youtube>mjEf-1Se9Lg?medium}}

===== How to start? =====

{{ :courses:aro:tutorials:semestral_template.zip | template for semestral work}}

When working locally (i.e., using the ARO Singularity image), remember to always work with the latest image from [[courses:aro:tutorials:ros|Run ROS locally]]. You can check the upload date (written on that page) to make sure you have the latest version.

The template consists of five packages:
  * Simulator (''aro_sim''), which contains configuration and launch files of the simulated environment. **This package should stay as provided.** Any solution tampering with the simulation environment during evaluation will be deemed unacceptable. Note that there are a few changes in the sensor configuration compared to the defaults provided by the turtlebot3 packages: (i) the source of the odometry is the wheel encoders instead of the ground-truth pose, so the odometry must be corrected by the ICP SLAM node, otherwise the resulting localization of the robot would be completely inaccurate; (ii) the lidar noise level is increased and the lidar range is adjusted accordingly.
  * Evaluathor (''aro_evaluathor''), which contains the evaluation node. **This package should stay as provided.**
  * Localization and mapping (''aro_slam''), which contains an implementation of the ICP-based SLAM with configuration and launch files. This should be **replaced by your own implementation** from the regular labs (HW 3: ICP function, possible improvements to the ''icp_slam_2d'' node).
  * Object detection (''barbie_detection''), which contains configuration and launch files of the object detection and localization nodes. These nodes should be **replaced by your own implementation** from the regular labs.
  * Exploration (''exploration''), which contains configuration and launch files of the exploration nodes. Node ''frontier.py'' providing services related to finding frontiers, node ''planner.py'' providing a path planning service, and node ''path_follower.py'' responsible for path execution should be **replaced by your own implementation** from the regular labs. Node ''explorer'' providing high-level exploration planning and control **should be implemented**; see the following paragraphs for our suggestions.

You can start by replacing ''icp_slam_2d'', ''frontier.py'', ''planner.py'', ''path_follower.py'', ''detector.py'', ''network.py'' with the nodes you have implemented during the regular labs.
Then you can start implementing your high-level exploration node in ''explorer''. Before you start coding:
  * Please read the section **Evaluation details** to get familiar with the requirements and the testing procedure of the semestral work.
  * There is a FAQ section at the end of this document that can help you solve some common issues.
  * To launch and debug your system (until it is sufficiently developed), use the ''aro_sim/launch/turtlebot3.launch'' file.

===== High-level exploration node =====

The node, an executable Python script, should be located at ''exploration/scripts/explorer''. An empty template is provided in the student package. The algorithm below provides an overview of a possible implementation; an exploration experiment with such a node is shown in the video above.

Exploration node: overview of a possible implementation
  - Pick a frontier, if any, or a random position as a goal. ◃ ''frontier.py''
  - Plan a path to the goal, or go to step 1 if there is no path. ◃ ''planner.py''
  - Delegate path execution to low-level control. ◃ ''path_follower.py''
  - Monitor execution of the plan.
  - Invoke a recovery behavior if needed. ◃ ''path_follower.py''
  - Repeat from step 1.
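A minimal skeleton of such a main loop is sketched below. Note that all the helper functions are placeholders, not part of the template: wire them to the actual service and topic interfaces of your ''frontier.py'', ''planner.py'' and ''path_follower.py'' nodes.

<code python>
#!/usr/bin/env python
"""Sketch of the explorer main loop following the algorithm above.
Every helper below is a placeholder to be wired to the real interfaces
of frontier.py, planner.py and path_follower.py."""
import rospy


def get_goal():
    """Placeholder: query the frontier service for a goal; fall back to a
    random reachable position if there are no frontiers left."""
    return None


def plan_path(goal):
    """Placeholder: call the path planning service; return None on failure."""
    return None


def start_following(path):
    """Placeholder: hand the path over to the path follower."""


def following_active():
    """Placeholder: report whether the follower is still executing the path."""
    return False


def stuck():
    """Placeholder: detect a failed execution (no progress, collision, ...)."""
    return False


def recover():
    """Placeholder: recovery behavior, e.g. back off and rotate in place."""


if __name__ == '__main__':
    rospy.init_node('explorer')
    rate = rospy.Rate(2.0)
    while not rospy.is_shutdown():
        goal = get_goal()              # 1. pick a frontier or a random goal
        if goal is None:
            rate.sleep()
            continue
        path = plan_path(goal)         # 2. plan a path to the goal
        if path is None:
            continue                   # no path: back to step 1 with a new goal
        start_following(path)          # 3. delegate execution to low-level control
        while following_active():      # 4. monitor execution of the plan
            if stuck():
                recover()              # 5. invoke a recovery behavior if needed
                break
            rate.sleep()
        rate.sleep()                   # 6. repeat from step 1
</code>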
===== Deadlines, Milestones and Points =====

You can obtain 22 points from the two following milestones:

**Milestone I (max 7 points):** Upload a video demonstrating "reasonable" functionality of your exploration pipeline to BRUTE before 29 April. It is enough to upload only the video (no code is needed for this milestone). Reasonable functionality means that the robot autonomously explores the selected map (regardless of the exploration efficiency or the accuracy of barbie detections).

**Milestone II (max 15 points):** Upload all code to BRUTE before 2021-05-19 00:00:00. You should upload the whole pipeline to the BRUTE system before this deadline. The evaluation will be based on running the Evaluathor node on our own maps (see the "Evaluation details" section for details), so please:
  * make sure that running the evaluathor launch file starts everything that is needed on your side,
  * focus on implementing functionality which generalizes well rather than tuning something for a single map,
  * avoid sharing your code with your colleagues, since all uploaded code will be checked by a global plagiarism detection system. The system compares your code with any other code that has ever been uploaded to BRUTE (including other courses and years).

===== Evaluation details (Milestone II) =====

Evaluation will be performed by an automatic evaluation script, which will run the simulation for 180 seconds and then stop it. A simplified version of it is provided in the ''aro_evaluathor'' package. The evaluation script launches the ''aro_sim/launch/turtlebot3.launch'' file and then listens to these two topics:
  * ''/occupancy'' for the occupancy map, with message type ''nav_msgs/OccupancyGrid'' (map frame = "map", odom frame = "odom", more specification below),
  * ''/final_barbie_point'' for the detected positions of the barbie target, with message type ''geometry_msgs/PointStamped''.

Please make sure your solution publishes the appropriate data on these topics (more details below). Otherwise, your solution might not be accepted!

Additionally, make sure your entire solution is started via the ''turtlebot3.launch'' file. Do not modify this file! The entire aro_sim package will be replaced with a "clean" one. **Any modifications you perform in the aro_sim package will be lost!** If you need to launch additional nodes or otherwise modify the launch procedure, you should modify the ''exploration/launch/exploration.launch'' file, which is included in the turtlebot3.launch file.

To test your solution, you can run the ''aro_evaluathor/launch/eval_world.launch'' file. Use the argument **world** to change the map (e.g. "world:=aro_maze_8" or "world:=brick_maze_1"). The launch file starts the robot simulation and opens an OpenCV window where you can see the evaluation process (score and map-to-ground-truth comparison). The results are also stored in the folder __~/aro_evaluation__ as a table and a video. Make sure your solution can be tested by the evaluation script without any errors before submission! Example of how to start the evaluation:

''roslaunch aro_evaluathor eval_world.launch world:=aro_maze_8''

You can get points for the published occupancy grids and barbie positions; see the paragraphs below for a detailed description of the point assignment procedure. In order to capture the temporal progress of your exploration, we will evaluate the published maps at the uniformly spaced times 30 s, 60 s, 90 s, 120 s, 150 s and 180 s, and compute: \\
''"final_points" = 1/6 * (map_points_30 + map_points_60 + map_points_90 + map_points_120 + map_points_150 + map_points_180) + barbie_points''

== Points from semestral work ==

**The number of points from the semestral work** will be determined as a saturated linear function of the ''"final_points"'' summed over all evaluation maps and starting positions. The exact shape of this function will depend on the performance of all submitted solutions and the performance of the teacher's solution. In particular, we first evaluate the teacher's solution with a temporal handicap of 30 s (i.e. it starts running 30 seconds after the start) and subtract an additional 15,000 points. Everyone who submits a working solution that achieves a lower number of ''"final_points"'' than the Handicapped Teacher's Solution (HTS) will obtain 4 points. Solutions that achieve the same or a higher number of ''"final_points"'' than HTS will obtain at least 6 points. The remaining 9 points will be distributed based on the relative performance of the solutions that were better than HTS. In particular, we will order these solutions according to the achieved ''"final_points"'' and express the relative ordering in terms of the probability that your solution is better than another solution chosen at random. For example, if we have 100 submitted solutions and your solution was 37th (in terms of achieved ''"final_points"''), your relative performance is 63%. We assign points according to the following table:

^ Performance ^ Points ^
| worse than HTS but working | 4 |
| better than HTS | 6 |
| 10%-20% | 7 |
| 20%-30% | 8 |
| 30%-40% | 9 |
| 40%-50% | 10 |
| 50%-60% | 11 |
| 60%-70% | 12 |
| 70%-80% | 13 |
| 80%-90% | 14 |
| 90%-100% | 15 |

The evaluation will be done on worlds similar to aro_maze_8. They won't contain open areas such as those in the house world; the worlds will be maze-like with long corridors and thick walls. Only the burger robot will be used in the evaluation.
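Before running the full evaluation, you can sanity-check that your pipeline is wired to the required interface. The following minimal standalone sketch uses only the topic names and message types specified above; everything else (the rate, the map size and contents, the dummy point) is an illustrative placeholder.

<code python>
#!/usr/bin/env python
"""Minimal sanity check of the required interface: publishes a dummy
occupancy grid to /occupancy and a dummy point to /final_barbie_point.
Only the topic names and message types follow the assignment; all the
values are placeholders."""
import rospy
from nav_msgs.msg import OccupancyGrid
from geometry_msgs.msg import PointStamped

rospy.init_node('interface_check')
occ_pub = rospy.Publisher('/occupancy', OccupancyGrid, queue_size=2)
barbie_pub = rospy.Publisher('/final_barbie_point', PointStamped, queue_size=2)

grid = OccupancyGrid()
grid.header.frame_id = 'map'           # the grid is expected in the map frame
grid.info.resolution = 0.05            # 5x5 cm cells
grid.info.width, grid.info.height = 10, 10
grid.info.origin.orientation.w = 1.0
grid.data = [-1] * (grid.info.width * grid.info.height)  # all cells "Unknown"

point = PointStamped()
point.header.frame_id = 'map'          # dummy detection at the map origin

rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    stamp = rospy.Time.now()
    grid.header.stamp = stamp
    point.header.stamp = stamp
    occ_pub.publish(grid)
    barbie_pub.publish(point)
    rate.sleep()
</code>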
Please use the ''turtlebot3.launch'' file to start your pipeline when developing your solution for the semestral project. This should be the main mode of starting your pipeline until it can produce reasonable results. The inclusion of the evaluation package in the template is meant to give you a rough idea of what you can expect during the evaluation process and whether your solution satisfies the requirements, before the actual evaluation. However, it is expected that you try running your solution via the evaluation launch file only when it is sufficiently functional. That is, the evaluation package is provided to check whether the performance of your solution is adequate, not whether your solution is functional. \\
Additionally, the evaluation package is provided as a black box. It is not meant to be readable or modifiable by students. There are also known issues when attempting to run the evaluation on a non-functioning solution (e.g., when the occupancy grid is not published yet).

== Map_points from published occupancy grid ==

The {{http://docs.ros.org/en/melodic/api/nav_msgs/html/msg/OccupancyGrid.html | occupancy grid}} consists of 5×5 cm cells. Each cell is classified into one of three classes based on its occupancy confidence value in the ''int8[] data'' field: "Empty" (occupancy confidence in <0,25)), "Occupied" (occupancy confidence in <25,100>) and "Unknown" (occupancy confidence -1). Any part of the scene that is not covered by the published occupancy grid is considered to be "Unknown". Given a ground truth occupancy grid and a published occupancy grid, the number of map_points will be determined according to the following table:

^ ^ Published "Empty" <0,25) ^ Published "Occupied" <25,100> ^ Published "Unknown" -1 ^
| Ground truth "Empty" | 0 | -1 | -1 |
| Ground truth "Occupied" | -1 | 0 | -1 |
| Ground truth "Unknown" | -1 | -1 | 0 |

Consider the following example: a 6×6 m world where all cells are observable (i.e. their ground truth class is either "Empty" or "Occupied"). Since the resolution of the occupancy grid is 5 cm, the ground truth map will consist of 14400 cells which are either "Empty" or "Occupied"; anything else is "Unknown". The maximum number of map_points is zero, obtained for publishing a map identical to the ground truth map. For publishing an empty map, you obtain -14400 map_points, since the unpublished cells are assumed to be of class "Unknown". Similarly, you obtain -14400 map_points for publishing an arbitrarily large map containing only cells of class "Unknown". Nevertheless, you can obtain an arbitrarily low number of map_points (lower than -14400) for publishing sufficiently large maps that contain only "Empty" or "Occupied" cells, since cells outside the ground truth map are penalized as mismatches with "Unknown".
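To make the scoring rule concrete, below is a small sketch of the per-cell comparison for two grids that are already aligned cell by cell (int8 arrays with the semantics above). The real evaluation additionally has to align the published grid with the ground truth using the map metadata, which is omitted here.

<code python>
import numpy as np

# Cell classes per the table above: -1 = "Unknown", <0,25) = "Empty",
# <25,100> = "Occupied". Inputs are aligned int8 arrays of equal shape.
def classify(grid):
    cls = np.full(grid.shape, 2, dtype=np.int8)  # 2 stands for "Unknown"
    cls[(grid >= 0) & (grid < 25)] = 0           # 0 stands for "Empty"
    cls[(grid >= 25) & (grid <= 100)] = 1        # 1 stands for "Occupied"
    return cls

def map_points(published, ground_truth):
    # -1 for every cell whose class differs from the ground truth, 0 otherwise.
    return -int(np.count_nonzero(classify(published) != classify(ground_truth)))
</code>

For the 6×6 m example above, ''map_points'' of an all-Unknown published grid against the ground truth comes out as -14400.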
== Barbie_points from published barbie pose ==

You can (but do not have to) publish the position of the barbie doll. Only the last three messages on the topic ''/final_barbie_point'', received by the end of the evaluation run, are accepted. You can publish more messages during the run, but only the last three are kept. Out of these three messages, the one closest to the actual barbie position is taken, and the difference between the estimated and actual position is used for the barbie_points assignment, as follows. If the difference between the published position and the ground truth position (i.e. the localization error) is smaller than 50 cm, you obtain +10000 barbie_points. If the difference is between 50 and 100 cm, you obtain +5000 barbie_points. If the difference is bigger than 100 cm, you obtain -5000 barbie_points. If the position is not published or contains NaNs, you obtain 0 barbie_points.

^ error <0,50cm) ^ error <50cm,100cm> ^ error bigger than 100cm ^ position unpublished ^
| +10000 | +5000 | -5000 | 0 |

===== Possible improvements =====

=== Whole pipeline ===

Please note that we will evaluate the performance of the whole system in terms of the published occupancy grids and barbie positions, so the nodes must not only work individually but also work well with the other nodes to fulfill their role in the whole system. Things to consider:
  * Inaccurate localization will result in distorted maps and wrong cells being compared to the ground truth during evaluation.
  * Slow localization will have a negative impact on low-level motion control. Low-level motion control can be adjusted as well if needed.
  * As all experiments are run in a simulator, possible recovery behaviors can be quite aggressive. Nevertheless, if the maneuvers are too aggressive and the robot is hitting obstacles, it will adversely affect odometry and initial pose estimates for ICP.
  * Choosing inappropriate goals and visiting already covered areas repeatedly will slow down exploration.
  * Having no recovery or fallback behaviors can lead the system to halt at the very beginning.
  * Consider selecting important parameters, such as the maximum robot speed or the obstacle margin, and tuning the pipeline as a black box (e.g. random search, grid search, or CMA-ES); see the sketch after this list.
  * A general piece of advice is to focus on performance bottlenecks.
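As an illustration of the black-box tuning idea mentioned above, here is a minimal random-search loop. The parameter names, their ranges and ''run_trial'' are hypothetical placeholders; a real ''run_trial'' would run one simulated exploration with the given parameters and return the achieved score.

<code python>
import random

# Hypothetical parameter ranges; use whichever parameters matter in your pipeline.
PARAM_RANGES = {
    'max_speed': (0.10, 0.25),        # m/s
    'obstacle_margin': (0.15, 0.40),  # m
}

def run_trial(params):
    """Placeholder: run one simulated exploration with `params` and
    return the achieved score (e.g. final_points)."""
    return 0.0

best_score, best_params = float('-inf'), None
for _ in range(20):
    params = {name: random.uniform(lo, hi)
              for name, (lo, hi) in PARAM_RANGES.items()}
    score = run_trial(params)
    if score > best_score:
        best_score, best_params = score, params
print(best_params, best_score)
</code>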
=== SLAM ===

  * Stop ICP early once no more improvement can be made ("while not done").
  * Surfel map with a point-to-plane metric:
    * Allow storing normals in/along the map.
    * After the kNN search, the nearest point on the corresponding plane would be used instead of the point itself.
    * Absolute orientation would align to these points on the plane.
    * Keep backward compatibility of the absolute_orientation and icp functions:
      * e.g. the default ''icp(…, metric='point-to-point')'' would keep the original semantics,
      * for ''icp(…, metric='point-to-plane')'':
        * ''x'', ''y'', ''y_index'' could have different semantics, e.g. ''x[3:6, :]'' could be normals, or
        * an additional ''y_normal'' parameter could hold the normals.
  * Try different initial estimates for ICP:
    * Use a constant velocity motion model.
      * May work very well with a smooth controller ensuring slow velocity changes.
    * Use IMU orientation with a constant velocity motion model.
    * Keep the odom→base_footprint TF as is: ignore it as input, but take it into account when outputting the map→odom TF: T_r2m = T_o2m * T_r2o.
    * IMU message (excerpt): <code yaml>
orientation:
  x: -0.0011246201996952122
  y: -0.003690314694296936
  z: 0.29388865004485826
  w: -0.9558318775714033
angular_velocity:
  x: -0.00011878946665939603
  y: 4.463900499701507e-05
  z: 0.0007740670965644818
</code>
  * Use two occupancy thresholds b_empty and b_occupied:
    * Remove points with $b_i < b_\mathrm{empty}$.
    * In localization, consider only points with occupancy $b_i \geq b_\mathrm{occupied}$.
  * Try different methods to filter out noise in the point map:
    * Assess occupied/empty points based on the signed distance from the reconstructed surface.
    * Reconstruct the surface from input measurements (e.g., line segments from pairs of points in 2D, polygons in 3D) and compare map points with possibly occluding line segments.
    * Similar, but comparing a measured point with surfels in the map: a surfel may need to be filtered out if it occludes a measured point.

=== Frontier detection ===

  * Consider different heuristics for frontier selection, such as the number of unknown voxels in the frontier's neighbourhood, or the distance that has to be travelled.

=== Path planning ===

  * Apply path straightening to obtain shorter paths with a lower number of turns.
  * Be aware that, due to imprecision in path following, localization and mapping, the robot can appear to be in occupied space even though the path leads exclusively through unoccupied space. Make use of the fact that there is usually no untraversable obstacle at the robot's current position.

=== Path following ===

  * Consider dynamically adjusting the forward velocity based on the turning angle.
  * Consider dynamically adjusting the forward velocity based on the distance to obstacles.
  * Improve path following by providing paths with a lower number of turns.
  * Adjust the limits on control inputs to avoid negative effects on the precision of localization and mapping.
  * Try turning on the spot with zero forward velocity if the difference between the current and desired orientation is too high.
  * Consider adjusting the heading of the robot at the end of the path when it is beneficial.

=== Barbie detection ===

  * We suggest you implement a detection aggregator, which subscribes to all the detections on the topic ''/barbie_point'' from your detector node and publishes only the most probable barbie position to the topic ''/final_barbie_point''; a possible structure is sketched below.
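The sketch below shows one possible structure of such an aggregator. The assumption that ''/barbie_point'' carries ''geometry_msgs/PointStamped'' messages and the aggregation rule itself (keep the detection with the most close-by supporting detections) are ours; adapt both to your detector's actual output.

<code python>
#!/usr/bin/env python
"""Sketch of a detection aggregator. Assumes /barbie_point carries
geometry_msgs/PointStamped detections in the map frame; the aggregation
rule (most supported detection wins) is just one possible choice."""
import rospy
import numpy as np
from geometry_msgs.msg import PointStamped


class Aggregator(object):
    def __init__(self):
        self.detections = []  # (x, y, z) tuples in the map frame
        self.pub = rospy.Publisher('/final_barbie_point', PointStamped, queue_size=2)
        rospy.Subscriber('/barbie_point', PointStamped, self.on_detection)

    def on_detection(self, msg):
        self.detections.append((msg.point.x, msg.point.y, msg.point.z))
        pts = np.array(self.detections)
        # Support of a detection = number of detections within 0.5 m of it;
        # publish the detection with the strongest support so far.
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        best = pts[(dists < 0.5).sum(axis=1).argmax()]
        out = PointStamped()
        out.header.stamp = rospy.Time.now()
        out.header.frame_id = msg.header.frame_id
        out.point.x, out.point.y, out.point.z = best.tolist()
        self.pub.publish(out)


if __name__ == '__main__':
    rospy.init_node('barbie_aggregator')
    Aggregator()
    rospy.spin()
</code>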
===== FAQ =====

  * **No map is displayed in RViz** - Check whether icp_slam_2d works okay and publishes the tf transforms; publishing the map-to-odom transform is its job. In the process it also calls your icp function, so make sure that one is working too. If not, try to debug/fix it.
  * **Changing a parameter value in the code has no effect** (e.g. //weights_path = rospy.get_param('~weights_path', 'trained_weights')//) - This is most likely because the parameter is already defined in a launch file. The second argument of the //get_param// function is used only if the parameter has not already been defined. Try looking up the parameter in the turtlebot3.launch or exploration.launch files (or other launch files, if you use them). Use //set_param// if you want to change a parameter value from the code.
  * **Frame error(s)** - In RViz, you can get a "Frame does not exist" error. This happens because the requested frame (e.g., the one set as the global fixed frame) has not been published yet. For example, the node publishing the occupancy grid has not published the necessary transform yet. In case of other frame errors, check that the names of the frames in both the messages (i.e. ///header/frame_id//) and the transforms are correct.
  * **Spawn error during launch** - Sometimes, when starting the evaluation pipeline, the URDF model spawning node can die. Simply try launching again.
  * **Occupancy map changes size and causes a crash** - The size of the occupancy map can (and most likely will) change during mapping. You should always look at the map metadata contained within the occupancy grid message, upon reception of every message; see the sketch below.
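For example, a small sketch of reading the grid geometry from each message instead of caching it (assumes the usual axis-aligned map origin; function names are illustrative):

<code python>
import numpy as np

# Read the grid geometry from every OccupancyGrid message instead of
# caching it once: the origin and size can change as the map grows.
def grid_from_msg(msg):
    """Occupancy data as a 2D array indexed [row, col] (row-major)."""
    return np.asarray(msg.data, dtype=np.int8).reshape(msg.info.height,
                                                       msg.info.width)

def world_to_cell(info, x, y):
    """Map-frame coordinates -> grid indices, assuming an axis-aligned
    origin (identity origin orientation)."""
    col = int((x - info.origin.position.x) / info.resolution)
    row = int((y - info.origin.position.y) / info.resolution)
    return row, col  # check 0 <= row < info.height and 0 <= col < info.width
</code>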