Homework 07 - Frontier detection

Deadline: 4 May 2025, 23:59 CEST (Monday labs) / 5 May 2025, 23:59 CEST (Thursday labs; note that 5 May is a Monday!). Penalty: 5% per day of delay.

Responsible lecturer: Tomáš Musil (musilto8@fel.cvut.cz)

Your task will be to finish a prepared node for finding potential goal positions that should lead the robot to explore an unknown environment.

Frontier-Based Exploration

Where should the robot go to expand its map? The vast majority of approaches to this problem are based on the idea of “frontiers” introduced in [1] in 1997. A frontier is usually defined as the boundary between space that is known and free (containing no obstacles) and the space that is unknown. When dealing with a lidar-equipped robot using an occupancy grid in an indoor setting, simply moving the robot close to a frontier will nearly always uncover a new part of the environment and expand the occupancy grid. In this assignment, you will be implementing a node for finding these frontiers on a 2D occupancy grid (see HW06 for detailed info about the data type). We will use these specific definitions:

  • Frontier Cell: a free cell of the occupancy grid that has at least one unknown cell next to it (in an 8-neighborhood).
  • Frontier: a contiguous (connected in an 8-neighborhood) set of frontier cells.
  • Frontier Center: the frontier cell of a given frontier that is closest to the geometric mean position of the frontier's cells. Since the mean position itself might lie in unsafe space, it is safer to navigate to a frontier cell, which is known to be free of obstacles.
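The Frontier Cell definition can be checked cell by cell. A minimal sketch, assuming a numpy grid with the usual OccupancyGrid-style values (0 = free, 100 = occupied, -1 = unknown; the actual values in the template may differ):

```python
import numpy as np

# Assumed cell values (check the values used in the assignment template)
FREE, OCCUPIED, UNKNOWN = 0, 100, -1

def is_frontier_cell(grid: np.ndarray, r: int, c: int) -> bool:
    """Return True if grid[r, c] is free and has at least one unknown
    cell in its 8-neighborhood."""
    if grid[r, c] != FREE:
        return False
    rows, cols = grid.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                return True
    return False

grid = np.array([[ 0,  0, -1],
                 [ 0,  0, -1],
                 [ 0,  0,  0]])
print(is_frontier_cell(grid, 0, 1))  # True: free cell next to unknown
print(is_frontier_cell(grid, 0, 0))  # False: no unknown neighbor
```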

As motivation, here is how more sophisticated exploration approaches, which still use the concept of frontiers, can work in 3D:

Inflating Obstacles and Clearing Start Position

Searching for frontier cells and navigating to them on the input occupancy grid would work nicely if your robot were smaller than a cell of the grid. However, the robot is much larger than a single cell, so the detected frontier cells could lie in places where the robot would collide with obstacles! For this reason, you need to inflate obstacles and unknown space (since it might also contain obstacles), similarly to what you did in Homework 06 - Path Planning. Use the provided get_circular_dilation_footprint() kernel with the following function to perform the inflation:

https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.grey_dilation.html

You should then run the WFD algorithm on the modified grid. Make sure you merge the inflated obstacles and unknown space in the correct order!
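A sketch of the inflation step, assuming the cell values 0/100/-1 as above; circular_footprint() is a stand-in for the provided get_circular_dilation_footprint(), and writing the inflated unknown space first so that inflated obstacles overwrite it is one plausible reading of the merge-order hint:

```python
import numpy as np
from scipy.ndimage import grey_dilation

FREE, OCCUPIED, UNKNOWN = 0, 100, -1  # assumed cell values

def circular_footprint(radius_cells: int) -> np.ndarray:
    """Stand-in for the provided get_circular_dilation_footprint():
    a boolean disk of the given radius in cells."""
    d = np.arange(-radius_cells, radius_cells + 1)
    xx, yy = np.meshgrid(d, d)
    return (xx**2 + yy**2) <= radius_cells**2

def inflate(grid: np.ndarray, radius_cells: int = 2) -> np.ndarray:
    """Inflate obstacles and unknown space by the robot's footprint."""
    footprint = circular_footprint(radius_cells)
    # grey_dilation on a 0/1 mask behaves like binary dilation
    occ = grey_dilation((grid == OCCUPIED).astype(np.uint8), footprint=footprint)
    unk = grey_dilation((grid == UNKNOWN).astype(np.uint8), footprint=footprint)
    out = grid.copy()
    out[unk.astype(bool)] = UNKNOWN    # inflated unknown space first ...
    out[occ.astype(bool)] = OCCUPIED   # ... inflated obstacles overwrite it
    return out
```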

Also, as in Homework 06 - Path Planning, you should handle the case when the robot is too close to obstacles. If that happens, the start position can appear untraversable in the inflated grid, and the first BFS of the WFD will immediately stop and return no frontiers. Setting the area around the robot as traversable, as in HW06, is the simplest solution, but beware: depending on the specific implementation, this can lead to finding deformed frontiers. We suggest instead starting the first BFS of the WFD with a larger initial 'open set' - not just the one cell corresponding to the robot's center position, but all the cells within the robot's dilation footprint.
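The suggested larger open set could be built as in this sketch; footprint here is any boolean mask centered on the robot, such as the one returned by the provided get_circular_dilation_footprint() (the exact template API may differ):

```python
from collections import deque

import numpy as np

def initial_open_set(robot_cell, footprint, grid_shape):
    """Seed the outer BFS with every in-bounds cell under the robot's
    dilation footprint (a boolean mask centered on robot_cell), rather
    than only the single cell at the robot's center."""
    r0, c0 = robot_cell
    radius = footprint.shape[0] // 2
    queue = deque()
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if not footprint[dr + radius, dc + radius]:
                continue
            r, c = r0 + dr, c0 + dc
            if 0 <= r < grid_shape[0] and 0 <= c < grid_shape[1]:
                queue.append((r, c))
    return queue
```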

Wave-front Frontier Detection (WFD)

To find the frontiers in the grid, we will use the WFD algorithm introduced in the lectures. In a nutshell, it is a two-stage BFS algorithm on a 2D grid:

  • outer BFS - starting from the robot's position, search for frontier cells on the modified grid
  • inner BFS - “frontier assembly” = group contiguous frontier cells into separate frontiers.

For the pseudocode and more details, please see the ARO lectures and [2]. Make sure to only accept frontier cells that are reachable, and filter out frontiers that have fewer cells than the value of the min_frontier_size parameter. An example result can be seen below. Green = frontier cells, red = frontier centers, blue = robot's position.
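The two stages can be sketched as follows (a minimal version, not the template's structure: it assumes the 0/100/-1 cell values used above, seeds the outer BFS with a single cell, and returns plain lists of cells instead of the provided Frontier class):

```python
from collections import deque

import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 100, -1  # assumed cell values
N8 = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def wfd(grid, robot_cell, min_frontier_size=3):
    """Two-stage BFS: the outer BFS walks reachable free space from the
    robot; whenever it reaches a frontier cell, an inner BFS assembles
    the whole contiguous frontier. Returns a list of frontiers, each a
    list of (row, col) cells."""
    rows, cols = grid.shape

    def in_bounds(r, c):
        return 0 <= r < rows and 0 <= c < cols

    def is_frontier(r, c):
        return grid[r, c] == FREE and any(
            in_bounds(r + dr, c + dc) and grid[r + dr, c + dc] == UNKNOWN
            for dr, dc in N8)

    visited = np.zeros_like(grid, dtype=bool)
    in_frontier = np.zeros_like(grid, dtype=bool)
    frontiers = []
    queue = deque([robot_cell])
    visited[robot_cell] = True
    while queue:
        r, c = queue.popleft()
        if is_frontier(r, c) and not in_frontier[r, c]:
            # inner BFS: grow the full frontier from this seed cell
            frontier, fq = [], deque([(r, c)])
            in_frontier[r, c] = True
            while fq:
                fr, fc = fq.popleft()
                frontier.append((fr, fc))
                for dr, dc in N8:
                    nr, nc = fr + dr, fc + dc
                    if (in_bounds(nr, nc) and not in_frontier[nr, nc]
                            and is_frontier(nr, nc)):
                        in_frontier[nr, nc] = True
                        fq.append((nr, nc))
            # keep only frontiers of at least min_frontier_size cells
            if len(frontier) >= min_frontier_size:
                frontiers.append(frontier)
        # outer BFS expands only through reachable free cells
        for dr, dc in N8:
            nr, nc = r + dr, c + dc
            if in_bounds(nr, nc) and not visited[nr, nc] and grid[nr, nc] == FREE:
                visited[nr, nc] = True
                queue.append((nr, nc))
    return frontiers
```

Because the outer BFS only expands through free cells, frontier cells behind walls are never reached, which gives the reachability filtering for free.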

Assignment

Your task will be to implement functionality for the detection of frontiers in an occupancy grid. You will need to finish parts of code in aro_exploration/nodes/frontier.py and aro_exploration/src/aro_exploration/utils.py.

1) If you haven't already done it in the Path Planning homework, implement the map_to_grid_coordinates() and grid_to_map_coordinates() functions in aro_exploration/src/aro_exploration/utils.py. The first should transform a given position in the 'icp_map' frame (e.g. the robot's position) to discrete coordinates specifying a cell in the occupancy grid, and the second should perform the inverse transformation (see above for an image explaining the indexing we will use in this assignment).
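The core of both conversions is a sketch like the one below, assuming the grid origin and resolution come from the OccupancyGrid metadata; the index order (x/y vs. row/column) is an assumption here and must match the indexing convention from the image above:

```python
import numpy as np

def map_to_grid_coordinates(pos_xy, origin_xy, resolution):
    """Map-frame position (metres) -> integer cell indices.
    origin_xy is the map-frame position of cell (0, 0)'s corner and
    resolution is the cell edge length, both taken from the
    OccupancyGrid metadata. This sketch uses (x, y) index order."""
    return np.floor((np.asarray(pos_xy) - origin_xy) / resolution).astype(int)

def grid_to_map_coordinates(cell_xy, origin_xy, resolution):
    """Inverse transform: cell indices -> map-frame position of the
    cell's center (hence the +0.5 offset)."""
    return np.asarray(origin_xy) + (np.asarray(cell_xy) + 0.5) * resolution
```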

2) In the find_frontiers function, or when receiving an occupancy grid message, construct a modified grid in which the obstacles and unknown space are inflated. Use get_circular_dilation_footprint() from utils.py as the kernel for inflating the obstacles with the provided dilation function: https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.grey_dilation.html. For this assignment (you can change this for the semestral work), do not use a different kernel or a different function: the evaluation uses these, and a mismatch might cause your solution to find frontiers at positions that are unsafe according to the evaluation's kernel.

3) Implement the Wavefront Frontier Detection algorithm and run it on the modified occupancy grid to obtain a list of reachable frontiers. We suggest you use the provided Frontier class.

4) From all found frontiers, select the one with the center closest to the robot and return its center in map coordinates in the get_closest_frontier service.

Make sure to:

  • search for frontiers on a grid inflated by the given kernel and inflation function
  • return the closest Frontier Center, not the closest Frontier Cell
  • filter out too small frontiers based on the given min_frontier_size parameter
  • mark the cells of the modified grid near the robot's position (using the same kernel as for inflating the obstacles and unknown space) as free, so that goals are still generated when the robot ends up close to a wall
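Putting the definitions together, selecting the service's return value could look like this sketch (pure cell coordinates; the chosen center would still need converting with grid_to_map_coordinates() before being returned):

```python
import numpy as np

def frontier_center(cells):
    """Frontier Center as defined above: the frontier cell closest to
    the geometric mean position of the frontier's cells."""
    cells = np.asarray(cells, dtype=float)
    mean = cells.mean(axis=0)
    best = np.argmin(np.linalg.norm(cells - mean, axis=1))
    return tuple(cells[best].astype(int))

def closest_frontier_center(frontiers, robot_cell):
    """Over all frontiers, pick the frontier center (not the closest
    frontier cell!) nearest to the robot's cell."""
    centers = [frontier_center(f) for f in frontiers]
    dists = [np.hypot(c[0] - robot_cell[0], c[1] - robot_cell[1])
             for c in centers]
    return centers[int(np.argmin(dists))]
```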

Submission and Evaluation

Run the script aro_exploration/homeworks/create_hw07_package.sh to create an archive containing your implementation (frontier.py and utils.py) and upload it to the upload system. Your solution will be evaluated based on the output of your get_closest_frontier service. The output position of this service will be compared to the reference solution at several positions of the robot in the map. You can use the provided frontier_evaluate.py file for testing your solution locally before submitting to BRUTE. The expected output is in the same coordinates as the robot's position, i.e. the map frame, not the discrete grid cell coordinates.

Testing

For visualization, use the following launchfile:

roslaunch aro_exploration aro_frontier_planning_test_local.launch

This launchfile, as in the previous homework, starts the frontier detection and planning nodes, plays a repeating rosbag with OccupancyGrid messages, and opens RVIZ. Since this is not a full simulation (only these two nodes run, taking their data from a rosbag), you need to publish the robot's position yourself using a static transform. You can do this with the following command:

rosrun tf2_ros static_transform_publisher x y z r p y icp_map base_footprint
where 'x y z' specify the position and 'r p y' the orientation of the robot, all of which you can set to zero.

To detect the frontiers and get the closest one, call the service using the command line as

rosservice call /get_closest_frontier
After calling the service, you should see the visualization markers appear in RVIZ (a blue cube for the robot's position, a green cube for each frontier cell of each frontier, and a red cube for each frontier's center). Note that you first need to implement the map_to_grid_coordinates and grid_to_map_coordinates functions for this visualization to work properly.

For evaluating your solution locally, use the provided evaluation script: rosrun aro_exploration frontier_evaluate.py. This is a self-contained evaluation that initializes ROS, publishes the transformations automatically, and compares the outputs of your service to the reference solution.

References

  [1] B. Yamauchi, "A frontier-based approach for autonomous exploration," in Proc. IEEE Int. Symp. on Computational Intelligence in Robotics and Automation (CIRA), 1997.
  [2] M. Keidar and G. A. Kaminka, "Efficient frontier detection for robot exploration," The International Journal of Robotics Research, 33(2), 2014.

courses/aro/tutorials/homework07.txt · Last modified: 2025/02/26 09:51 by musilto8