t2-tspn - Data-collection Path Planning with Remote Sensing (TSPN)

The main task is to implement a solver for instances of the Traveling Salesman Problem with Neighborhoods (TSPN), specifically with the disk-shaped neighborhood, a.k.a. Close Enough TSP, using unsupervised learning principles of the Self-Organizing Map (SOM).

Files
  SOMSolver.py - File to be implemented
  utils.py, Path.py, and Point2D.py - Helpers for evaluation, drawing, and saving figures
  problems - Scenarios
  t2-tspn.py - Evaluation script

Resources
  t2-tspn.zip - Initial files and evaluation script
Implement the determination of the alternate_goal in the learn_epoch() function of SOMSolver.py.

Assignment

Familiarize yourself with the provided SOMSolver.py that implements an unsupervised learning-based solution to the TSP using principles of Self-Organizing Maps (SOMs) according to slides from lec05.

[Figures from Lecture 05]

Utilize the alternate goal concept for solving instances of the TSP with Neighborhoods (TSPN). In the regular SOM-based solution to the TSP, the neurons are adapted towards the goals (cities) in each epoch. However, in the TSPN, the neurons are adapted towards the closest point in the goal's neighborhood, as shown in the figure.

Therefore, the concept enables us to find shorter solutions than adapting the neurons directly to the goals (disks' centers); see the right image and the following GIF with the SOM evolution.

[GIF: SOM evolution]
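As a minimal geometric sketch of the alternate goal (the function name alternate_goal follows the assignment, but the array-based signature below is an illustrative assumption, not the exact SOMSolver.py interface): the closest point of a disk to a neuron lies on the disk boundary along the direction from the disk center to the neuron, unless the neuron is already inside the disk.

import numpy as np

def alternate_goal(center, radius, neuron):
    """Closest point of the disk (center, radius) to the neuron's position.

    If the neuron already lies inside the disk, its own position is the
    closest point, so no adaptation towards this goal is needed.
    """
    direction = neuron - center
    dist = np.linalg.norm(direction)
    if dist <= radius:
        return neuron.copy()
    # Project onto the disk boundary along the center-to-neuron direction.
    return center + (radius / dist) * direction

For example, alternate_goal(np.array([0.0, 0.0]), 1.0, np.array([3.0, 4.0])) yields (0.6, 0.8), the boundary point of the unit disk closest to the neuron.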


Approach

The SOM-based unsupervised learning for routing problems is a repetitive learning of a one-dimensional SOM that iteratively adapts to the input goal locations over a number of learning epochs. In each epoch, the goal locations are presented to the network in a random order (to avoid local extrema), and for each presented location, the select_winner() method determines the best-fitting (closest) non-inhibited neuron. The winner is then adapted towards the goal together with its neighboring neurons; this effectively moves the weights of the neurons (coordinates in the plane) towards the goal location with a power that decreases with the distance of the neighboring neuron from the winner (measured in the number of neurons). The winner is then inhibited so that it cannot win for more than one goal location in each learning epoch. The winner's neighborhood size can be set to about 20 % of the total number of neurons. A sketch of one such epoch follows.
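The following sketch illustrates one learning epoch as described above; it is a simplified, self-contained illustration written against this description, not the actual SOMSolver.py code (the exact gain schedule and neighborhood function in the provided implementation may differ).

import numpy as np

def select_winner(neurons, inhibited, goal):
    # Index of the closest non-inhibited neuron to the presented goal.
    dists = np.linalg.norm(neurons - goal, axis=1)
    dists[list(inhibited)] = np.inf
    return int(np.argmin(dists))

def learn_epoch(neurons, goals, gain, nbr_size):
    # One epoch: present goals in random order; adapt each winner and its
    # ring neighbors with a power decreasing with the distance (in neurons).
    n = len(neurons)
    inhibited = set()
    for goal in np.random.permutation(goals):
        winner = select_winner(neurons, inhibited, goal)
        inhibited.add(winner)  # a neuron wins at most one goal per epoch
        for offset in range(-nbr_size, nbr_size + 1):
            idx = (winner + offset) % n  # one-dimensional ring topology
            power = gain * np.exp(-(offset / nbr_size) ** 2)
            neurons[idx] += power * (goal - neurons[idx])
    return neurons

Both the gain and the neighborhood size are typically decreased between epochs so the network gradually converges to a tour; the neighborhood size can start at about 20 % of the number of neurons, as noted above.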

The provided Python implementation is not computationally efficient; a proper implementation, such as https://github.com/comrob/gsoa, is several orders of magnitude faster. It can be obtained with git clone https://github.com/comrob/gsoa and depends on Boost, which can be installed with apt install libboost-all-dev.

The provided implementation adapts the neurons directly to the goal locations. Therefore, it is necessary to adjust the winner selection using the concept of the alternate goal. Notice that for instances with disk-shaped neighborhoods of identical radii, we can simply replace the target locations towards which the neurons are adapted, since the closest disk center also determines the closest neighborhood. However, for varying radii, it is more suitable to consider the alternate goal directly in the winner selection, as sketched below.
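A minimal stand-alone sketch of such a winner selection (again with illustrative names, reusing alternate_goal from the sketch above): the distance of a neuron to a disk is its distance to the center minus the radius, clipped at zero, so the selection naturally handles varying radii.

import numpy as np

def select_winner_tspn(neurons, inhibited, center, radius):
    # Distance of each neuron to the disk neighborhood, not to its center.
    dists = np.maximum(np.linalg.norm(neurons - center, axis=1) - radius, 0.0)
    dists[list(inhibited)] = np.inf
    winner = int(np.argmin(dists))
    # The winner and its neighbors are then adapted towards the alternate
    # goal (the closest point of the disk) instead of the disk center.
    return winner, alternate_goal(center, radius, neurons[winner])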

In the evaluation script t2-tspn.py, the animation can be disabled by setting animate to False.
import SOMSolver as som

animate = False  # set to True to visualize the SOM evolution

planner = som.SOMSolver()
# goals and radius are loaded from the selected problem scenario
path = planner.plan_tour(goals, radius, animate)