Warning

This page is located in archive.

The goal of this lab exercise is to implement a multi-objective evolutionary algorithm (MOEA), in particular the NSGA-II one.

As you will see, it requires modifying only a few parts of the standard evolutionary algorithm that you implemented in the previous lab exercise.

**If you do not complete all parts during the lab time, finish the work at home.**

The first modification concerns the way two candidate solutions are compared with respect to their quality. In the case of single-objective optimization, the comparison of two solutions is based on a single fitness value assigned to each solution: for a minimization problem, the smaller the fitness value, the better the solution; for a maximization problem, the "larger" relation is used analogously.

It is not that straightforward in the case of multi-objective optimization, because multiple quality measures are assigned to each solution. Different MOEAs use different strategies for comparing two solutions. The NSGA-II algorithm compares solutions based on two performance indicators derived from the original objective values (please refer to the lecture slides for details):

- the **non-dominated front number**,
- the **uniqueness** of the solution within its non-dominated front (i.e., the *crowding distance*).

Your first task is to implement a function/method that calculates the non-dominated front number and the crowding distance for all individuals in the population.

Input | Evaluation of the whole population, i.e., the set of objective-value vectors, one for each solution. |
---|---|

Output | The non-dominated front number and crowding distance for each solution. |

Test data without normalization (provided by Antonín Hruška): test_simple_hruska.txt

Test data with normalization (provided by Jan Pikman): small, large

All objectives are minimized.
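The first task can be sketched as follows. This is a minimal Python sketch, not a reference solution: the function and variable names are illustrative, the non-dominated sorting is the naive "peel off fronts" variant rather than Deb's fast book-keeping version, and all objectives are assumed to be minimized, matching the test data above.

```python
from math import inf

def fronts_and_crowding(objs):
    """Given a population as a list of objective-value vectors (all
    objectives minimized), return the non-dominated front number and
    the crowding distance of each solution."""
    n = len(objs)

    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))

    # Naive non-dominated sorting: repeatedly peel off the set of
    # solutions not dominated by any remaining solution.
    front = [None] * n
    remaining = set(range(n))
    f = 0
    while remaining:
        current = [i for i in remaining
                   if not any(dominates(objs[j], objs[i])
                              for j in remaining if j != i)]
        for i in current:
            front[i] = f
        remaining -= set(current)
        f += 1

    # Crowding distance, computed within each front separately.
    dist = [0.0] * n
    n_obj = len(objs[0])
    for level in range(f):
        members = [i for i in range(n) if front[i] == level]
        for m in range(n_obj):
            members.sort(key=lambda i: objs[i][m])
            lo, hi = objs[members[0]][m], objs[members[-1]][m]
            # Boundary solutions of each front get an infinite distance.
            dist[members[0]] = dist[members[-1]] = inf
            if hi == lo:
                continue  # degenerate objective, nothing to normalize
            for k in range(1, len(members) - 1):
                dist[members[k]] += (objs[members[k + 1]][m]
                                     - objs[members[k - 1]][m]) / (hi - lo)
    return front, dist
```

You can verify the output against the test files linked above; a solution's crowding distance is the sum of normalized side lengths of the cuboid spanned by its neighbors in each objective.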

NSGA-II uses a binary tournament operator $t(x_i, x_j)$ that returns true, meaning the $x_i$ is better than $x_j$, if one of the following conditions holds:

- $x_i$ is from the better front than $x_j$, i.e., $front(x_i) < front(x_j)$,
- both $x_i$ and $x_j$ are from the same front, but $x_i$ is more unique than $x_j$ within the front, i.e., $front(x_i) == front(x_j)$ AND $crowding\_distance(x_i) > crowding\_distance(x_j)$.

Your second task is to implement this binary tournament operator.

Input | Two solutions $x_i$ and $x_j$ |
---|---|

Output | True, if $x_i$ is better than $x_j$. False, otherwise. |
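The operator itself is a direct transcription of the two conditions above. A minimal sketch, assuming each solution's front number and crowding distance have already been computed (the function name and the decision to pass the indicators directly, rather than whole solution objects, are illustrative choices):

```python
def crowded_tournament(front_i, dist_i, front_j, dist_j):
    """Return True iff solution i beats solution j under the
    NSGA-II crowded-comparison rule."""
    if front_i < front_j:          # i lies on a better front
        return True
    # same front: the less crowded (more unique) solution wins
    return front_i == front_j and dist_i > dist_j
```

Note that the operator returns False on an exact tie; in that case neither solution is preferred and the tournament winner can be picked arbitrarily.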

The last task is to implement the replacement strategy used in NSGA-II. It takes two populations

- $P_t$ … the current population at time $t$,
- $Q_t$ … the population of solutions newly created from the population $P_t$,

and selects from $P_t \cup Q_t$ a new set of the best solutions forming the population $P_{t+1}$; please see the lecture slides for details.

Input | Populations $P_t$ and $Q_t$ |
---|---|

Output | Population $P_{t+1}$ |
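One compact way to implement the replacement, sketched below under the assumption that the front numbers and crowding distances of the merged population $P_t \cup Q_t$ have already been computed by your first-task routine (the function name and signature are illustrative). Sorting by the key `(front, -dist)` is equivalent to the textbook procedure of copying whole fronts while they fit and truncating the first overflowing front by decreasing crowding distance:

```python
def nsga2_select(merged, front, dist, mu):
    """Select mu survivors from the merged population P_t ∪ Q_t.

    merged -- list of solutions (P_t + Q_t)
    front  -- non-dominated front number of each merged solution
    dist   -- crowding distance of each merged solution
    """
    # Better front first; within a front, larger crowding distance first.
    order = sorted(range(len(merged)), key=lambda i: (front[i], -dist[i]))
    return [merged[i] for i in order[:mu]]
```

Since `sorted` is stable and the comparison is exactly the crowded-comparison order, taking the first `mu` indices yields the same population as the explicit fill-then-truncate loop.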

**Now, you have all ingredients needed to complete the NSGA-II algorithm!**

To test your NSGA-II implementation, you can use the **Multi-objective 0/1 Knapsack Problem**. This problem suits our purposes well, as you can use the simple binary representation together with the mutation and crossover operators that you have already implemented.

For the definition of the single-objective 0/1 Knapsack Problem, please consult Wikipedia.

The multi-objective version is well described in this article. In the MO variant with a $D$-dimensional knapsack, the weight and value of each item $i$ are given by $D$-dimensional vectors $w_i=(w_{i1},\dots, w_{iD})$ and $v_i=(v_{i1},\dots, v_{iD})$. Similarly, the knapsack has a $D$-dimensional capacity vector $W = (W_1, \dots, W_D)$. Assuming the solution is represented by a binary vector $x = (x_1, \ldots, x_m)$, where $m$ is the number of items to choose from, the goal is to

$$\text{maximize } f(x) = (f_1(x), \dots, f_D(x)), $$

where

$$f_d(x) = \sum_{i=1}^m v_{id}x_i,$$

such that the sum of weights in each dimension $d$ does not exceed $W_d$, i.e.

$$\forall d \in \{1,\ldots,D\}: \sum_{i=1}^m w_{id}x_i \leq W_d.$$
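The evaluation of a candidate solution then follows the formulas above directly. A minimal sketch (function name and the choice to return a separate feasibility flag are illustrative; how you handle infeasible solutions, e.g., by repair or penalty, is up to your design):

```python
def evaluate_knapsack(x, values, weights, capacities):
    """Evaluate a binary solution x of the multi-objective 0/1 knapsack.

    values[i][d]   -- value  v_{id} of item i in dimension d
    weights[i][d]  -- weight w_{id} of item i in dimension d
    capacities[d]  -- capacity W_d of the knapsack in dimension d
    Returns the objective vector f(x) and a feasibility flag.
    """
    D = len(capacities)
    # f_d(x) = sum of values of the selected items in dimension d
    f = [sum(v[d] for v, xi in zip(values, x) if xi) for d in range(D)]
    # total weight per dimension, checked against W_d
    w = [sum(wi[d] for wi, xi in zip(weights, x) if xi) for d in range(D)]
    feasible = all(w[d] <= capacities[d] for d in range(D))
    return f, feasible
```

Remember that the objectives here are maximized, while the comparison routines above assume minimization; either negate the objective values before ranking or flip the dominance relation accordingly.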

You can use the Test Problem Suite, which provides various data sets and the best known solutions for each of them.

In case of any issues downloading the test problems from the site above, you can download the problem definitions and Pareto fronts here.

Data sets in Test Problem Suite use the following notation:

- The *n*-th knapsack corresponds to the *n*-th dimension, $n \in \{1,\dots,D\}$.
- "Profit" stands for the value.

courses/a0m33eoa/labs/week_06.txt · Last modified: 2023/11/19 20:26 by xposik