
Constrained optimization

The goal of this lab exercise is to implement an evolutionary algorithm for constrained optimization problems. As with NSGA-II, the whole task consists of only a few modifications to the standard genetic algorithm. In particular, you should implement the two variants of penalty weight adaptation and the stochastic ranking method described below.

If you do not complete all parts during the lab time, finish the work at home.

Penalty weight adaptation

Option 1

The weight $r_g(t)$ is adapted in time as follows:

  • if the best solution in the last $k$ generations was always feasible, then the weight decreases as $r_g(t+1) = (1/\beta_1)\cdot r_g(t)$
  • if the best solution in the last $k$ generations was always infeasible, then the weight increases as $r_g(t+1) = \beta_2 \cdot r_g(t)$
  • otherwise, the weight remains the same, $r_g(t+1) = r_g(t)$

where $\beta_1, \beta_2 > 1$ and $\beta_1 \neq \beta_2$.

Implement this adaptive method.

Input: current weight $r_g(t)$
Input: information about the feasibility of the best solution in each of the last $k$ generations
Output: updated weight $r_g(t+1)$
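
A minimal sketch of this update rule in Python; the function name, the list-of-booleans representation of the feasibility history, and the default values of $\beta_1$ and $\beta_2$ are illustrative assumptions, not part of the assignment.

```python
def update_penalty_weight(r_g, best_was_feasible, beta1=2.0, beta2=3.0):
    """Adapt the penalty weight r_g(t) -> r_g(t+1).

    best_was_feasible: booleans, feasibility of the best solution in each of
                       the last k generations
    beta1, beta2:      user-defined constants, both > 1 and beta1 != beta2
    """
    if all(best_was_feasible):
        return r_g / beta1   # best was always feasible -> decrease the weight
    if not any(best_was_feasible):
        return r_g * beta2   # best was always infeasible -> increase the weight
    return r_g               # mixed history -> keep the weight unchanged
```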

Implement a method for computing the overall fitness of an individual in the form $\psi(x) = f(x) + r_g(t)\cdot\sum_{i=1}^{m+p}G_i(x)$, where

  • $f(x)$ is the objective function value
  • $G_i(x)$ is the size of the $i$-th constraint's violation
Input: $f(x)$
Input: $G_i(x)$ for $i = 1, \dots, m+p$
Input: $r_g(t)$
Output: fitness $\psi(x)$
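
A possible implementation of the penalized fitness; the names are illustrative, and the violations $G_i(x)$ are assumed to be passed in as a sequence of non-negative numbers.

```python
def penalized_fitness(f_value, violations, r_g):
    """Overall fitness psi(x) = f(x) + r_g(t) * sum_i G_i(x).

    f_value:    objective function value f(x)
    violations: sizes of the constraint violations G_i(x), i = 1, ..., m+p
    r_g:        current penalty weight r_g(t)
    """
    return f_value + r_g * sum(violations)
```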

Use the implemented adaptive penalty method within an evolutionary algorithm with the generational replacement strategy.
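
The surrounding loop might look roughly as follows. This is only a sketch: the helpers `objective`, `violations`, `tournament_selection`, `crossover` and `mutate` are assumed to be provided by your own codebase and are not defined on this page.

```python
from collections import deque

def evolve(pop, n_generations, k=5, r_g=1.0):
    """Generational EA sketch with the adaptive penalty of Option 1:
    the whole population is replaced in every generation, and r_g is
    adapted from the feasibility history of the best individual."""
    history = deque(maxlen=k)   # feasibility of the best individual, last k generations
    for t in range(n_generations):
        fitness = [penalized_fitness(objective(x), violations(x), r_g) for x in pop]
        best = min(range(len(pop)), key=lambda i: fitness[i])   # minimization assumed
        history.append(sum(violations(pop[best])) == 0)         # was the best feasible?
        if len(history) == k:
            r_g = update_penalty_weight(r_g, history)
        parents = tournament_selection(pop, fitness, len(pop))  # assumed helper
        pop = [mutate(child) for child in crossover(parents)]   # full replacement
    return pop, r_g
```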

Option 2

Based on this article.

Assume a penalized fitness function $$\psi(x) = f(x) + \alpha\sum_{i=1}^{m+p}G_i(x).$$

The new value $\alpha(t+1)$ is either

  • $\frac1c \cdot \alpha(t)$ if $\tau_t > \tau_{target}$, or
  • $c \cdot \alpha(t)$, otherwise,

where $\tau_t$ is the proportion of feasible individuals in the current population, $c>1$ is a user-defined parameter (a recommended value is around 1.1), and $\tau_{target}$ is the desired proportion of feasible individuals in the population (a recommended value is around 0.6).
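
A minimal sketch of this adaptation step; the names are illustrative, and the recommended values of $c$ and $\tau_{target}$ from above are used as defaults.

```python
def update_alpha(alpha, tau_t, c=1.1, tau_target=0.6):
    """Adapt the penalty coefficient alpha(t) -> alpha(t+1).

    tau_t: proportion of feasible individuals in the current population
    """
    if tau_t > tau_target:
        return alpha / c   # enough feasible individuals -> weaken the penalty
    return alpha * c       # too few feasible individuals -> strengthen the penalty
```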

Stochastic Ranking

This method first sorts the individuals in the current population (or rather determines their ranks) from best to worst using a stochastic Bubble-sort-like procedure. A standard tournament selection can then be applied to such a population, using the rule that the lower the rank of an individual, the better it is.

Implement the stochastic Bubble-sort method; for details, see the lecture slides.

Input: population of individuals, each with its values $f(x)$ and $\sum_{i=1}^{m+p}G_i(x)$
Output: population sorted from best to worst
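
One possible sketch, assuming minimization; the comparison probability `p_f` (0.45 is a commonly used value, not a requirement) decides whether an infeasible pair is compared by the objective or by the violation sum.

```python
import random

def stochastic_ranking(pop, f_values, violation_sums, p_f=0.45):
    """Stochastic bubble-sort ranking: returns the population ordered
    from best to worst.

    f_values:       objective values f(x), one per individual
    violation_sums: sum of constraint violations, one per individual
    """
    order = list(range(len(pop)))
    for _ in range(len(pop)):                       # at most N sweeps
        swapped = False
        for j in range(len(pop) - 1):
            a, b = order[j], order[j + 1]
            both_feasible = violation_sums[a] == 0 and violation_sums[b] == 0
            if both_feasible or random.random() < p_f:
                swap = f_values[a] > f_values[b]                  # compare by f(x)
            else:
                swap = violation_sums[a] > violation_sums[b]      # compare by violations
            if swap:
                order[j], order[j + 1] = order[j + 1], order[j]
                swapped = True
        if not swapped:                             # no swap in a whole sweep -> done
            break
    return [pop[i] for i in order]
```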

Use the implemented stochastic ranking procedure together with the rank-based tournament selection within an evolutionary algorithm with the generational replacement strategy.
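
A sketch of the rank-based tournament selection on a population that has already been sorted best-to-worst (e.g. by the `stochastic_ranking` sketch above); the individual with the lower index, i.e. the better rank, wins each tournament.

```python
import random

def rank_tournament_selection(ranked_pop, n_parents, tournament_size=2):
    """Select n_parents from a population sorted from best to worst."""
    parents = []
    for _ in range(n_parents):
        contestants = random.sample(range(len(ranked_pop)), tournament_size)
        parents.append(ranked_pop[min(contestants)])   # lower index = better rank
    return parents
```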

Experimental evaluation

We shall use real-valued function optimization as the target test problem; in particular, problems g06, g08, g11 and g24 from problem_definitions.

Use the already implemented operators for real-valued representation.

Problem g06:

  • two non-linear inequality constraints $g_1(x)$ and $g_2(x)$
  • the optimal solution lies on the border between the feasible and infeasible regions

Problem g08:

  • two non-linear inequality constraints $g_1(x)$ and $g_2(x)$
  • optimum solution lies inside the feasible area

Problem g11:

  • one non-linear equality constraint $h(x)$
  • use the relaxation of the equality constraint in the form $g \equiv |h(x)|-\epsilon \leq 0$ (see the sketch after this list)
  • recommended value $\epsilon = 0.0015$, but you are free to try others as well
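
A tiny illustrative helper for this relaxation, returning the violation size used in the penalty term; the function name and the use of `max` are assumptions.

```python
def relaxed_equality_violation(h_value, eps=0.0015):
    """Turn an equality constraint h(x) = 0 into g(x) = |h(x)| - eps <= 0
    and return the violation size G(x) = max(0, |h(x)| - eps)."""
    return max(0.0, abs(h_value) - eps)
```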

Problem g24:

  • two non-linear inequality constraints $g_1(x)$ and $g_2(x)$
  • area of feasible solutions consists of two disjoint parts
  • optimum solution lies on the border of the feasible area
