====== Constrained optimization ======

The goal of this lab exercise is to implement an evolutionary algorithm for constrained optimization problems. As with NSGA-II, the whole task again consists of only a few modifications to the standard genetic algorithm. In particular, you should implement two constraint-handling approaches:
  * a method that adapts the penalty coefficients:
    * Option 1: based on the feasibility of the best solution observed in the last $k$ generations, [[https://www.researchgate.net/publication/30823754_A_Genetic_Algorithm_for_the_Multiple-Choice_Integer_Program|A Genetic Algorithm for the Multiple-choice Integer Program]],
    * Option 2: based on the target proportion $\tau_{target}$ of feasible individuals in the population;
  * a method using the so-called //Stochastic Ranking// to sort the individuals in the population according to the objective function and the constraint violation simultaneously, [[https://www.researchgate.net/publication/3418601_Yao_X_Stochastic_ranking_for_constrained_evolutionary_optimization_IEEE_Trans_Evol_Comput_4_284-294|Stochastic Ranking for Constrained Evolutionary Optimization]].

**If you do not complete all parts during the lab time, finish the work at home.**

===== Penalty weight adaptation =====

==== Option 1 ====

The weight $r_g(t)$ is adapted over time as follows:
  * if the best solution in the last $k$ generations was always **feasible**, the weight decreases, $r_g(t+1) = (1/\beta_1)\cdot r_g(t)$,
  * if the best solution in the last $k$ generations was always **infeasible**, the weight increases, $r_g(t+1) = \beta_2 \cdot r_g(t)$,
  * otherwise, the weight remains the same, $r_g(t+1) = r_g(t)$,
where $\beta_1, \beta_2 > 1$ and $\beta_1 \neq \beta_2$.

Implement this adaptive method (a code sketch is given below, after the Stochastic Ranking section).

^ Input | Current weight $r_g(t)$ |
^ Input | Information about the feasibility of the best solution in the last $k$ generations |
^ Output | Updated weight $r_g(t+1)$ |

Implement a method computing the overall fitness of an individual in the form $\psi(x) = f(x) + r_g(t)\sum_{i=1}^{m+p}G_i(x)$, where
  * $f(x)$ is the objective function value,
  * $G_i(x)$ is the size of the violation of the $i$-th constraint.

^ Input | $f(x)$ |
^ Input | $G_i(x)$ for $i = 1, \ldots, m+p$ |
^ Input | $r_g(t)$ |
^ Output | fitness |

Use the implemented **adaptive penalty** method within an evolutionary algorithm with the **generational replacement strategy**.

==== Option 2 ====

Based on [[https://hal.inria.fr/inria-00001273/document|this article]]. Assume a penalized fitness function
$$\psi(x) = f(x) + \alpha\sum_{i=1}^{m+p}G_i(x).$$
The new value $\alpha(t+1)$ is either
  * $\frac1c \cdot \alpha(t)$ if $\tau_t > \tau_{target}$, or
  * $c \cdot \alpha(t)$ otherwise,
where $\tau_t$ is the proportion of feasible individuals in the current population, $c>1$ is a user-defined parameter (a recommended value is around 1.1), and $\tau_{target}$ is the desired proportion of feasible individuals in the population (a recommended value is around 0.6). A code sketch of this update is also given below, after the Stochastic Ranking section.

===== Stochastic Ranking =====

This method first sorts the individuals in the current population from the best to the worst one using a stochastic Bubble-sort-like procedure. On such a sorted population, a standard tournament selection can then be applied, choosing individuals on the basis that **the lower the rank of an individual, the better the individual is**.

Implement the stochastic Bubble-sort method; for details, see the {{:courses:a0m33eoa:en:labs:a0m33eoa_constrainthandling.pdf|lecture slides}}.

^ Input | Population of individuals, each assigned its values $f(x)$ and $\sum_{i=1}^{m+p}G_i(x)$ |
^ Output | Population sorted from the best to the worst one |

Use the implemented **stochastic ranking** procedure together with the **rank-based tournament selection** within an evolutionary algorithm with the **generational replacement strategy**.
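Below is a minimal sketch of such a stochastic bubble sort in Python. It is an illustration only, not a reference implementation: it assumes the objective value and the total constraint violation of an individual are obtained through callables ''f'' and ''g'' (our own interface choice), and it uses the comparison probability $P_f$ from the lecture slides; the Stochastic Ranking paper recommends a value slightly below 0.5, e.g. 0.45.

<code python>
import random

def stochastic_ranking(population, f, g, p_f=0.45, n_sweeps=None):
    """Sort `population` from best to worst by a stochastic bubble sort.

    population ... list of individuals (any objects)
    f(ind)     ... objective value of an individual (minimized)
    g(ind)     ... total constraint violation, sum of G_i(x) >= 0
    p_f        ... probability of comparing by the objective when at least
                   one individual of the pair is infeasible
    """
    ranked = list(population)          # work on a copy
    n = len(ranked)
    if n_sweeps is None:
        n_sweeps = n                   # one sweep per individual by default
    for _ in range(n_sweeps):
        swapped = False
        for i in range(n - 1):
            a, b = ranked[i], ranked[i + 1]
            both_feasible = g(a) == 0 and g(b) == 0
            # compare by the objective if both are feasible, or with
            # probability p_f; otherwise compare by the violation
            if both_feasible or random.random() < p_f:
                out_of_order = f(a) > f(b)
            else:
                out_of_order = g(a) > g(b)
            if out_of_order:
                ranked[i], ranked[i + 1] = b, a
                swapped = True
        if not swapped:                # early stop: nothing moved this sweep
            break
    return ranked
</code>

The rank of an individual is then simply its index in the returned list; the rank-based tournament selection only compares these indices.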
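For the Option 1 weight adaptation and the penalized fitness $\psi(x)$ from the Penalty weight adaptation section, a possible sketch looks as follows. The function names and the form of the feasibility history are our own choices, and the default values of $\beta_1$ and $\beta_2$ are purely illustrative.

<code python>
def update_weight(r_g, best_feasible_history, beta1=2.0, beta2=3.0):
    """Adapt the penalty weight r_g(t) -> r_g(t+1).

    best_feasible_history ... booleans: feasibility of the best solution
                              in each of the last k generations
    beta1, beta2          ... beta1, beta2 > 1 and beta1 != beta2
                              (defaults are illustrative only)
    """
    if all(best_feasible_history):       # always feasible -> decrease weight
        return r_g / beta1
    if not any(best_feasible_history):   # always infeasible -> increase weight
        return r_g * beta2
    return r_g                           # mixed history -> keep unchanged


def penalized_fitness(f_x, violations, r_g):
    """psi(x) = f(x) + r_g(t) * sum_i G_i(x), violations = [G_1(x), ..., G_{m+p}(x)]."""
    return f_x + r_g * sum(violations)
</code>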
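Similarly, a sketch of the Option 2 update of $\alpha$, using the recommended values $c \approx 1.1$ and $\tau_{target} \approx 0.6$ (again, the function name and interface are only illustrative):

<code python>
def update_alpha(alpha, tau_t, tau_target=0.6, c=1.1):
    """Adapt the penalty coefficient alpha(t) -> alpha(t+1).

    tau_t ... proportion of feasible individuals in the current population
    """
    if tau_t > tau_target:
        return alpha / c    # enough feasible individuals -> weaken the penalty
    return alpha * c        # too few feasible individuals -> strengthen the penalty
</code>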
===== Experimental evaluation =====

As test problems, we will use real-valued function optimization, in particular problems g06, g08, g11, and g24 from the {{ :courses:a0m33eoa:cviceni:2006_problem_definitions_and_evaluation_criteria_for_the_cec_2006_special_session_on_constraint_real-parameter_optimization.pdf | problem_definitions}}. Use the already implemented operators for the real-valued representation.

**Problem g06**:
  * two non-linear inequality constraints $g_1(x)$ and $g_2(x)$
  * the optimum solution (the red point) lies on the border between the feasible and infeasible areas
  * the areas defined by the constraints are shown in green

{{:courses:a0m33eoa:cviceni:g06.png?700|}}

**Problem g08**:
  * two non-linear inequality constraints $g_1(x)$ and $g_2(x)$
  * the optimum solution lies inside the feasible area

{{:courses:a0m33eoa:cviceni:g08_detail.png?600|}}

**Problem g11**:
  * one non-linear equality constraint $h(x)$
  * use a relaxation of the equality constraint in the form $g \equiv |h(x)|-\epsilon \leq 0$
  * the recommended value is $\epsilon = 0.0015$, but you are free to try others as well

{{:courses:a0m33eoa:cviceni:g11.png?700|}}

**Problem g24**:
  * two non-linear inequality constraints $g_1(x)$ and $g_2(x)$
  * the area of feasible solutions consists of two disjoint parts
  * the optimum solution lies on the border of the feasible area

{{:courses:a0m33eoa:cviceni:g24.png?700|}}
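All of the methods above need the violation amounts $G_i(x)$ of the individual constraints of these test problems. A minimal sketch of how they can be computed, assuming the inequality constraints $g_i(x) \leq 0$ and equality constraints $h_j(x) = 0$ of a problem are available as Python callables (the ''inequalities''/''equalities'' parameters are a hypothetical interface of our own, not part of the CEC 2006 definitions):

<code python>
def constraint_violations(x, inequalities=(), equalities=(), eps=0.0015):
    """Return the list of violation amounts G_i(x) >= 0 for a solution x.

    inequalities ... callables g_i with the convention g_i(x) <= 0
    equalities   ... callables h_j; each is relaxed to |h_j(x)| - eps <= 0
                     (used for g11, with the recommended eps = 0.0015)
    """
    violations = [max(0.0, g(x)) for g in inequalities]
    violations += [max(0.0, abs(h(x)) - eps) for h in equalities]
    return violations
</code>

An individual is feasible exactly when all returned violations are zero; this is also the test needed for the feasibility history in Option 1 and for the proportion $\tau_t$ in Option 2.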