Multi-Objective Optimisation Flashcards

1
Q

Multiobjective Archiving / Pareto

A

Multi-objective archiving, also known as Pareto archiving or Pareto front maintenance, is a technique used in multi-objective optimization algorithms to store and manage non-dominated solutions found during the search process. Here’s how it works:

  1. Pareto Dominance: In multi-objective optimization, a solution is considered superior to another if it is not worse in any objective and strictly better in at least one objective. This relationship is known as Pareto dominance. A solution that is not dominated by any other solution in the search space is called non-dominated or Pareto optimal.
  2. Archiving: During the optimization process, as solutions are evaluated, non-dominated solutions are stored in an archive. This archive maintains a collection of non-dominated solutions encountered throughout the search.
  3. Archive Management: The archive is continuously updated as new solutions are found. When a new solution is evaluated, it is compared to the solutions in the archive. If the new solution is dominated by any existing solution in the archive, it is discarded. If it dominates any existing solutions, it replaces them in the archive. If it is non-dominated, it is added to the archive.
  4. Archive Size Control: To manage the size of the archive, various strategies can be employed, such as maintaining a fixed-size archive, removing redundant or less diverse solutions, or periodically pruning the archive to retain only the best solutions.
  5. Pareto Front Representation: The solutions stored in the archive collectively form an approximation of the Pareto front, which represents the trade-off between conflicting objectives. The Pareto front provides decision-makers with a set of non-dominated solutions, allowing them to explore different trade-offs and make informed decisions based on their preferences.

Multi-objective archiving is a fundamental component of many multi-objective optimization algorithms, such as evolutionary algorithms (e.g., NSGA-II, SPEA2), particle swarm optimization (e.g., PSO-MO), and genetic algorithms (e.g., MOGA). By maintaining a diverse set of non-dominated solutions, multi-objective archiving enables the exploration of the solution space, facilitating the discovery of trade-off solutions that balance conflicting objectives effectively.
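The dominance test and archive-update rule described above can be sketched in a few lines of Python; minimisation is assumed, and the helper names and toy points are illustrative:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Apply the archive-management rule: discard a dominated candidate,
    remove archive members the candidate dominates, keep non-dominated ones."""
    if any(dominates(a, candidate) for a in archive):
        return archive                      # dominated by the archive: discard
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    return archive

# illustrative two-objective points fed to the archive in sequence
archive = []
for point in [(3, 4), (2, 5), (1, 6), (2, 3), (5, 1)]:
    archive = update_archive(archive, point)
```

After processing all five points the archive holds only the mutually non-dominated vectors (1, 6), (2, 3) and (5, 1): the candidate (2, 3) displaced both (3, 4) and (2, 5) when it arrived.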

2
Q

Multiobjective SA (Simulated Annealing)

A

To perform multiobjective optimisation using SA, the following features must be modified:
Solution acceptance
Annealing schedules
Restarts
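Solution acceptance, for example, can be made dominance-aware. A minimal sketch, assuming a weighted-sum "energy" for the Metropolis test on non-dominating candidates (other MOSA variants scalarise the objectives differently):

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def accept(current_f, candidate_f, T, weights, rng=random):
    """Dominance-aware acceptance: always take a dominating candidate,
    otherwise apply a Metropolis test on a weighted-sum energy difference."""
    if dominates(candidate_f, current_f):
        return True
    dE = sum(w * (c - x) for w, c, x in zip(weights, candidate_f, current_f))
    return dE <= 0 or rng.random() < math.exp(-dE / T)
```

At high temperature the rule accepts most dominated candidates; as the annealing schedule lowers T, acceptance narrows towards dominating moves only.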

3
Q

Multiobjective GA

A

The fact that GAs search from population to population rather than from one individual solution to another makes them very well suited to performing multiobjective optimization. It is easy to conceive of a population being evolved onto the trade-off surface by a suitably configured GA.
With an appropriate archiving scheme in place, the only modification required for a single-objective GA to perform multiobjective optimization is in the selection scheme.

4
Q

Pareto Based Selection: ranking

A

The aim is to find a set of Pareto-optimal solutions (also known as the Pareto front) that represent trade-offs between conflicting objectives.
The nondominated members of the population are identified. These are assigned rank 1.
The rank 1 individuals are removed from consideration and the nondominated members of the remaining population are identified. These are assigned rank 2.
The rank 2 individuals are also removed from consideration and the nondominated members of the remaining population are identified. These are assigned rank 3.
And so on until the entire population has been ranked
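The ranking loop above can be written directly (minimisation assumed; `pareto_rank` is an illustrative name):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(points):
    """Assign each point its Pareto rank: the nondominated members get
    rank 1, are removed, and the remainder is re-ranked until done."""
    ranks = {}
    remaining = list(range(len(points)))
    rank = 1
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        for i in front:
            ranks[i] = rank
        remaining = [i for i in remaining if i not in front]
        rank += 1
    return [ranks[i] for i in range(len(points))]
```

For the two-objective points (1, 5), (2, 2), (5, 1), (3, 3), (4, 4) this yields ranks 1, 1, 1, 2, 3: the first three form the nondominated front, (3, 3) sits behind (2, 2), and (4, 4) behind both.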

5
Q

Multiobjective Tabu Search

A

In single-objective TS, points are compared on the basis of their objective function values.

In multiobjective TS (MOTS), the basis for comparison uses the concepts of dominance and Pareto-equivalence (two solutions are Pareto-equivalent if neither dominates the other).

Initialization: Start with an initial set of candidate solutions, typically generated randomly or using some heuristic method.
Tabu List Initialization: Initialize a Tabu list to keep track of recently visited solutions to prevent cycling. The Tabu list ensures that the search avoids revisiting the same solutions within a certain number of iterations.
Neighborhood Exploration: Generate neighboring solutions by applying local search operators to the current solutions. These operators modify the solutions while preserving their feasibility. For multiobjective problems, these operators should maintain the Pareto dominance relationship among solutions.
Tabu Search Strategy: Use a Tabu search strategy to guide the exploration of the solution space. This strategy involves iteratively moving from one solution to another, considering both the quality of solutions and their tabu status. Aspiration criteria may allow a tabu move to be accepted in order to escape local optima.
Pareto Dominance Check: After generating neighboring solutions, evaluate their quality using Pareto dominance. Discard solutions dominated by others and add non-dominated solutions to the set of candidate solutions.
Tabu List Update: Update the Tabu list with the newly visited solutions to prevent revisiting them in subsequent iterations. The length of time a solution remains in the Tabu list can be controlled to balance exploration and exploitation.
Termination Criteria: Terminate the algorithm based on predefined stopping criteria, such as a maximum number of iterations, convergence of the Pareto front, or reaching a specified computational budget.
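The steps above can be condensed into a skeleton loop. Every name and parameter choice here (`mots`, the tenure, the toy problem) is illustrative, and real implementations add intensification, diversification and aspiration rules:

```python
import random
from collections import deque

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mots(start, neighbours, f, tenure=7, iters=200, seed=0):
    """Skeleton MOTS loop: tabu-filtered neighbourhood, dominance check,
    archive of non-dominated solutions."""
    rng = random.Random(seed)
    tabu = deque(maxlen=tenure)             # recently visited points
    current = start
    archive = [(start, f(start))]
    for _ in range(iters):
        cands = [n for n in neighbours(current) if n not in tabu]
        if not cands:
            break                           # neighbourhood exhausted by the tabu list
        fs = {c: f(c) for c in cands}
        # discard candidates dominated within the candidate set
        nondom = [c for c in cands
                  if not any(dominates(fs[d], fs[c]) for d in cands if d != c)]
        current = rng.choice(nondom)
        tabu.append(current)
        fc = fs[current]
        # archive update: add if new and not dominated; drop members it dominates
        if all(x != current for x, _ in archive) and \
           not any(dominates(fa, fc) for _, fa in archive):
            archive = [(x, fa) for x, fa in archive if not dominates(fc, fa)]
            archive.append((current, fc))
    return archive

# toy problem: minimise (x, (x - 4)^2) over the integers 0..5
front = mots(0, lambda x: [v for v in (x - 1, x + 1) if 0 <= v <= 5],
             lambda x: (x, (x - 4) ** 2), tenure=2, iters=100)
```

The returned archive is always mutually non-dominated, since candidates dominated by the archive are never added and members dominated by a newcomer are removed.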

6
Q

Hooke and Jeeves Move

A

A set of candidate neighbouring solutions is generated by incrementing and decrementing individual control variables
The objective functions for each new point are evaluated and, provided the point is neither Tabu nor in violation of any constraints, it is considered a candidate for the next point in the search.
Each candidate point is classified according to its domination or Pareto-equivalence to the current point:
If there is a single dominating point, it is automatically accepted as the next point
If there are multiple dominating points, the dominated points within that group are removed and one is selected at random from those remaining – the other nondominated points become candidates for intensification
If there are no dominating points, the same procedure is applied to those candidate points which are Pareto-equivalent (PE) to the current point
If there are no PE points, a dominated point is selected in the same fashion
A pattern move strategy is also implemented:
Before every second H&J move, the previous move is repeated. This new point is compared to the current one, and, if it dominates it, is accepted as the next point; if not, a standard H&J move is made.
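The exploratory part of the move (candidate generation and the dominating > Pareto-equivalent > dominated preference) can be sketched as follows; the Tabu/constraint filtering and the pattern-move bookkeeping are omitted, and all names are illustrative:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def hj_candidates(x, step):
    """Increment and decrement each control variable in turn."""
    cands = []
    for i in range(len(x)):
        for d in (step, -step):
            y = list(x)
            y[i] += d
            cands.append(tuple(y))
    return cands

def hj_move(x, f, step, rng=None):
    """One exploratory move: prefer dominating candidates, then
    Pareto-equivalent ones, then fall back to a dominated candidate."""
    rng = rng or random.Random(0)
    fx = f(x)
    cands = hj_candidates(x, step)
    dom = [c for c in cands if dominates(f(c), fx)]
    if dom:
        # keep only the nondominated members of the dominating group
        best = [c for c in dom
                if not any(dominates(f(d), f(c)) for d in dom if d != c)]
        return rng.choice(best)
    pe = [c for c in cands
          if not dominates(f(c), fx) and not dominates(fx, f(c))]
    return rng.choice(pe) if pe else rng.choice(cands)
```

For a one-variable problem with objectives (x², (x−2)²), a move from x = 3 with step 1 selects x = 2, the single dominating candidate.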

7
Q

Performance Measures

A

“Time” to reach the optimum (if known). “Time” can be measured by the amount of c.p.u. time expended or by the number of objective function evaluations required.

The value of the objective function for the best solution found after a specified length of (c.p.u.) time or a specified number of objective function evaluations is a more generally useful performance measure.

For stochastic optimization methods, several (at least 25, preferably 50+) runs should be made using different random number sequences and, if the starting point can affect the run, different initial solutions, and the outcomes averaged to measure performance.

Once an appropriate time over which to compare performances has been identified, then, for stochastic search methods, two measures are of interest:
The value of the best objective found averaged over multiple runs;
The standard deviation in the value of the best objective found over multiple runs
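A sketch of this measurement procedure, using a toy random search as the stochastic method under test; the `optimiser(seed, budget)` interface is an assumption:

```python
import random
import statistics

def measure(optimiser, n_runs=25, budget=1000):
    """Run a stochastic optimiser with different random seeds and report
    the mean and standard deviation of the best objective value found."""
    bests = [optimiser(seed=s, budget=budget) for s in range(n_runs)]
    return statistics.mean(bests), statistics.stdev(bests)

def random_search(seed, budget):
    """Toy stochastic method: best value of x^2 over random x in [-1, 1]."""
    rng = random.Random(seed)
    return min(rng.uniform(-1.0, 1.0) ** 2 for _ in range(budget))

mean_best, sd_best = measure(random_search, n_runs=25, budget=200)
```

With the budget fixed, the mean measures typical solution quality and the standard deviation measures the method's reliability across runs.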

8
Q

Dissimilarity Archiving

A

Store the best L solutions located, where L may be of the order of 25.
The best 25 solutions may well be very similar and therefore this archive may not provide very much information about the rest of the search space explored.
This disadvantage may be overcome by storing the best L solutions with a minimum level of dissimilarity.
Dissimilarity archiving:
Needs a measure of dissimilarity between solutions
Needs two dissimilarity thresholds: Dmin and Dmax
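A possible sketch, using Euclidean distance as the dissimilarity measure and only the Dmin threshold; how Dmax is used (e.g. to force acceptance of very dissimilar solutions) is not specified on this card, so it is left out:

```python
def dissimilarity(a, b):
    """Euclidean distance in decision space (one possible measure)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def update(archive, x, fx, L=25, d_min=0.5):
    """Keep the best L solutions subject to a minimum pairwise dissimilarity.
    A newcomer within d_min of an archived solution can only replace it when
    better; otherwise it competes for a free slot or against the worst member."""
    for i, (y, fy) in enumerate(archive):
        if dissimilarity(x, y) < d_min:
            if fx < fy:
                archive[i] = (x, fx)        # better solution in the same region
            return archive
    if len(archive) < L:
        archive.append((x, fx))
    else:
        worst = max(range(L), key=lambda i: archive[i][1])
        if fx < archive[worst][1]:
            archive[worst] = (x, fx)
    return archive
```

The Dmin filter is what prevents the archive from filling up with near-identical copies of the single best solution.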

9
Q

Application in Engineering Design

A

Formulating Objective Functions
Objective metrics

Constraints
Hard constraints: the solution is infeasible once violated
Penalty terms
Soft constraints
Adding a constraint may introduce a discontinuity; relaxing it as a soft constraint allows the optimiser to pass through the virtual discontinuity

All objective functions will be subject to the same set of constraints
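A quadratic penalty term is one common way to soften a hard constraint; the multiplier `mu` and the quadratic form are assumptions, and all names are illustrative:

```python
def penalised(f, constraints, mu=1000.0):
    """Wrap objective f with quadratic penalty terms for soft constraints.
    Each g in constraints should satisfy g(x) <= 0 when feasible; violations
    add mu * g(x)^2 so the optimiser can cross mildly infeasible regions."""
    def fp(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + mu * violation
    return fp

# example: minimise x^2 subject to x >= 1, treated as a soft constraint
def objective(x):
    return x ** 2

def g(x):
    return 1.0 - x                 # g(x) <= 0  is equivalent to  x >= 1

fp = penalised(objective, [g], mu=10.0)
```

Since the same wrapper can be applied to each objective function, all of them remain subject to the same set of constraints, as the card notes.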
