Intro to Optimisation Flashcards

1
Q

3 Elements

A

Objective function: the function to be optimised
A function that connects the decision variables to the criteria
Decision variables: control variables that can be changed
A problem usually involves several parameters, of which some are very important for the proper working of the solution
Other parameters usually remain fixed, or vary in relation to the important parameters
The choice of the important parameters in an optimisation problem is very much dependent on the user
The rule of thumb is to choose as few variables as possible
Depending on feedback from the optimisation, more variables can be added or replaced to improve the procedure
Constraints: the extent to which the variables can be changed
Inequality constraints
Equality constraints: harder to handle and should be avoided where possible
The number of equality constraints should be kept as low as possible
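The three elements can be made concrete with a small Python sketch. The problem here (minimise the surface area of an open box of fixed volume) and all numbers are illustrative assumptions, not from the card:

```python
def objective(w, d, h):
    # Objective function: surface area of an open-top box, to be minimised
    return w * d + 2 * h * (w + d)

def feasible(w, d, h, volume=32.0):
    # Constraints: positivity (inequality constraints) and a fixed
    # volume (an equality constraint, which is harder to handle)
    return w > 0 and d > 0 and h > 0 and abs(w * d * h - volume) < 1e-9

# Decision variables: width, depth, height of the box
w, d, h = 4.0, 4.0, 2.0
print(feasible(w, d, h), objective(w, d, h))
```

The same split (objective, decision variables, constraints) applies whatever solver is eventually used.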

2
Q

Methodology

A

Identify the need

Choose decision variables

Formulate constraints

Formulate the objective function

Choose an optimisation algorithm

Obtain solution
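The steps above can be walked through on a toy problem (minimise (x - 3)^2 on [0, 10]); the problem, bounds, and choice of plain random search as the algorithm are all illustrative assumptions:

```python
import random

# Steps 1-2: the need is to find the x with lowest cost;
# x is the single decision variable.

def objective(x):
    # Step 4: formulate the objective function
    return (x - 3.0) ** 2

# Step 3: formulate constraints (simple bounds on x)
LOWER, UPPER = 0.0, 10.0

# Step 5: choose an optimisation algorithm (random search, for
# illustration only) and step 6: obtain a solution.
random.seed(1)
best_x = min((random.uniform(LOWER, UPPER) for _ in range(5000)),
             key=objective)
print(round(best_x, 2))
```

In practice the algorithm in step 5 would be one of the methods on the later cards (simulated annealing, a genetic algorithm, Hooke and Jeeves, tabu search, ACO).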

3
Q

Classification

A

Deterministic: compute the solution by a fixed, repeatable procedure, typically using derivative (gradient) information; the same inputs always give the same result
Heuristics: stochastic search methods that don’t require information about derivatives of the function
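The contrast can be shown on a toy function; the function, learning rate, and sample counts are illustrative assumptions:

```python
import random

def f(x):
    return (x - 2.0) ** 2

def df(x):
    # Derivative of f, used only by the deterministic method below
    return 2.0 * (x - 2.0)

# Deterministic: gradient descent, same answer on every run
x = 0.0
for _ in range(100):
    x -= 0.1 * df(x)

# Heuristic: random search, needs no derivative information
random.seed(0)
x_h = min((random.uniform(-5, 5) for _ in range(2000)), key=f)

print(round(x, 4), round(x_h, 2))
```

Both approach the minimum at x = 2, but only the first relies on df.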

4
Q

Simulated Annealing

A

Inspired by annealing in metallurgy: a metal is heated to a high temperature quickly and then gradually cooled. At high temperatures the atoms move fast; when the temperature is reduced, their kinetic energy decreases as well. At the end of the annealing process the atoms fall into a more ordered state, and the material is more ductile and easier to work with.
The process starts with a high-energy state (an initial solution) and gradually lowers the temperature (a control parameter) until it reaches a state of minimum energy (the optimal solution).
The algorithm iteratively improves the current solution by randomly perturbing it and accepting the perturbation with a certain probability. The probability of accepting a worse solution is initially high and gradually decreases as the number of iterations increases.
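A minimal sketch of this loop; the test function, initial temperature, cooling rate, and step size are illustrative assumptions:

```python
import math
import random

def anneal(f, x0, t0=10.0, cooling=0.99, steps=3000, seed=0):
    rng = random.Random(seed)
    x, t = x0, t0
    for _ in range(steps):
        cand = x + rng.uniform(-1.0, 1.0)   # random perturbation
        delta = f(cand) - f(x)
        # Always accept improvements; accept worse moves with a
        # probability that falls as the temperature drops.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        t *= cooling                        # cooling schedule
    return x

best = anneal(lambda x: (x - 4.0) ** 2, x0=-10.0)
print(round(best, 2))
```

The exp(-delta / t) acceptance rule is the Metropolis criterion: at high t almost anything is accepted (exploration), at low t only improvements survive (exploitation).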

5
Q

Genetic Algorithm

A

Population based technique which simulates the genetic evolution of a species
Initialization: Generate an initial population of random individuals, each representing a possible solution to the problem.
Evaluation: Calculate the fitness score of each individual, which measures how well it solves the problem.
Selection: Select a subset of individuals from the current population, based on their fitness scores, to be the parents of the next generation.
Crossover: Combine two or more parents to create new offspring, by exchanging some of their genetic information (such as bits, characters, or numbers).
Mutation: Introduce some random changes in the offspring, by flipping, swapping, or altering some of their genetic information.
Replacement: Replace the current population with the new offspring, or keep some of the best individuals from the current population.
Termination: Check if a stopping criterion is met, such as reaching a maximum number of generations, finding an optimal solution, or reaching a convergence point. If not, go back to the evaluation step and repeat the process.
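The steps above can be sketched on the toy "OneMax" problem (maximise the number of 1-bits in a bitstring); the population size, tournament selection, one-point crossover, and mutation rate are illustrative choices:

```python
import random

def fitness(bits):
    # Evaluation: count of 1-bits (the OneMax toy objective)
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    # Initialization: a population of random individuals
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Selection: tournament of two, fitter parent wins
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            # Crossover: one-point exchange of genetic information
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Mutation: occasionally flip a bit
            for i in range(n_bits):
                if rng.random() < 0.01:
                    child[i] ^= 1
            nxt.append(child)
        # Replacement with elitism: carry over the best individual
        nxt[0] = max(pop, key=fitness)
        pop = nxt
    # Termination here is simply a fixed number of generations
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

With elitism the best fitness never decreases from one generation to the next.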

6
Q

Hooke and Jeeves

A

1. Choose a starting point
2. Record the current point as the base point
3. For each variable in turn, increase it by its increment; if this reduces the function, retain the change, otherwise reverse it and re-evaluate; if neither direction gives an improvement, retain the original value
4. If step 3 resulted in an improvement, go to step 5. Otherwise, stop if each increment is smaller than the required tolerance; if some are not, reduce the increments and go back to step 2
5. Return to step 3

Exploratory Move: The algorithm starts by performing an exploratory move in the search space. This move typically involves a small step size along each coordinate direction. The function value at this new point is compared with the current best solution.
Pattern Move: If the exploratory move leads to an improvement in the function value, the algorithm continues in the same direction, taking a larger step size (the pattern move). This step modifies the current solution by a certain amount along each coordinate direction.
Shrinkage: If the pattern move does not lead to an improvement, the algorithm shrinks the step size and tries again. This process continues until a step size is found that leads to an improvement or until a termination criterion is met.
Convergence: The algorithm converges when the step size becomes sufficiently small or when a predefined stopping criterion is satisfied, such as reaching a maximum number of iterations or a specified tolerance level for the function value.
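A compact sketch of these four moves on a 2-D quadratic; the test function, initial step size, shrink factor, and tolerance are illustrative assumptions:

```python
def hooke_jeeves(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    base = list(x0)
    while step > tol:
        # Exploratory move: probe +/- step along each coordinate
        trial = list(base)
        for i in range(len(trial)):
            for delta in (step, -step):
                cand = list(trial)
                cand[i] += delta
                if f(cand) < f(trial):
                    trial = cand
                    break
        if f(trial) < f(base):
            # Pattern move: keep going in the improving direction
            pattern = [2 * t - b for t, b in zip(trial, base)]
            base = pattern if f(pattern) < f(trial) else trial
        else:
            # Shrinkage: no improvement, so reduce the step size
            step *= shrink
    # Convergence: loop ends once the step is below the tolerance
    return base

xmin = hooke_jeeves(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                    [5.0, 5.0])
print([round(c, 3) for c in xmin])
```

No derivatives are evaluated anywhere, which is what makes this a direct (pattern) search method.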

7
Q

Tabu Search

A

Local Search based technique guided by the use of adaptive or flexible memory structures
Short Term Memory holds the Tabu locations
Medium Term Memory contains the non-dominated solutions
Long Term Memory keeps information about the control variable space
SSR: Step Size Reduction restarts the search from the best solution found, with a reduced step size
SD: Search Diversification directs the search to underexplored regions
SI: Search Intensification focuses the search on areas of the search space with good objective function values
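A deliberately tiny sketch of the short-term memory only; the integer neighbourhood, tenure, and toy objective are illustrative assumptions, and the medium/long-term memories and SSR/SD/SI strategies are omitted:

```python
from collections import deque

def tabu_search(f, x0, iters=50, tenure=5):
    current = best = x0
    tabu = deque(maxlen=tenure)      # short-term memory
    for _ in range(iters):
        # Neighbours are x-1 and x+1; tabu moves are skipped
        # unless they would beat the best solution (aspiration).
        neighbours = [n for n in (current - 1, current + 1)
                      if n not in tabu or f(n) < f(best)]
        if not neighbours:
            break
        current = min(neighbours, key=f)
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best

best_int = tabu_search(lambda x: (x - 7) ** 2, x0=0)
print(best_int)
```

Note the key tabu behaviour: after reaching the minimum, the recency memory forbids immediately revisiting it, forcing the search to move elsewhere while `best` records the solution found.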

8
Q

Ant Colony Optimisation

A

Ant Colony Optimization (ACO) is a metaheuristic algorithm inspired by the foraging behavior of ants. Here’s a simplified explanation of how it works:

  1. Initialization: Initialize a colony of artificial ants and place them at various locations in the solution space.
  2. Construction of Solutions: Each ant constructs a solution by iteratively selecting components based on a probabilistic decision rule. The decision rule is influenced by pheromone trails and heuristic information. Pheromone trails represent the attractiveness of solution components based on their historical performance, while heuristic information guides ants towards promising regions of the solution space.
  3. Solution Evaluation: Evaluate the quality of solutions constructed by ants using an objective function.
  4. Update Pheromone Trails: After all ants have constructed solutions, update the pheromone trails based on the quality of solutions found. Good solutions contribute to reinforcing the corresponding solution components’ pheromone trails, while poor solutions lead to a weakening of pheromone trails.
  5. Global Pheromone Update: Optionally, perform a global pheromone update to further regulate the pheromone levels and prevent stagnation. This could involve evaporation to prevent pheromone buildup and encourage exploration.
  6. Termination: Terminate the algorithm when a stopping criterion is met, such as reaching a maximum number of iterations or finding a satisfactory solution.
  7. Output: Return the best solution found by the ants during the optimization process.

ACO iteratively explores the solution space by constructing solutions based on local information (pheromone trails and heuristic information) and updating the solution quality-guiding pheromone trails accordingly. Over time, the algorithm converges towards promising regions of the solution space, effectively solving combinatorial optimization problems. ACO has been successfully applied to various optimization problems, including routing, scheduling, and graph optimization.
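A deliberately tiny sketch of the loop above: ants repeatedly pick one of several alternative paths, pheromone reinforces short paths and evaporates everywhere. The path lengths, colony size, and evaporation rate are illustrative assumptions:

```python
import random

def aco(lengths, ants=10, iters=100, rho=0.1, seed=0):
    rng = random.Random(seed)
    # Initialization: equal pheromone on every path
    pheromone = [1.0] * len(lengths)
    for _ in range(iters):
        # Construction: each ant picks a path with probability
        # proportional to pheromone * heuristic (1 / length).
        weights = [p / l for p, l in zip(pheromone, lengths)]
        choices = [rng.choices(range(len(lengths)),
                               weights=weights)[0]
                   for _ in range(ants)]
        # Evaporation, then deposit proportional to solution quality
        pheromone = [p * (1.0 - rho) for p in pheromone]
        for c in choices:
            pheromone[c] += 1.0 / lengths[c]
    # Output: the path with the strongest pheromone trail
    return max(range(len(lengths)), key=lambda i: pheromone[i])

best_path = aco(lengths=[5.0, 2.0, 9.0])
print(best_path)
```

Evaporation (the `rho` term) is what prevents early deposits from locking the colony into a poor path.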

9
Q

Hybrid

A

Mixing TS and SA
In SA, when the search stalls, the system is heated again and the search restarted at a higher temperature – “strategic oscillation”
In TS, moves for which the variation of the objective function does not exceed a certain threshold are forbidden. The value of the threshold is decreased according to some “cooling schedule”
Memetic Algorithms (also known as Genetic Local Search) are another hybrid, combining a genetic algorithm with local search

By integrating TS and SA, you can leverage their complementary strengths to improve the search process. TS can provide intensification by exploiting local search, while SA can offer diversification through its probabilistic acceptance of worse solutions, allowing for a more comprehensive exploration of the solution space. This hybrid approach could enhance the efficiency and effectiveness of optimization in various domains, including scheduling, routing, and resource allocation.
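A hypothetical sketch of one such TS/SA mix: a tabu list forbids recently visited points (TS), while an acceptance threshold lowered by a cooling schedule (SA-style) decides which remaining moves to take. The objective, neighbourhood, tenure, and schedule are all illustrative assumptions:

```python
import random
from collections import deque

def hybrid(f, x0, iters=300, tenure=7, threshold=10.0,
           cooling=0.97, seed=0):
    rng = random.Random(seed)
    current = best = x0
    tabu = deque(maxlen=tenure)
    for _ in range(iters):
        cand = current + rng.choice([-2, -1, 1, 2])
        # TS part: recently visited points are forbidden;
        # SA part: accept any non-tabu move whose objective
        # change stays below the current threshold.
        if cand not in tabu and f(cand) - f(current) < threshold:
            current = cand
            tabu.append(cand)
            if f(current) < f(best):
                best = current
        threshold *= cooling          # the "cooling schedule"
    return best

result = hybrid(lambda x: (x - 3) ** 2, x0=-20)
print(result)
```

Early on, the large threshold lets the search roam (diversification); as it cools, only improving moves pass (intensification), while the tabu list keeps blocking immediate backtracking throughout.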
