Lecture 4 - Move Acceptance in Local Search Metaheuristics Flashcards

1
Q

What are the three different types of parameter setting for metaheuristics?

A

Static - either there is no parameter to set, or parameters are set to a fixed value, e.g. intensity of mutation IOM = 5
Dynamic - parameter values vary with respect to time/iteration count.
Adaptive - given the same candidate and current solutions at the same elapsed time or iteration count, the acceptance threshold or acceptance probability is not guaranteed to be the same, because one or more components depend on the search history.

2
Q

What is Threshold move acceptance?

A

Determine a threshold in the vicinity of a chosen reference quality, e.g. the quality of the best solution found so far or of the current solution, and accept every candidate whose objective value is below that threshold (assuming minimisation).
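A minimal sketch of the rule for minimisation, where candidate_value and reference_value are hypothetical objective values and epsilon sets how far above the reference the threshold lies:

```python
def threshold_accept(candidate_value, reference_value, epsilon):
    """Threshold acceptance (minimisation): accept any candidate whose
    objective value lies below the reference quality plus epsilon."""
    return candidate_value < reference_value + epsilon
```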

3
Q

What are some examples of threshold move acceptance?

A

Static - accept a worsening solution if the worsening of the objective value is no more than a fixed amount
Dynamic - Great Deluge or Flex Deluge
Adaptive - Extended Great Deluge, Modified Great Deluge

4
Q

How does Great Deluge work?

A

Choose an initial solution
Choose a rain speed (decay rate)
Choose the initial water level
Then repeat this loop:
Choose a new solution which is a perturbation of the old one
Compute the objective value of the new solution
If the value is smaller than the water level, the candidate replaces the current solution and the water level is lowered by the rain speed.
Otherwise the current solution is kept. The loop terminates once there has been no improvement in quality for too long, or too many iterations have elapsed.
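A compact sketch of this loop for minimisation; perturb and f are hypothetical problem-specific callables, and the no-improvement stopping test is left out for brevity:

```python
def great_deluge(initial, perturb, f, rain_speed, initial_level, max_iters):
    """Great Deluge (minimisation): accept any candidate whose objective
    value is below the water level, lowering the level by the pre-set
    rain speed on each accepted move."""
    current, level = initial, initial_level
    for _ in range(max_iters):
        candidate = perturb(current)     # perturb the current solution
        if f(candidate) < level:         # below the water level?
            current = candidate          # candidate replaces the current solution
            level -= rain_speed          # the water level drops
    return current
```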

5
Q

How does the Extended Great Deluge work?

A

Same as standard Great Deluge, but feedback is received during the search and the decay rate is updated/reset whenever there has been no improvement for a long time.

6
Q

What are some examples of stochastic move acceptance?

A

Static - naive acceptance: P is fixed, e.g. P = 1.0 if improving, else P = 0.5
Dynamic - Simulated Annealing: P changes over time as a function of the difference in quality between the candidate and current solutions; the temperature parameter changes dynamically
Adaptive - Simulated Annealing with reheating: P is modified by increasing the temperature from time to time, causing a partial restart that raises the probability of accepting non-improving solutions
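For the static case, naive acceptance is a one-liner (a sketch for minimisation, with delta = f(candidate) - f(current)):

```python
import random

def naive_accept(delta, p_worse=0.5):
    """Always accept improving moves (delta < 0); accept worsening
    moves with a fixed probability p_worse."""
    return delta < 0 or random.random() < p_worse
```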

7
Q

What is Simulated Annealing?

A

A stochastic local search algorithm inspired by the physical process of annealing (letting something cool down over time, rather than rushing the cooling process)

8
Q

How does Simulated Annealing work?

A

Generates an initial solution and initialises the temperature to T0
Then repeats this loop:
Chooses a neighbouring solution of the current solution.
Computes the quality difference delta = f(candidate) - f(current).
Then, if either delta is smaller than 0 (the candidate is better), or a random value drawn uniformly from [0, 1) is smaller than e^(-delta/T), it accepts the candidate (the Metropolis criterion).
It then updates the temperature according to the cooling schedule and keeps repeating the loop until the termination criteria are satisfied.
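A minimal sketch of this loop for minimisation; neighbour, f and cool are hypothetical problem-specific callables, with cool implementing the cooling schedule:

```python
import math
import random

def simulated_annealing(initial, neighbour, f, t0, cool, max_iters):
    """Simulated Annealing with the Metropolis acceptance criterion."""
    current, temperature = initial, t0
    for _ in range(max_iters):
        candidate = neighbour(current)
        delta = f(candidate) - f(current)   # positive delta = worsening move
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate             # accept the candidate
        temperature = cool(temperature)     # cooling schedule update
    return current
```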

9
Q

What does the temperature mean in Simulated Annealing?

A

T is initially high - many inferior moves are accepted
T is low - inferior moves are nearly always rejected
As the temperature decreases, the probability of accepting worsening moves decreases.

10
Q

What are the 4 parts that make up the cooling schedule?

A

Starting temperature
Final temperature
Temperature decrement
Iterations at each temperature

11
Q

What are the features of starting and final temperature?

A

Starting temperature:
Hot enough to allow almost all neighbouring moves to be accepted
Not so hot that the search behaves like random search for a long time
In practice, estimate a suitable starting temperature (see the sketch below)
Final temperature:
Theoretically 0; in practice, however, this is not required
When T is low, the probability of accepting a worse move is already almost 0, so the search can stop there
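One common way to estimate a starting temperature (an assumption here, not stated in the card) is to sample the objective-value changes of some random moves and solve the Metropolis criterion for the temperature that accepts an average worsening move with a target probability p0:

```python
import math
import statistics

def estimate_t0(sampled_deltas, p0=0.8):
    """Solve p0 = exp(-mean_delta / T0) for T0, where mean_delta is the
    average worsening observed over sampled random moves and p0 is the
    desired initial acceptance probability (0.8 is illustrative)."""
    mean_delta = statistics.mean(abs(d) for d in sampled_deltas)
    return -mean_delta / math.log(p0)
```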

12
Q

What are the three types of temperature decrement strategies?

A

Linear: T = T - x (x being an arbitrary fixed value)
Geometric: T = T * alpha (alpha is typically in the interval [0.9, 0.99])
Lundy-Mees: T = T / (1 + beta * T)
One iteration at each T, but T decreases very slowly; beta is typically a small value close to 0, e.g. 0.0001
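The three rules written as cooling functions (parameter defaults are illustrative); any of them could be passed as the cool argument of the Simulated Annealing sketch above:

```python
def linear(t, x=1.0):
    return t - x                 # subtract a fixed amount x each step

def geometric(t, alpha=0.95):
    return t * alpha             # alpha typically in [0.9, 0.99]

def lundy_mees(t, beta=0.0001):
    return t / (1 + beta * t)    # very slow decrease; one iteration per T
```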

13
Q

What are the features of iterating at each temperature?

A

One iteration at each T
A constant number of iterations at each T
Dynamically change the number of iterations at each T (see the sketch below):
At higher temperatures - fewer iterations
At lower temperatures - more iterations, so the local optimum is fully exploited
Reheating - if stuck at a local optimum for a while, increase the current temperature at a certain rate
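A sketch of the last two ideas; the scaling rule and the reheating factor are illustrative assumptions, not from the card:

```python
def iterations_at(t, t0, base_iters=10):
    """Fewer inner iterations at high temperatures, more at low ones so
    the local optimum can be fully exploited."""
    return max(1, round(base_iters * t0 / t))

def reheat(t, factor=1.5):
    """If stuck at a local optimum for a while, raise the temperature so
    worsening moves become acceptable again."""
    return t * factor
```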

14
Q

What parameter tuning methods are there?

A

Traditional approaches e.g. use of an arbitrary setting or trial & error
Sequential Tuning - fix parameter values successively
Design of experiments
Meta-optimisation - use a metaheuristic to obtain ‘optimal’ parameter settings

15
Q

What does Design of Experiments mean?

A

A systematic method (controlled experiments) for determining the relationship between the factors affecting a process (its controllable and uncontrollable input variables), their levels (settings), and the response (output) of that process.

16
Q

What types of Sampling are there for Design of Experiments?

A

Random
Latin Hyper-cube
Orthogonal

17
Q

What is Random Sampling?

A

Generate each of the M sample points independently, e.g. uniformly at random.

18
Q

What is Latin Hyper-cube sampling?

A

Decide the number of sample points M for the N variables, divide each variable's range into M intervals, and place the points so that each row and column contains exactly one sample point - i.e. for each sample point, remember in which row and column it was taken.
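A minimal sketch of Latin Hypercube sampling on the unit cube, assuming M points and N variables:

```python
import random

def latin_hypercube(m, n):
    """Draw m points in [0, 1)^n so that each of the m equal-probability
    intervals along every dimension contains exactly one point."""
    # Independently permute the interval indices 0..m-1 per dimension.
    perms = [random.sample(range(m), m) for _ in range(n)]
    return [[(perms[d][i] + random.random()) / m for d in range(n)]
            for i in range(m)]
```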

19
Q

What is Orthogonal Sampling?

A

The sample space is divided into equally probable subspaces. Sample points are then drawn from all subspaces simultaneously, ensuring they form an ensemble of Latin Hypercube samples.

20
Q

What are Taguchi Orthogonal Arrays?

A

They are highly fractional orthogonal designs, which can be used to estimate main effects using only a few experimental runs (which can consist of multiple trials)

21
Q

What are the main steps for Taguchi Orthogonal Arrays?

A

Selection of control parameters (independent variables/factors)
Selection of number of level settings for each parameter
Select a suitable orthogonal array based on the number of parameters and levels
Conduct the experiments using the algorithm on the selected subset of test instances
Analyse the results
Determine the optimum levels for the individual parameters
Confirmation experiment - use the same configuration for the rest of the experiments