Lecture 14 - Single Global Part 2 Flashcards

1
Q

What is Simulated Annealing?

A

Simulated Annealing (SA) is a single-state global optimisation algorithm inspired by the physical process of annealing in metallurgy, where metal is slowly cooled to allow atoms to settle into a stable, low-energy structure.

* In Hill Climbing, only better moves are accepted.
* In Simulated Annealing, worse solutions are sometimes accepted, depending on how much worse they are and the current “temperature” t.

2
Q

What is the Acceptance Probability Function?

A

REFER TO NOTES
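
As a reference point, the standard choice is the Metropolis criterion, which is very likely the formula the notes give. Assuming a maximisation problem with current solution x, candidate x′, quality function f, and temperature t:

```latex
P(\text{accept } x') =
\begin{cases}
1 & \text{if } f(x') \ge f(x) \\[4pt]
\exp\!\left(\dfrac{f(x') - f(x)}{t}\right) & \text{otherwise}
\end{cases}
```

For minimisation the sign flips, so a worse move is accepted with probability exp(−(f(x′) − f(x))/t).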

3
Q

What are the advantages/disadvantages of Simulated Annealing?

A

Advantages:
- Can escape local optima
- Simple to implement
- Easy to tune for a wide variety of problems

Disadvantages:
- Choosing a cooling schedule can be tricky
- Can be slow to converge
- Doesn’t “learn” from previous steps (no memory)

4
Q

How should we understand the temperature t as a variable, and what is its role?

A

t = infinity -> accept anything
t = 0 -> accept only better moves
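
These two extremes follow directly from the Metropolis acceptance probability (card 2). For a worse move with Δ = f(x′) − f(x) < 0:

```latex
\lim_{t \to \infty} e^{\Delta / t} = e^{0} = 1 \quad \text{(accept anything)},
\qquad
\lim_{t \to 0^{+}} e^{\Delta / t} = 0 \quad \text{(accept only better moves)}.
```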

5
Q

Algorithm 13 - Simulated Annealing

A

REFER TO NOTES
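
The notes' Algorithm 13 is not reproduced here; the following is a minimal Python sketch of the usual SA loop. It assumes maximisation, a problem-specific tweak() neighbour function, and a geometric cooling schedule (the values of t and alpha are illustrative tuning choices, not from the lecture):

```python
import math
import random

def simulated_annealing(initial, quality, tweak, t=1.0, alpha=0.95, iters=10000):
    """Minimal SA sketch (maximisation). quality() scores a solution and
    tweak() returns a random neighbour; both are problem-specific."""
    current = initial
    best = current
    for _ in range(iters):
        candidate = tweak(current)
        delta = quality(candidate) - quality(current)
        # Better moves are always accepted; worse moves are accepted
        # with probability exp(delta / t) (the Metropolis criterion).
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = candidate
        if quality(current) > quality(best):
            best = current
        t *= alpha  # geometric cooling: t shrinks towards 0 each step
    return best
```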

6
Q

What is the Cooling Schedule?

A

t is decreased according to some schedule, which is normally tuneable.
REFER TO NOTES FOR FORMULA
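
The formula in the notes is not reproduced here, but it is most likely one of these common schedules (α, β, and c are tuneable constants; which schedule the lecture uses is an assumption):

```latex
\text{geometric: } t_{k+1} = \alpha\, t_k \ (0 < \alpha < 1),
\qquad
\text{linear: } t_{k+1} = t_k - \beta,
\qquad
\text{logarithmic: } t_k = \frac{c}{\ln(1 + k)}.
```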

7
Q

Why is the cooling schedule important?

A

Why this matters:
* If cooling is too fast: you lose the ability to escape bad local optima (algorithm becomes greedy too early).
* If cooling is too slow: very slow convergence and wasted compute.

8
Q

What is a Tabu List?

A
* A memory-based optimisation technique: it stores past solutions and uses them to avoid revisiting the same areas of the search space.
* Based on a “don’t go back” rule: previously visited solutions (or moves) are marked tabu (taboo/forbidden) for a period.
* We don’t store all visited solutions forever, only a recent history (l entries).
* Old solutions are removed automatically (queue behaviour; see the sketch below).
* This allows the algorithm to revisit old areas after a while → supports a kind of dynamic memory reset.
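
A minimal Python sketch of that queue behaviour, assuming solutions are hashable and an illustrative history length of l = 50:

```python
from collections import deque

l = 50                     # illustrative history length, not from the lecture
tabu = deque(maxlen=l)     # FIFO: the oldest entry is evicted automatically

def is_tabu(solution):
    return solution in tabu    # linear scan of the recent history

def remember(solution):
    tabu.append(solution)      # once full, appending drops the oldest entry,
                               # so old areas eventually become legal again
```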

9
Q

What is the purpose of the Tabu List?

A

Avoids common issues in hill climbing:
* Getting stuck in local optima
* Cycling (revisiting the same solutions)
-> leading to more structured exploration

10
Q

What are the key components of a Tabu List?

A

Key Component: Tabu List
* A list of previously visited solutions
* Implemented as a queue (first-in, first-out)
* Has a bounded size (length l); searching it is a linear scan

11
Q

What are the Tradeoffs of a Tabu List?

A

Trade-Off:
* Searching this list takes time (a linear scan)
* But skipping the check causes revisiting and gets you stuck
So there’s a trade-off: the cost of checking vs the benefit of avoiding cycles.

12
Q

What are the advantages/disadvantages of a Tabu List?

A

Advantages:
- Explores more effectively than hill climbing
- Prevents redundancy
- Can be combined with almost any local search

Disadvantages:
- Needs additional memory and bookkeeping
- Must define a “distance” or “feature space” for real-valued domains
- May over-restrict the search if the tabu list is too aggressive

13
Q

Algorithm 14 - Tabu Search

A

REFER TO NOTES
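
The notes' Algorithm 14 is not reproduced here; this is a minimal Python sketch of a common formulation, assuming maximisation and that each iteration samples n neighbours (l, n, and iters are illustrative values):

```python
from collections import deque

def tabu_search(initial, quality, tweak, l=50, n=20, iters=1000):
    """Minimal Tabu Search sketch (maximisation). Each iteration samples
    n neighbours, filters out tabu ones, and moves to the best survivor
    even if it is worse than the current solution."""
    current = initial
    best = current
    tabu = deque([current], maxlen=l)       # recent history, FIFO
    for _ in range(iters):
        candidates = [tweak(current) for _ in range(n)]
        legal = [c for c in candidates if c not in tabu]
        if not legal:                       # everything nearby is tabu
            continue
        current = max(legal, key=quality)   # best non-tabu neighbour
        tabu.append(current)
        if quality(current) > quality(best):
            best = current
    return best
```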

14
Q

What are the Variations of Tabu Search?

A

- Real-Value Search
- Non-Numerical Search

15
Q

What is Real-Value Search?

A

In continuous domains (e.g., x ∈ ℝⁿ), you can’t match exact previous solutions. So instead:
* Define what counts as “sufficiently close” to be considered a match
* Use distance metrics (e.g., Euclidean)
* Reject a candidate if it is within some ε-neighbourhood of a tabu list entry (see the sketch below)
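
A small sketch of the ε-neighbourhood test, assuming solutions are real-valued vectors; the eps value is an illustrative assumption, as the right threshold is problem-specific:

```python
import math

def is_tabu(candidate, tabu, eps=0.1):
    """A candidate counts as tabu if it lies within an eps-neighbourhood
    (Euclidean distance) of any solution stored in the tabu list."""
    for entry in tabu:
        if math.dist(candidate, entry) < eps:   # Euclidean distance
            return True
    return False
```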

16
Q

What are the limitations of Real-Value Search?

A

Limitations:
Can be expensive in time (computing distances)
Can be infeasible in high dimensions (curse of dimensionality)

17
Q

What is Non-Numerical Search?

A

For combinatorial or symbolic problems:
* Instead of full solutions, store a record of moves/changes (features)
* E.g., “swapped job A and B”
* The tabu list stores recent changes, not entire solutions
* During tweaking, the algorithm consults the list of forbidden moves (see the sketch below)
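
A sketch for a hypothetical job-ordering problem, where the move “swap positions i and j” is what gets marked tabu rather than the whole solution (the scheduling setting and list length are illustrative assumptions):

```python
import random
from collections import deque

tabu_moves = deque(maxlen=50)   # recent moves (features), not full solutions

def tweak_with_tabu(schedule):
    """Swap two jobs, retrying while the chosen swap is a forbidden move.
    (Assumes the schedule is long enough that legal swaps still exist.)"""
    while True:
        i, j = random.sample(range(len(schedule)), 2)
        move = frozenset((i, j))            # e.g. "swapped job A and B"
        if move not in tabu_moves:
            tabu_moves.append(move)
            new = list(schedule)
            new[i], new[j] = new[j], new[i]
            return new
```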

18
Q

What are the benefits of Non-Numerical Search?

A

Benefit:
More abstract and lightweight than storing full solutions

19
Q

What is Iterated Local Search?

A

ILS is hill climbing with smarter restarts. It doesn’t just restart randomly — it uses a heuristic to guide restarts toward promising areas.

20
Q

Why don’t we just use plain restarts?

A

Why not plain restarts?
* Random restarts often land you back in similar local optima, especially in flat landscapes.
* Without guidance, you waste time re-climbing the same hill.

21
Q

What are the assumptions behind Iterated Local Search?

A

ILS Heuristic Assumption:
“Better local optima are often near the one you’re already in.”
So instead of jumping far away, you:
* Slightly perturb the current local optimum (your home base)
* Climb again
* Decide whether the new peak is worth keeping

Basically, it nudges the current solution into a new area, hoping to find a better optimum in that region.

22
Q

What are the core concepts of Iterated Local Search?

A

Home Base:
* A saved local optimum (solution) from which you explore the nearby space.
* When restarting, you perturb near this point, not from scratch.
Perturb()
* A function that pushes you away from the current basin
NewHomeBase()
* Decides whether to adopt the new local optimum as the home base

23
Q

What are the two key decisions you need to make in Iterated Local Search?

A

Two Key Decisions:
1. Where to restart from? (around the home base — via Perturb())
2. When to adopt a new home base? (via NewHomeBase())

24
Q

What are some strategies in Iterated Local Search for making those key decisions?

A

Strategies:
* Only adopt the new home base if it’s better → like a “hill climb of hill climbs”
* Always adopt the new one → a “random walk of hill climbs”
* A middle ground may be best, e.g. SA-style probabilistic adoption

25
Q

Algorithm 16 - Iterated Local Search (ILS)

A

REFER TO NOTES
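
The notes' Algorithm 16 is not reproduced here; this is a minimal Python sketch of the ILS skeleton, assuming maximisation, problem-specific tweak() and perturb() functions, and a greedy NewHomeBase() rule (iteration counts are illustrative):

```python
def iterated_local_search(initial, quality, tweak, perturb, iters=100, steps=100):
    """Minimal ILS sketch (maximisation): hill-climb, perturb the home
    base, hill-climb again, then decide whether to move house."""
    def hill_climb(s):
        for _ in range(steps):
            candidate = tweak(s)
            if quality(candidate) > quality(s):
                s = candidate
        return s

    home = hill_climb(initial)
    best = home
    for _ in range(iters):
        candidate = hill_climb(perturb(home))   # restart near the home base
        if quality(candidate) > quality(best):
            best = candidate
        # NewHomeBase(): greedy adoption gives a "hill climb of hill climbs";
        # always adopting would instead give a "random walk of hill climbs".
        if quality(candidate) > quality(home):
            home = candidate
    return best
```
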
26
Q

Designing NewHomeBase() and Perturb()

A

REFER TO NOTES
27
Q

What is Mixing and Matching?

A

Emphasises an important practical and conceptual insight in metaheuristic optimisation: you are not limited to using one algorithm at a time; instead, you can combine techniques to create better, problem-specific search strategies.

Mixing and Matching = Hybrid Heuristics
* In optimisation, especially single-state global optimisation, no one method is universally best (thanks to the No Free Lunch Theorem).
* Therefore, combining the strengths of different methods is often more effective.
* This is not just allowed; it is encouraged.
28
Q

Why does Mixing and Matching matter?

A

Why this matters:
* Problem landscapes vary: rugged, flat, noisy, multi-modal...
* You may need more exploration in early stages (use SA, Gaussian tweaks)
* Later you may need fine-tuning (use hill climbing or local SA)
* Or, to avoid traps, you may need memory (use Tabu Search)
29
Q

How do you mix and match algorithms?

A

How to Mix and Match: think of these algorithms as modular building blocks (see the sketch below):
* Tweaking mechanism (e.g., Gaussian, uniform, feature-based)
* Acceptance rule (e.g., greedy, SA, probabilistic)
* Restart strategy (e.g., random, perturbation-based, tabu-aware)
* Memory mechanism (e.g., Tabu list, best-so-far)
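
A sketch of that modularity in Python, assuming maximisation: the same loop becomes hill climbing or SA purely by swapping the acceptance rule (the cooling constants are illustrative assumptions):

```python
import math
import random

def search(initial, quality, tweak, accept, iters=10000):
    """Modular skeleton: plug in different tweak() and accept() rules to
    get hill climbing, SA, or hybrids from one loop."""
    current = best = initial
    for k in range(iters):
        candidate = tweak(current)
        if accept(quality(candidate) - quality(current), k):
            current = candidate
        if quality(current) > quality(best):
            best = current
    return best

def greedy_accept(delta, k):
    return delta >= 0                # hill climbing's acceptance rule

def sa_accept(delta, k, t0=1.0, alpha=0.95):
    t = max(t0 * alpha ** k, 1e-12)  # geometric cooling, floored above zero
    return delta >= 0 or random.random() < math.exp(delta / t)
```
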
30
Q

What are some Key Things to Think About in Optimisation when measuring success?

A

To measure success, there are some key questions you need to ask:
- How do you know if your algorithm is any good?
- How do you compare algorithms?
- Is optimisation a theoretical or empirical science?
31
Q

How do you know if your algorithm is any good?

A

* There’s no one-size-fits-all answer.
* It depends on:
  ○ What metric(s) matter (solution quality? speed?)
  ○ What kind of problem you’re solving
32
Q

How do you compare algorithms?

A

You can’t just run them blindly; you must control for:
- Computing resources
- Evaluations
- Solution Quality
- Problem Class
33
Q

Is optimisation a theoretical or empirical science?

A

* Theoretical: focus on provable guarantees (e.g. convergence, complexity bounds)
* Empirical: focus on performance via experiments
* In practice, metaheuristics (like SA and Tabu Search) are largely empirical: we test and compare them on benchmark problems rather than proving formal theorems.

NOTE: This is crucial in exam answers: always mention both sides if asked about evaluation.
34
Q

What are some Key Things to Think About in Optimisation regarding what we might be losing by using traditional single-state search strategies?

A

- Are we wasting information/computations?
- How much knowledge is actually being remembered?
- Can we do better? Could we make better use of it to learn about the problem space?
35
Q

Are we wasting information/computations?

A

* In many single-state methods, each iteration:
  ○ Evaluates a bunch of solutions
  ○ But only keeps one (or even just part of one)
* The rest are thrown away!
* Is that wasteful? Could we be learning from all of them?
36
Q

How much knowledge is actually being remembered?

A

- Implicit memory: built-in knowledge
- Explicit memory: storing past solutions
37
Q

Can we do better? Could we make better use of it to learn about the problem space?

A

This is a prompt to think about:
* Could we build smarter methods that learn about the landscape as they go?
* Should we record more than just “best-so-far”?
* Can hybrid methods or adaptive approaches use this memory?