Lecture 14 - Single Global Part 2 Flashcards
What is Simulated Annealing?
Simulated Annealing (SA) is a single-state global optimisation algorithm inspired by the physical process of annealing in metallurgy, where metal is slowly cooled to allow atoms to settle into a stable, low-energy structure.
- In Hill Climbing, only better moves are accepted.
- In Simulated Annealing, worse moves can sometimes be accepted, with a probability that depends on how much worse they are and on the current “temperature” t.
What is the Acceptance Probability Function?
REFER TO NOTES
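The lecture's exact formula is in the notes; the standard Metropolis-style acceptance rule (a sketch, assuming a maximisation problem) looks like this:

```python
import math
import random

def accept(quality_old, quality_new, t, rng=random.random):
    """Metropolis-style acceptance: always take improvements;
    take worse moves with probability exp((new - old) / t)."""
    if quality_new >= quality_old:
        return True   # better (or equal) moves are always accepted
    if t <= 0:
        return False  # at zero temperature, behave like hill climbing
    return rng() < math.exp((quality_new - quality_old) / t)
```

For a big drop in quality or a small t, exp((new − old)/t) is close to 0, so worse moves are rarely accepted; at high t it approaches 1 and almost anything is accepted.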
What are the advantages/disadvantages of Simulated Annealing
Advantages:
- Can escape local optima
- Simple to implement
- Easy to tune for a wide variety of problems
Disadvantages:
- Choosing a cooling schedule can be tricky
- Can be slow to converge
- Doesn’t “learn” from previous steps (no memory)
What is the temperature variable t and what is its role?
t = infinity -> accept any move (a random walk)
t = 0 -> accept only better moves (plain hill climbing)
Algorithm 13 - Simulated Annealing
REFER TO NOTES
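The algorithm itself is in the notes; as a minimal sketch, assuming a maximisation problem over real numbers, a Gaussian tweak, and geometric cooling (both illustrative choices, not necessarily the lecture's):

```python
import math
import random

def simulated_annealing(quality, tweak, initial, t=100.0, alpha=0.99,
                        t_min=1e-3, rng=random.Random(0)):
    """Single-state SA: tweak the current solution, accept better moves
    always and worse moves with probability exp(delta / t), then cool t."""
    current = initial
    best = current
    while t > t_min:
        candidate = tweak(current, rng)
        delta = quality(candidate) - quality(current)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = candidate
        if quality(current) > quality(best):
            best = current  # remember the best solution ever seen
        t *= alpha  # geometric cooling schedule (illustrative)
    return best

# Example: maximise -(x - 3)^2, whose optimum is at x = 3
result = simulated_annealing(
    quality=lambda x: -(x - 3) ** 2,
    tweak=lambda x, r: x + r.gauss(0, 1),
    initial=-10.0,
)
```

Note the separate `best` variable: because SA can accept worse moves, the final `current` is not necessarily the best solution visited.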
What is the Cooling Schedule?
t is decreased over time according to some schedule - normally tuneable. A common example is geometric cooling, where t is multiplied by a constant alpha just below 1 at each step.
REFER TO NOTES FOR FORMULA
Why is the cooling schedule important?
Why this matters:
* If cooling is too fast: you lose the ability to escape bad local optima (algorithm becomes greedy too early).
* If cooling is too slow: very slow convergence, waste of compute.
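The effect of the cooling rate can be seen by tracking t under a geometric schedule t ← alpha * t (an illustrative, commonly used form; alpha values are made up for the comparison):

```python
def temperatures(t0, alpha, steps):
    """Temperature trajectory under geometric cooling."""
    ts = [t0]
    for _ in range(steps):
        ts.append(ts[-1] * alpha)
    return ts

fast = temperatures(100.0, 0.5, 20)    # cools too fast: near-greedy within ~10 steps
slow = temperatures(100.0, 0.999, 20)  # cools slowly: still exploring after 20 steps
```

With alpha = 0.5 the temperature is below 0.1 after 10 steps (greedy too early); with alpha = 0.999 it is still around 98 after 20 steps (slow convergence, but more chances to escape bad optima).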
What is a Tabu List?
- A memory-based optimisation technique — it stores past solutions and uses them to avoid revisiting the same areas of the search space.
- Based on a “don’t go back” rule:
	○ Previously visited solutions (or moves) are marked tabu (taboo/forbidden) for a period.
- We don’t store all visited solutions forever; only a recent history (l entries).
- Old solutions are removed automatically (queue behaviour).
- This allows the algorithm to revisit old areas after a while → supports a kind of dynamic memory reset.
What is the purpose of the Tabu List?
Avoids common issues in hill climbing:
* Getting stuck in local optima
* Cycling (revisiting the same solutions)
-> leading to more structured exploration
What are the key components of a Tabu List?
Key Component: Tabu List
* A list of previously visited solutions
* Implemented as a queue (first-in, first-out)
* Bounded size (length l); the oldest entry is evicted when full
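This queue behaviour maps directly onto a bounded FIFO; a sketch using Python's collections.deque, where maxlen plays the role of the tuneable length l:

```python
from collections import deque

l = 3                    # tabu tenure: how many recent solutions to remember
tabu = deque(maxlen=l)   # the oldest entry is dropped automatically when full

for solution in ["A", "B", "C", "D"]:
    if solution not in tabu:  # membership test is a linear scan
        tabu.append(solution)
```

After the loop the list holds B, C and D; A has aged out, so the algorithm would be free to revisit it.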
What are the Tradeoffs of a Tabu List?
Trade-Off:
* Searching this list takes time (linear scan)
* But skipping the check allows revisiting, which can get you stuck cycling
So there’s a trade-off: cost of checking vs benefit of avoiding cycles
What are the advantages/disadvantages of a Tabu List?
Advantages:
- Explores more effectively than hill climbing
- Prevents redundancy
- Can be combined with almost any local search
Disadvantages:
- Needs additional memory and bookkeeping
- Must define a “distance” or “feature space” for real-valued domains
- May over-restrict the search if the tabu list is too aggressive
Algorithm 14 - Tabu Search
REFER TO NOTES
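Algorithm 14 itself is in the notes; a minimal sketch of the core loop, assuming a maximisation problem, a tweak() that generates several candidate neighbours per step, and solutions that can be tested for exact equality:

```python
from collections import deque
import random

def tabu_search(quality, tweak, initial, l=20, n_candidates=8,
                iterations=200, rng=random.Random(0)):
    """Move to the best non-tabu neighbour each step; refuse to
    revisit anything currently on the tabu list."""
    current = initial
    best = current
    tabu = deque([current], maxlen=l)
    for _ in range(iterations):
        candidates = [tweak(current, rng) for _ in range(n_candidates)]
        allowed = [c for c in candidates if c not in tabu]
        if not allowed:
            continue  # every neighbour is tabu this step
        current = max(allowed, key=quality)  # best move may be worse than current
        tabu.append(current)
        if quality(current) > quality(best):
            best = current
    return best
```

As a toy run, maximising -(x - 5)^2 over the integers with a ±1 tweak climbs to 5, and the tabu list then forces the search to keep moving rather than oscillate on the peak.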
What are the Variations of Tabu Search?
Real-Value Search
Non-Numerical Search
What is Real-Value Search?
In continuous domains (e.g., x ∈ ℝⁿ), you can’t match exact previous solutions. So instead:
Define “sufficiently close” to be considered a match
Use distance metrics (e.g., Euclidean)
Reject if within some ε-neighbourhood of tabu list entries
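A sketch of the “sufficiently close” test, assuming Euclidean distance over tuples and a tuneable epsilon (names are illustrative):

```python
import math

def is_tabu(candidate, tabu_list, eps=0.1):
    """Reject a real-valued candidate if it lies within an
    epsilon-neighbourhood of any stored tabu point."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return any(dist(candidate, entry) < eps for entry in tabu_list)
```

Note the cost: every check computes a distance to every stored entry, which is exactly the time limitation described below in high dimensions or with long lists.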
What are the limitations of Real-Value Search?
Limitations:
Can be expensive in time (computing distances)
Can be infeasible in high dimensions (curse of dimensionality)
What is Non-Numerical Search?
For combinatorial or symbolic problems:
Instead of full solutions, store a record of moves/changes (features)
E.g., “swapped job A and B”
Tabu list stores recent changes, not entire solutions
During tweaking, the algorithm consults the list of forbidden moves
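A sketch for a combinatorial case, assuming solutions are job orderings and a move is recorded as the (unordered) pair of swapped positions, so “swap A and B” and “swap B and A” count as the same tabu move:

```python
from collections import deque

tabu_moves = deque(maxlen=5)  # recent forbidden moves, not full solutions

def try_swap(solution, i, j):
    """Apply swap (i, j) unless that move is currently tabu."""
    move = frozenset((i, j))  # unordered: (i, j) and (j, i) match
    if move in tabu_moves:
        return solution, False
    solution = list(solution)
    solution[i], solution[j] = solution[j], solution[i]
    tabu_moves.append(move)
    return solution, True

jobs = ["A", "B", "C"]
jobs, ok1 = try_swap(jobs, 0, 1)  # swap applied, move recorded as tabu
jobs, ok2 = try_swap(jobs, 1, 0)  # same move: forbidden while on the list
```

Storing a frozenset of two indices is far cheaper than storing and comparing entire orderings, which is the lightweight benefit the card below refers to.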
What are the benefits of Non-Numerical Search?
Benefit:
More abstract and lightweight than storing full solutions
What is Iterated Local Search?
Iterated Local Search (ILS) is hill climbing with smarter restarts. It doesn’t just restart randomly: it uses a heuristic to guide restarts toward promising areas.
Why don’t we just use plain random restarts?
Why not plain restarts?
* Random restarts often land you back in similar local optima, especially in flat landscapes.
* Without guidance, you waste time re-climbing the same hill.
What is the key assumption behind Iterated Local Search?
ILS Heuristic Assumption:
“Better local optima are often near the one you’re already in.”
So instead of jumping far away, you:
* Slightly perturb the current local optimum (your home base)
* Climb again
* Decide whether the new peak is worth keeping
Basically, it nudges the current solution into a new area, hoping to find a better optimum in that region.
What are the core concepts of Iterated Local Search?
Home Base:
* A saved local optimum (solution) from which you explore the nearby space.
* When restarting, you perturb near this point, not from scratch.
Perturb()
* A function that pushes you away from the current basin
NewHomeBase()
* Decides whether to adopt the new local optimum as the home base
What are the two key decisions you need to make in Iterated Local Search?
Two Key Decisions:
1. Where to restart from? (around the home base — via Perturb())
2. When to adopt a new home base? (via NewHomeBase())
What are some strategies in Iterated Local Search to make those key decisions?
Strategies:
* Only adopt new home if it’s better → like a “hill climb of hill climbs”
* Always adopt new one → “random walk of hill climbs”
* A middle ground may be best, e.g. SA-style probabilistic adoption
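The pieces above can be put together in a minimal sketch, assuming a maximisation problem, a plain hill climber as the inner local search, and the “only adopt if better” NewHomeBase strategy (the Gaussian tweak and perturb sizes are illustrative):

```python
import random

def hill_climb(quality, tweak, start, steps, rng):
    """Inner local search: accept only improving moves."""
    current = start
    for _ in range(steps):
        candidate = tweak(current, rng)
        if quality(candidate) > quality(current):
            current = candidate
    return current

def iterated_local_search(quality, tweak, perturb, initial,
                          restarts=30, steps=100, rng=random.Random(0)):
    home = hill_climb(quality, tweak, initial, steps, rng)  # first home base
    best = home
    for _ in range(restarts):
        start = perturb(home, rng)               # nudge away from the basin
        local_opt = hill_climb(quality, tweak, start, steps, rng)
        if quality(local_opt) > quality(home):   # NewHomeBase: adopt if better
            home = local_opt
        if quality(local_opt) > quality(best):
            best = local_opt
    return best

result = iterated_local_search(
    quality=lambda x: -(x - 3) ** 2,             # toy objective, optimum at 3
    tweak=lambda x, r: x + r.gauss(0, 0.5),      # small local steps
    perturb=lambda x, r: x + r.gauss(0, 2.0),    # bigger jump between restarts
    initial=-5.0,
)
```

Swapping the `if quality(local_opt) > quality(home)` test for an unconditional adoption gives the “random walk of hill climbs” variant; an SA-style probabilistic test gives the middle ground.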