Chapter 10 - Multiprocessor Scheduling (Advanced) Flashcards
What is the cache coherence problem?
When CPUs with private caches hold inconsistent values for the same memory location because updates propagate with a delay
How is cache coherence maintained?
Hardware protocols such as bus snooping observe writes and update or invalidate other caches' copies
Why are write-back caches problematic for coherence?
Memory updates are delayed until a dirty line is written back, so other CPUs may read stale values from main memory
What can happen if threads access a shared queue without locking?
- Duplicate values
- Memory errors like double free
How do you protect critical sections on shared data structures?
Use locks (mutual exclusion) around the shared data structure
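A minimal sketch (not the book's code; the queue and node types are made up for illustration) of guarding a shared queue with a pthread mutex, which prevents the duplicate-value and double-free races mentioned above:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical node/queue types for illustration. */
typedef struct node { int value; struct node *next; } node_t;
typedef struct {
    node_t *head;
    pthread_mutex_t lock;   /* protects head */
} queue_t;

/* Without the lock, two CPUs could pop the same node
 * (duplicate values) and both free it (double free). */
int queue_pop(queue_t *q, int *value) {
    pthread_mutex_lock(&q->lock);      /* enter critical section */
    node_t *n = q->head;
    if (n == NULL) {
        pthread_mutex_unlock(&q->lock);
        return -1;                     /* queue empty */
    }
    q->head = n->next;
    pthread_mutex_unlock(&q->lock);    /* exit critical section */

    *value = n->value;
    free(n);
    return 0;
}
```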
What is the downside of locks in multiprocessors?
More CPUs → more contention for locks → performance bottleneck
What is cache affinity?
The benefit of keeping a process on the same CPU to reuse useful cached state
Why does moving a process to another CPU hurt performance?
Its new CPU must repopulate the cache from memory
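Affinity can also be enforced from user space. A minimal Linux-specific sketch using sched_setaffinity; pinning to CPU 0 is an arbitrary choice for illustration:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to CPU 0 so the scheduler never
 * migrates it, keeping its cached state warm on that CPU. */
int pin_to_cpu0(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}
```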
What is Single-Queue Scheduling (SQMS)?
All CPUs share one global queue of runnable jobs
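A rough SQMS sketch (data structure names are assumptions, not the book's code): every CPU pulls its next job from one shared, lock-protected queue.

```c
#include <pthread.h>

typedef struct job { struct job *next; /* ... */ } job_t;

/* One global queue shared by all CPUs. */
static job_t *global_queue;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called by each CPU when it needs the next job to run.
 * The single lock is what limits scalability as CPUs grow. */
job_t *sqms_pick_next(void) {
    pthread_mutex_lock(&queue_lock);
    job_t *j = global_queue;
    if (j != NULL)
        global_queue = j->next;
    pthread_mutex_unlock(&queue_lock);
    return j;   /* may be NULL if no work */
}
```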
What are the downsides of SQMS?
- Scalability issues due to locking
- Poor cache affinity: jobs tend to bounce between CPUs instead of staying where their cached state is
What is Multi-Queue Scheduling (MQMS)?
Each CPU has its own scheduling queue
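A minimal per-CPU-queue sketch (structure names are assumptions): each CPU normally schedules only from its own queue, so there is no single global lock on the hot path and jobs tend to stay where their cache state is.

```c
#include <pthread.h>

#define NCPU 4
typedef struct job { struct job *next; /* ... */ } job_t;

/* One queue (and lock) per CPU; locks assumed initialized elsewhere.
 * CPU i normally touches only queues[i]. */
struct cpu_queue {
    job_t *head;
    pthread_mutex_t lock;
} queues[NCPU];

job_t *mqms_pick_next(int cpu) {
    struct cpu_queue *q = &queues[cpu];
    pthread_mutex_lock(&q->lock);   /* contended only if someone steals */
    job_t *j = q->head;
    if (j != NULL)
        q->head = j->next;
    pthread_mutex_unlock(&q->lock);
    return j;
}
```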
What are the benefits of MQMS?
- Better scalability
- Preserves cache affinity
What is the main issue with MQMS?
Load imbalance - some CPUs may be idle while others are overloaded
How is load imbalance fixed in MQMS?
- Migration of jobs between CPUs
- Work stealing: idle CPUs take work from busy ones (see the sketch below)
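Continuing the per-CPU-queue sketch above (names and layout are assumptions): when a CPU finds its own queue empty, it can peek at a random victim's queue and move one job over. Stealing too often reintroduces lock contention and breaks affinity, so it should happen only occasionally.

```c
#include <pthread.h>
#include <stdlib.h>

#define NCPU 4
typedef struct job { struct job *next; /* ... */ } job_t;

struct cpu_queue {
    job_t *head;
    pthread_mutex_t lock;   /* assumed initialized elsewhere */
} queues[NCPU];

/* Called when CPU 'me' is idle: pick a random victim and
 * try to move one job from its queue to run locally. */
job_t *steal_work(int me) {
    int victim = rand() % NCPU;
    if (victim == me)
        return NULL;
    struct cpu_queue *v = &queues[victim];
    pthread_mutex_lock(&v->lock);
    job_t *j = v->head;
    if (j != NULL)
        v->head = j->next;
    pthread_mutex_unlock(&v->lock);
    return j;   /* NULL if the victim had nothing to give */
}
```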
What does the O(1) scheduler do?
- Uses multiple queues
- Priority-based with dynamic adjustments
- Works well for interactive workloads
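A highly simplified sketch of the O(1) pick step (array sizes and names are illustrative, not kernel code): a bitmap records which priority levels have runnable jobs, so the scheduler finds the highest-priority job in constant time instead of scanning every job. Lower index means higher priority here.

```c
#include <strings.h>   /* ffs() */
#include <stddef.h>

#define NPRIO 140              /* the Linux O(1) scheduler used 140 levels */
#define WORDS ((NPRIO + 31) / 32)

typedef struct job { struct job *next; /* ... */ } job_t;

struct runqueue {
    unsigned int bitmap[WORDS];  /* bit set => that priority level is non-empty */
    job_t *queue[NPRIO];         /* one FIFO list per priority level */
};

/* Scan a fixed, small number of bitmap words and use ffs()
 * to locate the first non-empty level in constant time. */
job_t *o1_pick_next(struct runqueue *rq) {
    for (int w = 0; w < WORDS; w++) {
        if (rq->bitmap[w] != 0) {
            int prio = w * 32 + ffs(rq->bitmap[w]) - 1;
            return rq->queue[prio];
        }
    }
    return NULL;   /* nothing runnable */
}
```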
What is CFS (Completely Fair Scheduler)?
- Uses multiple queues
- Implements a deterministic proportional-share algorithm (similar to stride scheduling)
- Ensures fair CPU time based on weights
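A tiny sketch of CFS-style weighted fairness (not the kernel's actual code): each job accumulates virtual runtime scaled inversely by its weight, and the job with the smallest vruntime runs next. The real CFS keeps runnable tasks in a red-black tree; a linear scan keeps this sketch short.

```c
#include <stddef.h>

typedef struct task {
    double vruntime;   /* weighted virtual runtime accumulated so far */
    double weight;     /* larger weight => vruntime grows more slowly */
    struct task *next;
} task_t;

/* Charge a task for 'delta_ns' of CPU: heavier tasks are charged
 * less virtual time, so they earn a proportionally larger share. */
void account(task_t *t, double delta_ns) {
    t->vruntime += delta_ns * (1024.0 / t->weight);  /* 1024 ~ nice-0 weight */
}

/* Pick the task with the minimum vruntime. */
task_t *cfs_pick_next(task_t *runnable) {
    task_t *best = NULL;
    for (task_t *t = runnable; t != NULL; t = t->next)
        if (best == NULL || t->vruntime < best->vruntime)
            best = t;
    return best;
}
```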
What is BFS (Brain Fuck Scheduler)?
- Uses a single queue (like SQMS)
- Also proportional-share, but based on a more complicated scheme (Earliest Eligible Virtual Deadline First, EEVDF)
- Focuses on simplicity and responsiveness on desktop systems