Chapter 10 - Multiprocessor Scheduling (Advanced) Flashcards
(17 cards)
What is the cache coherence problem?
When different CPUs hold different values for the same memory location because updates have not yet propagated
How is cache coherence maintained?
Hardware protocols - bus snooping detects writes and updates/invalidates stale cached copies
Why are write-back caches problematic for coherence?
Memory updates are delayed until the cache line is written back, so other CPUs may see stale values
What can happen if threads access a shared queue without locking?
- Duplicate values
- Memory errors like double free
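A minimal C sketch of this failure mode (the queue and the names are illustrative, not from the cards): two threads running dequeue_unsafe() at the same time can both load the same head node, return duplicate values, and double-free the same node.

```c
#include <stdlib.h>

/* Hypothetical shared queue, accessed without any locking. */
typedef struct node { int value; struct node *next; } node_t;
node_t *head;                    /* shared by all threads, unprotected */

int dequeue_unsafe(void) {
    node_t *tmp = head;          /* two threads may load the same node  */
    int value = tmp->value;      /* ... and return duplicate values     */
    head = tmp->next;            /* ... or lose a node entirely         */
    free(tmp);                   /* ... or double-free the same node    */
    return value;
}
```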
How do you protect critical sections even with cache coherence?
Use locks to ensure mutual exclusion
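A sketch of the same hypothetical queue with a pthread mutex around the critical section; because only one thread at a time holds the lock, the duplicate-value and double-free races above cannot occur.

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct node { int value; struct node *next; } node_t;
node_t *head;
pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

/* Returns 0 and stores the value on success, -1 if the queue is empty. */
int dequeue_safe(int *value) {
    pthread_mutex_lock(&qlock);       /* enter critical section */
    node_t *tmp = head;
    if (!tmp) {
        pthread_mutex_unlock(&qlock);
        return -1;
    }
    *value = tmp->value;
    head = tmp->next;
    free(tmp);
    pthread_mutex_unlock(&qlock);     /* leave critical section */
    return 0;
}
```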
What is the downside of locks in multiprocessors?
More CPUs → more contention for locks → performance bottleneck
What is cache affinity?
The benefit of keeping a process on the same CPU to reuse useful cached state
Why does moving a process to another CPU hurt performance?
Its new CPU must repopulate the cache from memory
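As a concrete, Linux-specific illustration (not something the cards prescribe), a process can be pinned to one CPU with sched_setaffinity so the scheduler never migrates it and its cached state stays warm.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to CPU 0 so it keeps reusing that CPU's cache. */
int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);            /* allow CPU 0 only */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    /* ... cache-sensitive work runs here, always on CPU 0 ... */
    return 0;
}
```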
What is Single-Queue Multiprocessor Scheduling (SQMS)?
All CPUs share one global queue of runnable jobs
What are the downsides of SQMS?
- Scalability issues due to locking
- Poor cache affinity - jobs bounce between CPUs, causing unnecessary migrations
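A minimal sketch of the single-queue idea (names are illustrative): every CPU calls the same pick_next_job(), so every scheduling decision contends for one global lock, and a job may land on a different CPU each time it runs.

```c
#include <pthread.h>

typedef struct job { struct job *next; /* ... job state ... */ } job_t;

static job_t *global_queue;                 /* one queue shared by all CPUs */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

job_t *pick_next_job(void) {
    pthread_mutex_lock(&queue_lock);        /* every CPU serializes here    */
    job_t *job = global_queue;
    if (job)
        global_queue = job->next;
    pthread_mutex_unlock(&queue_lock);
    return job;                             /* may run on any CPU: poor affinity */
}
```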
What is Multi-Queue Multiprocessor Scheduling (MQMS)?
Each CPU has its own scheduling queue
What are the benefits of MQMS?
- Better scalability
- Preserves cache affinity
What is the main issue with MQMS?
Load imbalance - some CPUs may be idle while others are overloaded
How is load imbalance fixed in MQMS?
- Migration of jobs between CPUs
- Work stealing - idle CPUs take work from busy ones
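A minimal sketch of per-CPU queues with work stealing (names are illustrative): each CPU serves its own queue first, which preserves cache affinity, and only steals from another CPU's queue when its own is empty.

```c
#include <pthread.h>

#define NCPU 4

typedef struct job { struct job *next; } job_t;

static job_t *queue[NCPU];                     /* one queue per CPU          */
static pthread_mutex_t lock[NCPU] = {          /* one lock per queue         */
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

static job_t *pop(int cpu) {
    pthread_mutex_lock(&lock[cpu]);
    job_t *job = queue[cpu];
    if (job)
        queue[cpu] = job->next;
    pthread_mutex_unlock(&lock[cpu]);
    return job;
}

job_t *pick_next_job(int cpu) {
    job_t *job = pop(cpu);                     /* local queue first: affinity */
    for (int victim = 0; !job && victim < NCPU; victim++)
        if (victim != cpu)
            job = pop(victim);                 /* steal from a busier CPU     */
    return job;
}
```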
What does the O(1) scheduler do?
- Uses multiple queues
- Priority-based with dynamic adjustments
- Works well for interactive workloads
What is CFS (Completely Fair Scheduler)?
- Uses multiple queues
- Fair CPU time based on weights
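A simplified sketch of the CFS idea (not the kernel code): each task accumulates virtual runtime scaled inversely by its weight, and the scheduler runs whichever runnable task has the smallest vruntime, so CPU time converges toward the weight ratios.

```c
#include <stdint.h>

#define NICE_0_WEIGHT 1024

typedef struct task {
    uint64_t vruntime;   /* weighted virtual runtime, in ns          */
    uint32_t weight;     /* higher weight = larger share of the CPU  */
} task_t;

/* Charge delta_ns of real CPU time to a task: heavier tasks' vruntime
 * grows more slowly, so they get picked more often. */
void account(task_t *t, uint64_t delta_ns) {
    t->vruntime += delta_ns * NICE_0_WEIGHT / t->weight;
}

/* Pick the runnable task with the smallest vruntime (a linear scan here;
 * the real scheduler keeps tasks in a red-black tree for this lookup). */
task_t *pick_next(task_t *tasks, int n) {
    task_t *best = &tasks[0];
    for (int i = 1; i < n; i++)
        if (tasks[i].vruntime < best->vruntime)
            best = &tasks[i];
    return best;
}
```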
What is BFS (Brain Fuck Scheduler)?
- Uses a single queue (like SQMS)
- Aims to provide proportional-share fairness