4. Synchronization Flashcards
What do threads do?
Threads abstract and multiplex the CPU
How does the OS create the illusion of concurrency?
The OS rapidly switches the processor between multiple threads of execution
Why is the illusion of concurrency both powerful and useful?
It helps us to think about how to structure our applications, and it hides latencies caused by slow hardware devices (like waiting for keyboard input).
Briefly define the coordination problem that multi-threaded concurrency creates.
How do we enable efficient communication between the multiple threads involved in coordinating a single task?
Briefly define the correctness problem that multi-threaded concurrency creates.
How do we ensure that shared state (memory, files) remains consistent when being accessed by multiple threads concurrently?
What is the difference between concurrency and parallelism?
Concurrency is about dealing with lots of things at once (independently executing processes).
Parallelism is about doing lots of things at once (simultaneous execution of possibly related computations).
For an OS to achieve concurrency, what must be true about threads by default?
They must be able to run in any order, be stopped and restarted at any time, and remain stopped for any amount of time.
This all assumes that no explicit synchronization techniques have been employed. Example: protecting a critical region with a lock, which makes it impossible for one thread to execute that section while another thread is inside it.
What is a race condition?
When the output of a process unexpectedly depends on timing or the ordering of other events
Ex: A condition that occurs when two or more threads can access shared data and try to change it at the same time. Because threads can be stopped and restarted at any moment, one thread may check a piece of memory for some condition, but before it can act on that data it is stopped and another thread changes the data first. The first thread's check is now stale, but that thread has no way of knowing it. In short, a race condition exists when the unknowable order in which threads access a shared resource affects the outcome of execution.
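A minimal sketch of this lost-update race in C with POSIX threads (the names counter and worker are just for illustration, not from the original): two threads increment an unprotected shared counter, and the final total usually comes out short and varies from run to run.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;               /* shared state, no protection */

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000, but the increments interleave, so the
       printed total is usually smaller and differs between runs. */
    printf("counter = %ld\n", counter);
    return 0;
}
```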
What is concurrency?
The illusion that multiple things are happening at once. (Requires being able to stop or start any thread at any time.)
What is atomicity?
The illusion that a set of separate actions occurred all at once.
How is atomicity achieved?
By not stopping certain threads at certain times, or not starting certain threads at certain times,
i.e. by giving threads some limited control over their scheduling
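As a small illustrative sketch (the counter and increment names are assumptions), C11's <stdatomic.h> can make a read-modify-write on shared state appear atomic to other threads:

```c
#include <stdatomic.h>

static atomic_long counter = 0;

void increment(void) {
    /* The load, add, and store appear to other threads as one
       indivisible step; no half-finished update is ever visible. */
    atomic_fetch_add(&counter, 1);
}
```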
What is a critical section?
A critical section contains a series of instructions that only one thread can be executing at any given time
This set of instructions will look atomic with respect to other threads executing code within the critical section
What is mutual exclusion?
Only one thread should be executing in the critical section at one time.
What is progress?
All threads should eventually be able to proceed through the critical section
What is performance (as it relates to a critical section)?
We want to keep critical sections as small as possible (to improve performance) without sacrificing correctness
What are the three requirements of a critical section?
Mutual exclusion, progress, and performance
What are the two possible approaches to implementing critical sections?
A “don’t stop” policy, where threads cannot be de-scheduled while they are in a critical region (works on uniprocessors)
A “don’t enter” policy, where threads cannot enter a critical section while another thread is executing in it (works on multicore systems).
List 3 special hardware instructions which are guaranteed to be atomic across all cores.
Test-and-set, compare-and-swap, load-link and store-conditional
What does the test-and-set hardware instruction do?
Atomically write to a memory location and return its old value
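A minimal spinlock sketch built on C11's atomic_flag, whose test-and-set operation is exactly this primitive (the spin_lock/spin_unlock names are just illustrative):

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked */

void spin_lock(void) {
    /* Atomically set the flag and get its old value; if it was
       already set, another thread holds the lock, so keep trying. */
    while (atomic_flag_test_and_set(&lock))
        ;  /* busy wait */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);
}
```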
What does the compare-and-swap hardware instruction do?
Compare the contents of a memory location to a given value. If they are the same, set the variable to a new given value.
If the value had been updated by another thread in the meantime, the write would fail.
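A hedged sketch of the classic use: a lock-free increment built on C11's atomic_compare_exchange_strong (the value/cas_increment names are assumptions). The retry loop is how code reacts when another thread updated the value in the meantime:

```c
#include <stdatomic.h>

static atomic_int value = 0;

void cas_increment(void) {
    int expected = atomic_load(&value);
    /* Try to swap in expected + 1; if another thread changed the
       value first, the compare fails, expected is refreshed with the
       current value, and the loop retries with a new expected + 1. */
    while (!atomic_compare_exchange_strong(&value, &expected, expected + 1))
        ;  /* retry */
}
```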
What does the load-link and store-conditional hardware instruction do?
load-link and store-conditional (LL/SC) are a pair of instructions used in multithreading to achieve synchronization. Load-link returns the current value of a memory location, while a subsequent store-conditional to the same memory location will store a new value only if no updates have occurred to that location since the load-link. Together, this implements a lock-free atomic read-modify-write operation.
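Portable C has no direct LL/SC instruction. As a hedged approximation, compilers for LL/SC architectures (e.g. ARM, RISC-V) typically lower C11's atomic_compare_exchange_weak to a load-link/store-conditional loop, and the weak variant's spurious failures mirror a failed store-conditional; the llsc_style_increment name below is illustrative only:

```c
#include <stdatomic.h>

static atomic_int value = 0;

void llsc_style_increment(void) {
    int old = atomic_load(&value);      /* roughly the load-link step */
    /* The "weak" exchange may fail spuriously, like a failed
       store-conditional; retry until the store succeeds. */
    while (!atomic_compare_exchange_weak(&value, &old, old + 1))
        ;  /* store-conditional failed: retry */
}
```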
What is busy waiting?
A thread waits for the critical section by "pounding on the door": it repeatedly executes the check it is waiting on. It checks for its entire time slice if necessary, continues checking once it is rescheduled, and only stops checking when it is given access to the resource.
Why is busy waiting especially bad on a single core system?
Busy waiting prevents the thread in the critical section from making progress!
(But won't it simply be descheduled once it runs out of its time slice?)
What is a lock?
A synchronization primitive used to implement critical sections.
Threads acquire a lock when entering a critical section and release it when leaving. Only one thread can hold a given lock at a time, so if a critical section is bounded by a lock_acquire and a lock_release, only one thread will be able to execute that section at a time.
If a thread tries to acquire a lock that is currently held, it will go to sleep (be descheduled) until the lock is released
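A minimal sketch of a lock-bounded critical section using a POSIX pthread_mutex (the deposit/balance names are assumptions for illustration); pthread_mutex_lock puts the caller to sleep if another thread already holds the lock:

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long balance = 0;               /* shared state */

void deposit(long amount) {
    pthread_mutex_lock(&lock);         /* sleeps if another thread holds the lock */
    balance += amount;                 /* critical section: one thread at a time */
    pthread_mutex_unlock(&lock);       /* wakes a waiting thread, if any */
}
```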