C191-Terms-Chapter-5 Flashcards
(34 cards)
Concurrency
the act of multiple processes (or threads) executing at the same time. When multiple physical CPUs are available, the processes may execute in parallel.
On a single CPU, concurrency may be achieved by time-sharing.
When concurrent processes access a common data area, the data must be protected from simultaneous change by two or more processes. Otherwise the updated area may be left in an inconsistent state.
critical section
a segment of code that cannot be entered by a process while another process is executing a corresponding segment of the code.
A software solution to the critical section (CS) problem, like any other solution, must satisfy the following requirements:
Guarantee mutual exclusion: Only one process may be executing within the CS.
Prevent lockout: A process not attempting to enter the CS must not prevent other processes from entering the CS.
Prevent starvation: A process (or a group of processes) must not be able to repeatedly enter the CS while other processes are waiting to enter.
Prevent deadlock: Multiple processes trying to enter the CS at the same time must not block each other indefinitely.
Process cooperation
many applications require the cooperation of processes to achieve a common goal. A typical scenario is when one process produces data needed by another process.
The producer process must be able to inform the waiting consumer whenever a new data item is available.
In turn, the producer must wait for an acknowledgement from the consumer, indicating that the consumer is ready to accept the next data item.
mutex lock
A mutual exclusion lock; the simplest software tool for assuring mutual exclusion.
We use the mutex lock to protect critical sections and thus prevent race conditions.
That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section.
The acquire() function acquires the lock, and the release() function releases the lock.
A mutex lock has a boolean variable available whose value indicates if the lock is available or not. If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable.
A process that attempts to acquire an unavailable lock is blocked until the lock is released.
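One concrete realization of this interface (an aside, not part of the card text) is the POSIX pthread_mutex_t, where pthread_mutex_lock() plays the role of acquire() and pthread_mutex_unlock() the role of release(). A minimal sketch, with a made-up shared counter as the protected data:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;              /* shared data protected by the lock */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* acquire(): blocks if the lock is unavailable */
            counter++;                    /* critical section */
            pthread_mutex_unlock(&lock);  /* release(): the lock becomes available again */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 with the lock in place */
        return 0;
    }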
contended
A term describing the condition of a lock when a thread blocks while trying to acquire it.
Locks are either contended or uncontended. A lock is considered contended if a thread blocks while trying to acquire it; if the lock is available when a thread attempts to acquire it, the lock is uncontended.
uncontended
A term describing a lock that is available when a thread attempts to acquire it.
If a lock is available when a thread attempts to acquire it, the lock is considered uncontended.
high contention
Contended locks can experience either high contention (a relatively large number of threads attempting to acquire the lock) or low contention (a relatively small number of threads attempting to acquire the lock).
low contention
a relatively small number of threads attempting to acquire the lock.
short duration
Given that waiting on a lock requires two context switches—a context switch to move the thread to the waiting state and a second context switch to restore the waiting thread once the lock becomes available—the general rule is to use a spinlock if the lock will be held for a duration of less than two context switches.
busy waiting
A practice that allows a thread or process to use CPU time continuously while waiting for something; for example, an I/O loop in which an I/O thread continuously reads status information while waiting for the I/O to complete.
While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire().
This continual looping is clearly a problem in a real multiprogramming system, where a single CPU core is shared among many processes.
Busy waiting also wastes CPU cycles that some other process might be able to use productively.
spinlock
A locking mechanism that continuously uses the CPU while waiting for access to the lock.
Spinlocks do have an advantage, however, in that no context switch is required when a process must wait on a lock, and a context switch may take considerable time.
In certain circumstances on multicore systems, spinlocks are in fact the preferable choice for locking.
If a lock is to be held for a short duration, one thread can “spin” on one processing core while another thread performs its critical section on another core.
On modern multicore computing systems, spinlocks are widely used in many operating systems.
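As an illustrative sketch (assuming C11 atomics, not any particular operating system's implementation), a spinlock can be built on atomic_flag:

    #include <stdatomic.h>

    typedef struct {
        atomic_flag flag;                 /* clear = lock available, set = lock held */
    } spinlock_t;

    static void spin_init(spinlock_t *s)
    {
        atomic_flag_clear(&s->flag);      /* start out available */
    }

    static void spin_lock(spinlock_t *s)
    {
        /* Busy-wait: atomically set the flag and keep looping while it was already set. */
        while (atomic_flag_test_and_set(&s->flag))
            ;                             /* spin: continuously uses the CPU core */
    }

    static void spin_unlock(spinlock_t *s)
    {
        atomic_flag_clear(&s->flag);      /* make the lock available again */
    }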
The software solution to the CS problem has several major shortcomings:
The solution works only for two processes. When three or more processes need to share a CS, a different solution must be developed.
The solution is inefficient. While one process is in the CS, the other process must wait by repeatedly testing and setting the synchronization variables. The waiting process is only wasting CPU and memory resources without accomplishing any useful work.
The solution addresses only competition among processes. To address problems of process cooperation, entirely different solutions must be devised.
Semaphores
general-purpose primitives that allow solving a variety of synchronization problems in a systematic manner.
A semaphore s is a non-negative integer variable that can be accessed using only two special operations, P and V.
V(s): increment s by 1
P(s): if s > 0, decrement s by 1, otherwise wait until s > 0
The implementation of P and V must guarantee that:
If several processes simultaneously invoke P(s) or V(s), the operations will occur sequentially in some arbitrary order.
If more than one process is waiting inside P(s) for s to become > 0, one of the waiting processes is selected to complete the P(s) operation.
The selection can be at random but a typical implementation maintains the blocked processes in a queue processed in FIFO order.
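For comparison (an aside, not part of the card text), POSIX unnamed semaphores expose the same two operations under different names: sem_wait() corresponds to P and sem_post() to V. A minimal signaling sketch, with an illustrative semaphore named ready:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t ready;                    /* initialized to 0 in main() */

    static void *waiter(void *arg)
    {
        sem_wait(&ready);                  /* P(ready): blocks while ready == 0 */
        puts("signal received");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        sem_init(&ready, 0, 0);            /* semaphore value starts at 0 */
        pthread_create(&t, NULL, waiter, NULL);
        puts("doing some work, then signaling");
        sem_post(&ready);                  /* V(ready): increments ready, unblocking the waiter */
        pthread_join(t, NULL);
        sem_destroy(&ready);
        return 0;
    }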
The CS problem using semaphores
A single semaphore, initialized to 1, is sufficient to solve the problem for any number of processes.
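A sketch of that pattern using a POSIX semaphore initialized to 1; the three threads and the shared counter are made up for the illustration:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;                    /* initialized to 1: at most one process in the CS */
    static long  shared = 0;

    static void *proc(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);              /* P(mutex): enter the critical section */
            shared++;                      /* critical section */
            sem_post(&mutex);              /* V(mutex): leave the critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[3];
        sem_init(&mutex, 0, 1);
        for (int i = 0; i < 3; i++) pthread_create(&t[i], NULL, proc, NULL);
        for (int i = 0; i < 3; i++) pthread_join(t[i], NULL);
        printf("shared = %ld\n", shared);  /* 300000: works for any number of processes */
        return 0;
    }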
The bounded-buffer problem
The bounded-buffer problem is a classic synchronization problem intended to illustrate process cooperation.
A producer process shares a buffer with a consumer process. The buffer has a fixed number of slots. The producer fills empty slots with data in increasing order.
The consumer follows the producer by removing the data in the same order. The buffer is called a circular buffer because, upon reaching the last slot, each process continues with the first slot (modulo n).
The solution must guarantee the following:
Consumers do not overtake producers and access empty slots.
Producers do not overtake consumers and overwrite full slots.
The bounded-buffer problem using semaphores
Two semaphores are used to coordinate the processes. The semaphore f represents the number of full buffer slots and is incremented each time the producer places a new data item into the buffer and decremented each time the consumer removes an item from the buffer.
The semaphore e represents the number of empty slots and is modified by the producer and consumer analogously.
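A sketch of this coordination for one producer and one consumer, using POSIX semaphores named e and f as described; the buffer size and item count are arbitrary example values:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5                            /* number of buffer slots (example value) */

    static int   buffer[N];
    static sem_t e;                        /* empty slots, initialized to N */
    static sem_t f;                        /* full slots, initialized to 0 */

    static void *producer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            sem_wait(&e);                  /* P(e): wait for an empty slot */
            buffer[i % N] = i;             /* fill the next slot (modulo N) */
            sem_post(&f);                  /* V(f): one more full slot */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            sem_wait(&f);                  /* P(f): wait for a full slot */
            printf("consumed %d\n", buffer[i % N]);
            sem_post(&e);                  /* V(e): one more empty slot */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&e, 0, N);
        sem_init(&f, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }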
test-and-set instruction (TS)
copies a variable into a register and sets the variable to zero in one indivisible machine cycle.
Test-and-set has the form TS(R, x) where R is a register and x is a memory location and performs the following operations:
- Copy x into R.
- Set x to 0.
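The effect of TS(R, x) can be approximated in portable C with the C11 atomic exchange operation; this is a sketch of the semantics, not the machine instruction itself:

    #include <stdatomic.h>

    /* TS(R, x): return the old value of x in R and set x to 0, as one atomic step. */
    static int TS(atomic_int *x)
    {
        return atomic_exchange(x, 0);      /* R = old value of x; x is now 0 */
    }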
lock
a synchronization barrier through which only one process can pass at a time. TS allows an easy implementation of a lock.
Binary semaphores
Many synchronization problems do not require the full power of a general semaphore.
A binary semaphore can take only the values 0 or 1. Pb and Vb are the simplified P and V operations that manipulate binary semaphores.
Pb and Vb on the binary semaphore sb can be implemented directly using the TS instruction:
Vb(sb): sb = 1
Pb(sb): do TS(R,sb) while R == 0
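Using the atomic-exchange stand-in for TS shown earlier, Pb and Vb might be sketched as follows; the type name bsem_t is made up for the example:

    #include <stdatomic.h>

    typedef atomic_int bsem_t;             /* binary semaphore: 1 = free, 0 = taken */

    static bsem_t sb = 1;                  /* start in the free state */

    static void Vb(bsem_t *sem)
    {
        atomic_store(sem, 1);              /* Vb(sb): sb = 1 */
    }

    static void Pb(bsem_t *sem)
    {
        int R;
        do {
            R = atomic_exchange(sem, 0);   /* TS(R, sb): R = old sb, sb = 0 */
        } while (R == 0);                  /* busy-wait until sb was 1 */
    }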
Busy-waiting
is the act of repeatedly executing a loop while waiting for some condition to change. Busy-waiting consumes CPU resources and should be avoided whenever possible. The implementation of Pb(sb) using TS is very simple but suffers from the drawback of busy-waiting.
Implementing P and V operations on general semaphores
A general semaphore s can be implemented using a regular integer variable manipulated by the functions P(s) and V(s). To guarantee that only one operation at a time can access and manipulate s, a binary semaphore is used.
The variable s serves a dual purpose:
When s is greater than or equal to 0, s represents the value of the semaphore.
Whenever s falls below 0, the value represents the number of processes blocked on the semaphore.
To avoid busy-waiting inside P(s) when s falls below 0, the process performs a blocking request, which places the process on a waiting list associated with s.
A V(s) operation increments s and reactivates one process if one or more are blocked on s.
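A sketch of this scheme using POSIX threads, with a pthread mutex standing in for the protecting binary semaphore and a condition variable standing in for the waiting list; the extra wakeups counter is an implementation detail added here to cope with spurious condition-variable wakeups, not something from the card:

    #include <pthread.h>

    typedef struct {
        int             s;         /* >= 0: semaphore value; < 0: -s processes are blocked */
        int             wakeups;   /* pending wakeups delivered by V but not yet consumed */
        pthread_mutex_t guard;     /* plays the role of the protecting binary semaphore */
        pthread_cond_t  waitlist;  /* plays the role of the waiting list */
    } gsem_t;

    void gsem_init(gsem_t *g, int value)
    {
        g->s = value;
        g->wakeups = 0;
        pthread_mutex_init(&g->guard, NULL);
        pthread_cond_init(&g->waitlist, NULL);
    }

    void P(gsem_t *g)
    {
        pthread_mutex_lock(&g->guard);
        g->s--;
        if (g->s < 0) {                          /* must block: join the waiting list */
            do {
                pthread_cond_wait(&g->waitlist, &g->guard);
            } while (g->wakeups < 1);
            g->wakeups--;
        }
        pthread_mutex_unlock(&g->guard);
    }

    void V(gsem_t *g)
    {
        pthread_mutex_lock(&g->guard);
        g->s++;
        if (g->s <= 0) {                         /* at least one process is blocked on s */
            g->wakeups++;
            pthread_cond_signal(&g->waitlist);   /* reactivate one blocked process */
        }
        pthread_mutex_unlock(&g->guard);
    }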
monitor
a high-level synchronization primitive implemented using P and V operations. Following the principles of abstract data types, a monitor encapsulates data along with functions through which the data may be accessed and manipulated.
condition variable
a named queue on which processes can wait for some condition to become true.
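C has no built-in monitors, but the idea can be approximated by a struct whose data is touched only through functions that acquire a mutex on entry, with a pthread condition variable serving as the named queue; the counter monitor below is a made-up example:

    #include <pthread.h>

    /* Monitor-style object: count is accessed only through the functions below,
     * and nonempty is the condition variable (a named queue) on which processes
     * wait for the condition "count > 0" to become true. */
    typedef struct {
        int             count;
        pthread_mutex_t lock;
        pthread_cond_t  nonempty;
    } counter_monitor;

    void monitor_init(counter_monitor *m)
    {
        m->count = 0;
        pthread_mutex_init(&m->lock, NULL);
        pthread_cond_init(&m->nonempty, NULL);
    }

    void monitor_deposit(counter_monitor *m)
    {
        pthread_mutex_lock(&m->lock);      /* enter the monitor */
        m->count++;
        pthread_cond_signal(&m->nonempty); /* wake one process waiting on the queue */
        pthread_mutex_unlock(&m->lock);    /* leave the monitor */
    }

    void monitor_remove(counter_monitor *m)
    {
        pthread_mutex_lock(&m->lock);      /* enter the monitor */
        while (m->count == 0)              /* wait until the condition becomes true */
            pthread_cond_wait(&m->nonempty, &m->lock);
        m->count--;
        pthread_mutex_unlock(&m->lock);    /* leave the monitor */
    }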