Final Flashcards
(134 cards)
What is a thread?
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.
What is a heavyweight process?
A normal process under an OS is a “heavyweight process.” For each such process, the OS provides an independent address space to keep different users and services separated. Switching from one such process to another is time-consuming, and this task is performed by the Memory Management Unit (MMU).
Why do we call a thread a lightweight process (LWP)?
A thread is called a lightweight process (LWP) because it runs within the address space of a regular (heavyweight) process, and LWPs under the same process may share state, such as global variables. Switching from one LWP to another is much faster than switching from one heavyweight process to another, because there is less to manage and the MMU is not involved.
What is the difference between a thread and a process?
Threads within the same process run in shared memory space, while processes run in separate memory spaces. Processes are independent of one another, and they don’t share their code, data and OS resources. Threads share their code section, data section and OS resources (like files and signals) with other threads. But, like a process, a thread has its own program counter (PC), register set and stack space.
Are there situations in which multithreading is preferable?
Multithreading has many advantages, but in the following two cases it is especially preferable to a single-threaded process:
A) Processing power: on a multi-core computer system, multithreading lets the work run on several cores at once.
B) Multithreading avoids a form of priority inversion, in which a low-priority activity, such as accessing the disk, blocks a high-priority activity, such as the user interface responding to a request.
What is an example of a situation where having a single thread is preferred over multithreading?
If we are waiting for a user response or we are waiting for data to arrive over the network, it is useless to assign several threads to wait for the same thing.
How would a web-server act under a multithreading system?
The server listens for a new client to ask for a transaction. Then the server would assign a thread to the requesting client and start listening for the next client.
What is the difference between running four threads on a single-core processor and running the same number of threads on a dual-core processor? Explain concurrency and parallelism using single-core and dual-core processors.
On a single-core processor, all of the threads take turns in a round-robin fashion. This interleaving is known as “concurrency.” On a dual-core processor, two threads run (concurrently) on one core while the other two run on the second core. This simultaneous running of threads on multiple cores is known as “parallelism.”
What are the four benefits of multithreading?
- Responsiveness: If a process is divided among multiple threads, then if one part of the process is blocked, the other parts could go on.
- Resource sharing: Different threads of a process can share code and memory of that process.
- Economy: Starting a new thread is much easier and faster than starting a new process.
- Scalability: A multithreaded process runs faster if we transfer it to a hardware platform with more processors.
What are the challenges programmers face when they design the code for multiprocessors?
In general, five areas present challenges in programming for multicore systems.
- Identifying tasks (dividing activities): Involves examining applications to find areas that can be divided into separate, concurrent tasks.
- Balance. Programmers must ensure that tasks perform roughly equal work, so that each task contributes comparable complexity and execution time.
- Data splitting. Data must be split, in a balanced manner, among already split concurrent tasks.
- Data dependency. Programmers must examine the data dependencies between tasks; when one task depends on data produced by another, the execution of the two tasks must be synchronized.
- Testing and debugging. Many different execution paths are possible, which is more complicated to test and debug than single-threaded applications.
What are the two types of parallelism?
Data parallelism and task parallelism.
Data parallelism: The data is divided into subsets, and each subset is sent to a different thread. Each thread performs the same operation on its subset.
Task parallelism: The whole data set is sent to different threads, and each thread performs a different operation on it.
How do we compute speedup using Amdahl’s law?
speedup <= 1 / (S + (1 - S) / N)
(S is serial portion, N is number of processing cores.) Speedup indicates how much faster the task is running on these N processors as compared to when it was running serially.
Suppose that 50% of a task can be divided equally among ten threads and each thread will run on a different core.
a) What is the speedup of this multithreading system as compared to running the whole task as a single thread?
b) What is the speedup of part A if we could send 90% of the job to ten threads?
a) speedup <= 1 / (0.5 + (0.5 / 10)) ≈ 1.8
b) speedup <= 1 / (0.1 + (0.9 / 10)) ≈ 5.3
What is the upper bound in Amdahl’s law?
The upper bound in Amdahl’s law is that as N approaches infinity, the speedup converges to 1 / S. E.g., if 40% of an application is performed serially, the maximum speedup is 2.5 times, regardless of the number of processing cores we add.
In the context of Amdahl’s law, what is the meaning of “diminishing returns?”
The upper bound of speedup = 1/S is still an optimistic estimate. As the number of processors and threads increases, the overhead of managing them increases too. Increasing the number of threads too much can cause a net loss, and the speedup may fall below 1/S. This is known as diminishing returns: sometimes a smaller number of threads results in higher performance.
What are the three most popular user-level thread libraries?
POSIX Pthreads, Windows threads, and Java threads.
What is the relationship between user threads and kernel threads?
User threads run within a user process. Kernel threads are used to provide privileged services to processes (such as system calls). They are also used by the kernel to keep track of what is running on the system, how much of which resources are allocated to what process, and to schedule them. Hence, we do not need to have a one-to-one relationship between user threads and kernel threads.
a) In the relationship between the user and kernel threads, what is the many-to-one model?
b) What is the shortcoming of this model?
a) Before the idea of threads became popular, OS kernels only knew processes. An OS would consider different processes as separate entities. Each process was assigned a working space and could produce system calls and ask for services. Threading in the user space was not dealt with by the OS. With user mode threading, support for threads was provided by a programming library, and the thread scheduler was a subroutine in the user program itself. The OS would see the process, and the process would schedule its threads by itself.
b) The shortcoming of this model is that if one of the user threads made a blocking system call, or triggered a blocking event such as a page fault, the entire process, including all of its other threads, was blocked.
a) What is the one-to-one threading model?
b) What are its advantages and shortcomings?
a) The one-to-one model maps each user thread to a kernel thread.
b) An advantage of the one-to-one threading model is that it provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call. It also allows multiple threads to run in parallel on multiprocessors.
The shortcoming is that creating a kernel thread for every user thread can burden the performance of the operating system when there are too many user threads.
What is a many-to-many multithreading model?
The OS decides the number of kernel threads, and the user process determines the number of user threads. A process running on an eight-core processor would have more kernel threads than one running on a quad-core processor. This model suffers from neither of the shortcomings of the other two models.
What is pthread?
It is the POSIX (Portable Operating System Interface) thread library, which provides programmers with an application programming interface (API) for creating and managing threads.
What is synchronous threading?
The parent, after creating the threads, has to wait for the children to terminate before it can resume operation.
For thread programming in C or C++ using pthread, what header file should be included?
#include <pthread.h>
What does the following piece of code do?
It uses a function to perform summation. The main program calls the function sequentially: for each number n in the array, it computes the sum of the integers from 1 to n.