FINAL Flashcards
(167 cards)
What is a thread?
In computer science, a thread is a sequence of instructions within a program that can be executed independently. It is a lightweight unit of execution that belongs to a larger process and shares that process's memory space. Threads enable concurrent execution of multiple tasks within a single program, allowing for increased efficiency and responsiveness.
(In computer science, a thread is like a small independent worker inside a program. It’s like a mini-program within a bigger program, and it can do its own tasks without depending on others. Threads share the same memory with the main program, which helps them work together smoothly.
Threads make programs more efficient and responsive because they can do different tasks at the same time. It’s like having multiple workers who can handle different jobs simultaneously, which speeds things up and makes the program more flexible.)
What is a “heavy-weight process?”
A normal process under an Operating System (OS) is a "heavy-weight process." The OS provides an independent address space for each such process to keep different users and services separated. Switching from one such process to another is time-consuming because the OS must switch address spaces, which involves reprogramming the Memory Management Unit (MMU).
(A normal process is like a heavy-weight program, and the MMU is like a traffic cop, enforcing the separation between processes.)
Why do we call a thread a “lightweight process (LWP)?”
A thread is called a lightweight process (LWP) because it runs within the address space of a regular (heavy-weight) process, and LWPs under the same process may share data such as global variables. Switching from one LWP to another is much faster than switching from one heavy-weight process to another because there is less state to manage and the MMU does not have to be reprogrammed.
(A thread is a lightweight helper that works together with a big program. Switching between helpers is super quick because they share and don't need a traffic cop!)
What is the difference between a thread and a process?
A- Threads within the same process run in a shared memory space, while processes run in separate memory spaces.
B- Processes are independent of one another; they do not share their code, data, or OS resources. Unlike processes, threads share their code section, data section, and OS resources (such as open files and signals) with the other threads of the same process.
Similarities between threads and processes: like a process, each thread has its own program counter (PC), register set, and stack space.
What are the situations in which we use “multithreading?”
Multithreading has many advantages, but in the following two cases multithreading is clearly preferable to a single-threaded process:
A- Processing power: multithreading can exploit a multi-core computer system.
B- Multithreading avoids priority inversion, a situation in which a low-priority task blocks a high-priority one. An example of a low-priority task is accessing secondary memory, while updating the user interface is a high-priority task.
What is an example where having a single thread is preferred over
multithreading?
A single thread is preferred when the program simply waits, for example, for a user response or for data to arrive over the network; extra threads would add overhead without benefit.
What is Multithreading?
Multithreading is the ability of a program to create and manage multiple threads. By using multiple threads, a program can perform several tasks concurrently and make efficient use of available system resources.
What are thread states?
Threads can be in different states, such as running, ready, blocked, or terminated. The operating system scheduler determines which threads run and when, based on factors such as thread priorities, dependencies, and available resources.
What do we mean by “Thread Communication and Synchronization?”
Threads within a program often need to communicate and coordinate with each other to accomplish tasks. This can be achieved using synchronization mechanisms such as mutexes, semaphores, and condition variables, which prevent race conditions and ensure data consistency.
What are “Thread Models?”
Thread models include user-level threads and kernel-level threads.
User-level threads are managed by a user-level library, while kernel-level threads are managed by the operating system kernel.
Most modern operating systems use a hybrid model that combines both types of threads. An example of a thread library is POSIX pthreads (pthread.h), which on most systems relies on the operating system kernel for scheduling and executing threads.
While the library provides abstractions and mechanisms for thread management and synchronization, the operating system handles the actual scheduling and context switching tasks.
How would a web server act under a multithreading system?
The server listens for a new client to request a transaction. The server then assigns a thread to the requesting client and goes back to listening for the next client.
What is the difference between running four threads on a single-core
processor and running the same number of threads on a double-core
processor? Draw a simple diagram for each case
On a single-core processor, all four threads take turns in a round-robin fashion; this is known as "concurrency." On a double-core processor, two threads run on one core and the other two run on the second core. This simultaneous execution of threads on multiple cores is known as "parallelism."
What are the four benefits of multithreading?
A- Responsiveness: if a process is divided among multiple threads, the other parts can proceed while one part is blocked.
B- Resource sharing: the threads of a process can share the process's code and memory.
C- Economy: starting a new thread is much easier and faster than creating a new process.
D- Scalability: a multithreaded process runs faster if we move it to a hardware platform with more processors.
What challenges do programmers face when designing the code for
multiprocessors?
Challenges include:
A- Dividing activities: finding areas of the program that can be divided into separate, concurrent tasks.
B- Balance: programmers must ensure that the different tasks are comparable in complexity and execution time.
C- Data splitting: data should be split, in a balanced manner, among the concurrent tasks.
D- Data dependency: the programmer should ensure that concurrently running tasks do not depend on each other's data.
E- Testing and debugging: many different execution paths are possible, making testing more complicated than for single-threaded applications.
What are the two types of parallelism?
A- Data parallelism: the data is divided into subsets, and each subset is sent to a different thread. Each thread performs the same operation on its subset.
B- Task parallelism: the whole data set is sent to different threads, and each thread performs a different operation.
How do we compute “speedup” using Amdahl’s law?
Speedup = 1 / (S + (1 − S)/N)
In this formula, S is the portion of the task that must be performed serially, and (1 − S) is the portion that can be distributed over N processors. Speedup indicates how much faster the task runs on these N processors compared to running it serially on one.
Suppose that 50% of a task can be divided equally among ten threads, each running on a different core. A) What will be the speedup of this multithreading system compared to running the whole task as a single thread? B) What will be the speedup in part (A) if we could send 90% of the job to the ten threads?
A) S = 0.5, N = 10: speedup = 1 / (0.5 + 0.5/10) = 1 / 0.55 ≈ 1.82.
B) S = 0.1, N = 10: speedup = 1 / (0.1 + 0.9/10) = 1 / 0.19 ≈ 5.26.
What is the upper bound in Amdahl's law?
As N approaches infinity, (1 − S)/N approaches zero, so the speedup approaches 1/S. The serial portion therefore sets an upper bound on the achievable speedup; for example, if S = 0.5, no number of processors can deliver more than a 2× speedup.
In the context of "Amdahl's law," what is the meaning of "diminishing returns?"
As we add more processors, each additional processor contributes less to the overall speedup, because the serial portion S increasingly dominates the running time. Beyond a certain point, adding processors yields almost no further gain.
What are the three popular user-level thread libraries?
POSIX pthreads, Windows threads, and Java threads
What is the relationship between user threads and kernel threads?
User threads run within a user process. Kernel threads are used to provide privileged services to processes (such as system calls). The kernel also uses them to track what is running on the system, how much of each resource is allocated to which process, and how to schedule them. Hence, we do not need a one-to-one relationship between user threads and kernel threads.
A) In the relationship between the user and kernel threads, what is the “many-
to-one model?” B) What is the shortcoming of this model?
A) Before threads became popular, OS kernels only knew the concept of processes. The OS treated each process as a separate entity: each process was assigned a working space and could issue system calls to ask for services. Threading in user space was not handled by the OS. With user-mode threading, support for threads was provided by a programming library, and the thread scheduler was a subroutine in the user program itself. The operating system saw only the process, and the process scheduled its threads by itself. B) If one of the user threads blocked in the kernel, for example on a blocking system call or a page fault, all the other threads of the process were blocked as well.
What is the “one-to-one” threading model? What are its advantages and
shortcomings?
Each user thread is assigned a kernel thread. Hence, we can achieve more concurrency, and other threads can proceed while one thread is blocked. The disadvantage appears when there are too many user threads: creating a kernel thread for each one can burden the operating system's performance.
What is a “many-to-many” multithreading model?
The OS decides the number of kernel threads, and the user process determines the number of user threads. A process running on an eight-core processor would get more kernel threads than one running on a quad-core processor. This model suffers from neither of the shortcomings of the other two models.