ch4 - basic Flashcards

(43 cards)

1
Q

True or False: threads run within applications.

A

True

2
Q

Is thread creation heavyweight or lightweight?

A

lightweight

while process creation is heavyweight

3
Q

draw a single threaded process

A

+-----------+
|   code    |
+-----------+
|   data    |
+-----------+
|   files   |
+-----------+
| registers |
+-----------+
|    PC     |
+-----------+
|   stack   |
+-----------+
 thread → ~~~

4
Q

draw a multithreaded process

A

+-----------------------------------+
|               code                |
+-----------------------------------+
|               data                |
+-----------------------------------+
|               files               |
+-----------------------------------+
| registers | registers | registers |
+-----------+-----------+-----------+
|   stack   |   stack   |   stack   |
+-----------+-----------+-----------+
|    PC     |    PC     |    PC     |
+-----------+-----------+-----------+
  thread ~~~  thread ~~~  thread ~~~

5
Q

What is the basic flow of a multithreaded server when handling a client request?

A

1. Client sends a request to the server.

2. Server creates a new thread to handle the request.

3. Server immediately resumes listening for additional client requests.
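The three steps above can be sketched as a minimal thread-per-request echo server. This is an illustrative sketch, not a production server: the names (`handle_client`, `serve_forever`) and the toy "request handling" (uppercasing the bytes) are assumptions for the example.

```python
import socket
import threading

def handle_client(conn):
    """Worker thread: service one client request, then exit."""
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())  # toy stand-in for real request handling

def serve_forever(server):
    """Main server loop: accept, hand off to a new thread, keep listening."""
    while True:
        conn, _addr = server.accept()
        # Step 2: create a new thread to handle this request
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
        # Step 3: loop back immediately to accept more clients

server = socket.socket()
server.bind(("127.0.0.1", 0))       # port 0 = let the OS pick a free port
server.listen()
port = server.getsockname()[1]
threading.Thread(target=serve_forever, args=(server,), daemon=True).start()

# Step 1: a client sends a request
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"ping")
    reply = c.recv(1024)
print(reply)  # b'PING'
```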

6
Q

what are the benefits of Multithreaded Server Architecture?

A

1. Responsiveness – may allow continued execution if part of the process is blocked; especially important for user interfaces
2. Resource Sharing – threads share the resources of their process, which is easier than shared memory or message passing
3. Economy – cheaper than process creation; thread switching has lower overhead than context switching
4. Scalability – a process can take advantage of multicore architectures

7
Q

What is the “dividing activities” challenge in multicore systems?

A

Breaking a program into independent tasks that can run in parallel without interfering with each other.

8
Q

What is the “balance” challenge in multicore systems?

A

Ensuring that all processing cores are kept busy and no core is overloaded or idle too long.

9
Q

What is the “data splitting” challenge in multicore systems?

A

Dividing data into parts so that different threads can work on them independently without conflicts.

10
Q

What is the “data dependency” challenge in multicore systems?

A

Managing situations where tasks depend on each other’s results, which can cause delays or require careful synchronization.

11
Q

What is the “testing and debugging” challenge in multicore systems?

A

It’s harder to test and debug parallel programs because bugs may only appear under specific, rare timing conditions (heisenbugs).

12
Q

What does parallelism mean in operating systems?

A

Parallelism means the system can truly perform more than one task at the exact same time.

13
Q

What does concurrency mean in operating systems?

A

Concurrency means the system supports making progress on multiple tasks at once, even if not truly executing them simultaneously.

14
Q

Can a single processor provide concurrency? How?

A

Yes, by using a scheduler that rapidly switches between tasks to give the illusion that they are running at the same time.

15
Q

Can a single processor provide parallelism? How?

A

No, it can’t. True parallelism requires multiple processing units; a single processor can only interleave tasks, which is concurrency.

16
Q

What’s the difference between concurrency and parallelism?

A

Concurrency = making progress on multiple tasks (may or may not be truly simultaneous).

Parallelism = multiple tasks executing exactly at the same time.

17
Q

Draw concurrent execution on single-core system

A

|T1|T2|T3|T4|…|T2|

18
Q

Draw parallelism on a multi-core system

A

core1:|T1|T3|T1|T3|…|T1|

core2:|T2|T4|T2|T4|…|T2|

19
Q

how many types of parallelism?

A

Two – data parallelism and task parallelism

20
Q

What are the types of parallelism?

A

Data parallelism – distributes subsets of the same data across multiple cores, same operation on each
Task parallelism – distributing threads across cores, each thread performing unique operation

21
Q

What is Data parallelism?

A

Data parallelism involves splitting a large dataset into smaller chunks, which are processed simultaneously across multiple processors. Each processor works on a part of the data using the same operation, often seen in tasks like vector or matrix computations.
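A minimal sketch of data parallelism: the same operation (`sum`) is applied to different chunks of one dataset, one worker per chunk. With CPython's GIL, threads illustrate the structure rather than true CPU parallelism; a process pool would give real parallelism for CPU-bound work. The chunk size and worker count are arbitrary choices for the example.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 101))                        # the full dataset
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))    # same operation on each chunk

total = sum(partial_sums)                         # combine the partial results
print(total)  # 5050
```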

22
Q

What is task parallelism?

A

Task parallelism divides a program into distinct tasks, where each task performs a different operation on the data. These tasks run concurrently, and each task may have its own specific computation.
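By contrast, a task-parallelism sketch runs *different* operations concurrently, each in its own thread, over the same data. The task names (`compute_sum`, etc.) are illustrative assumptions.

```python
import threading

data = list(range(1, 11))
results = {}

def compute_sum():
    results["sum"] = sum(data)      # task 1: one operation

def compute_max():
    results["max"] = max(data)      # task 2: a different operation

def compute_mean():
    results["mean"] = sum(data) / len(data)  # task 3: yet another

tasks = [threading.Thread(target=f)
         for f in (compute_sum, compute_max, compute_mean)]
for t in tasks:
    t.start()                       # all three tasks run concurrently
for t in tasks:
    t.join()

print(results["sum"], results["max"], results["mean"])  # 55 10 5.5
```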

23
Q

compare data parallelism and task parallelism

A

Both data parallelism and task parallelism involve executing multiple operations simultaneously to improve performance.

Both techniques aim to optimize computational efficiency by leveraging multiple processors or cores.

24
Q

contrast data parallelism and task parallelism

A

Data Parallelism: Focuses on splitting large datasets into smaller chunks and performing the same operation on each chunk.

Task Parallelism: Divides a program into distinct tasks, each performing different operations, and runs them concurrently.

Data Parallelism: Typically requires less coordination between tasks as they perform the same operation on different data.

Task Parallelism: Often involves more complex communication and synchronization between tasks, as they may depend on one another.

25
Q

what is the purpose of Amdahl’s Law?

A

It identifies the performance gains from adding additional cores to an application that has both serial and parallel components.
26
Q

what is Amdahl’s Law equation?

A

speedup <= 1 / (S + (1 - S)/N)

where S is the serial portion and N is the number of processing cores.

Example: if an application is 75% parallel / 25% serial, moving from 1 to 2 cores results in a speedup of 1.6×.
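The equation and its example can be checked numerically; the function name `amdahl_speedup` is just an illustrative choice.

```python
def amdahl_speedup(serial_fraction, cores):
    """Amdahl's Law: speedup <= 1 / (S + (1 - S) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# 75% parallel / 25% serial, moving from 1 to 2 cores:
print(amdahl_speedup(0.25, 2))                 # 1.6
# As N grows very large, speedup approaches 1/S = 4:
print(round(amdahl_speedup(0.25, 10**6), 3))   # 4.0
```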
27
Q

What does the speedup approach as N approaches infinity?

A

1/S
28
Q

True or False? The serial portion of an application has a disproportionate effect on the performance gained by adding additional cores

A

True. The serial portion is the part of the program that cannot be parallelized and must be executed by a single core.
29
Q

Write the names of the multithreading models

A

Many-to-One
One-to-One
Many-to-Many
30
Q

Explain the Many-to-One multithreading model

A

Many user-level threads are mapped to a single kernel thread.
One thread blocking causes all to block.
Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time.
31
Q

Explain the One-to-One multithreading model

A

Each user-level thread maps to a kernel thread.
Creating a user-level thread creates a kernel thread.
More concurrency than many-to-one.
The number of threads per process is sometimes restricted due to overhead.
32
Q

Explain the Many-to-Many multithreading model

A

Allows many user-level threads to be mapped to many kernel threads.
Allows the operating system to create a sufficient number of kernel threads.
33
Q

Explain the key difference between the many-to-one and one-to-one multithreading models.

A

In many-to-one, the OS only sees a single thread, so there is no true parallelism on multicore systems; in one-to-one, multiple threads can run truly in parallel because each has a separate kernel thread.

Footnote: Many-to-One maps many user-level threads to one kernel thread. One-to-One maps each user thread to its own kernel thread.
34
Q

In the many-to-one threading model, why can a blocking system call in one thread block the entire process? How does this compare to the one-to-one model?

A

In many-to-one, all user threads are tied to one kernel thread. If one thread makes a blocking system call (e.g., waiting for I/O), the single kernel thread is blocked, causing all user threads to stop until the system call finishes. In one-to-one, each user thread has its own kernel thread, so if one thread blocks, the other threads can continue execution independently.
35
Q

Describe how the many-to-many multithreading model improves upon the limitations of both the many-to-one and one-to-one models

A

Many-to-many flexibly maps many user threads to a smaller or equal number of kernel threads. It avoids the bottleneck of many-to-one (where a blocking call stops everything) and the heavy overhead of one-to-one (where each thread demands a kernel thread).

Advantage: it lets multiple threads execute concurrently without flooding the system with too many kernel threads.
36
Q

What is Implicit Threading?

A

Creation and management of threads done by compilers and run-time libraries rather than by programmers.
37
Q

Explain Thread Pools

A

Create a number of threads in a pool where they await work.

Advantages:
Usually slightly faster to service a request with an existing thread than to create a new thread.
Allows the number of threads in the application(s) to be bound to the size of the pool.
Separating the task to be performed from the mechanics of creating the task allows different strategies for running the task.
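A minimal thread-pool sketch using Python's standard library: a fixed pool of worker threads services submitted tasks, so no per-request thread creation is needed and the thread count is bounded by the pool size. The task function `handle_request` is an illustrative stand-in.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Stand-in for servicing one request."""
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:   # pool of 4 waiting threads
    # Submitting work hands tasks to existing threads in the pool;
    # no thread is created per request.
    futures = [pool.submit(handle_request, n) for n in range(8)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```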
38
Q

Explain Fork-Join

A

Divide the work into subtasks, then build the result from the results of the subtasks – the same idea as divide and conquer.
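A fork-join sketch over plain threads: each call forks a new thread for the left half of the problem, computes the right half itself, then joins and combines the results. The function name and the cutoff of 16 are arbitrary choices for the example.

```python
import threading

def fork_join_sum(xs):
    """Divide-and-conquer sum: fork a thread for the left half,
    compute the right half in the current thread, then join."""
    if len(xs) <= 16:                  # small enough: solve directly
        return sum(xs)
    mid = len(xs) // 2
    left_result = []
    worker = threading.Thread(
        target=lambda: left_result.append(fork_join_sum(xs[:mid])))
    worker.start()                     # fork the left subtask
    right = fork_join_sum(xs[mid:])    # solve the right subtask here
    worker.join()                      # join: wait for the forked subtask
    return left_result[0] + right      # combine subtask results

print(fork_join_sum(list(range(1, 101))))  # 5050
```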
39
Q

Mention the names of threading issues

A

Semantics of fork() and exec() system calls
Signal handling (synchronous and asynchronous)
Thread cancellation of target thread (asynchronous or deferred)
Thread-local storage
Scheduler activations
40
Q

Why is signal handling a threading issue?

A

In multithreaded programs, who handles the signal? All threads? One thread? Which one? Signals have to be managed carefully across threads.
41
Q

Why is thread cancellation a threading issue?

A

If you cancel a thread, it could be holding locks, be halfway through an operation, or be writing to a file. Cancelling threads can cause resource leaks, deadlocks, and data corruption if not handled properly.
42
Q

Why is thread-local storage a threading issue?

A

Thread-local storage lets each thread have its own private copy of data. If it isn’t used properly, threads might corrupt shared data or accidentally overwrite each other’s state.
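A sketch using Python's `threading.local`: each thread sees its own private copy of `ctx.value`, so concurrent threads cannot overwrite each other's state. The names `ctx`, `worker`, and `observed` are illustrative.

```python
import threading

ctx = threading.local()            # per-thread storage
observed = {}

def worker(name):
    ctx.value = name               # private to this thread
    observed[name] = ctx.value     # each thread reads back only its own value

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B", "C")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# No thread saw another thread's value:
print(sorted(observed.items()))  # [('A', 'A'), ('B', 'B'), ('C', 'C')]
```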
43
Q

Why are scheduler activations a threading issue?

A

How do we schedule threads when the kernel doesn’t even know all the user threads exist? Scheduler activations use upcalls so the kernel can communicate with the user-level thread library and keep an appropriate number of kernel threads allocated.