10 - Concurrency / Parallelism Flashcards
1
Q
What is Parallelism and what are its main purposes?
A
- Parallelism involves doing many tasks simultaneously instead of sequentially.
- Its main purposes are to improve throughput and decrease the runtime of programs by utilizing multiple cores or computers.
2
Q
What are the different types of Parallelism?
A
- Data Parallelism: Performing the same computation across many data elements.
- Task Parallelism: Different computations/programs running at the same time.
- Pipeline Parallelism: Similar to an assembly line process.
3
Q
What is Concurrency and what is its purpose?
A
- Concurrency is the execution of two or more tasks overlapping in time.
- Its purpose is to decrease response time by switching between different threads of control and working on one thread when it can make useful progress.
4
Q
How is concurrency relevant in the real world and in computing?
A
- In the real world, different activities often proceed simultaneously and are inter-related.
- In computing, concurrency is evident with multiple resources like processors, disks, network interfaces, and multiple CPUs/Cores handling multiple users/applications simultaneously.
5
Q
Why is concurrency/parallelism important for programmers?
A
- Network services must handle multiple requests from clients simultaneously.
- Modern applications often have user interfaces and execute application logic simultaneously.
- Parallel programs need to efficiently map work onto multiple processors.
- There is a need to mask the latency of disk and network operations.
6
Q
What are the different levels of concurrency?
A
- Instruction Level: Execution of two or more machine instructions simultaneously.
- Statement Level: Execution of two or more statements at the same time.
- Unit Level: Execution of two or more subprogram units simultaneously.
- Program Level: Execution of two or more programs simultaneously.
7
Q
Can there be concurrency without parallelism and parallelism without concurrency?
A
- Yes to both.
- Concurrency without parallelism: a single processor switches between tasks, scheduling another task whenever the current one is waiting on a subsystem's response (e.g., disk or network).
- Parallelism without concurrency: code executes simultaneously across multiple CPUs or cores, multiple machines, or alongside GPU and I/O processors.
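Concurrency without parallelism can be sketched with Python's asyncio, where a single thread's event loop switches between tasks that are waiting (the delays and task names here are illustrative assumptions):

```python
# Sketch of concurrency without parallelism: one thread, one event
# loop, switching between tasks while each waits on a (simulated)
# subsystem such as a disk or the network.
import asyncio

async def query(name, delay):
    await asyncio.sleep(delay)   # task yields control while "waiting"
    return name

async def main():
    # Both waits overlap on one thread: roughly 0.2 s total, not 0.4 s.
    return await asyncio.gather(query("disk", 0.2), query("network", 0.2))

results = asyncio.run(main())
print(results)  # ['disk', 'network']
```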
8
Q
How do single-threading and multithreading differ in processes and threads?
A
- Single Threading (Uniprogrammed): a single CPU, or multiple CPUs, executes only one thread at a time.
- Multithreading: allows simultaneous execution of multiple threads, on either a single-CPU or multi-CPU system.
- Examples include MS-DOS (single-threaded, uniprogrammed) and Linux/Windows (multithreaded, multiprogrammed).
9
Q
What are the key concepts in scheduling within operating systems?
A
- Scheduling uses queues to track tasks/processes:
  - Job Queue: processes awaiting admission.
  - Ready Queue: processes in main memory, ready to execute.
  - Wait Queue(s): processes waiting for an I/O device or for other processes.
- The job scheduler selects processes to place onto the ready queue.
- The CPU scheduler selects the next process to execute and allocates the CPU.
- Desired guarantees:
  - Fairness: every process gets its fair share.
  - Efficiency: keep the CPU 100% busy.
  - Response time: minimize it for interactive processes.
  - Turnaround time: minimize it for batch processes.
  - Throughput: maximize the number of jobs completed per unit time.
10
Q
What is the difference between preemptive and non-preemptive scheduling?
A
- Non-preemptive scheduling lets a process run until it blocks or exits. It is simple to implement but open to denial of service, since a malicious or buggy process can refuse to yield.
- Preemptive scheduling, used in modern OSes, solves this by allowing the OS to preempt long-running processes. However, it is more complex due to timer management and concurrency issues.
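A toy simulation can make the contrast concrete. This is a sketch under simplified assumptions: all jobs arrive at t=0, context switches are free, and the burst times are invented for illustration.

```python
# Toy comparison of non-preemptive and preemptive scheduling.
# "Turnaround" here = completion time measured from t=0.

def fcfs(bursts):
    # Non-preemptive: each process runs to completion in arrival order.
    t, turnaround = 0, []
    for b in bursts:
        t += b
        turnaround.append(t)
    return turnaround

def round_robin(bursts, quantum):
    # Preemptive: the scheduler preempts a process after each quantum.
    t = 0
    remaining = list(bursts)
    done = [0] * len(bursts)
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                run = min(quantum, r)
                t += run
                remaining[i] -= run
                if remaining[i] == 0:
                    done[i] = t
    return done

bursts = [8, 2, 2]  # one long job ahead of two short ones
print(fcfs(bursts))            # [8, 10, 12]
print(round_robin(bursts, 1))  # [12, 5, 6]
```

With one long job ahead of two short ones, preemption lets the short jobs finish at t=5 and t=6 instead of t=10 and t=12, at the cost of delaying the long job.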
11
Q
What are some examples of scheduling algorithms?
A
- Round Robin
- Priority (static and dynamic)
- Shortest job first
- Guaranteed Scheduling
- Earliest deadline first
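As one example from the list, shortest job first can be sketched in a few lines (the non-preemptive variant; burst times are invented, and all jobs are assumed to arrive at t=0):

```python
# Sketch of shortest-job-first: run the shortest job next, which
# minimizes average turnaround among non-preemptive policies.

def sjf(bursts):
    # Order jobs by burst time, then run each to completion.
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    t = 0
    turnaround = [0] * len(bursts)
    for i in order:
        t += bursts[i]
        turnaround[i] = t
    return turnaround

bursts = [8, 2, 4]
print(sjf(bursts))  # [14, 2, 6] -- the short jobs complete first
```

Round robin differs by preempting after a fixed quantum, and priority scheduling by ordering on an assigned priority rather than burst length.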