Week 5 - Operating System Fundamentals (Part 4) Flashcards

Process Scheduling (68 cards)

1
Q

What is the objective of multi-programming in an operating system?

A

The objective of multi-programming is to have some process running at all times, maximizing CPU utilization. This ensures that while one process is waiting (e.g., for I/O), another process can use the CPU.

2
Q

What is the main purpose of CPU scheduling in an operating system?

A

The main purpose of CPU scheduling is to decide which process runs next: the scheduler selects a process from the Ready queue for execution on a processor.

3
Q

What is the objective of multi-programming in an operating system?

A

The objective of multi-programming is to have some process running at all times to maximize CPU utilisation.

4
Q

What happens during a scheduling decision in an operating system?

A

A scheduling decision takes place when an event interrupts the execution of the current process; the scheduler then selects the next process from the Ready queue for execution.

5
Q

What are the possible events that trigger a scheduling decision?

A
  • Clock Interrupts (process running → ready state)
  • I/O Interrupts (process waiting → ready state)
  • Operating System Calls (e.g., read, write; process running → waiting state)
  • Signals (e.g., semaphores)
6
Q

What is the purpose of the Ready queue in the context of scheduling?

A

The Ready queue holds processes that are ready to execute but are waiting for CPU time. The scheduler selects a process from this queue to run on the CPU.

7
Q

What is the characteristic of process execution in batch processing?

A

In batch processing, processes are typically CPU-bound, meaning they spend more time performing computations than waiting for I/O operations.

8
Q

What is the characteristic of process execution in interactive systems?

A

In interactive systems, processes are typically I/O-bound, meaning they spend more time waiting for I/O operations than performing computations.

9
Q

What are the two main phases in the CPU-I/O burst cycle of a process?

A

The two main phases in the CPU-I/O burst cycle are:

  1. CPU Burst – Process executes instructions using the CPU.
  2. I/O Burst – Process waits for I/O operations to complete.
10
Q

How does the scheduler use the CPU-I/O burst cycle?

A

The scheduler can schedule another process to execute during the I/O burst of a process, optimizing CPU usage while the process is waiting for I/O operations to complete.

11
Q

What happens at the end of the final CPU burst in a process?

A

At the end of the final CPU burst, the process typically makes a system request to terminate execution.

12
Q

What does the histogram of CPU bursts typically show about most processes?

A

The histogram shows that most processes have many very short CPU bursts, especially in interactive systems.

13
Q

What type of systems are characterized by many short CPU bursts?

A

Interactive systems are characterized by many short CPU bursts as processes frequently alternate between CPU and I/O operations.

14
Q

When the CPU becomes idle, what must the OS do?

A

When the CPU becomes idle, the OS must select one of the processes in the ready queue to be executed next.

15
Q

Is the ready queue necessarily a first-in, first-out (FIFO) queue?

A

No, the ready queue may not be FIFO. It could be a priority queue, tree, or an unordered linked list.

16
Q

Who is responsible for selecting a process to execute when the CPU becomes idle?

A

The short-term scheduler is responsible for selecting the process to execute from the ready queue when the CPU becomes idle.

17
Q

Conceptually, what are all processes waiting for?

A

Conceptually, all processes are waiting for a chance to run on the CPU.

18
Q

What is the role of the dispatcher in an operating system?

A

The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. It involves switching context, switching to user mode, and jumping to the proper location in the user program to restart it.

19
Q

What are the tasks involved when the dispatcher gives control to the next process?

A

The dispatcher performs three tasks:

  1. Switching context
  2. Switching to user mode
  3. Jumping to the proper location in the user program to restart it
20
Q

Why should the dispatcher be as fast as possible?

A

The dispatcher should be as fast as possible because it is invoked during every process switch, and the time it takes to stop one process and start another is called dispatch latency.

21
Q

What is non-preemptive scheduling in operating systems?

A

In non-preemptive scheduling, once a process is scheduled, it continues to execute on the CPU until:

  1. It finishes (terminates).
  2. It releases the CPU voluntarily (cooperative scheduling).
  3. It blocks due to an event such as an I/O interrupt or waiting for another process.
22
Q

What events cause a process to release the CPU in non-preemptive scheduling?

A

In non-preemptive scheduling, a process may release the CPU if:

  1. It finishes (terminates).
  2. It voluntarily releases the CPU (cooperative scheduling).
  3. It blocks due to events like I/O interrupts or waiting for another process.
23
Q

What is preemptive scheduling in operating systems?

A

In preemptive scheduling, the operating system interrupts processes to regain control of the CPU. A process executes until its time slice expires or a higher-priority process becomes ready. The current process is then suspended and placed back in the Ready queue, while a new process is selected for execution.

24
Q

What happens when a time slice expires in preemptive scheduling?

A

When a time slice expires in preemptive scheduling:

  1. A clock interrupt returns control of the CPU to the scheduler.
  2. The current process is suspended and placed in the Ready queue.
  3. A new process is selected from the Ready queue and executed on the CPU.
25
What happens if a higher-priority process becomes ready in preemptive scheduling?
In preemptive scheduling, if a process with higher priority becomes ready, the operating system interrupts the current process and places it in the Ready queue. The higher-priority process is then selected for execution.
26
What is CPU utilisation in CPU scheduling criteria?
CPU utilisation refers to keeping the CPU as busy as possible. In practice it should range from about 40% (for a lightly loaded system) to 90% (for a heavily loaded system), so the CPU is used efficiently without idling too much.
27
What is throughput in the context of CPU scheduling?
Throughput refers to the number of processes that complete their execution per time unit. It's a measure of how many tasks are finished in a given amount of time.
28
What does turnaround time represent in CPU scheduling?
Turnaround time is the total time from the submission of a process to its completion. It includes time spent waiting in the Ready queue, executing on the CPU, and waiting for I/O.
29
What is waiting time in CPU scheduling?
Waiting time is the total amount of time a process spends in the Ready queue, summed over every period it waits there before and between its turns on the CPU.
30
What does response time measure in CPU scheduling?
Response time is the time it takes from when a request is submitted until the first response is produced by the operating system. It represents the system's latency.
31
What does the operating system aim to maximize in CPU scheduling?
The operating system aims to maximize:

  • CPU Utilization – Keep the CPU as busy as possible.
  • Throughput – The number of processes that complete their execution per time unit.
32
What do users aim to minimize in CPU scheduling?
Users aim to minimize:

  • Turnaround Time – The total time from submission of a process to its completion.
  • Waiting Time – The total amount of time a process has spent waiting in the Ready queue.
  • Response Time – The time it takes for the first response to be produced after a request is submitted.
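A small illustration (hypothetical process data) of how the first two of these are derived once a process's completion time is known: turnaround time is completion minus arrival, and waiting time is turnaround minus the CPU burst.

```python
# Hypothetical data: three processes that all arrived at time 0 and have
# already finished. Turnaround and waiting time follow directly.
processes = [
    # (name, arrival_time, cpu_burst, completion_time)
    ("P1", 0, 24, 24),
    ("P2", 0, 3, 27),
    ("P3", 0, 3, 30),
]

for name, arrival, burst, completion in processes:
    turnaround = completion - arrival   # submission to completion
    waiting = turnaround - burst        # time spent in the Ready queue
    print(name, "turnaround =", turnaround, "waiting =", waiting)
```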
33
What is the simplest CPU-scheduling algorithm?
First Come, First Served (FCFS) is the simplest CPU-scheduling algorithm. The process that requests the CPU first is allocated the CPU first.
34
How are processes managed in the FCFS scheduling algorithm?
In FCFS, processes are managed using a FIFO (First In, First Out) queue.

  • Processes enter the ready queue, and the PCB is linked to the tail of the queue.
  • The CPU is allocated to the process at the head of the queue when the CPU is free.
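A minimal FCFS sketch, assuming all processes arrive at time 0 and using hypothetical burst times; the FIFO queue order alone determines who runs next.

```python
# Minimal FCFS sketch: the process at the head of the FIFO queue runs to
# completion before the next one starts. Burst times are hypothetical.
from collections import deque

ready_queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])  # (name, CPU burst)

clock = 0
waits = {}
while ready_queue:
    name, burst = ready_queue.popleft()  # head of the queue gets the CPU
    waits[name] = clock                  # time this process spent waiting
    clock += burst                       # run the whole burst (non-preemptive)

print(waits)                             # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waits.values()) / len(waits))  # average waiting time = 17.0
```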
35
What is the main idea behind Shortest-Job First (SJF) Scheduling?
SJF associates each process with the length of its next CPU burst. The CPU is assigned to the process with the shortest next CPU burst when it becomes available.
36
How does SJF handle ties between processes with the same CPU burst?
If two processes have the same next CPU burst, First Come, First Served (FCFS) is used to break the tie.
37
Why is SJF sometimes called the "shortest-next-CPU-burst algorithm"?
SJF is better described as the shortest-next-CPU-burst algorithm because the scheduling depends on the length of the next CPU burst of a process, rather than the total length of the process.
38
Why is Shortest-Job First (SJF) Scheduling considered optimal?
SJF is provably optimal because it minimizes the average waiting time for a given set of processes.
39
How does SJF reduce the average waiting time?
By moving a short process before a long one, SJF decreases the waiting time of the short process more than it increases the waiting time of the long process, resulting in a lower average waiting time.
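A small sketch making the same point with hypothetical burst times (all processes arriving at time 0): the same set of bursts gives a lower average waiting time when served shortest-first.

```python
# Average waiting time for a given service order, assuming all processes
# arrive at time 0. Burst times are hypothetical.
def average_waiting_time(bursts):
    clock, total_wait = 0, 0
    for burst in bursts:
        total_wait += clock   # this process waited while the earlier ones ran
        clock += burst
    return total_wait / len(bursts)

bursts = [6, 8, 7, 3]
print(average_waiting_time(bursts))          # arrival order: (0+6+14+21)/4 = 10.25
print(average_waiting_time(sorted(bursts)))  # shortest-first: (0+3+9+16)/4 = 7.0
```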
40
What is the main difficulty of implementing Shortest-Job First (SJF) Scheduling?
The main difficulty is determining the length of the next CPU burst, which is not known in advance.
41
Why is it difficult to implement SJF at the short-term CPU scheduling level?
It is difficult because there is no way to know the length of the next CPU burst, and asking the user for this information is impractical.
42
What is one possible solution to the difficulty of determining the next CPU burst in SJF?
One solution is approximation: the length of the next CPU burst is predicted from the lengths of the process's previous CPU bursts.
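One widely used approximation (assumed here; the card does not name a specific method) is exponential averaging: each prediction mixes the most recent measured burst with the previous prediction. A minimal sketch with hypothetical values:

```python
# Exponential averaging for predicting the next CPU burst:
#   prediction = alpha * last_measured_burst + (1 - alpha) * previous_prediction
# alpha, the initial guess, and the measured bursts are hypothetical values.
def predict_next_burst(measured_bursts, alpha=0.5, initial_guess=10):
    prediction = initial_guess
    for burst in measured_bursts:
        prediction = alpha * burst + (1 - alpha) * prediction
    return prediction

print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))  # 12.0: the estimate tracks the recent longer bursts
```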
43
What does Shortest Remaining Time First (SRTF) scheduling refer to?
SRTF is the preemptive version of Shortest Job First (SJF), where a newly arrived process with a shorter remaining CPU burst time can preempt the currently executing process.
44
What happens in Nonpreemptive SJF when a new process arrives?
In Nonpreemptive SJF, the newly arrived process is not allowed to preempt the current process. The current process is allowed to finish its CPU burst before the new process gets executed.
45
How does Preemptive SJF handle a new process arrival?
In Preemptive SJF, if the newly arrived process has a shorter CPU burst than the remaining time of the currently executing process, it preempts the current process and begins execution.
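A compact SRTF sketch (hypothetical arrival and burst times, whole time units): at every tick the arrived process with the least remaining work gets the CPU, so a short new arrival preempts a longer running process.

```python
# Minimal SRTF (preemptive SJF) sketch. Arrival and burst times are hypothetical.
def srtf(processes):                       # processes: {name: (arrival, burst)}
    remaining = {name: burst for name, (_, burst) in processes.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if processes[n][0] <= clock]
        if not ready:                      # CPU idle until the next arrival
            clock += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[current] -= 1            # run for one time unit
        clock += 1
        if remaining[current] == 0:
            finish[current] = clock
            del remaining[current]
    return finish

print(srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}))
# {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}: P2 preempts P1 at time 1
```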
46
What is Priority Scheduling in CPU scheduling?
Priority Scheduling is a scheduling algorithm where each process is assigned a priority, and the CPU is allocated to the process with the highest priority. If multiple processes have the same priority, they are scheduled using FCFS (First-Come, First-Served).
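A minimal sketch of the selection rule, assuming a larger number means higher priority and using hypothetical processes; ties on priority fall back to arrival order (FCFS).

```python
# Pick the next process: highest priority first, FCFS among equal priorities.
# A larger number means higher priority here; the data is hypothetical.
ready_queue = [
    # (arrival_order, name, priority)
    (0, "P1", 1),
    (1, "P2", 3),
    (2, "P3", 3),
]

chosen = max(ready_queue, key=lambda p: (p[2], -p[0]))  # priority, then earliest arrival
print(chosen)  # (1, 'P2', 3): P2 and P3 tie on priority, but P2 arrived first
```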
47
How is Shortest Job First (SJF) related to Priority Scheduling?
SJF is a special case of the Priority Scheduling algorithm, where processes with shorter burst times are given higher priority.
48
What are internal criteria for defining priorities in Priority Scheduling?
Internal criteria for defining priorities include measurable quantities such as time limits, memory requirements, and the number of open files.
49
What are external criteria for defining priorities in Priority Scheduling?
External criteria for defining priorities involve factors outside the OS, such as process importance, funds paid for computation, the sponsoring department, or even political factors.
50
What is the difference between preemptive and non-preemptive priority scheduling in terms of CPU allocation?
  • Preemptive priority scheduling will preempt the CPU if a new process with a higher priority arrives.
  • Non-preemptive priority scheduling will not preempt the CPU; instead, it places the new process at the head of the ready queue.
51
In preemptive priority scheduling, what happens when a higher priority process arrives?
In preemptive priority scheduling, if a new process with a higher priority arrives, it preempts the CPU from the currently running process.
52
What is indefinite blocking or starvation in priority scheduling?
Indefinite blocking (or starvation) occurs when low-priority processes are continually blocked because higher priority processes keep arriving, preventing the low-priority processes from getting the CPU.
53
How can indefinite blocking affect a system?
In a heavily loaded system with a steady stream of high-priority processes, low-priority processes may never get the CPU; they remain unexecuted indefinitely or are lost if the system is eventually shut down or crashes.
54
What is the solution to indefinite blocking or starvation in priority scheduling?
The solution is aging: the importance of a process increases with time (e.g., its priority increases by one point every 15 minutes), ensuring that low-priority processes eventually run.
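A minimal sketch of aging, assuming a larger number means higher priority and using the 15-minute interval from the example above; every interval spent waiting in the Ready queue raises a process's priority by one.

```python
# Aging sketch: priorities grow with waiting time so starved processes
# eventually win. The processes, priorities, and waiting times are hypothetical.
def apply_aging(priorities, waited_seconds, interval=15 * 60):
    # priorities: {name: priority}; waited_seconds: {name: time spent waiting}
    return {name: priority + waited_seconds[name] // interval
            for name, priority in priorities.items()}

queue = {"batch_job": 1, "editor": 5}
print(apply_aging(queue, {"batch_job": 4 * 3600, "editor": 0}))
# {'batch_job': 17, 'editor': 5}: after 4 hours of waiting the batch job outranks the editor
```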
55
What is Round-Robin (RR) Scheduling designed for?
Round-Robin (RR) scheduling is designed for time-sharing systems, where multiple processes are given a small unit of time (time quantum) to execute on the CPU.
56
How does Round-Robin (RR) Scheduling work?
In Round-Robin scheduling, the ready queue is treated as a circular queue: the scheduler goes around it, allocating the CPU to each process for up to one time quantum (typically 10 to 100 milliseconds), so every process receives a roughly equal share of CPU time.
57
How is time quantum used in Round-Robin scheduling?
Time quantum (or time slice) in Round-Robin scheduling is a small, fixed amount of time (10 to 100 milliseconds). Each process is allowed to execute for at most this amount of time, after which the CPU is reassigned to the next process in the queue.
58
How is the ready queue treated in Round-Robin (RR) scheduling?
The ready queue is treated as a FIFO (First-In, First-Out) queue of processes.
59
What happens when a process's CPU burst is less than one time quantum in RR scheduling?
The process releases the CPU voluntarily after completing its CPU burst.
60
What happens if a process's CPU burst is greater than one time quantum in RR scheduling?
The timer interrupts, causing a context switch, and the process is moved to the tail of the ready queue.
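A minimal Round-Robin sketch tying the last few cards together (hypothetical burst times, all processes arriving at time 0): each process runs for at most one quantum, then either finishes or is sent to the tail of the FIFO ready queue.

```python
# Minimal Round-Robin sketch. Burst times and the quantum are hypothetical.
from collections import deque

def round_robin(bursts, quantum):          # bursts: {name: CPU burst}
    ready_queue = deque(bursts.items())
    clock, finish = 0, {}
    while ready_queue:
        name, remaining = ready_queue.popleft()
        run = min(quantum, remaining)      # a burst shorter than the quantum releases the CPU early
        clock += run
        if remaining > run:
            ready_queue.append((name, remaining - run))  # preempted by the timer
        else:
            finish[name] = clock
    return finish

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))  # {'P2': 7, 'P3': 10, 'P1': 30}
```

Running the same sketch with a very large quantum (e.g. 25) reproduces the FCFS completion order, while a quantum of 1 forces far more requeues (context switches), which is the trade-off the following cards describe.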
61
In Round-Robin (RR) scheduling, if there are n processes and time quantum q, how much CPU time does each process get?
Each process gets 1/n of the CPU time in chunks of at most q time units.
62
In RR scheduling, how long does a process wait before its next turn at the CPU?
At most (n - 1) × q time units.
63
Example: With 5 processes and q = 20 ms, how often does each process get CPU time?
Each process gets up to 20 milliseconds every 100 milliseconds.
64
How does the size of the time quantum affect Round-Robin (RR) scheduling?
  • If extremely large, RR behaves like FCFS (First Come, First Served).
  • If extremely small, RR leads to many context switches.
65
What is the ideal relationship between time quantum and context-switch time in RR scheduling?
The time quantum should be large compared to the context-switch time to minimize overhead.
66
What happens if the context-switch time is about 10% of the time quantum?
Around 10% of CPU time will be spent on context switching.
67
What are typical values for time quantum and context-switch time on modern systems?
  • Time quantum: 10–100 milliseconds
  • Context-switch time: less than 10 microseconds
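A quick arithmetic check with the typical figures above, treating each quantum as being followed by one context switch.

```python
# Fraction of CPU time lost to context switching per quantum, using the
# typical values from this card (10 ms quantum, 10 microsecond switch).
quantum = 10e-3   # 10 milliseconds
switch = 10e-6    # 10 microseconds
print(switch / (quantum + switch))  # ~0.001, i.e. about 0.1% of CPU time
```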
68