Quiz 4/CPU Scheduling Flashcards

1
Q

Basic Concepts

A

Maximum CPU utilization is obtained with multiprogramming.

CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait.

CPU burst followed by I/O burst.

CPU burst distribution is of main concern.

The final CPU burst ends with a system request to terminate execution.

2
Q

CPU-burst times

A

An I/O-bound program typically has many short CPU bursts. A CPU-bound program might have a few long CPU bursts.

This distribution can be important in the selection of an appropriate CPU-scheduling algorithm.

3
Q

CPU Scheduler

A

Short-term scheduler selects from among the processes in the ready queue and allocates the CPU to one of them
->Queue may be ordered in various ways (FIFO queue, priority queue, tree, unordered linked list).
->Records in the queue are generally PCBs of the processes.
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state (e.g., wait() for child)
2. Switches from running to ready state (e.g., an interrupt occurs)
3. Switches from waiting to ready (e.g., I/O completion)
4. Terminates
Scheduling that occurs only under 1 and 4 is nonpreemptive (preemptive scheduling can lead to race conditions)
->Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
All other scheduling is preemptive (used in real-time systems, allow interactivity)
->Consider access to shared data
->Consider preemption while in kernel mode
->Consider interrupts occurring during crucial OS activities

4
Q

Dispatcher

A

Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

  • > switching context
  • > switching to user mode
  • > jumping to the proper location in the user program to restart that program (program counter contains the next instruction, this is used to determine where we “jump” to)

Dispatch latency – time it takes for the dispatcher to stop one process and start another running

5
Q

Scheduling Criteria

A

Various scheduling algorithms compared based on:
CPU utilization – keep the CPU as busy as possible
->Typically from 40% (light load) to 90% (heavy load)

Throughput – # of processes that complete their execution per time unit
->May be 1 proc/hr (long processes) or 10 proc/hr (short processes)

Turnaround time – amount of time to execute a particular process (time it takes from process start to finish)
->(waiting to get into memory) + (waiting in the ready queue) + (executing on the CPU) + (doing I/O)

Waiting time – amount of time a process has been waiting in the ready queue
->Does NOT include I/O time!!!

Response time – amount of time from when a request was submitted until the first response is produced (not the time it takes to output that response).

  • > Different from turnaround time!
  • > Good for Timesharing systems.
6
Q

Scheduling Algorithm Optimization Criteria

A
DESIRABLE:
Max CPU utilization
Max throughput
Min turnaround time 
Min waiting time 
Min response time

GENERALLY: optimize the avg measure.
->However, under some circumstances we prefer to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time.

  • Variance could be minimized too.
  • > Investigators have suggested that, for interactive systems (such as desktop systems), it is more important to minimize the variance in the response time than to minimize the average response time. A system with reasonable and predictable response time may be considered more desirable than one that is faster on average but highly variable. However, little work has been done on CPU-scheduling algorithms that minimize variance.
7
Q

First-Come, First-Served (FCFS) Scheduling

Nonpreemptive!

A

With this scheme, the process that requests the CPU first is allocated the CPU first.

Nonpreemptive algorithm!

The implementation of the FCFS policy is easily managed with a FIFO queue.

  • > When a process enters the ready queue, its PCB is linked onto the tail of the queue.
  • > When the CPU is free, it is allocated to the process at the head of the queue.
  • > The running process is then removed from the queue.

On the negative side, the average waiting time under the FCFS policy is often quite long.
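
As an illustration (burst lengths hypothetical): if P1 (24 ms), P2 (3 ms), and P3 (3 ms) arrive in that order, the waits are 0, 24, and 27 ms, averaging 17 ms; if they instead arrive as P2, P3, P1, the waits are 0, 3, and 6 ms, averaging just 3 ms.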

8
Q

Convoy Effect

Seen in FCFS

A

Consider one CPU-bound & many I/O-bound processes…

convoy effect: all the other processes wait for the one big process to get off the CPU.

This effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.

Not good for timesharing systems because it allows one process to keep the CPU for an extended period.

9
Q

Shortest-Job-First (SJF) Scheduling

aka Shortest-Next-CPU-Burst

A

Can be preemptive or non-preemptive

Associate with each process the length of its next CPU burst

  • > Use these lengths to schedule the process with the shortest time
  • > If the next CPU bursts of two processes are the same, FCFS is used to break the tie.

SJF is optimal – it gives the minimum average waiting time for a given set of processes (no scheduling algorithm does better)

  • > The difficulty is knowing the length of the next CPU request
  • > Could ask the user
  • ->Works for long-term (job) scheduling, where the user specifies a process time limit upon job submission.
  • ->A lower estimate gives faster response, but an estimate that is too low may exceed the time limit and require resubmission.
  • ->So the estimate needs to be accurate!

Since SJF cannot be implemented exactly at the level of short-term CPU scheduling (the length of the next CPU burst cannot be known in advance), we approximate it, typically from the lengths of previous bursts.

10
Q

Determining Length of Next CPU Burst

A

Can only estimate the length – should be similar to the previous one
->Then pick process with shortest predicted next CPU burst

Can be done by using the length of previous CPU bursts, using exponential averaging

  1. t_n = actual length of the n-th CPU burst
  2. τ_(n+1) = predicted value for the next CPU burst
  3. α, 0 ≤ α ≤ 1
  4. Define: τ_(n+1) = α·t_n + (1 − α)·τ_n, where τ_n is the previous predicted value

Commonly, α set to ½
α = 0, recent history has no effect
α = 1, only most recent CPU burst matters
α = 1/2, recent and past history equally weighted
->Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor!!
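
A minimal C sketch of this update rule (illustrative; the function and variable names are my own, though with α = 1/2, an initial guess τ_0 = 10, and observed bursts 6, 4, 6, 4 it reproduces the textbook's worked sequence 8, 6, 6, 5):

#include <stdio.h>

/* Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n */
double predict_next_burst(double alpha, double t_actual, double tau_prev) {
    return alpha * t_actual + (1.0 - alpha) * tau_prev;
}

int main(void) {
    double tau = 10.0;                      /* initial guess tau_0 */
    double bursts[] = {6.0, 4.0, 6.0, 4.0}; /* observed burst lengths t_n */
    for (int n = 0; n < 4; n++) {
        tau = predict_next_burst(0.5, bursts[n], tau); /* alpha = 1/2 */
        printf("tau_%d = %.2f\n", n + 1, tau);         /* 8, 6, 6, 5 */
    }
    return 0;
}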

Preemptive version called shortest-remaining-time-first

11
Q

Shortest-Remaining-Time-First

Preemptive SJF

A

The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process.
->A preemptive SJF algorithm will preempt the currently executing process, whereas
->A nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

SRTF takes the remaining time into consideration. At time 1, P2 arrives: P1 has 7 ms remaining while P2 needs only 4 ms, so P1 is preempted and P2 runs. At time 2, P3 arrives (9 ms); P2's remaining 3 ms is still the shortest, so P2 continues. At time 3, P4 arrives (5 ms); P2's remaining 2 ms is still the shortest, so P2 runs to completion at time 5. P4 then runs from 5 to 10, P1 from 10 to 17, and P3 from 17 to 26.

Preemptive avg waiting time: 6.5 ms
Nonpreemptive avg waiting time: 7.75 ms
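
Worked arithmetic for the preemptive case (waiting time = turnaround − burst): P1 waits 9 ms (runs 0–1, is preempted, resumes at 10), P2 waits 0 ms, P3 waits 17 − 2 = 15 ms, P4 waits 5 − 3 = 2 ms; average = (9 + 0 + 15 + 2) / 4 = 6.5 ms.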

12
Q

Priority Scheduling

A

A priority number (integer) is associated with each process

The CPU is allocated to the process with the highest priority (smallest integer -> highest priority)

  • > Preemptive
  • > Nonpreemptive

SJF is priority scheduling where priority is the inverse of predicted next CPU burst time

Problem (All priority based scheduling): Starvation – low priority processes may never execute

  • > In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU
  • > **Rumor has it that when they shut down the IBM 7094 at MIT in 1973, they found a low-priority process that had been submitted in 1967 and had not yet been run.

Solution: Aging – as time progresses increase the priority of the process
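
A classic illustration: if priorities range from 127 (low) to 0 (high), increasing the priority of a waiting process by 1 every 15 minutes guarantees that even a priority-127 process eventually reaches priority 0 and gets executed.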

13
Q

Round-Robin Scheduling

Preemptive FIFO

A

Good for interactive systems

Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.

If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
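
For example (illustrative numbers): with n = 5 processes and q = 20 ms, each process gets up to 20 ms out of every 100 ms, and waits at most (5 − 1) × 20 = 80 ms between slices.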

Timer interrupts every quantum to schedule next process

Performance

  • > q large (most CPU bursts finish before the quantum expires) => RR behaves like FCFS/FIFO
  • > q small => q must still be large with respect to context-switch time, otherwise overhead is too high

**Rule of thumb: 80% of the CPU bursts should be shorter than the time quantum

14
Q

Multilevel Queue

A

Ready queue is partitioned into separate queues (processes are divided into groups), e.g.:

  • > foreground (interactive)
  • > background (batch)

Processes remain permanently in a given queue

Each queue has its own scheduling algorithm, example:

  • > foreground – RR
  • > background – FCFS

Scheduling must be done between the queues:

  • > Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
  • > Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes;
  • ->e.g., 80% to foreground in RR
  • ->20% to background in FCFS
15
Q

Multilevel Feedback Queue

Most complex of the general scheduling algorithms

A

A process can move between the various queues; aging can be implemented this way (used to prevent starvation)

Multilevel-feedback-queue scheduler defined by the following parameters:

  • > number of queues
  • > scheduling algorithms for each queue
  • > method used to determine when to upgrade a process
  • > method used to determine when to demote a process
  • > method used to determine which queue a process will enter when that process needs service

**It is the most complex algorithm since defining the best scheduler requires some means by which to select values for all the parameters.

16
Q

Thread Scheduling

A

Distinction between user-level and kernel-level threads

When threads are supported, it is threads that are scheduled, not processes

In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP

  • > Known as process-contention scope (PCS) since scheduling competition is within the process
  • > Typically done via priority set by programmer

Kernel thread scheduled onto available CPU is system-contention scope (SCS) – competition among all threads in system

Systems using one-to-one like Windows, Linux, Solaris schedule threads using only SCS.

17
Q

Pthread Scheduling

A

API allows specifying either PCS or SCS during thread creation

  • > PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
  • > PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling

Can be limited by OS – Linux and Mac OS X only allow PTHREAD_SCOPE_SYSTEM

#include <pthread.h>

int scope;
pthread_attr_t attr;

/* get the default attributes */
pthread_attr_init(&attr);

/* pthread_attr_setscope(pthread_attr_t *attr, int scope) */
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

/* pthread_attr_getscope(pthread_attr_t *attr, int *scope) */
pthread_attr_getscope(&attr, &scope);

18
Q

Multiple-Processor Scheduling

A

CPU scheduling more complex when multiple CPUs are available

Homogeneous processors within a multiprocessor

  • > Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing (that processor reads from the ready queue and distributes processes to the other processors)
  • > Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes may sit in a common ready queue, or each processor may have its own private queue of ready processes (like cashiers in a store: finish one customer, call the next)
  • ->Currently the most common approach
  • ->Can have race conditions if there is only one shared ready queue

Processor affinity – process has affinity for processor on which it is currently running (affected by main-memory architecture, e.g., Non-Uniform Memory Access)

  • > soft affinity (not guaranteed)
  • > hard affinity (guaranteed)
  • > Variations including processor sets
19
Q

Multiple-Processor Scheduling - Load Balancing (Often counteracts processor affinity)

A

If SMP, need to keep all (active) CPUs loaded for efficiency
->Some CPUs/cores may be powered off to save energy on energy-constrained devices.

Load balancing attempts to keep workload evenly distributed

  • > Needed only in symmetric multiprocessing where each CPU has its own ready queue.
  • > If there is a common queue, load balancing is NOT needed because idle processors can immediately dequeue another task.

Push migration – periodic task checks load on each processor, and if found pushes task from overloaded CPU to other CPUs

Pull migration – idle processors pulls waiting task from busy processor

Push and pull migration need not be mutually exclusive

Affinity vs Load Balancing
->Data locality vs efficiency

20
Q

Multicore Processors

A

Recent trend to place multiple processor cores on same physical chip

Faster and consumes less power

Multiple threads (hardware threads, not the software threads that we have been covering) per core also growing

  • > Takes advantage of memory stalls: while one hardware thread waits on a memory retrieval, the core makes progress on another thread
  • ->Coarse-grained vs Fine-grained (pg 282-283)
21
Q

Real-Time CPU Scheduling

A

Can present obvious challenges

Soft real-time systems – no guarantee as to when critical real-time process will be scheduled

Hard real-time systems – task must be serviced by its deadline

Two types of latencies affect performance

  • > Interrupt latency – time from arrival of interrupt to start of routine that services interrupt
  • > Dispatch latency – time for the scheduler to take the current process off the CPU and switch to another

Conflict phase of dispatch latency:

  1. Preemption of any process running in kernel mode
  2. Release by low-priority process of resources needed by high-priority processes
22
Q

Priority-based Scheduling

A

For real-time scheduling, scheduler must support preemptive, priority-based scheduling
->But only guarantees soft real-time

For hard real-time must also provide ability to meet deadlines

Processes have new characteristics: periodic ones require CPU at constant intervals

  • > Has processing time t, deadline d, period p: 0 ≤ t ≤ d ≤ p
  • > Rate of periodic task is 1/p
23
Q

Rate Monotonic Scheduling

A

A priority is assigned based on the inverse of its period

Shorter periods = higher priority;

Longer periods = lower priority
It is optimal for static priority schedulers

P1 period: 50, processing time: 20
P2 period: 100, processing time: 35
P1 is assigned a higher priority than P2.
CPU utilization: (20/50 + 35/100) = 75%

Rate monotonic is optimal: if a set of processes cannot be scheduled by this algorithm, it cannot be scheduled by any other algorithm that assigns static priorities

24
Q

Rate Monotonic Scheduling (Cont’d)

Max CPU utilization formula

A

P1 period: 50, processing time: 25 (P1 has priority; lower period)
P2 period: 80, processing time: 35
CPU utilization: (25/50 + 35/80) = 94%
->Utilization is under 100%, so it looks schedulable...
->...yet P2 misses its deadline at time 80!

Max CPU utilization for meeting the deadlines of N processes with RM (or any other static-priority algorithm):
CPU utilization ≤ N(2^(1/N) − 1) ≈ 83% with N = 2
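
A small C sketch (illustrative, not from the slides) that checks a task set against this bound:

#include <math.h>
#include <stdio.h>

/* Returns 1 if total utilization fits the RM bound N(2^(1/N) - 1). */
int rm_guaranteed(const double t[], const double p[], int n) {
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += t[i] / p[i];                        /* utilization of task i */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.4f, bound = %.4f\n", u, bound);
    return u <= bound;
}

int main(void) {
    double t[] = {25.0, 35.0}, p[] = {50.0, 80.0};
    /* U = 0.9375 > 0.8284, so RM cannot guarantee these deadlines */
    printf("guaranteed: %d\n", rm_guaranteed(t, p, 2));
    return 0;
}

Note that the bound is sufficient, not necessary: a set exceeding it is not guaranteed by RM, though it may still happen to be schedulable.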

25
Q

Earliest Deadline First Scheduling (EDF)

A

Priorities are assigned according to deadlines:

  • > the earlier the deadline, the higher the priority;
  • > the later the deadline, the lower the priority

Priorities are dynamic!

  • > Optimal for dynamic priority algorithms
  • > Can (theoretically) achieve 100% CPU utilization

Compared to RM, EDF does not require periodic jobs.
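
A minimal sketch (names hypothetical) of the EDF selection rule, choosing the ready task with the earliest absolute deadline:

#include <stddef.h>

struct task {
    int id;
    long deadline;  /* absolute deadline; smaller means more urgent */
};

/* EDF: among the ready tasks, run the one whose deadline comes first. */
struct task *edf_pick(struct task ready[], size_t n) {
    if (n == 0) return NULL;
    struct task *best = &ready[0];
    for (size_t i = 1; i < n; i++)
        if (ready[i].deadline < best->deadline)
            best = &ready[i];
    return best;
}

Because new jobs arrive with new deadlines, this choice is re-evaluated at each scheduling point, which is what makes the priorities dynamic.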

26
Q

Proportional Share Scheduling (needs admission-control algorithm..)

A

T shares (total CPU utilization time) are allocated among all processes in the system

An application receives N shares where N < T

This ensures each application will receive N / T of the total processor time

Needs to have admission control to see if there are enough shares for a new process.
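
A tiny sketch of that admission-control check (entirely illustrative):

/* Admit a new application requesting n shares only if enough of the
 * T total shares remain unallocated; otherwise reject it. */
int admit(int requested, int *allocated, int total) {
    if (*allocated + requested > total)
        return 0;               /* rejected: would oversubscribe the CPU */
    *allocated += requested;    /* admitted: reserve the shares */
    return 1;
}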

27
Q
A

The POSIX.1b standard

API provides functions for managing real-time threads

Defines two scheduling classes for real-time threads:

  1. SCHED_FIFO - threads are scheduled using a FCFS strategy with a FIFO queue. There is no time-slicing for threads of equal priority
  2. SCHED_RR - similar to SCHED_FIFO except time-slicing occurs for threads of equal priority

Defines two functions for getting and setting scheduling policy:

  1. pthread_attr_getschedpolicy(pthread_attr_t *attr, int *policy)
  2. pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy)
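
A minimal sketch of using these calls (error handling omitted; real-time policies typically require privileges):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void) {
    pthread_attr_t attr;
    int policy;

    pthread_attr_init(&attr);

    /* request FCFS-style real-time scheduling (no time-slicing) */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);

    /* read the policy back */
    pthread_attr_getschedpolicy(&attr, &policy);
    printf("policy: %s\n", policy == SCHED_FIFO ? "SCHED_FIFO" :
           policy == SCHED_RR ? "SCHED_RR" : "other");

    pthread_attr_destroy(&attr);
    return 0;
}
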
28
Q

Linux Scheduling in Version 2.6.23 +

A

Completely Fair Scheduler (CFS)

Scheduling classes
->Each has specific priority
->Scheduler picks highest priority task in highest scheduling class
->Rather than a quantum based on fixed time allotments, scheduling is based on a proportion of CPU time
2 scheduling classes included, others can be added
1. default
2. real-time

Quantum calculated based on nice value from -20 to +19

  • > Lower value is higher priority
  • > Calculates target latency – interval of time during which each runnable task should run at least once
  • > Target latency can increase if say number of active tasks increases

CFS scheduler maintains per task virtual run time in variable vruntime

  • > Associated with decay factor based on priority of task – lower priority is higher decay rate
  • > Normal default priority (nice value 0) yields virtual run time = actual run time

To decide next task to run, scheduler picks task with lowest virtual run time
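
A toy model of this bookkeeping (not the kernel's actual code, which uses a red-black tree and per-nice weight tables):

#include <stddef.h>

struct cfs_task {
    int id;
    double vruntime;  /* virtual run time accumulated so far */
    double decay;     /* priority factor: 1.0 at nice 0, >1.0 for lower priority */
};

/* Charge actual CPU time to a task, scaled by its priority-based factor,
 * so lower-priority tasks accumulate virtual run time faster. */
void account(struct cfs_task *t, double ran) {
    t->vruntime += ran * t->decay;
}

/* Pick the runnable task with the lowest virtual run time. */
struct cfs_task *pick_next(struct cfs_task tasks[], size_t n) {
    struct cfs_task *next = &tasks[0];
    for (size_t i = 1; i < n; i++)
        if (tasks[i].vruntime < next->vruntime)
            next = &tasks[i];
    return next;
}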

29
Q

Linux Scheduling (Cont.)

A

Real-time scheduling according to POSIX.1b
->Real-time tasks have static priorities

Real-time plus normal map into global priority scheme

Nice value of -20 maps to global priority 100

Nice value of +19 maps to priority 139

Priority ranges from 0-139 w/ real-time from 0-99, normal from 100-139

30
Q

Windows Scheduling

A

Windows uses priority-based preemptive scheduling (wants to support real-time!!)

Highest-priority thread runs next

Dispatcher is scheduler

Thread runs until (1) blocks, (2) uses time slice, (3) preempted by higher-priority thread

Real-time threads can preempt non-real-time

32-level priority scheme

Variable class is 1-15, real-time class is 16-31

Priority 0 is memory-management thread

Queue for each priority

If no runnable thread, runs the idle thread

31
Q

Windows Priority Classes

A

Win32 API identifies several priority classes to which a process can belong

  • > REALTIME_PRIORITY_CLASS, HIGH_PRIORITY_CLASS, ABOVE_NORMAL_PRIORITY_CLASS, NORMAL_PRIORITY_CLASS, BELOW_NORMAL_PRIORITY_CLASS, IDLE_PRIORITY_CLASS
  • > All are variable except REALTIME
A thread within a given priority class has a relative priority
->TIME_CRITICAL, HIGHEST, ABOVE_NORMAL, NORMAL, BELOW_NORMAL, LOWEST, IDLE

Priority class and relative priority combine to give numeric priority

Base priority is NORMAL within the class

If quantum expires, priority lowered, but never below base

32
Q

Solaris Scheduling

A

Priority-based scheduling

Six classes available

  1. Time sharing (default) (TS)
  2. Interactive (IA)
  3. Real time (RT)
  4. System (SYS)
  5. Fair Share (FSS)
  6. Fixed priority (FP)

Given thread can be in one class at a time

Each class has its own scheduling algorithm

Time sharing is multi-level feedback queue
->Loadable table configurable by sysadmin

Scheduler converts class-specific priorities into a per-thread global priority

  • > Thread with highest priority runs next
  • > Runs until (1) blocks, (2) uses time slice, (3) preempted by higher-priority thread
  • > Multiple threads at same priority selected via RR
33
Q

Algorithm Evaluation

A

How to select CPU-scheduling algorithm for an OS?

Determine criteria, then evaluate algorithms

Deterministic modeling

  • > Type of analytic evaluation
  • > Takes a particular predetermined workload and defines the performance of each algorithm for that workload

Simple and fast, but requires exact numbers for input and applies only to those inputs

34
Q

Queueing Models

A

Describes the arrival of processes, and CPU and I/O bursts probabilistically

  • > Commonly exponential, and described by mean
  • > Computes average throughput, utilization, waiting time, etc

Computer system described as network of servers, each with queue of waiting processes

  • > Knowing arrival rates and service rates
  • > Computes utilization, average queue length, average wait time, etc

Can get really difficult quickly!!

35
Q
A
n = average queue length
W = average waiting time in queue
λ = average arrival rate into queue

Little’s law – in steady state, processes leaving queue must equal processes arriving, thus: n = λ x W
->Valid for any scheduling algorithm and arrival distribution

For example, if on average 7 processes arrive per second and there are normally 14 processes in the queue, then the average wait time per process is W = n / λ = 14 / 7 = 2 seconds

36
Q

Simulations

A

Queueing models limited

Simulations more accurate

  • > Programmed model of computer system
  • > Clock is a variable
  • > Gather statistics indicating algorithm performance
  • > Data to drive simulation gathered via
  • ->Random number generator according to probabilities
  • ->Distributions defined mathematically or empirically
  • ->Trace tapes record sequences of real events in real systems