Scheduling Flashcards

1
Q

What’s batch scheduling?

A

– When a job is scheduled to run, it gets its own cores
– Cores are dedicated (not shared)
– Each job runs to completion (until it terminates)
– Only afterwards are the cores allocated to other waiting jobs
– “Non-preemptive”

2
Q

What’s wait time?

A

waitTime = startTime - submitTime

The shorter the average wait time, the better the performance

3
Q

What’s response time?

A
– responseTime = terminationTime – submitTime
– responseTime = waitTime + runTime
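The two formulas can be sketched in Python (a toy example; the job tuples and their values are made up for illustration, not from the lecture):

```python
# Toy batch jobs: (submitTime, startTime, runTime); values are made up.
jobs = [(0.0, 0.0, 4.0), (1.0, 4.0, 2.0), (2.0, 6.0, 1.0)]

def wait_time(submit, start, run):
    return start - submit                  # waitTime = startTime - submitTime

def response_time(submit, start, run):
    # terminationTime - submitTime, which equals waitTime + runTime in batch
    return (start + run) - submit

avg_wait = sum(wait_time(*j) for j in jobs) / len(jobs)
avg_resp = sum(response_time(*j) for j in jobs) / len(jobs)
```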

4
Q

Prove that:
for batch schedulers, the difference between average wait time and average response time of a given schedule is a constant
– The constant is the average runtime of all jobs
(In our context, we assume that job runtimes (and thus their average)
are a given; they stay the same regardless of the scheduler)

A

Proof:

page 10 of scheduling lecture.
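The page-10 proof isn't reproduced here, but the claim is easy to sanity-check numerically (a sketch with made-up runtimes; per job, response = wait + run, so the averages always differ by the average runtime):

```python
# Numeric check (not the proof): avgResponse - avgWait equals avgRuntime
# for any batch schedule, since response_i = wait_i + run_i per job.
runs = [4.0, 2.0, 1.0]
submits = [0.0, 0.0, 0.0]   # all jobs submitted together, one core

def averages(order):
    """Average wait and response time when jobs run back-to-back in 'order'."""
    t, waits, resps = 0.0, [], []
    for i in order:
        waits.append(t - submits[i])   # wait = start - submit
        t += runs[i]
        resps.append(t - submits[i])   # response = termination - submit
    return sum(waits) / len(order), sum(resps) / len(order)

avg_run = sum(runs) / len(runs)
for order in [(0, 1, 2), (2, 1, 0), (1, 0, 2)]:
    w, r = averages(order)
    assert abs((r - w) - avg_run) < 1e-9   # gap is the same constant
```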

5
Q

What’s slowdown (or expansion factor)?

A

slowdown = responseTime / runTime = Ti/Ri (for job i with response time Ti and runtime Ri)

The greater the slowdown, the longer the job is delayed
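As a sketch:

```python
def slowdown(response_time, run_time):
    """Expansion factor: responseTime / runTime (Ti / Ri).
    Always >= 1 in batch scheduling, since response = wait + run."""
    return response_time / run_time
```

For example, a job that ran for 2 time units but took 10 end-to-end has a slowdown of 5.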

6
Q

What’s utilization?

A

Percentage of time the resource (CPU in our case) is busy

7
Q

What’s throughput?

A

How much work is done in one time unit

8
Q

What’s FCFS (first come, first served)? What are its pros and cons?

A

• Jobs are scheduled by their arrival time
– If there are enough free cores, a newly arriving job starts to run immediately
– Otherwise it waits, sorted by arrival time, until enough cores are freed

• Pros:
– Easy to implement (FIFO wait queue)
– Typically perceived as most fair

• Cons:
– Creates fragmentation (unutilized cores)
– Small/short jobs might wait for a long, long while
(“I just have a quick question” queue-jumping is disallowed)
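A minimal FCFS simulator (a sketch, not lecture code; jobs are hypothetical (submitTime, runtime, cores) tuples sorted by submission, and runtimes are taken as given):

```python
import heapq

def fcfs(jobs, total_cores):
    """jobs: list of (submit_time, runtime, cores), sorted by submit_time.
    Returns start times in submission order (FCFS: nobody jumps the queue)."""
    free = total_cores
    running = []                 # min-heap of (end_time, cores)
    queue = list(jobs)
    now = 0.0
    starts = []
    while queue:
        submit, run, cores = queue[0]
        now = max(now, submit)
        # reclaim cores of jobs that terminated by 'now'
        while running and running[0][0] <= now:
            free += heapq.heappop(running)[1]
        if cores <= free:        # dedicated cores available: start the head
            queue.pop(0)
            free -= cores
            heapq.heappush(running, (now + run, cores))
            starts.append(now)
        else:                    # head must wait for the next termination
            now = running[0][0]
    return starts
```

In the test below, on 3 cores, the short 1-core job idles behind a wide job even though a core is free the whole time — exactly the fragmentation the cons mention.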

9
Q

What’s backfilling?

A

The “backfilling” optimization
– A short waiting job can jump over
the head of the wait queue
– Provided it doesn’t delay the job @ head of the wait queue

10
Q

What’s EASY scheduling?

A

EASY is FCFS + Backfilling.

Whenever a job arrives (= is submitted) or terminates:
1. Try to start the job @ head of wait queue (FCFS)
2. Then, iterate over the rest of the waiting jobs (in FCFS order)
and try to backfill them (“filling in the holes”)
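A sketch of one such scheduling pass (simplified; the function and variable names are mine, and runtimes are assumed known, as EASY requires):

```python
def easy_pick(now, free, running, queue):
    """One EASY scheduling pass (sketch; assumes the head can eventually run).
    running: list of (end_time, cores) of currently running jobs.
    queue:   FCFS-ordered list of (runtime, cores) waiting jobs.
    Returns the indices of waiting jobs that may start at time 'now'."""
    running = sorted(running)
    started, free_now, i = [], free, 0
    # 1. FCFS: start jobs from the head of the queue while they fit
    while i < len(queue) and queue[i][1] <= free_now:
        run, cores = queue[i]
        free_now -= cores
        running.append((now + run, cores))
        started.append(i)
        i += 1
    if i == len(queue):
        return started
    # 2. The head is stuck: find its "shadow time", i.e. the earliest
    #    termination after which enough cores are free for it
    head_cores = queue[i][1]
    shadow, avail = now, free_now
    for end, cores in sorted(running):
        avail += cores
        if avail >= head_cores:
            shadow = end
            break
    extra = avail - head_cores   # cores the head leaves unused at shadow time
    # 3. Backfill: a later job may jump ahead if it fits now and either
    #    ends before the shadow time or only uses the head's leftover cores
    for j in range(i + 1, len(queue)):
        run, cores = queue[j]
        if cores <= free_now and (now + run <= shadow or cores <= extra):
            free_now -= cores
            if cores <= extra:
                extra -= cores
            started.append(j)
    return started
```

In the second test below, a short job backfills past a stuck wide head because it terminates before the head could start anyway.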

11
Q

What are the pros & cons of EASY?

A

Pros:
– Better utilization (less fragmentation)
– Narrow/short jobs have better chance to run sooner

• Cons:
– Must know runtimes in advance
• To know width of holes
• To know if backfill candidates are short enough to fit holes

12
Q

What’s SJF (shortest job first)?

A

Instead of
– Ordering jobs (or processes) by their arrival time (FCFS)

• Order them by
– Their (typically estimated) runtime

13
Q

Optimality of SJF for average wait time

Claim
– Given: a 1-core system where all jobs are serial
– If: (a) all processes arrive together and (b) their runtimes are known
– Then: the average wait time of SJF is equal to or smaller than the
average wait time of any other batch scheduling order S

A

Proof:

page 22
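The page-22 proof is omitted, but the claim can be brute-force checked on a small instance (runtimes are made up; all jobs arrive at t=0 on one core):

```python
from itertools import permutations

# Numeric check (not the proof): no ordering beats SJF's average wait time.
runs = [5.0, 1.0, 3.0, 2.0]

def avg_wait(order):
    t, total = 0.0, 0.0
    for i in order:
        total += t       # wait = start time, since every job submits at t=0
        t += runs[i]
    return total / len(order)

sjf = sorted(range(len(runs)), key=lambda i: runs[i])
assert all(avg_wait(sjf) <= avg_wait(p)
           for p in permutations(range(len(runs))))
```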

14
Q

What’s SJBF (Shortest-job Backfilled First)?

A
– Exactly like EASY in terms of servicing the head of the wait queue in
FCFS order (and not allowing anyone to delay it)
– But the backfilling traversal is done in SJF order
15
Q

What’s LXF (Largest eXpansion Factor) scheduling?

A

– LXF is similar to EASY, but instead of ordering the wait queue in FCFS, it
orders jobs based on their current slowdown (greater slowdown
means higher priority)
– The backfilling activity is done relative to the job with the largest current
slowdown (= the head of the LXF wait queue)
– Note that every scheduling decision (when jobs arrive/finish) requires
a re-computation of slowdowns (because wait time has changed)

16
Q

What’s preemption?

A

The act of suspending one job (process) in favor of another

– Even though it is not finished yet

17
Q

What’s a quantum?

A

The maximal amount of time a process is allowed to run before it’s preempted

18
Q

What’s the difference between response time in batch scheduling and in preemptive scheduling?
And why?

A

In batch: responseTime = waitTime + runTime
In preemptive: responseTime ≥ waitTime + runTime

Because
• Processes can be preempted, plus context switches have a price

19
Q

What’s RR scheduling?

A

Processes are arranged in a cyclic ready-queue
– The head process runs, until its quantum is exhausted
– The head process is then preempted (suspended)
– The scheduler resumes the next process in the circular list
– When we’ve cycled through all processes in the run-list (and we reach the
head process again), we say that the current “epoch” is over, and the next
epoch begins
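The cyclic queue can be sketched as follows (the flashcard's idealized model: one core, all processes arrive at t=0, zero context-switch cost):

```python
from collections import deque

def round_robin(runtimes, quantum):
    """Per-process response times under RR (all submitted at t=0)."""
    remaining = list(runtimes)
    ready = deque(range(len(runtimes)))   # cyclic ready-queue
    t = 0.0
    response = [0.0] * len(runtimes)
    while ready:
        i = ready.popleft()               # head process runs...
        s = min(quantum, remaining[i])    # ...until quantum or termination
        t += s
        remaining[i] -= s
        if remaining[i] > 0:
            ready.append(i)               # preempted: back to the tail
        else:
            response[i] = t               # responseTime = t - submitTime (=0)
    return response
```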

20
Q

Is there RR in parallel systems?

A

Yes! It’s called “gang scheduling”
– Time is divided to slots (seconds, or minutes)
– Every job has a “native” time slot
– Algorithm attempts to fill holes in time slots by assigning to them jobs
from other native slots (called “alternative” slots)

21
Q

Batching vs. preemption Theorem

Claim
– Let avgResp(X) be the average response time under algorithm X
– Assume a single core system, and that all processes arrive together
– Assume X is a preemptive algorithm with context switch price = 0
– Then there exists a non-preemptive algorithm Y such that
avgResp( Y /batch/ ) ≤ avgResp( X /preemptive/ )

A

Proof: page 35

22
Q

Connection between RR and SJF Theorem

Claim
– Assume:
• 1-core system
• All processes arrive together (and only use CPU, no I/O)
• Quantum is identical to all processes + no context switch overhead
– Then
• avgResponse(RR) ≤ 2 × avgResponse(SJF)

A

Proof: page 38
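The page-38 proof is omitted; here is a numeric spot-check of the bound under the claim's assumptions (unit quantum, zero switch cost, made-up runtimes):

```python
from collections import deque

def avg_response_sjf(runtimes):
    """Jobs run to completion in increasing runtime order."""
    t, total = 0.0, 0.0
    for r in sorted(runtimes):
        t += r
        total += t          # response = completion time (all arrive at t=0)
    return total / len(runtimes)

def avg_response_rr(runtimes, quantum=1.0):
    """Round robin with an identical quantum for everyone."""
    remaining, t, total = list(runtimes), 0.0, 0.0
    ready = deque(range(len(runtimes)))
    while ready:
        i = ready.popleft()
        s = min(quantum, remaining[i])
        t += s
        remaining[i] -= s
        if remaining[i] > 0:
            ready.append(i)
        else:
            total += t
    return total / len(runtimes)

jobs = [4.0, 2.0, 7.0, 1.0]
assert avg_response_rr(jobs) <= 2 * avg_response_sjf(jobs)
```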

23
Q

What’s SRTF (Shortest-Remaining-Time First)?

A

SRTF is just like SJF, but
– Is allowed to use preemption
– Hence, it’s “optimal” (assuming a zero context-switch cost etc.)

• Whenever a new job arrives, or an old job terminates
– SRTF schedules the job with the shortest remaining time
– Thereby making an optimal decision

24
Q

What’s selfish RR?

A

• New processes wait in a FIFO queue
– Not yet scheduled
• Older processes scheduled using RR
• New processes are scheduled when
1. No ready-to-run “old” processes exist
2. “Aging” is being applied to new processes (some per-process counter
is increased over time); when the counter passes a certain threshold,
the “new” process becomes “old” and is transferred to the RR queue

25
Q

What happens when fast & slow aging are applied in selfish RR?

A

• Fast aging
– Algorithm resembles RR
• Slow aging
– Algorithm resembles FCFS

26
Q

What is a multi-level priority queue used for?

A

• Several RR queues
– Each associated with a priority
– Higher-priority queues at the top
– Lower-priority at the bottom

• Processes migrate between queues
– So they have a dynamic priority
– “Important” processes move up
(e.g., I/O-bound or “interactive”)
– “Unimportant” move down (e.g.,
CPU-bound or “non-interactive”)
27
Q

What are the standard POSIX scheduling policies in Linux 2.4?

A

– Each task is associated with one of three scheduling policies
• “Realtime” policies
– (1) SCHED_RR (round robin),
– (2) SCHED_FIFO (first-in, first-out), or
• The default policy
– (3) SCHED_OTHER
» As opposed to the realtime policies, POSIX defines that
the meaning of SCHED_OTHER is determined by OS

– Realtime tasks, if any exist, are always favored by the scheduler

28
Q

What are the static & dynamic aspects of a task’s priority?

A

Task’s static priority component
– Doesn’t change with time unless the user invokes the nice() system call
– Indirectly determines the max quantum for this task

Task’s dynamic priority component
– The (i) remaining time for this task to run and (ii) its current priority
– Decreases over time (while the task is assigned a CPU and is running)
– When reaches zero, OS forces task to yield the CPU until next epoch
– Reinitialized at the start of every epoch according to the static
component

29
Q

What is task_struct?

A

• Every task is associated with a task_struct
• Every task_struct has 5 fields used by the scheduler:
1. nice
2. counter
3. processor
4. need_resched
5. mm

30
Q

How does a task’s nice value differ between user and kernel?

A

The static component

• User’s nice (POSIX parameter to the nice system call)
– Between -20 … 19;
– Smaller value indicates higher priority (<0 requires superuser)
• Kernel’s nice (field in task_struct used by the scheduler)
– Between 1 … 40
– Higher value indicates higher priority (in contrast to user’s nice)
• Conversion
– Kernel’s nice = 20 - user’s nice
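As a sketch of the conversion:

```python
def kernel_nice(user_nice):
    """Linux 2.4 mapping from the POSIX nice value (-20..19, lower is
    higher priority) to the kernel's internal value (1..40, higher is
    higher priority)."""
    assert -20 <= user_nice <= 19
    return 20 - user_nice
```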

31
Q

What’s task’s counter?

A

The dynamic component (time to run in epoch, & priority)

– Upon task creation (integer arithmetic)
• child.counter = parent.counter/2; parent.counter -= child.counter;

– Upon a new epoch
• task.counter = task.counter/2 + NICE_TO_TICKS(task.nice)
= half of prev dynamic + convert_to_ticks(static)

– Decremented upon each tick (task.counter -= 1)
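The card's integer arithmetic, mirrored in Python (a sketch; the kernel does this in C, and NICE_TO_TICKS here follows the 2.4 definition kern_nice/4 + 1):

```python
def nice_to_ticks(kern_nice):
    """Convert the static component (kernel nice, 1..40) to ticks."""
    return kern_nice // 4 + 1

def fork_counters(parent_counter):
    """Upon task creation: the child gets half, the parent keeps the rest."""
    child = parent_counter // 2
    return parent_counter - child, child      # (parent, child)

def new_epoch_counter(counter, kern_nice):
    """Upon a new epoch: half the leftover dynamic part + the static part."""
    return counter // 2 + nice_to_ticks(kern_nice)
```

Note that fork splits the parent's remaining quota rather than minting new ticks, so a task cannot gain CPU time by forking.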

32
Q

What’s NICE_TO_TICKS?

A

– Scales 20 (=DEF_PRIORITY) to number of ticks comprising 50+ ms
– Namely, scales 20 to 5+ ticks (recall that each tick is 10 ms by default):
• #define NICE_TO_TICKS(kern_nice) ( (kern_nice)/4 + 1 )

33
Q

What does the processor field in task_struct mean?

A

– Logical ID of core upon which task has executed most recently
– If task is currently running
• ‘processor’ = logical ID of core upon which the task executes now

34
Q

What does the need_resched field in task_struct mean?

A

– Boolean checked by kernel just before switching back to user-mode
– If set, check if there’s a “better” task than the one currently running
– If so, context switch to it
– Since this flag is checked only for the currently running task
• Can think of it as per-core – rather than per-task – variable

35
Q

What does the mm field in task_struct mean?

A

– A pointer to the task’s memory address space

36
Q

Scheduler is comprised of 4 functions, what are they?

A
  1. goodness(task,cpu)
    – Given a task and a CPU, return how “desirable” it is for that CPU
    – Compare tasks by this value to decide which will run next on CPU
  2. schedule()
    – Actual implementation of the scheduling algorithm
    – Uses goodness to decide which task will run next on a given core
  3. __wake_up_common()
    – Wake up task(s) when waited-for event has happened
    – (Event may be, for example, completion of I/O)
  4. reschedule_idle()
    – Given a task, check whether it can be scheduled on some core
    – Preferably on an idle core, but if there aren’t any, by preempting a
    less desirable task (according to goodness)
    – Used by both __wake_up_common() and by schedule()