P3L1 Flashcards

(47 cards)

1
Q

3 Options for scheduler

A
  1. Assign tasks immediately (FCFS)
  2. Assign simple tasks first (SJF)
  3. Assign complex tasks first
2
Q

FCFS

A

First Come First Serve (FIFO)

3
Q

SJF

A

Shortest Job First

4
Q

CPU Scheduler runs when

A
  1. CPU becomes idle
  2. new task becomes ready
  3. timeslice expires (timer interrupt)
5
Q

runqueue is equal to

A

The scheduling algorithm: the runqueue's data structure is designed to fit the scheduling algorithm, so the two are tightly coupled

6
Q

“Run-to-completion” scheduling

A

No preemption

7
Q

Metrics for schedulers

A

throughput
avg job completion time
avg job wait time
cpu utilization

8
Q

Data structure for FCFS

A

Queue

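The FCFS card above can be sketched in code. This is a hypothetical illustration, not the course's implementation: the function name, task names, and run times are all invented, and a task is modeled as a (name, run_time) pair.

```python
from collections import deque

# Hypothetical FCFS sketch: tasks run to completion in arrival order.
def fcfs_schedule(tasks):
    queue = deque(tasks)                   # FIFO runqueue: arrival order preserved
    clock, order = 0, []
    while queue:
        name, run_time = queue.popleft()   # always take the oldest task
        clock += run_time                  # run to completion (no preemption)
        order.append((name, clock))        # record completion time
    return order

print(fcfs_schedule([("T1", 1), ("T2", 10), ("T3", 1)]))
# Short T3 waits behind the long T2 job: the FCFS convoy effect.
```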
9
Q

Data structure for SJF

A

Ordered queue or tree

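A minimal SJF sketch using a min-heap as the ordered runqueue from the card above (hypothetical illustration; task names and run times are invented, and run times are assumed known in advance):

```python
import heapq

# Hypothetical SJF sketch: keep the runqueue ordered by expected run
# time so the shortest ready job is always selected next.
def sjf_schedule(tasks):
    heap = [(run_time, name) for name, run_time in tasks]
    heapq.heapify(heap)                       # ordered runqueue (min-heap)
    clock, order = 0, []
    while heap:
        run_time, name = heapq.heappop(heap)  # shortest job first
        clock += run_time
        order.append((name, clock))
    return order

print(sjf_schedule([("T1", 1), ("T2", 10), ("T3", 1)]))
# The two short jobs finish before the long one, minimizing avg wait time.
```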
10
Q

SJF + Preemption

A

estimate a job's running time using heuristics based on execution history (e.g. a windowed average of recent run times)

11
Q

Priority Scheduling

A

Run highest priority next

12
Q

Data structure(s) for Priority Scheduling

A

Queues for each priority

Tree with priority

13
Q

How to deal with Priority Scheduling Starvation

A

priority aging function (actual priority, time spent in run queue)

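The aging function above can be sketched as follows. Only the idea that effective priority is a function of actual priority and time spent in the runqueue comes from the card; the exact formula and the aging rate (one priority point per 10 ticks of waiting) are assumptions for illustration.

```python
# Hypothetical priority-aging sketch: effective priority improves the
# longer a task waits, so low-priority tasks cannot starve forever.
def effective_priority(actual_priority, ticks_in_runqueue):
    # lower number == higher priority; waiting slowly boosts the task
    return max(0, actual_priority - ticks_in_runqueue // 10)

print(effective_priority(30, 0))    # fresh task: keeps its actual priority
print(effective_priority(30, 100))  # long wait: boosted by 10 points
```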
14
Q

What is Priority Inversion

A

Priorities effectively inverted due to a lock: a low-priority task holding a mutex blocks a higher-priority task waiting on it

15
Q

Solution to Priority Inversion

A

Temp boost priority of mutex owner, lower on release

16
Q

Round Robin Scheduling

A

Pick the first task from the queue (without timeslicing, a task yields only to wait on I/O)

17
Q

Round Robin w/ Priorities

A

Include preemption: a task that becomes ready preempts a running lower-priority task

18
Q

Round Robin w/ interleaving

A

Timeslicing

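Timeslicing as described in the round-robin cards above can be sketched like this (a hypothetical illustration; the task names, run times, and timeslice value are invented):

```python
from collections import deque

# Hypothetical round-robin sketch: each task runs for at most one
# timeslice, then is preempted and re-queued at the tail.
def round_robin(tasks, timeslice):
    queue = deque(tasks)                  # (name, remaining_time) pairs
    trace = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(timeslice, remaining)   # run until timeslice expiry or completion
        trace.append((name, ran))
        if remaining > ran:               # timeslice expired: preempt, re-queue
            queue.append((name, remaining - ran))
    return trace

print(round_robin([("T1", 2), ("T2", 5)], timeslice=2))
# Short T1 finishes in its first slice; T2 is interleaved across slices.
```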
19
Q

Timeslice

A

(aka Time Quantum)

maximum amount of uninterrupted time given to a task

20
Q

Advantages of timeslice

A
  • Shorter tasks finish first
  • More responsive
  • Lengthy I/O ops initiated sooner
21
Q

Timeslice needs to be much greater than

A

context switch time (so that switching overhead is amortized)

22
Q

CPU bound tasks prefer

A

larger timeslices

23
Q

I/O Bound tasks prefer

A

smaller timeslices

24
Q

CPU Bound Timeslices

A

larger timeslices

a) limits context switching overheads
b) keeps CPU utilization and throughput high

25
Q

I/O Bound Timeslices

A

shorter timeslices

a) I/O bound tasks can issue I/O ops earlier
b) keeps CPU + device utilization high
c) better user-perceived performance
26
Q

Multi-Level Feedback Queue

A
  1. task enters the top-most queue
  2. if the task yields voluntarily, it stays at the same level
  3. if the task uses its entire timeslice, push it down a level
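The three MLFQ rules above can be sketched as follows (the class name, level count, and method names are hypothetical, chosen for illustration):

```python
from collections import deque

# Hypothetical MLFQ sketch: new tasks enter the top queue; a task that
# yields stays at its level; a task that burns its whole timeslice is
# demoted one level.
class MLFQ:
    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]

    def admit(self, task):
        self.queues[0].append(task)          # rule 1: enter top-most queue

    def pick(self):
        for level, q in enumerate(self.queues):
            if q:
                return level, q.popleft()    # highest non-empty level runs
        return None

    def report(self, level, task, used_full_slice):
        if used_full_slice:                  # rule 3: demote one level
            level = min(level + 1, len(self.queues) - 1)
        self.queues[level].append(task)      # rule 2: yielders stay put

mlfq = MLFQ()
mlfq.admit("A")
level, task = mlfq.pick()
mlfq.report(level, task, used_full_slice=True)  # "A" drops to level 1
print(list(mlfq.queues[1]))
```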
27
Q

Linux O(1) Scheduler

A

Constant time to select/add a task

- preemptive, priority-based
- realtime priorities (0-99)
- timesharing priorities (100-139)
28
Q

Linux O(1) Scheduler Data Structures

A

Run queue == 2 arrays of tasks (active and expired)
29
Q

O(1) Active array

A

- used to pick the next task to run
- tasks remain in the queue until their timeslice expires
30
Q

O(1) Expired array

A

- inactive list
- when no tasks remain in the active array, swap the active and expired arrays
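The active/expired swap can be sketched as follows. This is a deliberate simplification: the real O(1) scheduler keeps per-priority queues plus a bitmap on each side, which this sketch collapses into two plain lists.

```python
# Hypothetical sketch of the O(1) scheduler's two-array runqueue: tasks
# whose timeslice expires move to the expired array, and the arrays are
# swapped once the active one drains.
def run_o1_round(active, expired):
    while active:
        task = active.pop(0)      # pick the next task from the active array
        expired.append(task)      # timeslice used up -> expired array
    return expired, active        # swap: expired becomes the new active

active, expired = ["A", "B", "C"], []
active, expired = run_o1_round(active, expired)
print(active, expired)            # every task eligible again, none starved
```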
31
Q

O(1) Feedback

A

Based on sleep time (time spent waiting/idling):
- longer sleep: interactive task, priority -5 (boost)
- smaller sleep: compute-intensive task, priority +5 (lowered)
32
Q

Linux CFS Scheduler

A

Completely Fair Scheduler
33
Q

Why CFS vs O(1)

A

Under O(1), the performance of interactive tasks suffered, and it made no fairness guarantees
34
Q

CFS Data structure

A

Run queue == Red-Black Tree, ordered by virtual runtime (ns)

vruntime == time spent running on the CPU
35
Q

CFS Scheduling

A

- always pick the left-most node
- periodically adjust the running task's vruntime and compare it to the leftmost vruntime
  - if smaller, continue running
  - if larger, preempt and place the task appropriately in the tree
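The CFS selection loop can be sketched with a min-heap standing in for the red-black tree (an assumption for brevity: the real tree also supports removing arbitrary nodes; names and the slice value are invented):

```python
import heapq

# Hypothetical CFS sketch: the runqueue is ordered by vruntime, and the
# task that has had the least CPU time so far always runs next.
def cfs_step(tree, slice_ns):
    vruntime, name = heapq.heappop(tree)    # "leftmost" node: smallest vruntime
    vruntime += slice_ns                    # charge the time it just ran
    heapq.heappush(tree, (vruntime, name))  # reinsert at its new position
    return name

tree = [(0, "A"), (0, "B")]
heapq.heapify(tree)
print([cfs_step(tree, 10) for _ in range(4)])  # the two tasks alternate fairly
```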
36
Q

vruntime progress rate

A

Depends on priority and niceness:
- rate is faster for low-priority tasks
- rate is slower for high-priority tasks
37
Q

CFS Performance

A

O(1) to select a task | O(log N) to add a task
38
Q

LLC

A

Last Level Cache
39
Q

SMP

A

Shared Memory Multiprocessor

Each CPU has its own LLC
40
Q

Multi-core Processor

A

Cores share the LLC
41
Q

NUMA

A

Non-Uniform Memory Access platforms
- multiple memory nodes
- access to the local memory node is faster than access to a remote node
42
Q

HyperThreading

A

Multiple hardware-supported execution contexts per core
43
Q

Other names for HyperThreading

A

- hardware multithreading
- Chip Multithreading (CMT)
- Simultaneous Multithreading (SMT)
44
Q

SMT Memory loading

A

A memory load takes ~100 cycles, so it pays to run another hardware thread while one waits on memory
45
Q

SMT Optimal Mix

A

A mix of CPU- and memory-intensive threads:
- all components (CPU and memory) well utilized
- avoid/limit contention on processor resources
46
Q

Mechanisms to track SMT ctx_switches

A

- Hardware counters: L1, L2, ... LLC misses; IPC (instructions retired); power and energy data
- Software interfaces and tools, e.g. oprofile, the Linux perf tool
47
Q

CPI

A

Cycles Per Instruction

A mixed-CPI workload performs well in theory, but this is not realistic: real-world workloads do not have such distinct CPI values