Scheduling Flashcards

1
Q

What is scheduling in the context of an OS?

A

It usually means the assignment of threads to processors

2
Q

What are the problem variants for scheduling?

A
  • Monoprocessor / Multiprocessor
  • Dynamic or static thread set?
  • Scheduling at runtime or prior to runtime?
  • Execution times known in advance?
  • Preemption possible?
  • Dependencies between threads?
  • Communication times considered?
  • Setup times (switching) considered?
  • Priorities considered?
  • Deadline considered?
  • Which goal should be achieved? (either user-oriented goals such as low response time, or system-oriented goals such as high throughput)
3
Q

What are the goals and our assumptions for scheduling for multiprogramming?

A

Goals include:

  • High efficiency: processor highly utilized
  • Low response time: for interactive programs
  • High throughput: for batch processing
  • Fairness: fair distribution of processor capacity and waiting times among threads

Assumptions include:

  • Homogeneous multiprocessor system
  • Dynamic set of threads
  • No dependencies between threads
  • Dynamic on-line scheduling
  • No deadlines
4
Q

What are the scheduling levels in relation to thread states? And what is the focus in multiprogramming scheduling?

A
  • Long-term: creating/deleting (transition between non-existent and inactive)
  • Medium-term: activating/deactivating (transition between inactive and ready)
  • Short-term: assigning/relinquishing (transition between ready and running)

Here we focus on short-term scheduling.

5
Q

What are the standard strategies for multiprogramming scheduling? Discuss the preference for short/long threads for each.

A
  • FCFS: First Come First Served (execute threads in order of arrival at the ready list; a thread occupies the processor until completion or voluntary yield).
  • LCFS: Last Come First Served (execute threads in reverse order of arrival at the ready list; a thread occupies the processor until completion or voluntary yield).
  • LCFS-PR (Preemptive Resume): A newly arrived thread at the ready list preempts the currently running thread; the preempted thread is appended to the ready list. In case of no further arrivals, the ready list is processed without preemption. A short thread has a good chance to finish before another thread arrives, while a long thread is likely to be preempted several times. => Preference for short threads.
  • RR: Round-Robin (threads are processed in order of arrival; after a prespecified time slice, the running thread is preempted and we switch to the next thread). The goal is an even distribution of processor capacity and waiting time among competing threads. Selecting the time-slice length is an optimization problem (with a large time slice we approach FCFS; with a small time slice we pay a penalty for frequent switching overhead).
  • PRIO-NP: Priorities, Non-Preemptive (newly arriving threads are inserted into the ready list according to their priority; once assigned, a thread stays on the processor until completion or voluntary yield).
  • PRIO-P: Priorities, Preemptive (like PRIO-NP, but we preempt if a newly ready thread has a higher priority than the currently running thread).
  • SPN: Shortest Process Next (the thread with the shortest service time is executed next until it finishes or voluntarily yields; like PRIO-NP if we consider the service time as the priority criterion). Favors short threads and gives shorter mean response times than FCFS.
  • SRTN: Shortest Remaining Time Next (the thread with the shortest remaining service time is executed next; the currently running thread may be preempted). In SRTN and SPN we need knowledge of the service times, which is typically only available as estimates. Long threads may starve if shorter threads keep arriving.
  • HRRN: Highest Response Ratio Next (the response ratio is defined as (waiting time + service time) / service time; it is calculated and used as the priority criterion, and the thread with the highest value is selected). Non-preemptive. Shorter threads are favored, but unlike in SRTN and SPN, long threads gain priority by waiting, which bumps them up the queue (see the sketch after this list).
  • FB: Multilevel Feedback (when we do not know service times a priori but still want to favor short threads, we can reduce a thread's priority stepwise at each CPU usage; the individual waiting queues can be managed with round-robin, and different time-slice values for the individual queues are possible).
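
A minimal Python sketch of the HRRN selection rule (illustrative, with assumed names; it presumes known service times):

    def hrrn_pick(ready, now):
        """ready: list of (arrival_time, service_time) pairs."""
        def response_ratio(thread):
            arrival, service = thread
            waiting = now - arrival
            return (waiting + service) / service
        return max(ready, key=response_ratio)

    # At now=10 the long thread that arrived at 0 wins (ratio 2.25)
    # over the short newcomer from 9 (ratio 1.5): waiting pays off.
    print(hrrn_pick([(0, 8), (9, 2)], now=10))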
6
Q

Categorize the scheduling strategies by preemption (with or without), priorities (with or without) and service-time dependency.

A

See 04_scheduling slide 27.

7
Q

What is the difference between static and dynamic thread priorities?

A

Static priorities express the absolute importance of a task (low overhead, O(1) complexity). They are subject to priority inversion.

Dynamic priorities: the thread priority is adapted over time so that short-running tasks can get preference over long-running tasks. They can be realized on top of an implementation of static priorities.
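
To illustrate the O(1) claim for static priorities, here is a minimal sketch (assumed names, in the spirit of bitmap-based O(1) schedulers, not the lecture's implementation): one queue per priority level plus a bitmap, so picking the highest-priority ready thread is a constant-time bit scan.

    from collections import deque

    NUM_PRIOS = 32

    class StaticPrioRunqueue:
        def __init__(self):
            self.queues = [deque() for _ in range(NUM_PRIOS)]
            self.bitmap = 0  # bit p is set <=> queue p is non-empty

        def enqueue(self, thread, prio):  # prio 0 = highest priority
            self.queues[prio].append(thread)
            self.bitmap |= 1 << prio

        def pick_next(self):
            if self.bitmap == 0:
                return None
            prio = (self.bitmap & -self.bitmap).bit_length() - 1  # lowest set bit
            thread = self.queues[prio].popleft()
            if not self.queues[prio]:
                self.bitmap &= ~(1 << prio)
            return thread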

8
Q

What is priority inversion?

A

Priority inversion is a scheduling scenario in which a high-priority task is indirectly superseded by a lower-priority task, effectively inverting the assigned priorities of the tasks. This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks. Inversion occurs when a high-priority task contends for a resource held by a low-priority task that is then preempted by a medium-priority task.

See 04_scheduling slides 32 and 33.

9
Q

What is the difference between normalized response times for shorter processes and longer processes?

A

Shorter Processes: priority with preemption (constant) < priority (linear) < no priority (exponential)

Longer Processes: no priority < priority < priority with preemption (all exponential)

See 04_scheduling slides 29, 30 and 31.

10
Q

What are the solutions for priority inversion?

A
  • Priority Inheritance: The lower-priority thread that is blocking the resource needed by the higher-priority thread inherits the higher priority until it releases the resource under contention, while the medium-priority thread is left unchanged.

See 04_scheduling slide 34

  • Proportional Share Scheduling: a type of scheduling that preallocates a certain amount of CPU time to each thread. In a proportional-share algorithm every job has a weight, and jobs receive a share of the available resources proportional to their weight. Tasks with a small share still get CPU time, just not as much as tasks with larger shares. This avoids priority inversion but is much more complex than a strategy based on priorities.
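
One well-known proportional-share scheme is lottery scheduling (named here as an assumed illustration, not taken from the slides): each thread holds tickets in proportion to its weight, and the scheduler draws a winning ticket per time slice.

    import random

    def lottery_pick(threads):
        """threads: dict mapping thread name -> ticket count (weight)."""
        total = sum(threads.values())
        winner = random.randrange(total)  # draw one ticket uniformly
        for name, tickets in threads.items():
            if winner < tickets:
                return name
            winner -= tickets

    # Over many slices A gets ~3/6, B ~2/6, C ~1/6 of the CPU;
    # even the small share C is never starved, so no inversion occurs.
    picks = [lottery_pick({"A": 3, "B": 2, "C": 1}) for _ in range(6000)]
    print({name: picks.count(name) for name in "ABC"})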
11
Q

Define and model the time-related variables for scheduling.

A

t: mean service time (pure execution time)
w: mean waiting time
r: mean response time (including waiting times): r = w + t
r_n: normalized response time: r_n = r/t
ntt (normalized turnaround time): the same ratio, response time / service time
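
Worked example (illustrative numbers): a thread with service time t = 4 that waits w = 8 has response time r = w + t = 12 and normalized response time r_n = 12/4 = 3; the ideal value is r_n = 1, i.e., no waiting at all.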

12
Q

What is CPU burst and CPU burst time?

A

A CPU burst is the concept of a thread gaining control of the CPU; the duration (time slice) for which the thread keeps control of the CPU is the CPU burst time.

Similarly, an I/O burst is the concept of performing I/O operations.

13
Q

How to estimate the CPU Burst?

A

The behavior of the most recent past is a good predictor of the short-term future. Therefore, we can estimate the CPU burst time of the i-th compute phase of a thread by looking at the execution times recorded in the previous compute phases (< i) of that thread.

The formula (see 04_scheduling slide 40):

S[n+1] = (1/n) * T[n] + ((n-1)/n) * S[n] = (1/n) * sum_1_n T[i]

Because younger, more recent values usually characterize future behavior better than older ones, they should obtain a higher weight, so we can apply exponential averaging to the formula above:

S[n+1] = a * T[n] + (1-a) * S[n], with 0 < a < 1.
If a > 1/n, younger values get higher weights.
When unrolling this recursion, the weights of older measurements fade out exponentially. A high a leads to better responsiveness (reaction to changes in behavior); a smaller a leads to better stability (filtering of outliers). The first estimate S[1] can be initialized with 0 to give new threads high priority. See 04_scheduling slides 42 and 43 for the effect of the exponential decay of weights.
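
A minimal sketch of the exponential-averaging estimator (variable names assumed):

    def update_estimate(s_prev, t_measured, a=0.5):
        """Exponential averaging: S[n+1] = a*T[n] + (1-a)*S[n], 0 < a < 1."""
        return a * t_measured + (1 - a) * s_prev

    # S[1] = 0 gives a new thread a short estimate (high priority under
    # SPN-like policies); the estimate then tracks the measured bursts.
    s = 0.0
    for t in [6, 4, 6, 4]:
        s = update_estimate(s, t)
        print(round(s, 3))  # 3.0, 3.5, 4.75, 4.375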

14
Q

What is the problem with multiprocessor scheduling?

A

We have to decide not only when to execute a task but also where (on which processor).
Additionally, even if the processors are identical, they might share limited resources: processors/cores might share caches, memory bandwidth might be limited and shared by cores, and logical cores might share the execution units within one physical core.

15
Q

What is the concept of processor affinity in multiprocessor scheduling?

A

If a thread was running on a processor, the corresponding caches were filled with data belonging to it. If the thread is scheduled again, there is a chance that significant parts of the cache still belong to that thread. Processor affinity therefore means that the scheduler favors the recently used processor. However, it may happen that a thread with higher priority is waiting for its favored processor while another thread with lower priority is running.
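
A minimal sketch of affinity-aware dispatch (assumed names): prefer the CPU the thread last ran on, and fall back to any idle CPU otherwise.

    def pick_cpu(last_cpu, idle_cpus):
        """Favor the thread's cache-warm CPU; otherwise take any idle one."""
        if last_cpu in idle_cpus:
            return last_cpu                  # caches likely still warm
        return next(iter(idle_cpus), None)   # fall back (cold caches)

    print(pick_cpu(2, {1, 2, 3}))  # -> 2
    print(pick_cpu(2, {1, 3}))     # -> 1 or 3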

16
Q

What is Gang scheduling / Coscheduling? What are the properties of the threads in a gang/group?

A

Scheduling a group of threads to run on a set of processors at the same time, on a one-to-one basis, with synchronized time slices and preemption, and forbidding idle processors in the set from executing tasks not belonging to the group. Threads of the scheduling group must not relinquish the CPU and will not be blocked.

This creates the illusion that the tasks of the gang scheduled/coscheduled group are running exclusively in the system.

A group of threads is coscheduled on a set of processors

The threads in the same group usually communicate heavily (no need to block a thread, as it will soon receive a response from another thread in the group), share data (and thus processor caches), and have contrary resource demands (less resource contention when they are scheduled together).

17
Q

Give an example of a coscheduler.

A

TACO (Topology-Aware Coscheduling): Builds synchronization domains as sets of processors, each with its own runqueue. A master CPU is responsible for enforcing the thread switch on all CPUs of the synchronization domain; each CPU picks a thread out of the runqueue and notifies the master. Finally, the sets of coscheduled threads are mapped to synchronization domains.

18
Q

What is the difference between centralized, decentralized, semi-decentralized and hierarchical scheduler design?

A

Centralized: one runqueue for all processors. The global knowledge makes some types of scheduling easier (like coscheduling and scheduling with absolute priorities) and some harder (realizing processor affinity).

Decentralized: one runqueue per processor core. This scales to very large systems but needs load balancing across the runqueues to be versatile. The distributed knowledge makes global decisions nearly impossible, yet processor affinity is very cheap. (This is what most current operating systems do.)

Semi-Decentralized: multiple cores share a runqueue. A tradeoff between the two extremes.

Hierarchical: a hierarchy of runqueues, where runqueues further up in the hierarchy represent larger fractions of the system. This allows for more scalable coscheduling, as multiple small coscheduled sets can be processed independently. Without coscheduled sets, this behaves like a decentralized scheduler.
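
A minimal sketch of the decentralized design (assumed names): one runqueue per core, with a simple pull-style load balance when a core runs dry.

    from collections import deque

    class DecentralizedScheduler:
        def __init__(self, num_cores):
            self.runqueues = [deque() for _ in range(num_cores)]

        def enqueue(self, core, thread):
            self.runqueues[core].append(thread)  # affinity: stay on this core

        def pick_next(self, core):
            if self.runqueues[core]:
                return self.runqueues[core].popleft()
            # Load balancing: pull work from the fullest other runqueue.
            victim = max(self.runqueues, key=len)
            return victim.popleft() if victim else None

    s = DecentralizedScheduler(2)
    s.enqueue(0, "A"); s.enqueue(0, "B")
    print(s.pick_next(1))  # core 1 is empty and pulls "A" from core 0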

19
Q

What is real time scheduling? What is the additional property of threads in real time systems to be considered in scheduling?

A

In real-time systems, the goal of scheduling is different: measurements need to be evaluated within short time constraints and, based on them, action must be taken.

Threads in real-time systems are associated with deadlines by which they must be finished. They are usually critical for the functioning of the whole system, and the deadlines need to be considered in scheduling decisions.

20
Q

What is the difference between hard and soft real time systems?

A

In hard real-time systems, a violation of the deadline means a failure of the system that cannot be tolerated (e.g., an airbag system). Here, off-line scheduling algorithms are employed to guarantee that deadlines can be met.

In soft real-time systems, a violation of the deadline means a quality reduction but can be tolerated (e.g., transmission and processing of video streams).

21
Q

What is lateness in the context of real time system threads? And what is the goal of soft real time scheduling in relation to lateness?

A

A thread in a real-time system has a specification of the earliest time it can start and the latest time it must finish (its deadline). Exceeding the deadline is called lateness; the lateness is the amount of time by which the deadline is exceeded.

The goal here is to minimize the maximum lateness so that as many deadlines as possible are respected.

22
Q

What are periodical threads in the context of real time systems?

A

In real-time systems, periodical threads have to be finished in due time in every period. Each thread is characterized by its period (or, equivalently, its rate).

23
Q

What are the scheduling strategies in soft real time systems to minimize maximum lateness for non-periodic threads on a single processor?

A

Earliest Due Date (EDD), no start times, no preemption, single processor, no thread dependencies: threads are sorted according to their deadlines, which can be done in O(n log n), and executed in that order (see the sketch below). When start times are introduced, the problem becomes NP-hard.

Earliest Deadline First (EDF), start times, preemption, single processor, no thread dependencies: when a newly arrived thread has an earlier deadline than the currently running one, the running thread yields and the thread with the earlier deadline is started.
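
A minimal sketch of EDD (names assumed): sort by deadline and report the maximum lateness; a result <= 0 means every deadline is met.

    def edd_schedule(jobs):
        """jobs: list of (service_time, deadline). Returns (order, max lateness)."""
        order = sorted(jobs, key=lambda job: job[1])  # earliest due date first
        now, max_lateness = 0, float("-inf")
        for service, deadline in order:
            now += service
            max_lateness = max(max_lateness, now - deadline)
        return order, max_lateness

    print(edd_schedule([(2, 9), (3, 4), (1, 6)]))  # max lateness -1: feasible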

24
Q

What are the scheduling strategies in soft real time systems to minimize maximum lateness for periodic threads on a single processor?

A

Rate Monotonic Scheduling (RMS): Assign static priorities such that the thread with the smallest period gets the highest priority.
Assumptions:
- T_i periodical with period length p_i
- Deadline: d_i = p_i
- T_i is ready again immediately after p_i
- T_i has a constant execution time b_i ( <= p_i)
- The smaller the period, the higher the priority.
The following RMS criterion must hold for n threads: sum_1_n (b_i / p_i) <= n * (2**(1/n) - 1). The left side is the required processor capacity (must be <= 1) and the right side is an upper bound that must hold in order to guarantee a feasible schedule.
If the criterion is satisfied, use the static priority to schedule the threads.

Time Demand Analysis (TDA): Calculate the worst-case time demand for all threads and compare it to the time available until the deadline; then find a point in time where enough time is available to finish.

Earliest Deadline First (EDF): Threads are assigned dynamic priorities such that the thread with the closest deadline gets the highest priority. Preemption is allowed. The following criterion must hold for n threads: sum_1_n (b_i / p_i) <= 1 (see the sketch below for both tests).
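
A minimal sketch of both utilization tests (function names assumed; note the RMS bound is sufficient but not necessary):

    def rms_feasible(tasks):
        """tasks: list of (b_i, p_i). RMS bound: sum(b/p) <= n*(2**(1/n) - 1)."""
        n = len(tasks)
        return sum(b / p for b, p in tasks) <= n * (2 ** (1 / n) - 1)

    def edf_feasible(tasks):
        """EDF bound: total utilization sum(b/p) <= 1."""
        return sum(b / p for b, p in tasks) <= 1

    tasks = [(1, 4), (2, 6), (1, 8)]  # utilization ~0.708, RMS bound ~0.780
    print(rms_feasible(tasks), edf_feasible(tasks))  # True True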

25
Q

What are scheduling strategies in soft real time systems for multi-processor systems?

A

Earliest Deadline Zero Laxity (EDZL): global EDF scheduling, but threads with zero laxity (they have to run now in order to meet their deadline) get the highest priority. It is never worse than global EDF, but the problem remains NP-hard and is the subject of lots of ongoing research.
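
A minimal sketch of the EDZL priority rule (names assumed): laxity = deadline - now - remaining time; a zero-laxity thread preempts everything, otherwise plain EDF applies.

    def edzl_pick(threads, now):
        """threads: list of (deadline, remaining_time)."""
        def laxity(thread):
            deadline, remaining = thread
            return deadline - now - remaining
        urgent = [t for t in threads if laxity(t) <= 0]
        if urgent:
            return urgent[0]                     # must run right now
        return min(threads, key=lambda t: t[0])  # otherwise earliest deadline

    # At now=4, (10, 6) has laxity 0 and beats the earlier deadline (8, 2).
    print(edzl_pick([(8, 2), (10, 6)], now=4))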