C191-Terms-Chapter-4 Flashcards

1
Q

Long-term scheduling

A

Decides when a process should enter the ready state and start competing for the CPU.

A process entering the ready list (RL) from the suspended list is subject to long-term scheduling. The OS may want to decide whether it should be re-admitted to the current mix of processes to compete for the CPU.

2
Q

Short-term scheduling

A

Decides which of the ready processes should run next on the CPU.

When a running process becomes blocked and later reenters the RL, short-term scheduling is again applied to let the process compete for the CPU.

Whenever a running process isn’t allowed to continue running, it’s moved back to the RL by the OS. The decision to stop and remove the process is subject to short-term scheduling. Likewise, moving the process back to the RL is also a short-term decision.

3
Q

non-preemptive scheduling algorithm

A

Allows a running process to continue until the process terminates or blocks on a resource.

When a process blocks and leaves the RL, a new process must be chosen to run.

4
Q

preemptive scheduling algorithm

A

May stop the currently running process and choose another process to run. The decision is made whenever:

A new process enters the ready list.

A previously blocked or suspended process re-enters the RL.

The OS periodically interrupts the currently running process to give other processes a chance to run.

5
Q

priority of a process (or thread)

A

Is a numerical value that indicates the importance of the process relative to other processes. The scheduler uses this value to decide which process to run next.

The priority can be a constant value assigned when the process is created, or it can change dynamically based on some combination of the parameters described on the following cards.

6
Q

arbitration rule

A

Decides which process should proceed if two or more processes have the same priority.

7
Q

Arrival time

A

A common parameter used to compute short-term priority.

The point in time when the process enters the RL.

8
Q

Departure time

A

A common parameter used to compute short-term priority.

The point in time when the process leaves the RL by entering the blocked or suspended state, or by terminating all work.

9
Q

Attained CPU time

A

A common parameter used to compute short-term priority.

The amount of CPU time used by the process since arrival.

10
Q

Real time in system

A

A common parameter used to compute short-term priority.

The amount of actual time the process has spent in the system since arrival.

The real time keeps increasing regardless of the state the process is in.

11
Q

Total CPU time

A

A common parameter used to compute short-term priority.

The amount of CPU time the process will consume between arrival and departure. For short-term scheduling, total CPU time is sometimes called the CPU burst.

12
Q

External priority

A

A common parameter used to compute short-term priority.

A numeric priority value assigned to the process explicitly at the time of creation.

13
Q

Deadline

A

A common parameter used to compute short-term priority.

A point in time by which the work of the process must be completed.

14
Q

Period

A

A common parameter used to compute short-term priority.

A time interval during which a periodically repeating computation must be completed. The end of each period is the implicit deadline for the current computation.

15
Q

Other considerations

A

A common parameter used to compute short-term priority.

The resource requirements of a process, such as the amount of memory used, or the current load on the system.

16
Q

batch process

A

Performs a long-running and generally repetitive task that does not require any intervention from the user. Ex: Payroll, insurance claims processing, weather prediction, scientific calculations.

The main objective in scheduling batch processes is to maximize the number of processes completed per unit of time.

The order in which processes are executed cannot reduce the total CPU times of the individual processes but can reduce the waiting times.

In modern computing, any long-running computation that does not require interaction with a user is called a batch job or batch process.

17
Q

FIFO (First-In-First-Out) algorithm

A

Also known as FCFS (First-Come-First-Served), schedules processes strictly according to the process arrival time.

The earlier the arrival, the higher the priority.

Theoretically, multiple processes could have the same arrival time, in which case the arbitration rule can pick a process at random. In practice, all requests are processed sequentially and thus all processes have different arrival times.

FIFO is non-preemptive.

18
Q

SJF (Shortest Job First) algorithm

A

Also known as SJN (Shortest Job Next), schedules processes according to the total CPU time requirements.

The shorter the required CPU time, the higher the priority. If multiple processes have the same CPU time requirement, then the arbitration rule can select a process based on the arrival times.

SJF is non-preemptive. A more appropriate name for SJF would be “Shortest Total CPU Time First”. Good for a batch OS.

19
Q

SRT (Shortest Remaining Time) algorithm

A

Schedules processes according to the remaining CPU time needed to complete the work.

The shorter the remaining CPU time, the higher the priority. If multiple processes have the same remaining time requirement, then the arbitration rule can select a process based on the arrival times.

SRT is the preemptive version of SJF.

20
Q

turnaround time

A

The turnaround time of a process is the time between arrival and departure, and is the sum of the total CPU time and the waiting time.

21
Q

average turnaround time (ATT)

A

For a set of n processes is the mean of the n individual turnaround times.
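
A quick worked illustration (the burst times are made up for this example, not from the cards): three processes arrive at time 0 with total CPU times 24, 3, and 3. Running them FIFO in that order gives turnaround times 24, 27, and 30 (ATT = 27), while SJF gives 3, 6, and 30 (ATT = 13). A minimal Python sketch, assuming all processes arrive at time 0 and run non-preemptively in list order:

def average_turnaround(bursts):
    clock, total = 0, 0
    for b in bursts:
        clock += b      # the process runs its full CPU burst
        total += clock  # turnaround = departure time (arrival is 0)
    return total / len(bursts)

print(average_turnaround([24, 3, 3]))          # FIFO order -> 27.0
print(average_turnaround(sorted([24, 3, 3])))  # SJF order  -> 13.0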

22
Q

Starvation

A

Another important objective in batch scheduling is to guarantee fairness.

Starvation is the indefinite postponement of a process while other processes are allowed to proceed. Both SJF and SRT can lead to starvation.

23
Q

interactive process

A

Communicates with the user in the form of a dialog by receiving commands or data from the keyboard or a pointing device and responding by generating output on the user’s terminal or another output device.

The primary goal in scheduling interactive processes is to respond promptly to each input.

Consequently, interactive processes must time-share the CPU using preemptive scheduling to give each process a chance to make progress in a timely manner.

24
Q

time quantum

A

A time quantum, Q, is a small amount of time (typically 10 to 100 milliseconds) during which a process is allowed to use the CPU.

25
Q

round-robin (RR) algorithm

A

Uses a single queue of processes. The priority is determined solely by a process’s position within the queue.

RR scheduling treats all processes as equals. External priorities can be used to divide processes into groups based on importance.

The process at the head of the queue has the highest priority and is allowed to run for Q time units. When Q ends, the process is moved to the tail of the queue and the next process now at the head of the queue is allowed to run for Q time units.
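
A minimal Python sketch of this queue rotation (assumed simplifications: all remaining CPU times are known up front and context-switch overhead is ignored):

from collections import deque

def round_robin(bursts, Q):
    queue = deque(enumerate(bursts))  # (pid, remaining time); the head runs next
    order = []
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)
        if remaining > Q:
            queue.append((pid, remaining - Q))  # quantum used up: back to the tail
    return order

print(round_robin([5, 3, 1], Q=2))  # run order: [0, 1, 2, 0, 1, 0]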

26
Q

ML scheduling algorithm

A

Multilevel (ML) scheduling maintains a separate queue of processes at each priority level. Within each level, processes are scheduled using RR.

ML is prone to starvation because a queue is served only while all higher-priority queues are empty.

If a stream of processes continues arriving at some priority level k, then no process at any level below k will be able to run, leading to starvation of all processes at levels below k.

27
Q

multilevel feedback (MLF) algorithm

A

Multilevel feedback scheduling is similar to ML but addresses the problems of starvation and fairness by:

Using a different time quantum at each priority level.

Changing the priority of every process dynamically.

Under the multilevel feedback (MLF) algorithm a newly arriving process enters the highest-priority queue, N, and is allowed to run for Q time units.

When Q is exceeded, the process is moved to the next lower priority queue, N-1, and is allowed to run for 2Q time units. The quantum size is doubled with each decreasing priority level. Thus at priority level L, the maximum time allowed is 2^(N-L)Q time units.

MLF automatically favors short-running processes while processes with long running times gradually migrate to lower priority levels.
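
Illustrative numbers (not from the cards): with highest priority level N = 3 and Q = 10 ms, a process may run 10 ms at level 3, 20 ms at level 2, and 2^(3-1) × 10 = 40 ms at level 1.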

28
Q

response time

A

Of a process is the elapsed time from the submission of a request (pressing the Enter key or clicking a mouse button) until the response begins to arrive. Guaranteeing adequate response time is the primary goal in the scheduling of interactive processes.

For a process running alone, the response time depends on the type of request.

When multiple processes time-share the CPU, the response time increases with the number of processes, the length of the time quantum, and the time it takes to perform a context switch.

29
Q

real-time process

A

Is characterized by continual input, which must be processed fast enough to generate nearly instantaneous output. Each arriving input item is subject to a deadline. Ex: Streaming of audio or video, processing and displaying radar data, control of robots or of fly-by-wire aircraft.

30
Q

period

A

Is a time interval (typically in milliseconds or even microseconds) within which each input item must be processed. The end of each period is the implicit deadline for processing the current item.

31
Q

rate monotonic (RM) algorithm

A

Schedules processes according to the period. The shorter the period, the higher the priority.

RM is preemptive.

RM is not guaranteed to produce a feasible schedule, but empirical evidence shows that RM is likely to produce a feasible schedule if the CPU utilization U is less than approximately 0.7.

The actual value depends on the particular set of processes.
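
(The commonly cited worst-case bound behind this figure is n(2^(1/n) - 1) for n processes, which approaches ln 2 ≈ 0.693 as n grows.)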

32
Q

earliest deadline first (EDF) algorithm

A

Schedules processes according to the shortest remaining time until the deadline. The shorter the remaining time, the higher the priority.

EDF is preemptive.

EDF is an optimal algorithm in that a feasible schedule is always guaranteed as long as U ≤ 1.

33
Q

feasible

A

A schedule is feasible if the deadlines of all processes can be met.

34
Q

CPU utilization (U)

A

Is the sum of the individual fractions of CPU times used by each process.
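
For a set of periodic processes this is commonly written as U = Σ (T_i / P_i), where T_i is the CPU time and P_i the period of process i. For example, processes that use 1 of every 4, 2 of every 8, and 1 of every 10 time units give U = 0.25 + 0.25 + 0.10 = 0.60.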

35
Q

A 2-tier approach for a general-purpose OS

A

The simplest approach is to divide processes into two groups. Real-time processes run at the highest priority level, but due to their short running times they can use FIFO. Interactive and batch processes can be scheduled together using MLF.

36
Q

A 2-tier approach with floating priorities

A

Real-time processes use either FIFO or RR in an ML scheme. Interactive and batch processes use a variation of MLF.

Every process is assigned a base priority at creation, but an increment is added based on the most recent action the process has taken.

Ex: Returning from a request for keyboard input would add 6 to the base priority while returning from a request for disk input would add only 1. Thus the current priority depends on the type of process and the process’s behavior.

37
Q

CPU utilization

A

We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).

38
Q

Throughput

A

If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process over several seconds; for short transactions, it may be tens of processes per second.

39
Q

Turnaround time

A

From the point of view of a particular process, the important criterion is how long it takes to execute that process.

The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting in the ready queue, executing on the CPU, and doing I/O.

40
Q

Waiting time

A

The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O. It affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

41
Q

Response time

A

In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user.

Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response.

42
Q

load sharing

A

If multiple CPUs are available, load sharing, where multiple threads may run in parallel, becomes possible; however, scheduling issues become correspondingly more complex.

43
Q

multiprocessor

A

Traditionally, the term multiprocessor referred to systems that provided multiple physical processors, where each processor contained one single-core CPU.

However, the definition of multiprocessor has evolved significantly, and on modern computing systems, multiprocessor now applies to the following system architectures:

Multicore CPUs
Multithreaded cores
NUMA systems
Heterogeneous multiprocessing

44
Q

asymmetric multiprocessing

A

One approach to CPU scheduling in a multiprocessor system has all scheduling decisions, I/O processing, and other system activities handled by a single processor — the master server. The other processors execute only user code.

This asymmetric multiprocessing is simple because only one core accesses the system data structures, reducing the need for data sharing.

The downside of this approach is that the master server becomes a potential bottleneck, which can reduce overall system performance.

45
Q

symmetric multiprocessing (SMP)

A

The standard approach for supporting multiprocessors is symmetric multiprocessing (SMP), where each processor is self-scheduling. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a thread to run.

Note that this provides two possible strategies for organizing the threads eligible to be scheduled:

All threads may be in a common ready queue.
Each processor may have its own private queue of threads.

46
Q

multicore processor

A

Most contemporary computer hardware now places multiple computing cores on the same physical chip, resulting in a multicore processor.

Each core maintains its architectural state and thus appears to the operating system to be a separate logical CPU.

SMP systems that use multicore processors are faster and consume less power than systems in which each CPU has its own physical chip.

47
Q

memory stall

A

When a thread is on the CPU and accesses memory contents that are not in the CPU’s cache, the thread’s execution stalls while the contents of that memory are fetched.

A memory stall can occur for several reasons; a common cause is a cache miss (accessing data that are not in cache memory).

48
Q

hardware threads

A

A given CPU core can run a single hardware thread (one hardware thread per core) or more than one (multiple hardware threads per core) to optimize core use, for example by switching hardware threads when the current thread causes a memory stall.

To remedy the situation of a memory stall, many recent hardware designs have implemented multithreaded processing cores in which two (or more) hardware threads are assigned to each core.

That way, if one hardware thread stalls while waiting for memory, the core can switch to another thread.

49
Q

chip multithreading (CMT)

A

A CPU with multiple cores, where each core supports multiple hardware threads, supports chip multithreading (CMT).

From an operating system perspective, each hardware thread maintains its architectural state, such as instruction pointer and register set, and thus appears as a logical CPU that is available to run a software thread.

50
Q

hyper-threading (also known as simultaneous multithreading or SMT)

A

Intel processors use the term hyper-threading to describe assigning multiple hardware threads to a single processing core.

51
Q

coarse-grained multithreading

A

A thread executes on a core until a long-latency event such as a memory stall occurs. Because of the delay caused by the long-latency event, the core must switch to another thread to begin execution.

However, the cost of switching between threads is high, since the instruction pipeline must be flushed before the other thread can begin execution on the processor core. Once this new thread begins execution, it begins filling the pipeline with its instructions.

52
Q

Fine-grained (or interleaved) multithreading

A

Switches between threads at a much finer level of granularity—typically at the boundary of an instruction cycle. However, the architectural design of fine-grained systems includes logic for thread switching. As a result, the cost of switching between threads is small.

It is important to note that the resources of the physical core (such as caches and pipelines) must be shared among its hardware threads, and therefore a processing core can only execute one hardware thread at a time.

Consequently, a multithreaded, multicore processor actually requires two different levels of scheduling.

53
Q

Two levels of scheduling

A

On one level are the scheduling decisions that must be made by the operating system as it chooses which software thread to run on each hardware thread (logical CPU).

A second level of scheduling specifies how each core decides which hardware thread to run. There are several strategies to adopt in this situation. One approach is to use a simple round-robin algorithm to schedule a hardware thread to the processing core.

Note that the two different levels of scheduling are not necessarily mutually exclusive. In fact, if the operating system scheduler (the first level) is made aware of the sharing of processor resources, it can make more effective scheduling decisions.

54
Q

Load balancing

A

On SMP systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor.

Load balancing attempts to keep the workload evenly distributed across all processors in an SMP system. It is important to note that load balancing is typically necessary only on systems where each processor has its own private ready queue of eligible threads to execute.

On systems with a common run queue, load balancing is unnecessary, because once a processor becomes idle, it immediately extracts a runnable thread from the common ready queue.

55
Q

push migration

A

An approach to load balancing.

A specific task periodically checks the load on each processor and—if it finds an imbalance—evenly distributes the load by moving (or pushing) threads from overloaded to idle or less-busy processors.

56
Q

Pull migration

A

Occurs when an idle processor pulls a waiting task from a busy processor. Push and pull migration need not be mutually exclusive and are, in fact, often implemented in parallel on load-balancing systems.

57
Q

processor affinity

A

Consider what happens to cache memory when a thread has been running on a specific processor. The data most recently accessed by the thread populate the cache for the processor. As a result, successive memory accesses by the thread are often satisfied in cache memory (known as a “warm cache”).

Because of the high cost of invalidating and repopulating caches, most operating systems with SMP support try to avoid migrating a thread from one processor to another and instead attempt to keep a thread running on the same processor and take advantage of a warm cache.

This is known as processor affinity—that is, a process has an affinity for the processor on which it is currently running.

58
Q

soft affinity

A

When an operating system has a policy of attempting to keep a process running on the same processor—but not guaranteeing that it will do so.

Here, the operating system will attempt to keep a process on a single processor, but it is possible for a process to migrate between processors during load balancing.

59
Q

hard affinity

A

Some systems provide system calls that support hard affinity, thereby allowing a process to specify a subset of processors on which it can run. Many systems provide both soft and hard affinity.

60
Q

NUMA

A

Non-uniform memory access. If the operating system’s CPU scheduler and memory-placement algorithms are NUMA-aware and work together, then a thread that has been scheduled onto a particular CPU can be allocated memory closest to where the CPU resides, thus providing the thread the fastest possible memory access.

61
Q

heterogeneous multiprocessing

A

Some systems, including many mobile multicore architectures, are designed using cores that run the same instruction set yet vary in clock speed and power management, including the ability to adjust the power consumption of a core to the point of idling the core.

Such systems are known as heterogeneous multiprocessing (HMP).

62
Q

big.LITTLE

A

ARM processor implementation of HMP in which high performance big cores are combined with energy efficient LITTLE cores.

By combining a number of slower cores with faster ones, a CPU scheduler can assign tasks that do not require high performance but may need to run for longer periods (such as background tasks) to little cores, thereby helping to preserve battery charge.

Similarly, interactive applications which require more processing power, but may run for shorter durations, can be assigned to big cores.

Additionally, if the mobile device is in a power-saving mode, energy-intensive big cores can be disabled and the system can rely solely on energy-efficient little cores.

63
Q

Soft real-time systems

A

provide no guarantee as to when a critical real-time process will be scheduled. They guarantee only that the process will be given preference over noncritical processes.

64
Q

Hard real-time systems

A

have stricter requirements. A task must be serviced by its deadline; service after the deadline has expired is the same as no service at all.

65
Q

event latency

A

the amount of time that elapses from when an event occurs to when it is serviced.

66
Q

Interrupt latency

A

refers to the period of time from the arrival of an interrupt at the CPU to the start of the routine that services the interrupt.

When an interrupt occurs, the operating system must first complete the instruction it is executing and determine the type of interrupt that occurred.

It must then save the state of the current process before servicing the interrupt using the specific interrupt service routine (ISR). The total time required to perform these tasks is the interrupt latency.

67
Q

dispatch latency

A

The amount of time required for the scheduling dispatcher to stop one process and start another is known as dispatch latency. Providing real-time tasks with immediate access to the CPU mandates that real-time operating systems minimize this latency as well.

The most effective technique for keeping dispatch latency low is to provide preemptive kernels. For hard real-time systems, dispatch latency is typically measured in several microseconds.

68
Q

conflict phase

A

The conflict phase of dispatch latency has two components:

Preemption of any process running in the kernel.

Release by low-priority processes of resources needed by a high-priority process.

69
Q

periodic

A

A type of real-time process that repeatedly alternates, at fixed intervals, between two modes: needing CPU time and not needing CPU time.

Such processes are considered periodic; that is, they require the CPU at constant intervals (periods).

Once a periodic process has acquired the CPU, it has a fixed processing time T, a deadline D by which it must be serviced by the CPU, and a period P.

The relationship of the processing time, the deadline, and the period can be expressed as 0 ≤ T ≤ D ≤ P. The rate of a periodic task is 1/P.

70
Q

rate

A

The rate of a periodic task is 1/P, where P is the length of the task’s period.

71
Q

admission-control algorithm

A

In real-time scheduling, the scheduler may refuse to start a process if it cannot guarantee that the task will be serviced by its deadline.

The scheduler does one of two things: it either admits the process, guaranteeing that the process will complete on time, or rejects the request as impossible. Used with priority-based scheduling.

72
Q

rate-monotonic scheduling algorithm

A

schedules periodic tasks using a static priority policy with preemption. If a lower-priority process is running and a higher-priority process becomes available to run, it will preempt the lower-priority process.

Upon entering the system, each periodic task is assigned a priority inversely based on its period. The shorter the period, the higher the priority; the longer the period, the lower the priority. The rationale behind this policy is to assign a higher priority to tasks that require the CPU more often.

Furthermore, rate-monotonic scheduling assumes that the processing time of a periodic process is the same for each CPU burst. That is, every time a process acquires the CPU, the duration of its CPU burst is the same.
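
A small illustration (example values, not from the cards): P1 with period 50 and CPU burst 20 and P2 with period 100 and CPU burst 35 give U = 20/50 + 35/100 = 0.75; rate-monotonic assigns P1 the higher priority because its period is shorter.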

73
Q

Earliest-deadline-first (EDF)

A

scheduling assigns priorities dynamically according to deadline. The earlier the deadline, the higher the priority; the later the deadline, the lower the priority.

Under the EDF policy, when a process becomes runnable, it must announce its deadline requirements to the system. Priorities may have to be adjusted to reflect the deadline of the newly runnable process. Note how this differs from rate-monotonic scheduling, where priorities are fixed.

74
Q

Proportional share

A

Proportional share schedulers operate by allocating T shares among all applications, assuring each gets a specific portion of CPU time.

An application can receive N shares of time, thus ensuring that the application will have N/T of the total processor time.
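
For example, with T = 100 shares divided as A = 50, B = 15, and C = 20, process A is assured 50 percent of the total processor time, B 15 percent, and C 20 percent; an admission controller would deny entry to a new process requesting more than the 15 unallocated shares.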

75
Q

analytic evaluation

A

One major class of evaluation methods is analytic evaluation. Analytic evaluation uses the given algorithm and the system workload to produce a formula or number to evaluate the performance of the algorithm for that workload.

76
Q

Deterministic modeling

A

One type of analytic evaluation. This method takes a particular predetermined workload and defines the performance of each algorithm for that workload.

Deterministic modeling is simple and fast. It gives us exact numbers, allowing us to compare the algorithms.

However, it requires exact numbers for input, and its answers apply only to those cases. The main uses of deterministic modeling are in describing scheduling algorithms and providing examples.

In cases where we are running the same program over and over again and can measure the program’s processing requirements exactly, we may be able to use deterministic modeling to select a scheduling algorithm.

Furthermore, over a set of examples, deterministic modeling may indicate trends that can then be analyzed and proved separately.

77
Q

Queueing models

A

On many systems, the processes that are run vary from day to day, so there is no static set of processes (or times) to use for deterministic modeling.

What can be determined, however, is the distribution of CPU and I/O bursts. These distributions can be measured and then approximated or simply estimated. The result is a mathematical formula describing the probability of a particular CPU burst.

We can describe the distribution of times when processes arrive in the system (the arrival-time distribution). From these two distributions, it is possible to compute the average throughput, utilization, waiting time, and so on for most algorithms.

78
Q

queueing-network analysis

A

The computer system is described as a network of servers. Each server has a queue of waiting processes. The CPU is a server with its ready queue, as is the I/O system with its device queues. Knowing arrival rates and service rates, we can compute utilization, average queue length, average wait time, and so on.

At the moment, the classes of algorithms and distributions that can be handled are fairly limited.

The mathematics of complicated algorithms and distributions can be difficult to work with. Thus, arrival and service distributions are often defined in mathematically tractable—but unrealistic—ways.

It is also generally necessary to make a number of independent assumptions, which may not be accurate.

79
Q

Simulations

A

To get a more accurate evaluation of scheduling algorithms, we can use simulations. Running simulations involves programming a model of the computer system.

Software data structures represent the major components of the system. The simulator has a variable representing a clock. As this variable’s value is increased, the simulator modifies the system state to reflect the activities of the devices, the processes, and the scheduler.

As the simulation executes, statistics that indicate algorithm performance are gathered and printed.

80
Q

The data to drive the simulation can be generated in several ways

A

The most common method uses a random-number generator that is programmed to generate processes, CPU burst times, arrivals, departures, and so on, according to probability distributions. The distributions can be defined mathematically (uniform, exponential, Poisson) or empirically.

If a distribution is to be defined empirically, measurements of the actual system under study are taken. The results define the distribution of events in the real system; this distribution can then be used to drive the simulation.

81
Q

A distribution-driven simulation may be inaccurate

A

Because of relationships between successive events in the real system. The frequency distribution indicates only how many instances of each event occur; it does not indicate anything about the order of their occurrence.

To correct this problem, we can use trace files. We create a trace by monitoring the real system and recording the sequence of actual events.

We then use this sequence to drive the simulation. Trace files provide an excellent way to compare two algorithms on exactly the same set of real inputs. This method can produce accurate results for its inputs.

82
Q

Regression testing

A

confirms that the changes haven’t made anything worse, and haven’t caused new bugs or caused old bugs to be recreated (for example because the algorithm being replaced solved some bug and changing it caused that bug to reoccur).

83
Q

analytic evaluation

A

A means of comparing scheduling algorithm effectiveness by analyzing an algorithm against a workload and assigning it a score.

84
Q

deterministic modeling

A

One type of analytic evaluation, which takes a particular predetermined workload and defines the performance of each algorithm for that workload.

85
Q

queueing-network analysis

A

A method of evaluating scheduling algorithms in which the computer system is described as a network of servers, each with a queue of waiting processes.

86
Q

Little’s formula

A

A scheduling equation, n = λ × W, that is particularly useful because it is valid for any scheduling algorithm and arrival distribution; here n is the average queue length, λ is the average arrival rate of new processes, and W is the average waiting time in the queue.
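
For example, if an average of λ = 7 processes arrive per second and each spends an average of W = 2 seconds waiting in the queue, then the queue holds on average n = 7 × 2 = 14 processes.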

87
Q

trace files

A

A scheduling algorithm evaluation method in which thread details are captured on real systems and various scheduling algorithms analyzed to determine effectiveness.