Processes & Threads Flashcards

1
Q

In Fig. 2-2 (https://imgur.com/a/TlMWLhF), three process states are shown. In theory, with three states, there could be six
transitions, two out of each state. However, only four transitions are shown. Are there any
circumstances in which either or both of the missing transitions might occur?

A

(TUT4)
The transition from blocked to running is conceivable. Suppose that a process is blocked on I/O
and the I/O finishes. If the CPU is otherwise idle, the process could go directly from blocked to
running. The other missing transition, from ready to blocked, is impossible. A ready process
cannot do I/O or anything else that might block it. Only a running process can block.

2
Q

Suppose that you were to design an advanced computer architecture that did process
switching in hardware, instead of having interrupts. What information would the CPU need?
Describe how the hardware process switching might work.

A

You could have a register containing a pointer to the current process-table entry. When I/O
completed, the CPU would store the current machine state in the current process-table entry.
Then it would go to the interrupt vector for the interrupting device and fetch a pointer to
another process-table entry (the service procedure). This process would then be started up.

3
Q

On all current computers, at least part of the interrupt handlers are written in assembly
language. Why?

A

Generally, high-level languages do not allow the kind of access to CPU hardware that is
required. For instance, an interrupt handler may be required to enable and disable the interrupt
servicing a particular device, or to manipulate data within a process’ stack area. Also, interrupt
service routines must execute as rapidly as possible.

4
Q

When an interrupt or a system call transfers control to the operating system, a kernel stack
area separate from the stack of the interrupted process is generally used. Why?

A

(TUT4)
A separate kernel stack is used for several reasons:
● If the kernel used the interrupted process's own stack, a user program that left too little stack space could overwrite kernel data and crash the operating system.
● Data the kernel leaves on a user-accessible stack could be inspected by a malicious user to learn information about the kernel and about other processes.
● Keeping kernel data in a separate, protected memory area also ensures that the process's own stack contents are not disturbed while interrupts and system calls are serviced.

5
Q

A computer system has enough room to hold five programs in its main memory. These
programs are idle waiting for I/O half the time. What fraction of the CPU time is wasted?

A
CPU utilization = 1 – pⁿ
CPU time wasted = pⁿ
Number of programs: n = 5
Waiting for I/O: p = 50% = 0.5 = 1/2
CPU time wasted = (1/2)⁵ = 1/32
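To check the arithmetic, here is a minimal C snippet (the variable names are purely illustrative):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double p = 0.5;   /* fraction of time each program waits for I/O */
    int n = 5;        /* number of programs in memory */
    printf("CPU time wasted = %f\n", pow(p, n));   /* prints 0.031250 = 1/32 */
    return 0;
}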
6
Q

A computer has 4 GB of RAM of which the operating system occupies 512 MB. The
processes are all 256 MB (for simplicity) and have the same characteristics. If the goal is 99%
CPU utilization, what is the maximum I/O wait that can be tolerated?

A

There is enough room for 14 processes in memory: (4 GB – 512 MB) / 256 MB = 14.
Now solve 0.99 = 1 – p¹⁴, so p¹⁴ = 0.01, which gives p ≈ 0.72.
Answer: We can tolerate processes with up to 72% I/O wait.
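To verify the 14th root numerically, a small C sketch:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double waste = 0.01;                 /* we need p^14 = 0.01 */
    double p = pow(waste, 1.0 / 14.0);   /* take the 14th root */
    printf("p = %f\n", p);               /* prints about 0.72 */
    return 0;
}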

7
Q

Multiple jobs can run in parallel and finish faster than if they had run sequentially. Suppose
that two jobs, each needing 20 minutes of CPU time, start simultaneously. How long will the
last one take to complete if they run sequentially? How long if they run in parallel? Assume
50% I/O wait.

A

(FE17)
If each job has 50% I/O wait, then it will take 40 minutes to complete in the absence of
competition. If run sequentially, the second one will finish 80 minutes after the first one starts.
With two jobs, the approximate CPU utilization is 1 − 0.5² = 0.75. Thus, each one gets 0.375 CPU
minute per minute of real time. To accumulate 20 minutes of CPU time, a job must run for
20/0.375 minutes, or about 53.33 minutes.
Thus running sequentially the jobs finish after 80 minutes, but running in parallel they finish
after 53.33 minutes.

8
Q

Consider a multiprogrammed system with degree of 6 (i.e., six programs in memory at the
same time). Assume that each process spends 40% of its time waiting for I/O. What will be the
CPU utilization?

A

(FFE18) (TUT4) (TUT5)
The probability that all six processes are waiting for I/O simultaneously is 0.4⁶ = 0.004096.
Therefore, CPU utilization is 1 − 0.004096 = 0.995904, or about 99.6%.

9
Q

Assume that you are trying to download a large 2-GB file from the Internet. The file is
available from a set of mirror servers, each of which can deliver a subset of the file’s bytes;
assume that a given request specifies the starting and ending bytes of the file. Explain how
you might use threads to improve the download time.

A

The client process can create separate threads; each thread can fetch a different part of the file
from one of the mirror servers. This can help reduce the download time. Of course, there is a single
network link being shared by all threads. This link can become a bottleneck as the number of
threads becomes very large.

10
Q

In the text it was stated that the model of Fig. 2-11(a) (https://imgur.com/a/npnSXA6) was not suited to a file server using
a cache in memory. Why not? Could each process have its own cache?

A

It would be difficult, if not impossible, to keep the file system consistent. Suppose that a client
process sends a request to server process 1 to update a file. This process updates the cache
entry in its memory. Shortly thereafter, another client process sends a request to server 2 to
read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return
obsolete data. If the first process writes the file through to the disk after caching it, and server 2
checks the disk on every read to see if its cached copy is up-to-date, the system can be made to
work, but it is precisely all these disk accesses that the caching system is trying to avoid.

11
Q

If a multithreaded process forks, a problem occurs if the child gets copies of all the
parent’s threads. Suppose that one of the original threads was waiting for keyboard input.
Now two threads are waiting for keyboard input, one in each process. Does this problem ever
occur in single-threaded processes?

A

(TUT4) (TUT5)
No. A single-threaded process that is blocked waiting for keyboard input simply stays blocked
until the input arrives and then resumes execution; there is never more than one process
waiting on the keyboard, so the problem cannot occur.

In a multithreaded process that forks, however, two threads end up waiting for keyboard input,
one in each process. When input arrives, only one of them receives it and resumes; the other
stays blocked forever. Thus the problem of two threads waiting for the same input arises only
with multithreaded processes.

12
Q

In Fig. 2-8 (https://imgur.com/a/vDmpDVe), a multithreaded Web server is shown. If the only way to read from a file is the
normal blocking read system call, do you think user-level threads or kernel-level threads are
being used for the Web server? Why?

A

A worker thread will block when it has to read a Web page from the disk. If user-level threads
are being used, this action will block the entire process, destroying the value of multithreading.
Thus it is essential that kernel threads are used to permit some threads to block without
affecting the others.

13
Q

In the text, we described a multithreaded Web server, showing why it is better than a
single-threaded server and a finite-state machine server. Are there any circumstances in
which a single-threaded server might be better? Give an example.

A

Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds
unnecessary complexity. As an example, consider a telephone directory assistance number (like
555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64
characters, the entire database takes 64 megabytes and can easily be kept in the server’s
memory to provide fast lookup.

14
Q

…the register set is listed as a per-thread rather than a per-process item. Why?
After all, the machine has only one set of registers.

A

Each thread has its own execution state, and the registers (program counter, stack pointer, and
so on) hold that state while the thread runs. Although the machine has only one physical
register set, each thread needs its own saved copy of it: when a thread is suspended, its register
contents are saved, and they are reloaded at the next context switch when the thread runs
again. That saved copy is what makes the register set a per-thread rather than a per-process item.

15
Q

Why would a thread ever voluntarily give up the CPU by calling thread_yield? After all,
since there is no periodic clock interrupt, it may never get the CPU back.

A

Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the
good of the application, then a thread will yield. After all, it is usually the same programmer
who writes the code for all of them.

16
Q

Can a thread ever be preempted by a clock interrupt? If so, under what circumstances? If
not, why not?

A

User-level threads cannot be preempted by the clock unless the whole process’ quantum has
been used up (although transparent clock interrupts can happen). Kernel-level threads can be
preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the
current process and thus the current thread. The kernel is free to pick a different thread from
the same process to run next if it so desires.

17
Q

In this problem you are to compare reading a file using a single-threaded file server and a
multithreaded server. It takes 12 msec to get a request for work, dispatch it, and do the rest
of the necessary processing, assuming that the data needed are in the block cache. If a disk
operation is needed, as is the case one-third of the time, an additional 75 msec is required,
during which time the thread sleeps. How many requests/sec can the server handle if it is
single-threaded? If it is multithreaded?

A

(FFE18)
In the single-threaded case, the cache hits take 12 msec and cache misses take 87 msec. The
weighted average is 2/3 × 12 + 1/3 × 87 = 37 msec. Thus, the mean request takes 37 msec and
the server can do about 1000 / 37 = 27 per second. For a multithreaded server, all the waiting
for the disk is overlapped, so every request takes 12 msec, and the server can handle 1000 / 12
= 83 requests per second.

18
Q

What is the biggest advantage of implementing threads in user space? What is the biggest
disadvantage?

A

The biggest advantage is the efficiency. No traps to the kernel are needed to switch threads. The
biggest disadvantage is that if one thread blocks, the entire process blocks.

19
Q

In Fig. 2-15 (https://imgur.com/a/CkNgCLR) the thread creations and messages printed by the threads are interleaved at
random. Is there a way to force the order to be strictly thread 1 created, thread 1 prints
message, thread 1 exits, thread 2 created, thread 2 prints message, thread 2 exits, and so on?
If so, how? If not, why not?

A

Yes, it can be done. After each call to pthread_create, the main program could do a pthread_join
to wait until the thread just created has exited before creating the next thread.
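A minimal C sketch of this serialization, modeled loosely on the Fig. 2-15 program (the names
and thread count are illustrative; in real code the return values of the pthread calls should be
checked):

#include <stdio.h>
#include <pthread.h>

#define NUMBER_OF_THREADS 10

void *print_hello_world(void *tid)
{
    printf("Hello, I am thread %ld\n", (long)tid);   /* the thread's work */
    return NULL;                                     /* the thread exits here */
}

int main(void)
{
    pthread_t thread;
    long i;
    for (i = 0; i < NUMBER_OF_THREADS; i++) {
        pthread_create(&thread, NULL, print_hello_world, (void *)i);
        pthread_join(thread, NULL);   /* wait for this thread to exit before creating the next */
    }
    return 0;
}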

20
Q
In the discussion on global variables in threads, we used a procedure create_global to allocate
storage for a pointer to the variable, rather than the variable itself. Is this essential, or could
the procedures work with the values themselves just as well?
A

The pointers are really necessary because the size of the global variable is unknown. It could be
anything from a character to an array of floating-point numbers. If the value were stored, one
would have to give the size to create_global, which is all right, but what type should the second
parameter of set_global be, and what type should the value of read_global be?
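A toy C sketch of why storing only a void * works (this single table, its size, and the names are
hypothetical, and the per-thread indexing a real implementation would need is omitted for
brevity):

#include <string.h>

#define MAX_GLOBALS 32

/* Each slot holds only a pointer, so one fixed signature serves any type. */
static struct { const char *name; void *value; } globals[MAX_GLOBALS];
static int nglobals;

void create_global(const char *name)
{
    globals[nglobals].name = name;    /* no overflow or duplicate checks in this sketch */
    globals[nglobals].value = 0;
    nglobals++;
}

void set_global(const char *name, void *value)
{
    for (int i = 0; i < nglobals; i++)
        if (strcmp(globals[i].name, name) == 0)
            globals[i].value = value;
}

void *read_global(const char *name)
{
    for (int i = 0; i < nglobals; i++)
        if (strcmp(globals[i].name, name) == 0)
            return globals[i].value;
    return 0;
}

If the value itself were stored, set_global and read_global would need a different signature for
every possible type, which is exactly the problem noted above.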

21
Q

Consider a system in which threads are implemented entirely in user space, with the run-
time system getting a clock interrupt once a second. Suppose that a clock interrupt occurs
while some thread is executing in the run-time system. What problem might occur? Can you
suggest a way to solve it?

A

It could happen that the runtime system is precisely at the point of blocking or unblocking a
thread, and is busy manipulating the scheduling queues. This would be a very inopportune
moment for the clock interrupt handler to begin inspecting those queues to see if it was time to
do thread switching, since they might be in an inconsistent state. One solution is to set a flag
when the runtime system is entered. The clock handler would see this and set its own flag, then
return. When the runtime system finished, it would check the clock flag, see that a clock
interrupt occurred, and now run the clock handler.
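A schematic C sketch of that flag protocol (reschedule_threads is a hypothetical routine
standing in for the run-time system's thread switch):

volatile int in_runtime_system = 0;    /* set while the run-time system manipulates its queues */
volatile int clock_tick_pending = 0;   /* set by the handler when it must defer its work */

void reschedule_threads(void);         /* hypothetical: inspects queues, switches threads */

void clock_interrupt_handler(void)
{
    if (in_runtime_system) {
        clock_tick_pending = 1;        /* queues may be inconsistent: just note the tick */
        return;
    }
    reschedule_threads();              /* safe to inspect the queues now */
}

void runtime_system_leave(void)        /* called as the run-time system finishes */
{
    in_runtime_system = 0;
    if (clock_tick_pending) {          /* a tick arrived while we were busy */
        clock_tick_pending = 0;
        reschedule_threads();          /* run the deferred clock work now */
    }
}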

22
Q

Suppose that an operating system does not have anything like the select system call to see
in advance if it is safe to read from a file, pipe, or device, but it does allow alarm clocks to be
set that interrupt blocked system calls. Is it possible to implement a threads package in user
space under these conditions? Discuss.

A

Yes it is possible, but inefficient. A thread wanting to do a system call first sets an alarm timer,
then does the call. If the call blocks, the timer returns control to the threads package. Of course,
most of the time the call will not block, and the timer has to be cleared. Thus each system call
that might block has to be executed as three system calls. If timers go off prematurely, all kinds
of problems develop. This is not an attractive way to build a threads package.

23
Q

Does the busy waiting solution using the turn variable (Fig. 2-23, https://imgur.com/a/m5GcICt) work when the two
processes are running on a shared-memory multiprocessor, that is, two CPUs sharing a
common memory?

A

Yes, it still works, but it still is busy waiting, of course.

24
Q

Does Peterson’s solution to the mutual-exclusion problem shown in Fig. 2-24 (https://imgur.com/a/jmvR9wQ) work when
process scheduling is preemptive? How about when it is nonpreemptive?

A

It certainly works with preemptive scheduling. In fact, it was designed for that case. When
scheduling is nonpreemptive, it might fail. Consider the case in which turn is initially 0 but
process 1 runs first. It will just loop forever and never release the CPU.
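For reference, Peterson's algorithm in the style of Fig. 2-24 looks like this in C:

#define FALSE 0
#define TRUE  1
#define N     2                       /* number of processes */

int turn;                             /* whose turn is it? */
int interested[N];                    /* all values initially 0 (FALSE) */

void enter_region(int process)        /* process is 0 or 1 */
{
    int other = 1 - process;          /* number of the other process */
    interested[process] = TRUE;       /* show that you are interested */
    turn = process;                   /* set flag */
    while (turn == process && interested[other] == TRUE)
        ;                             /* busy wait */
}

void leave_region(int process)        /* process: who is leaving */
{
    interested[process] = FALSE;      /* indicate departure from critical region */
}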

25
Q

Can the priority inversion problem discussed in Sec. 2.3.4 happen with user-level threads?
Why or why not?

A

The priority inversion problem occurs when a low-priority process is in its critical region and
suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will
run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly
preempted to allow a high-priority thread to run, because there is no preemption. With
kernel-level threads this problem can arise.

26
Q

In Sec. 2.3.4, a situation with a high-priority process, H, and a low-priority process, L, was
described, which led to H looping forever. Does the same problem occur if round-robin
scheduling is used instead of priority scheduling? Discuss.

A

With round-robin scheduling it works. Sooner or later L will run, and eventually it will leave its
critical region. The point is, with priority scheduling, L never gets to run at all; with round robin,
it gets a normal time slice periodically, so it has the chance to leave its critical region.

27
Q

In a system with threads, is there one stack per thread or one stack per process when
user-level threads are used? What about when kernel-level threads are used? Explain.

A

Each thread calls procedures on its own, so it must have its own stack for the local variables,
return addresses, and so on. This is equally true for user-level threads as for kernel-level
threads.

28
Q

When a computer is being developed, it is usually first simulated by a program that runs
one instruction at a time. Even multiprocessors are simulated strictly sequentially like this. Is
it possible for a race condition to occur when there are no simultaneous events like this?

A

Yes. The simulated computer could be multiprogrammed. For example, while process A is
running, it reads out some shared variable. Then a simulated clock tick happens and process B
runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if
it also adds 1 to the variable, we have a race condition.

29
Q

The producer-consumer problem can be extended to a system with multiple producers
and consumers that write (or read) to (from) one shared buffer. Assume that each producer
and consumer runs in its own thread. Will the solution presented in Fig. 2-28 (https://imgur.com/a/z2mj35T), using
semaphores, work for this system?

A

Yes, it will work as is. At a given time instant, only one producer (consumer) can add (remove)
an item to (from) the buffer.
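A minimal runnable sketch of that structure using POSIX semaphores (a translation of the
Fig. 2-28 idea, not the book's exact code; the buffer size and item counts are illustrative).
Because mutex also protects the in and out indices, more producer and consumer threads can
be added without changing the logic:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 100                        /* number of slots in the buffer */

sem_t mutex;                         /* controls access to the buffer */
sem_t empty;                         /* counts empty buffer slots */
sem_t full;                          /* counts full buffer slots */
int buffer[N], in = 0, out = 0;      /* a simple circular buffer */

void *producer(void *arg)
{
    for (int item = 0; item < 1000; item++) {
        sem_wait(&empty);            /* down(empty): wait for a free slot */
        sem_wait(&mutex);            /* down(mutex): enter critical region */
        buffer[in] = item;           /* insert the item */
        in = (in + 1) % N;
        sem_post(&mutex);            /* up(mutex): leave critical region */
        sem_post(&full);             /* up(full): one more full slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 1000; i++) {
        sem_wait(&full);             /* down(full): wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];      /* remove the item */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&mutex, 0, 1);          /* binary semaphore: buffer lock */
    sem_init(&empty, 0, N);          /* initially all slots are empty */
    sem_init(&full, 0, 0);           /* initially there are no items */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}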

30
Q

Consider the following solution to the mutual-exclusion problem involving two processes
P0 and P1. Assume that the variable turn is initialized to 0. Process P0’s code is presented
below.

/* Other code */
while (turn != 0) { }   /* Do nothing and wait. */
/* Critical section. */
turn = 0;
/* Other code */

For process P1, replace 0 by 1 in above code. Determine if the solution meets all the required
conditions for a correct mutual-exclusion solution. (TUT5)

A

First, recall the four required conditions:
1. No two processes may be simultaneously inside their critical regions: the variable turn can
hold only one value at a time, so P0 and P1 cannot be inside their critical regions
together. This condition is met.
2. No assumptions may be made about speeds or the number of CPUs: no such assumptions
are made. This condition is also met.
3. No process running outside its critical region may block other processes: this condition is
violated. turn is initialized to 0, and P0 sets it back to 0 when leaving its critical region, so
turn is always 0. P1 therefore spins forever in its while loop, even while P0 is executing its
other, noncritical code.
4. No process should have to wait forever to enter its critical region: also violated, since P1
waits forever.
Hence the solution is not a correct mutual-exclusion solution.

31
Q

How could an operating system that can disable interrupts implement semaphores?

A

To do a semaphore operation, the operating system first disables interrupts. Then it reads the
value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the
calling process on a list of blocked processes associated with the semaphore. If it is doing an up,
it must check to see if any processes are blocked on the semaphore. If one or more processes
are blocked, one of them is removed from the list of blocked processes and made runnable.
When all these operations have been completed, interrupts can be enabled again.
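A schematic C sketch of the sequence just described; disable_interrupts, enable_interrupts,
block_current_on, and unblock_one are hypothetical kernel primitives:

struct process;                        /* opaque process descriptor */

struct semaphore {
    int value;                         /* the semaphore count */
    struct process *queue;             /* processes blocked on this semaphore */
};

void disable_interrupts(void);         /* hypothetical kernel primitives */
void enable_interrupts(void);
void block_current_on(struct semaphore *s);   /* enqueue caller and switch away */
void unblock_one(struct semaphore *s);        /* dequeue one process, make it runnable */

void down(struct semaphore *s)
{
    disable_interrupts();              /* make the read-modify-write atomic */
    if (s->value == 0)
        block_current_on(s);           /* caller sleeps; a later up hands it the "permit" */
    else
        s->value--;
    enable_interrupts();
}

void up(struct semaphore *s)
{
    disable_interrupts();
    if (s->queue != 0)
        unblock_one(s);                /* wake a blocked process instead of counting */
    else
        s->value++;
    enable_interrupts();
}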

32
Q

Show how counting semaphores (i.e., semaphores that can hold an arbitrary value) can be
implemented using only binary semaphores and ordinary machine instructions.

A

Associated with each counting semaphore are two binary semaphores, M, used for mutual
exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter
that holds the number of ups minus the number of downs, and a list of processes blocked on
that semaphore. To implement down, a process first gains exclusive access to the semaphore's
counter and list by doing a down on M. It then decrements the counter. If the counter is zero or
more, it just does an up on M and exits. If the counter is negative, the process is put on the list
of blocked processes. Then an up is done on M and a down is done on B to block the process.
To implement up, first M is downed to get mutual exclusion, and then the counter is
incremented. If it is more than zero, no one was blocked, so all that needs to be done is to up
M. If, however, the counter is now negative or zero, some process must be removed from the
list of blocked processes. Finally, an up is done on B and then on M, in that order.
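A C sketch of this construction; binary_semaphore, binary_down, and binary_up are assumed
primitives, and the blocked-process list management is elided:

typedef struct { int value; } binary_semaphore;   /* assumed primitive type */
void binary_down(binary_semaphore *s);            /* assumed: waits until value is 1, sets it to 0 */
void binary_up(binary_semaphore *s);              /* assumed: sets value to 1, releasing one waiter */

struct counting_semaphore {
    binary_semaphore M;     /* mutual exclusion on count and list, initially 1 (open) */
    binary_semaphore B;     /* used to block and unblock processes, initially 0 */
    int count;              /* number of ups minus number of downs */
};

void counting_down(struct counting_semaphore *s)
{
    binary_down(&s->M);     /* gain exclusive access to count and list */
    s->count--;
    if (s->count >= 0) {
        binary_up(&s->M);   /* no need to block */
    } else {
        /* put the caller on the blocked list here (elided) */
        binary_up(&s->M);
        binary_down(&s->B); /* block here until some up releases us */
    }
}

void counting_up(struct counting_semaphore *s)
{
    binary_down(&s->M);
    s->count++;
    if (s->count <= 0) {
        /* remove one process from the blocked list here (elided) */
        binary_up(&s->B);   /* unblock it: up on B ... */
    }
    binary_up(&s->M);       /* ... and then on M, in that order */
}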

33
Q

If a system has only two processes, does it make sense to use a barrier to synchronize
them? Why or why not?

A

If the program operates in phases and neither process may enter the next phase until both are
finished with the current phase, it makes perfect sense to use a barrier.

34
Q

Can two threads in the same process synchronize using a kernel semaphore if the threads
are implemented by the kernel? What if they are implemented in user space? Assume that no
threads in any other processes have access to the semaphore. Discuss your answers.

A

With kernel threads, a thread can block on a semaphore and the kernel can run some other
thread in the same process. Consequently, there is no problem using semaphores. With user-
level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is
blocked and does not run it ever again. Consequently, the process fails.

35
Q

Synchronization within monitors uses condition variables and two special operations, wait
and signal. A more general form of synchronization would be to have a single primitive,
waituntil, that had an arbitrary Boolean predicate as parameter. Thus, one could say, for
example,
waituntil x < 0 or y + z > n
The signal primitive would no longer be needed. This scheme is clearly more general than that
of Hoare or Brinch Hansen, but it is not used. Why not? (Hint: Think about the
implementation.)

A

It is very expensive to implement. Each time any variable that appears in a predicate on which
some process is waiting changes, the run-time system must re-evaluate the predicate to see if
the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only
be awakened on a signal primitive.

36
Q

A fast-food restaurant has four kinds of employees: (1) order takers, who take customers’
orders; (2) cooks, who prepare the food; (3) packaging specialists, who stuff the food into
bags; and (4) cashiers, who give the bags to customers and take their money. Each employee
can be regarded as a communicating sequential process. What form of interprocess
communication do they use? Relate this model to processes in UNIX.

A

The employees communicate by passing messages: orders, food, and bags in this case. In UNIX
terms, the four processes are connected by pipes.

37
Q

Suppose that we have a message-passing system using mailboxes. When sending to a full
mailbox or trying to receive from an empty one, a process does not block. Instead, it gets an
error code back. The process responds to the error code by just trying again, over and over,
until it succeeds. Does this scheme lead to race conditions?

A

It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.

38
Q

The CDC 6600 computers could handle up to 10 I/O processes simultaneously using an
interesting form of round-robin scheduling called processor sharing. A process switch
occurred after each instruction, so instruction 1 came from process 1, instruction 2 came from
process 2, etc. The process switching was done by special hardware, and the overhead was
zero. If a process needed T sec to complete in the absence of competition, how much time
would it need if processor sharing was used with n processes?

A

It will take nT sec.

39
Q
Consider the following piece of C code:
#include <unistd.h>
#include <stdlib.h>

int main(void)
{
    fork();     /* after this, two processes are running */
    fork();     /* each of them forks again */
    exit(0);
}
How many child processes are created upon execution of this program?
A

Three processes are created. After the initial process forks, there are two processes running, a
parent and a child. Each of them then forks, creating two additional processes. Then all the
processes exit.

40
Q

Round-robin schedulers normally maintain a list of all runnable processes, with each
process occurring exactly once in the list. What would happen if a process occurred twice in
the list? Can you think of any reason for allowing this?

A

If a process occurs multiple times in the list, it will get multiple quanta per cycle. This approach
could be used to give more important processes a larger share of the CPU. But when the process
blocks, all entries had better be removed from the list of runnable processes.

41
Q

Can a measure of whether a process is likely to be CPU bound or I/O bound be determined
by analyzing source code? How can this be determined at run time?

A

In simple cases it may be possible to see if I/O will be limiting by looking at source code. For
instance a program that reads all its input files into buffers at the start will probably not be I/O
bound, but a program that reads and writes incrementally to a number of different files (such as
a compiler) is likely to be I/O bound. If the operating system provides a facility such as the UNIX
ps command that can tell you the amount of CPU time used by a program, you can compare this
with the total time to complete execution of the program. This is, of course, most meaningful on
a system where you are the only user.

42
Q

Explain how time quantum value and context switching time affect each other, in a round-
robin scheduling algorithm.

A

If the context switching time is large, then the time quantum value has to be proportionally
large. Otherwise, the overhead of context switching can be quite high. Choosing large time
quantum values can lead to an inefficient system if the typical CPU burst times are less than the
time quantum. If context switching is very small or negligible, then the time quantum value can
be chosen with more freedom.

43
Q

Measurements of a certain system have shown that the average process runs for a time T
before blocking on I/O. A process switch requires a time S, which is effectively wasted
(overhead). For round-robin scheduling with quantum Q, give a formula for the CPU efficiency
for each of the following:
a) Q = ∞
b) Q > T
c) S < Q < T
d) Q = S
e) Q nearly 0

A

The CPU efficiency is the useful CPU time divided by the total CPU time. When Q ≥ T, the basic
cycle is for the process to run for T and undergo a process switch for S. Thus, (a) and (b) have an
efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q
process switches, wasting a time ST/Q. The efficiency here is then T/(T + ST/Q), which reduces to
Q/(Q + S), the answer to (c). For (d), we just substitute Q for S and find that the efficiency is
50%. Finally, for (e), as Q → 0 the efficiency goes to 0. In summary:
a) Q = ∞: T/(S + T)
b) Q > T: T/(S + T)
c) S < Q < T: Q/(Q + S)
d) Q = S: 50%
e) Q → 0: efficiency → 0

44
Q

Five jobs are waiting to be run. Their expected run times are 9, 6, 3, 5, and X. In what order
should they be run to minimize average response time? (Your answer will depend on X.)

A
Shortest job first is the way to minimize average response time.
● 0 < X ≤ 3: X, 3, 5, 6, 9.
● 3 < X ≤ 5: 3, X, 5, 6, 9.
● 5 < X ≤ 6: 3, 5, X, 6, 9.
● 6 < X ≤ 9: 3, 5, 6, X, 9.
● X > 9: 3, 5, 6, 9, X.
45
Q

Five batch jobs, A through E, arrive at a computer center at almost the same time. They
have estimated running times of 10, 6, 2, 4, and 8 minutes. Their (externally determined)
priorities are 3, 5, 2, 1, and 4, respectively, with 5 being the highest priority. For each of the
following scheduling algorithms, determine the mean process turnaround time. Ignore
process switching overhead.
a) Round robin.
b) Priority scheduling.
c) First-come, first-served (run in order 10, 6, 2, 4, 8).
d) Shortest job first.
For (a), assume that the system is multiprogrammed, and that each job gets its fair share of
the CPU. For (b) through (d), assume that only one job at a time runs, until it finishes. All jobs
are completely CPU bound.

A

(TUT6)
a) For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of
10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after
which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6
minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24,
28, and 30, for an average of 22 minutes.
b) For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at
14, 24, 26, and 30, for an average of 20 minutes.
c) If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an
average of 19.2 minutes.
d) Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of
14 minutes.

46
Q

A process running on CTSS needs 30 quanta to complete. How many times must it be
swapped in, including the very first time (before it has run at all)?

A

The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so it must be
swapped in 5 times.

47
Q

Consider a real-time system with two voice calls of periodicity 5 msec each with CPU time
per call of 1 msec, and one video stream of periodicity 33 ms with CPU time per call of 11
msec. Is this system schedulable?

A

(TUT6)
Each voice call needs 1 msec of CPU time every 5 msec, which is 200 msec of CPU time per
second; the two voice calls together consume 400 msec per second. The video stream needs
11 msec every 33 msec, or about 333 msec of CPU time per second. The sum is roughly 733
msec per second, which makes the system schedulable.

48
Q

Consider a real-time system with two voice calls of periodicity 5 msec each with CPU time
per call of 1 msec, and one video stream of periodicity 33 ms with CPU time per call of 11
msec. Can another video stream be added and have the system still be
schedulable?

A

(TUT6)
Another video stream will require an additional 333 msec of CPU time per second, bringing the
total to roughly 1066 msec per second. Since that exceeds the 1000 msec of CPU time available
each second, the system would not be schedulable.

49
Q

The aging algorithm with a = 1/2 is being used to predict run times. The previous four runs,
from oldest to most recent, are 40, 20, 40, and 15 msec. What is the prediction of the next
time?

A

(TUT6)
Let’s denote measured runtime as T and predicted runtime as t.
● t₁ = T₀ [40 msec]
● t₂ = T₁/2 + t₁/2 = T₁/2 + T₀/2 [30 msec]
● t₃ = T₂/2 + t₂/2 = T₂/2 + T₁/4 + T₀/4 [35 msec]
● t₄ = T₃/2 + t₃/2 = T₃/2 + T₂/4 + T₁/8 + T₀/8 [25 msec]
● The answer is 25 msec.
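To reproduce the chain of predictions, a small C sketch:

#include <stdio.h>

int main(void)
{
    double runs[] = { 40, 20, 40, 15 };        /* measured run times, oldest first */
    double t = runs[0];                        /* first prediction is the first measurement */
    for (int i = 1; i < 4; i++)
        t = runs[i] / 2 + t / 2;               /* aging with a = 1/2 */
    printf("next prediction = %g msec\n", t);  /* prints 25 */
    return 0;
}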

50
Q

A soft real-time system has four periodic events with periods of 50, 100, 200, and 250
msec each. Suppose that the four events require 35, 20, 10, and x msec of CPU time,
respectively. What is the largest value of x for which the system is schedulable?

A

(TUT6)
The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x/250. To be schedulable, this must
be less than 1. Thus x must be less than 12.5 msec.

51
Q

In the dining philosophers problem, let the following protocol be used: An even-numbered
philosopher always picks up his left fork before picking up his right fork; an odd-numbered
philosopher always picks up his right fork before picking up his left fork. Will this protocol
guarantee deadlock-free operation?

A

Yes. There will be always at least one fork free and at least one philosopher that can obtain both
forks simultaneously. Hence, there will be no deadlock. You can try this for N = 2, N = 3 and N =
4 and then generalize.

52
Q

A real-time system needs to handle two voice calls that each run every 6 msec and
consume 1 msec of CPU time per burst, plus one video at 25 frames/sec, with each frame
requiring 20 msec of CPU time. Is this system schedulable?

A

Each voice call runs 166.67 times/second and uses up 1 msec per burst, so each voice call needs
166.67 msec per second or 333.33 msec for the two of them. The video runs 25 times a second
and uses up 20 msec each time, for a total of 500 msec per second. Together they consume
833.33 msec per second, so there is time left over and the system is schedulable.

53
Q

Consider a system in which it is desired to separate policy and mechanism for the
scheduling of kernel threads. Propose a means of achieving this goal.

A

The kernel could schedule processes by any means it wishes, but within each process it runs
threads strictly in priority order. By letting the user process set the priority of its own threads,
the user controls the policy but the kernel handles the mechanism.

54
Q

The readers and writers problem can be formulated in several ways with regard to which
category of processes can be started when. Carefully describe three different variations of the
problem, each one favoring (or not favoring) some category of processes. For each variation,
specify what happens when a reader or a writer becomes ready to access the database, and
what happens when a process is finished.

A

Variation 1: readers have priority. No writer may start when a reader is active. When a new
reader appears, it may start immediately unless a writer is currently active. When a writer
finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers.
Variation 2: Writers have priority. No reader may start when a writer is waiting. When the last
active process finishes, a writer is started, if there is one; otherwise, all the readers (if any) are
started.
Variation 3: symmetric version. When a reader is active, new readers may start immediately.
When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have
started reading, we keep reading until there are no readers left. Similarly, once we have started
writing, all pending writers are allowed to run.

55
Q

Write a shell script that produces a file of sequential numbers by reading the last number
in the file, adding 1 to it, and then appending it to the file. Run one instance of the script in
the background and one in the foreground, each accessing the same file. How long does it
take before a race condition manifests itself? What is the critical region? Modify the script to
prevent the race. (Hint: use ln file file.lock to lock the data file.)

A

A possible shell script might be
if [ ! -f numbers ]; then echo 0 > numbers; fi
count=0
while (test $count != 200)
do
    count=`expr $count + 1`
    n=`tail -1 numbers`
    expr $n + 1 >> numbers
done
Run the script twice simultaneously, by starting it once in the background (using &) and again in
the foreground. Then examine the file numbers. It will probably start out looking like an orderly
list of numbers, but at some point it will lose its orderliness, due to the race condition created
by running two copies of the script. The race can be avoided by having each copy of the script
test for and set a lock on the file before entering the critical area, and unlocking it upon leaving
the critical area. This can be done like this:
if ln numbers numbers.lock
then
    n=`tail -1 numbers`
    expr $n + 1 >> numbers
    rm numbers.lock
fi
This version will just skip a turn when the file is inaccessible. Variant solutions could put the
process to sleep, do busy waiting, or count only loops in which the operation is successful.