Exam1 more Flashcards

1
Q

What’s the difference between a process group and a session group?

A

A process group is a set of processes with the same PGID (process group ID). Having the same PGID allows them to be signaled at the same time.

A session group is a set of process groups that belong to a single user’s session. There could be more than one process group in a session group.

2
Q

What is the output of Dijkstra’s Algorithm (the Banker’s Algorithm)?

What’s a downside of this algorithm?

A

This algorithm produces a “safe state execution sequence,” i.e. an ordering in which we can run a set of processes that’s guaranteed to be deadlock-free.

It requires advance knowledge of the memory needs of each process, and this isn’t always possible.

3
Q

Name the four techniques for addressing deadlocks.

A

1) Live with the problem

2) Prevention
- static analysis on source code to perform an exhaustive search of possible states (doesn’t scale well beyond small programs)
- ad hoc genius (ex: the different ways we can come up with to address the Dining Philosophers problem)
- removing preconditions of deadlocks

3) Detection
- use graph theory to systematically detect deadlocks in a program, and then take steps to recover from them

4) Avoidance (ex. Banker’s Algorithm: use an execution sequence that will not let deadlocks happen)

4
Q

Semaphores:

A) what does the p() call do?

B) what does the v() call do?

A

A) count--, or block

  • you could think of this as “grabbing the bathroom key to get in”… and you grab it when you have to go “p” :)
  • p() is the same call as sem_wait()

B) count++, and wake up the next process (if any) in the blocked queue

  • think of this as putting the key back so someone else can take it
  • v() is the same call as sem_post()

Note: p() and v() are both system calls. They ask the OS to do something.

5
Q

Why can busy waiting lead to starvation?

A

There could be a situation in which the lower priority process grabs the lock, but doesn’t get to run because it’s lower priority. Meanwhile the higher priority process gets chosen to run, and it uses its quantum time to keep trying/failing to grab the lock (held by the lower priority process).

(This is the busy-waiting part, where the higher priority process keeps checking to see if it can grab the lock. The lower priority process is starving, awaiting its turn, while it does that.)

Eventually the lower priority process will get a turn, finish its work in the critical section, and release the lock, and then the higher priority process can use it. But in the meantime, time/resources wasted.

An alternative would be to have a process block if it can’t grab the lock. Then the process holding it can get right to work, and wake up the blocked process when it’s done.

6
Q

What are two dangers of using semaphores?

A

1) programmer burden:
- could forget to call v() or p()
- could accidentally switch the order of the p() and v() calls; if v() gets called before p(), both threads will enter the critical section

2) deadlock:
- threads can become blocked in cyclical dependency

7
Q

Dining Philosophers Problem: What 3 properties should a good solution have?

A

sound, deadlock-free, starvation-free

8
Q

Dining Philosophers Problem:

If every philosopher grabs chopstick i and then chopstick i+1, what happens?

A

Deadlock. Every philosopher grabs chopstick i successfully, then blocks trying to grab chopstick i+1, because their neighbor has already picked it up as that neighbor’s own first chopstick.

9
Q

Dining Philosophers Problem:

A) Describe the “global semaphore” solution.

B) What’s a downside of this solution?

A

A) Rather than a semaphore for each resource, there is a single global semaphore. A philosopher can only pick up chopsticks (resources) if holding the global semaphore.

B) This limits concurrency, as it just becomes a one-at-a-time queue to get the global semaphore and access the wanted resources. Ideally we’d find a solution that lets some threads run concurrently.

10
Q

Dining Philosophers Problem:

A) Describe the “breaking symmetry” solution (in which the philosophers don’t all do the same thing).

B) Using this solution, how many philosophers can eat at one time?

A

A) Even-numbered philosophers grab chopstick i then chopstick (i+1) % N. Odd-numbered philosophers grab them in the reverse order.

B) half of them

11
Q

Dining Philosophers Problem:

A) Describe the “wait and retry” solution.

B) Downside of this solution?

A

A) Each philosopher picks up chopstick i and then tries to pick up chopstick i+1. If the second one isn’t available, the philosopher drops the first, waits a bit, and then tries the whole process again.

B) This works, but it’s inefficient. A philosopher can waste work repeatedly picking up and dropping chopsticks before finally succeeding.

12
Q

What is the difference between a zombie child and an orphan child?

A

zombie: child has completed but remains in the OS process table because the parent hasn’t called wait() yet; the entry is freed once the parent reaps it (or at reboot)
orphan: child is still running but its parent has terminated, so the orphan is adopted (and eventually reaped) by the init process

13
Q

What are the 4 preconditions for deadlock?

A

1) mutual exclusion
2) no resource preemption (a process that locks a resource gets to own it until it explicitly releases it, i.e. another process can’t take it away)
3) hold and wait (process can hold resources and block if new resources can’t be acquired right away)
4) circular waiting (cyclical dependency with 2 or more processes blocked waiting for resource held by other process)

Note: You can prevent deadlock by removing any one of these preconditions. All 4 must be present for deadlock to occur.

14
Q

solutions to deadlock: living with the problem

A) Given this solution, what does one do if deadlock occurs?

B) Downside of this approach?

A

A) Kill all processes involved and restart

B) Undesirable side effects, tricky to roll back operations in OS and databases

15
Q

Deadlock prevention via removing “mutual exclusion”:

A) Why would removing mutual exclusion prevent deadlock?

B) Drawback of this solution?

A

A) If multiple threads can access the resource at the same time, processes can’t become stuck waiting for the resource to be released.

B) Some resources are intrinsically unsharable, so not always a desirable/possible solution.

16
Q

Deadlock prevention via removing “hold and wait”:

A) How would this work, and why would it prevent deadlock?

B) Drawback of this solution?

A

A) Could require process to request all resources it needs up front, so it will only run when they’re all available, and it will never end up blocked while holding some of them and waiting for others.

B) It’s not always possible to know all needed resources up front.

17
Q

Deadlock prevention via removing “circular waiting”:

A) How would this work, and why would it prevent deadlock?

B) Drawback of this solution?

A

A) Enforce order of resources, i.e. number them and only let process take resources in order of increasing number. This makes a circle impossible, preventing a cyclical dependency.

B) This requires knowing all needed resources up front (not always possible) and assigning them a suitable ordering.

18
Q

Deadlock prevention via removing “no resource preemption”:

A) How would this work, and why would it prevent deadlock?

B) Drawback of this solution?

A

A) Allow resource preemption, i.e. allow a process to take a resource from a process that hasn’t released it yet. Could be a priority scheme: allow higher-priority process to steal resource from lower-priority process.

B) Tricky to roll back execution of lower-priority process. Also, the lower-priority process could starve.

19
Q

A) What is a sound program?

B) Is it always deadlock-free?

A

A) A program guaranteed to produce correct results at all stages, without race conditions or synchronization issues.

B) No, a program can be sound but have deadlock or starvation potential. Mutual exclusion helps with soundness but can cause these problems sometimes.

20
Q

Dijkstra’s algorithm identifies certain execution states as “unsafe.”

A) Does an unsafe state always result in a deadlock?

B) Why?

A

A) No, deadlock situations are a subset of unsafe states, i.e. some unsafe states won’t result in a deadlock.

B) States are deemed safe or unsafe based on the max number of resources each process could request, but a process may not always request its max number of resources. Thus the notion of “safe” is conservative.

21
Q

A) In Dijkstra’s algorithm, when is a request for resources valid?

B) If the request is valid, is it also safe?

A

A) When the number of resources requested by a process does not exceed the number available, AND granting it would not push the number of resources that process holds beyond its maximum claim.

B) No, the resulting state could be unsafe even if the request is considered valid. Whether a request is safe depends on whether there exists a safe (deadlock-free) sequential ordering of all processes’ requests.

22
Q

When we talk about deadlock avoidance in Dijkstra’s algorithm, we refer to the “number of resources” that each process requests. How are these resources different from the resources in the Dining Philosophers problem?

A

Dijkstra’s algorithm refers to multiple, interchangeable instances of resources, such as spaces in memory. Here only the number of resources is important.

In the Dining Philosophers problem, each resource is unique, such as a specific file that a process may want to access.

23
Q

A) Is resource preemption allowed in the Dijkstra’s algorithm environment?

B) Based on your answer to A), what will happen if there aren’t enough resources available?

A

A) No: once a process takes a resource, it can’t be taken by another process until the first process gives it back.

B) Processes will block if they can’t access the resources they need. This could cause deadlock, hence the need to find a safe allocation state that will never lead to deadlock.

24
Q

Dijkstra’s (Banker’s) Algorithm:

When the algorithm considers a request for resources, what three checks does it perform?

A

1) Make sure request does not exceed number of available resources.
2) Make sure the number requested plus the number currently held would not exceed maximum claim for that process.
3) Check for a safe state: can the request be granted without causing deadlock? If no safe state, deny request. If safe state, grant it.

25
Q

Dijkstra’s (Banker’s) Algorithm:

A) Using this algorithm, when does a process have to declare the maximum number of resources it will need at one time?

B) How could an unsafe state cause deadlock?

A

A) in advance, before running

B) If processes end up in a cyclical dependency where no process can get the resources it needs in order to finish its job and free its currently held resources.

26
Q

Dijkstra’s (Banker’s) Algorithm:

A) When is an allocation state considered safe?

B) Does this require sequential, one-at-a-time execution?

A

A) When there exists a sequential ordering of processes such that all of them will be able to finish executing (will have access to the # of resources they need) without entering a deadlock state.

B) Not necessarily. New processes can start running concurrently as long as the algorithm finds a safe state. If there isn’t a safe state, the new process will block until resources become available.

27
Q

deadlock detection:

A) What is the systems-level problem involved in deadlock detection?

B) How does graph theory play a role in deadlock detection?

C) Once deadlock is detected, what must the OS do?

A

A) developing a resource allocation graph at runtime as processes are acquiring resources

B) Graph theory is used to detect a cycle in the resource allocation graph. Cycle detection is a well-known problem solvable in linear time, O(V + E), with depth-first search. With single-instance resources, there is a cycle if and only if there is deadlock.

C) Determine why the deadlock happened and how to roll back from it.

28
Q

If the resource allocation graph (RAG) for a program has a cycle, what do you know about that program in its current state?

A

There is a deadlock (assuming each resource has a single instance).

The RAG has a cycle if and only if there is deadlock.

29
Q

What’s the difference between a resource allocation graph (RAG) and a wait-for graph?

A

The wait-for graph removes the resources and just shows the relationships between the processes.

30
Q

What are three ways to deal with a deadlock once it is detected?

A

1) preemption: take resource from its current owner (but often impossible or difficult to roll back operations to the time the resource was acquired)
2) roll back to a checkpoint (saved state) that occurred sometime before the resource was required (common technique used in databases)
3) manual killing: keep killing processes in cycle until no more deadlock

31
Q

A conservative approach to resource allocation would sacrifice ________ for _________.

A

sacrifice CONCURRENCY for DEADLOCK AVOIDANCE

Basically, a conservative approach could be to run processes one at a time to avoid deadlocks.

32
Q

What are four drawbacks to Dijkstra’s algorithm?

A

1) Requires knowing number of needed resources for every process in advance (not always possible)
2) Not dynamic (due to point above)
3) Time-consuming because it has to check all permutations in which processes could run
4) Potential for false positives: some allocation states could be labeled unsafe even if they wouldn’t necessarily cause deadlocks

33
Q

What gets stored in a Process Control Block (PCB)?

A

1) Execution context
- stack pointer
- program counter
- register values
- status (running/blocked/ready)

2) Memory context
- pointer to the process’s page table

3) Open File Descriptor Table
- all the files that have been opened by the process

34
Q

What are the segments of a process’s virtual memory space?

A

CODE

STATIC/GLOBAL DATA

HEAP

STACK

35
Q

Threads vs. Processes

A) What do threads and processes have in common?

B) Pro and con of processes?

C) Pro and con of threads?

A

A) They can both run concurrently.

B)

pro: isolation
con: expensive overhead to context-switch and communicate between processes

C)

pro: less work to context-switch between threads because they share a virtual memory space
con: synchronization challenges, race conditions, deadlock issues

36
Q

3 reasons why processes are more expensive than threads?

A

1) resource requirements: each process needs its own PCB
2) inter-process communication requires signals, which are handled in kernel mode and require context-switching (likewise, data transfer requires kernel data structures such as pipes)
3) high cost of process creation: every new process takes time to allocate and initialize all required data structures for PCB

37
Q

Threads

A) What do threads share?

B) What do they keep separate?

A

A) they share heap memory, static data segment, code segment, and open files

B) they have their own execution states (stack space, stack pointer, program counter)

38
Q

Threads

A) Where is the thread table stored for user-level threads vs. kernel-level threads?

B) How does that affect program execution?

A

A) for user-level threads, stored in user space; for kernel-level threads, stored in kernel space

B) Kernel-level threads require context-switching, which could slow program execution.

39
Q

When do we prefer multi-threaded applications to multi-process applications?

A

Both threads and processes are useful when we want to run multiple tasks at once, parallelize computations, and interleave tasks like I/O and computation.

Threads are preferable when we want to share data structures among these tasks, there is no need for resource isolation, and we want the application to scale well.

40
Q

If threads are in user space…

A) How does this affect runtime of the process that owns the threads?

B) What power does this give to the process that owns the threads?

C) What disadvantage does this have for making system calls?

A

A) Runtime for that process gets divided among its threads; the OS doesn’t distinguish single-threaded processes from multi-threaded processes.

B) It can override the OS scheduling algorithm to choose time allocation among its threads.

C) If any one of the process’s threads makes a system call, the entire process could block.

41
Q

What are race conditions?

A

A situation in which two or more processes (or threads) race to update a shared variable, and the result depends non-deterministically on which one wins the race.

42
Q

semaphores:

A) What is a semaphore?

B) What do the p() and v() calls do?

C) What is the equivalent name for these calls in the Linux API?

D) Can context-switching occur during a p() or v() call (before it is complete)?

E) Where do you call p() and where do you call v()?

A

A) a primitive (kernel object) in the OS that maintains an integer value and a queue of blocked processes; it supports two operations, p() and v()

B)
p():
- decrements the value
- if the value is negative after decrementing, blocks the calling process (adding it to the queue)

v():
- increments the value
- if the value is zero or negative after incrementing (i.e. there are blocked processes), removes the process at the head of the blocked queue and signals it to wake up

C) p() is called sem_wait(), v() is called sem_post()

D) No! Part of what makes semaphores valuable is that the OS treats the p() and v() actions as atomic, in order to solve the Lost Wakeup problem.

E) Call p() right before entering the critical section, and call v() right after leaving it.

43
Q

semaphores:

When would you use a counting semaphore instead of a binary semaphore?

A

When the critical section is inherently sharable by up to N users at a time; initialize the counting semaphore to N.

44
Q

semaphores:

A) What is held in each entry of a semaphore’s blocked queue?

B) How is queue used to avoid the busy-wait problem?

A

A) The PID (process ID) of a blocked process, and a pointer to the next entry in the queue

B) Processes don’t need to keep checking the semaphore value to see if they can enter the critical section. If they call p() and the value is too low, they will enter the blocked queue (stop running). Then they’ll get woken up when it’s their turn.

45
Q

semaphores:

Aside from providing mutual exclusion, how can we use semaphores for controlling execution sequence?

A

1) Initialize the semaphore to 1.
2) Create first thread and have it call p(). It won’t block because it’s first.
3) Then create the second thread and have it call p(). It will block because the first thread hasn’t called v() yet.
4) First thread calls v() when done, which wakes up second thread to run.

Essentially, you are treating the semaphore like a baton that gets passed around from thread to thread, ensuring that a thread doesn’t run until an earlier thread passes it the baton.

46
Q

tool for mutual exclusion: disabling interrupts

A) Why is disabling interrupts an effective tool for ensuring mutual exclusion?

B) Drawbacks of this approach?

A

A) Disabling interrupts will prevent interleaving of actions that could cause bugs. This bug-causing interleaving is a result of unluckily-timed context-switching between threads. Context-switching is driven by clock interrupts, so it can’t happen if interrupts are disabled.

B)

  1. This approach is overkill. It disables ALL context-switching, not just context-switching to other threads that want to enter the critical section.
  2. This doesn’t work well on a multi-core machine, since it might disable interrupts on one CPU but not the others.
  3. Burden on programmer to remember to disable and re-enable interrupts at the right places in the code.
47
Q

How many assembly language instructions are needed to….

A) increment a variable?

B) check if a variable = 1 ?

Bonus question: can context-switching happen during the actions described in A) and B)?

A

A) 3 assembly instructions

  1. load current value
  2. increment value
  3. store the new value

B) 1 assembly instruction (compare)

Answer to bonus question:
Context-switching could happen between any of the instructions in A). Since B) is only one instruction, it is atomic, and no context-switching can occur in the middle of it.

48
Q

tool for mutual exclusion: TSL (test and set lock)

A) What does the assembly instruction “TSL R0, lock” do?

B) Could context-switching occur during this instruction?

C) If R0 holds a value of 1 after this instruction is complete, what does that mean?

D) What if R0 holds a value of 0 after the TSL instruction?

E) What should a thread do upon leaving the critical section?

A

A) Load value of lock into R0, and also set lock to 1.

B) No, it is a single atomic hardware instruction.

C) It means that the lock value was 1 before calling TSL, so do not enter the critical section.

D) The lock was not already set, so enter the critical section.

E) Set lock back to 0.

49
Q

tool for mutual exclusion: TSL (test and set lock)

A) What is a drawback of the TSL approach?

B) How does it improve upon Peterson’s solution?

A

A) It’s a busy-wait solution. Threads must repeat the TSL instruction until the lock is free, wasting CPU cycles.

B) Unlike Peterson’s, it generalizes easily to more than two threads.

50
Q

What is “thread affinity”?

A

On a multi-core processor, the scheduler repeatedly assigns a thread to the same core it ran on before. That core is then more likely to still hold the thread’s data in its cache, improving performance.

51
Q

Why are thread pools used?

A

Creating threads is expensive (even though it’s less expensive than creating processes), so we can make a pool of threads up front and then use idle threads to service new requests. This is more efficient than creating a new thread for each new request.

52
Q

Let’s say we have two threads, T0 and T1, interleaved with race conditions. Let “count” = 0. They both increment “count” 100 times. At the end, what is the:

A) minimum possible value of “count”?

B) maximum possible value of “count”?

A

A) min possible value of count = 2
Why:
T1 reads the initial value of count (0), then is preempted.
T0 runs 99 full iterations, incrementing count to 99.
T1 resumes and stores its stale result, 0 + 1, so count = 1.
T0 begins its final iteration and reads count = 1, then is preempted.
T1 runs its remaining 99 iterations, incrementing count to 100.
T0 resumes and stores its stale result, 1 + 1, so count = 2.

B) max possible value of count = 200
Why: If there is no switching between reading, incrementing, and storing, each process will properly increment count 100 times each, for a total of 200.

53
Q

Problematic Mutual Exclusion Strategy #1: the “turn” variable

Imagine we want to implement mutual exclusion with a single variable called “turn.” We have two threads, T0 and T1. T0 can enter the critical section (CS) when turn = 0 and sets turn =1 when it leaves, and T1 can enter when turn = 1 and sets turn = 0 when it leaves.

A) Does this guarantee mutual exclusion?

B) What are 2 drawbacks of this strategy?

A

A) Yes. The threads will never be able to enter the CS at the same time.

B) First, it’s a busy-wait solution. Each thread has to keep checking the value of turn to see when it’s allowed to enter.
Second, this requires that the threads enter the CS in strict alternation. If T0 finishes and leaves, it can’t go back in until T1 enters and leaves, resetting the turn to 0. But what if T0 wants to enter again before T1 has any reason to enter and leave? It will starve.

54
Q

Problematic Mutual Exclusion Strategy #2: the “flag[2]” array

Imagine we want to implement mutual exclusion with an array called “flag[2]” that holds two integers, a flag for T0 and a flag for T1. Anytime a thread wants to enter the critical section (CS), it checks the flag for the other thread, and if that flag is 1, it waits. When the other thread’s flag is 0, it declares its own intent to enter the CS by setting its flag to 1, and then enters. Both threads always reset their flags to 0 when they leave the CS.

A) Does this guarantee mutual exclusion?

B) Another drawback of this strategy?

C) How does TSL (test-and-set-lock) address a problem with the strategy?

A

A) Nope! We could end up with both threads getting into the CS at the same time. Here’s how:
Say both flags are initially 0. Then T0 wants to enter the CS, so it checks flag[1] and sees it is 0 - great! But before T0 can set its own flag to 1, we switch to T1, who also wants to enter the CS. T1 checks T0’s flag and it’s still 0 - great! Then we switch back to T0, who sets flag[0] to 1 and enters the CS. Then we switch back to T1 who sets flag[1] to 1 and enters the CS. Mutual exclusion broken!!!
(The take-home lesson is that there could be interleaving between checking the other flag and updating your own flag, and this is dangerous.)

B) It’s a busy-wait solution. When a thread wants to enter the CS, it has to keep checking the value of the other thread’s flag. This wastes CPU cycles.

C) TSL (test and set lock) addresses this issue by making the testing and setting of the lock into one atomic instruction, during which there can be no interleaving.

55
Q

Mutual Exclusion: Peterson’s Solution

In Peterson’s solution, we have threads T0 and T1 and we have these two variables:
flag[2];
turn;

When T0 wants to enter the critical section (CS), it sets flag[0] to 1 and sets turn to 0. When T1 wants to enter the CS, it sets flag[1] to 1 and sets turn to 1.

If both threads want to enter the CS simultaneously, there’s a race condition in updating “turn”: it ends up with the value of whichever thread updated it LAST.

Both threads set their flags back to 0 when they leave the CS.

A) T0 does not enter the CS as long as:
flag[1] == ? and turn == ?

B) T1 does not enter the CS as long as:
flag[0] == ? and turn == ?

C) What would be a more intuitive name for the “turn” variable?

D) What are two downsides to Peterson’s solution?

A

A) flag[1] == 1 && turn == 0

B) flag[0] == 1 && turn == 1

C) loser
Because it holds the number of the thread who lost the race to update it. Whenever loser = 0, T0 has to wait (assuming T1 sets its flag to declare that it wants to enter the CS). Whenever loser = 1, T1 has to wait (assuming T0 sets its flag to declare that it wants to enter the CS).

D) First, it’s a busy-wait solution. The threads have to keep checking the value of the turn variable and the other thread’s flag.
Second, it’s hard to generalize to more than 2 threads.