Midterm 1 Flashcards

1
Q

What are the key roles of an operating system?

A
  • Hides complexity of underlying hardware
  • Manages hardware on behalf of applications
  • Provides isolation & protection
  • Has direct, privileged access to the underlying hardware
2
Q

What are the differences between OS abstractions, mechanisms, and policies?

A

Abstractions: Hide the hardware complexity behind a simple interface. Eg, process, thread, file, socket, memory page
Mechanisms: How to perform an action on an abstraction. Eg, create/schedule a thread or process, open, write, allocate
Policies: Defined behavior - what should be done. Eg, LRU, earliest deadline first (EDF), least frequently used (LFU), max # of sockets a process has access to

Example:
Abstraction: Memory
Mechanism: Allocate, map to a process
Policies: LRU

3
Q

What does the principle of separation of mechanism and policy mean?

A

A mechanism should be able to employ and support a range of policies. Understand the common case, then pick a specific policy that makes sense and can be supported given the underlying mechanisms and abstractions the operating system provides.

4
Q

What does the principle 'optimize for the common case' mean?

A

Find the most common case that the application is likely to encounter and optimize for that case. Don’t sacrifice common case performance to improve an edge case.

5
Q

What happens during a user-kernel mode crossing?

A
  • Privilege bit is set
  • Trap occurs and the OS decides whether it should execute the system call
  • System call gets executed (or the OS decides it won't execute the request and terminates the program)
  • Results are placed where the user process can retrieve them
  • Privilege bit is cleared
  • Control returns to the user process
6
Q

What are some of the reasons why user-kernel mode crossing happens?

A

Whenever a user process needs access to a piece of the underlying hardware (via a system call):

  • Open a file
  • Send data on a socket
  • Fork a process
7
Q

What is a system call?

A

A request to access hardware (or a hardware abstraction) on behalf of a user-level process.

8
Q

Does user-level mode have access to hardware?

A

No - hardware is only accessible through kernel-level calls.

9
Q

Contrast the design decisions and performance tradeoffs among monolithic, modular and microkernel-based OS designs.

A

Monolithic:
* Every service that could be required is part of the operating system (eg, multiple file system types)
+ Everything there!
+ inlining, compile time optimization
- customization, portability, manageability (HUGE), memory footprint, performance

Modular (more commonly used):
* Includes the most basic services and APIs, but you can add specific services (schedulers, FS’s, etc)
* Each module programs to the appropriate interface so it can be swapped out
+ maintainability, smaller footprint, less resource needs
- indirection can impact performance (b/c you must go through an interface)
- maintenance can still be an issue

Micro-kernel:
* Only require most basic primitives at OS level
* IPC, Address space, threads
* All other software components (DB, FS, Device Drivers) run outside kernel in user-mode.
* Requires lots of IPC (since most of it is implemented at user level)
+ size, verifiability
- portability, complexity of software dev
- cost of user/kernel crossing

10
Q

Describe the states in a lifetime of a process?

A

New: Allocate/init PCB and resources
Ready: Ready to execute, waiting to be scheduled/dispatched to the CPU
Running: Actively Running on CPU
Waiting: Waiting for I/O or event completion
Terminated: Exit w/ exit code

New -> Ready -> Running
Running -> Ready
Running -> Terminated
Running -> Waiting
Waiting -> Ready
11
Q

Describe the lifetime of a thread?

A

Runnable, active, sleeping, stopped, zombie

Runnable -> Stopped
Runnable -> Active
Active -> Runnable
Active -> Zombie (DONE)
Active -> Stopped
Active -> Sleeping
Sleeping -> Runnable
Sleeping -> Stopped
Stopped -> Runnable
12
Q

Describe all the steps which take place for a process to transition from a waiting (blocked) state to a running (executing on the CPU) state.

A

Blocked (wait for I/O event to complete) -> Ready State (waiting to be scheduled) -> eventually running

13
Q

What are the pros-and-cons of message-based vs. shared-memory-based IPC.

A

Message Based:
* Establish communication channel (such as a shared buffer)
* Then they can send/receive messages across the channel
+ OS manages the channel (same standard system calls)
+ Lower setup cost than shared memory, but a per-use cost
- Overheads (every message is copied from the sender's user space into kernel memory, then into the receiver's user space)

Shared Memory IPC:
* OS establishes shared memory and maps it into each process’ address space
* Processes can read/write to that memory like normal
+ OS is out of the way!
- Must re-implement communication code; more error prone
- Setup is expensive, so best if you can amortize cost of setup across large number of messages
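
A minimal POSIX shared-memory sketch in C (the segment name "/demo_shm" and the 4 KB size are illustrative, not from the lecture):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Create (or open) a named segment; a second process opens
       the same name to share it. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 1;
    ftruncate(fd, 4096);                      /* size the segment */

    /* Map it into this process's address space. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;

    strcpy(p, "hello via shared memory");     /* then read/write like normal memory */

    munmap(p, 4096);
    close(fd);
    shm_unlink("/demo_shm");                  /* remove the name when done */
    return 0;
}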

14
Q

What are the benefits of multithreading?

A
  • Parallelization (if multiple CPUs)
  • Specialization (cache locality, hot cache)
  • Hiding I/O latency (even on a single CPU)
  • Lower memory footprint and cheaper communication/synchronization than a multi-process design
15
Q

Describe the boss-worker multithreading pattern.

A

You have a single ‘boss’ thread that sends jobs to the ‘worker’ threads. The boss often puts work in a queue and then signals to the workers that there is work to be done. The workers will take work off the queue and perform the appropriate tasks.

16
Q

Describe the pipelined multithreading pattern.

A

There are one or more threads for each stage of a pipeline. After a piece of work completes one stage, it moves on to the next.

Generally limited by the slowest stage of the pipeline. Best if the pipeline stages are equal in processing time.

17
Q

What are mutexes?

A

Mutual Exclusion - A 'lock' that only a single thread can hold at a time. If the mutex is held by one thread and another tries to acquire it, the second thread is placed on a wait queue.

18
Q

What’s a simple way to prevent deadlocks? Why?

A

Maintain a global lock order for all mutexes. This ensures that two threads can never each hold a lock the other is waiting for, which is the circular wait a deadlock requires.
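
A minimal sketch in pthreads (the lock names and function are hypothetical): every code path acquires lock_a before lock_b, so no cycle of waiting can form.

#include <pthread.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void do_work(void) {
    /* Global order: lock_a, then lock_b, in every thread.
       If some thread took lock_b first, two threads could each
       hold the lock the other needs and block forever. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    /* ... touch state protected by both locks ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}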

19
Q

Can you explain the relationship among kernel vs. user-level threads?

A

1:1 -
* each user level thread has a kernel level thread associated with it. Pretty much everything is up to the kernel-level manager.
+ OS sees/understands threads, synch, blocking, etc
- Must go to OS for all operations
- OS may have limits on policies, thread #
- portability

M:1 -
* All user-level threads in a process are mapped to a single kernel-level thread. Pretty much everything is up to the user-level thread manager.
+ totally portable; doesn’t depend on OS limits and policies in kernel
- OS has no insights into application needs
- OS may block the entire process if one user-level thread blocks on I/O

M:N -
* Some can be M:1 or 1:1, etc. 
+ can be best of both worlds
+ can have bound or unbound threads
- requires coordination between user and kernel level thread managers
20
Q

Can you explain why some of the mechanisms described in the Solaris papers (for configuring the degree of concurrency, for signaling, the use of LWPs…) are not used or necessary in the current threads model in Linux?

A
  • Modern Linux uses 1:1 model, kernel sees info for each ULT
  • Setting concurrency not really necessary b/c of 1:1 model. Kernel concurrency is always equal to ULT concurrency.
  • Kernel Traps are cheaper
  • Changing kernel level signal masks isn’t such a big deal
  • No real limit on # of threads because of larger amount of memory supported today, large range of IDs, etc
21
Q

What’s an interrupt? handler?

A

An event generated by hardware that notifies the OS something has happened (e.g., a device completed an operation). The handler is the function the OS invokes when such an event occurs.

22
Q

What’s the potential issue if an interrupt or signal handler needs to lock a mutex? What’s the workaround described in the Solaris papers?

A

The interrupted thread could already hold the mutex the handler needs. Since that thread holds the lock but is now executing the handler, the handler will block forever trying to acquire the mutex (a deadlock with itself).

Solutions:

  • Spawn a separate thread to handle the interrupt.
  • Mask the appropriate signals/interrupts prior to acquiring the mutex, then unmask them once the lock is released.
23
Q

Contrast the pros-and-cons of a multithreaded (MT) and multiprocess (MP) implementation of a webserver, as described in the Flash paper.

A

MP:
+ Simpler to implement b/c of the lack of synchronization
- More costly context switches
- Larger memory footprint

MT:
+ Smaller memory footprint
+ faster context switching
- Must support synchronization
- kernel library must support MT
24
Q

What are the benefits of the event-based model described in the Flash paper over MT and MP? What are the limitations? Would you convert the AMPED model into an AMTED (async multi-threaded event-driven) one? How do you think an AMTED version of Flash would compare to the AMPED version of Flash?

A

Benefits:
+ Single thread of execution that is able to do work even after blocking I/O calls are made
+ Small memory footprint since there is only one thread of execution
+ No synchronization to worry about

Limitations:

  • Not necessarily applicable to all software problems
  • Some additional complexities w/ event routing in multi-cpu systems

Yes, I would make it multi-threaded (AMTED) today. I think MT would be a little better because there would be smaller context-switching costs and possibly better cache locality, since everything is in the same virtual address space.

In MT/MP, each thread/process must perform the full request, which leads to a larger memory footprint

25
Q

There are several sets of experimental results from the Flash paper discussed in the lesson. Do you understand the purpose of each set of experiments (what was the question they wanted to answer)? Do you understand why the experiment was structured in a particular way (why they chose the variables to be varied, the workload parameters, the measured metric…)?

A

NOT DONE

26
Q

If you ran your server from the class project for two different traces: (i) many requests for a single file, and (ii) many random requests across a very large pool of very large files, what do you think would happen as you add more threads to your server? Can you sketch a hypothetical graph?

A

NOT DONE

27
Q

How are processes created?

A

Fork -> Exec

28
Q

What is the difference between Fork and Exec?

A

Fork:

  • Create new PCB
  • Copies parent PCB into Child PCB
  • Child continues execution at instruction after Fork

Exec:

  • Takes a PCB (created via Fork)
  • Replaces the child’s process image
  • Loads the new program and starts from its first instruction
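
A minimal sketch of the fork-then-exec pattern in C (launching /bin/ls is just an illustrative choice):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* duplicate the calling process */
    if (pid < 0) { perror("fork"); exit(EXIT_FAILURE); }
    if (pid == 0) {
        /* Child: replace our image with a new program. */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");           /* only reached if exec failed */
        exit(EXIT_FAILURE);
    }
    waitpid(pid, NULL, 0);         /* parent: wait for the child */
    return 0;
}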
29
Q

What are the steps that take place during a system call?

A
  • Copy arguments to a well-defined location
  • Invoke the specific system call by its number
  • User/kernel crossing occurs and the kernel performs the operation
  • Retrieve results
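
As an illustration (Linux-specific, and normally hidden behind the libc wrapper): invoking write(2) directly through syscall(2) shows the same steps: arguments copied, a system call number selected, a trap into the kernel, and a result returned.

#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello\n";
    /* SYS_write selects the kernel service; the trap switches to
       kernel mode; the return value comes back to user space. */
    long n = syscall(SYS_write, 1, msg, sizeof msg - 1);
    return n < 0 ? 1 : 0;
}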
30
Q

What is preemption?

A

Interrupting and saving the current execution context

31
Q

What happens on a process vs. thread context switch?

A

Process Context Switch:
* Switch out the full PCB, including virtual address space

Thread Context Switch:
* Switch out thread specific PCB information, but the virtual address space stays the same

32
Q

Process vs. thread, describe the distinctions.

A

Process:

  • Has its own virtual address space
  • Created via a Fork (+ exec)

Thread:

  • Shares the virtual address space among all threads in the same process
  • Has thread local storage, TID, stack pointer, PC, etc.
33
Q

What is the role of the CPU scheduler?

A

Determines which of the currently ready processes will be dispatched to the CPU to start running and determines how long it should be allowed to run for.

We want the scheduler to do as little work as possible: time spent scheduling is time not spent doing useful work.

34
Q

In what ways could a process get back onto the ready queue?

A
  • I/O Request completes
  • Time slice expired
  • Fork a child (child executes)
  • Process waiting for an interrupt (and it occurs)
35
Q

How are threads useful on a single CPU?

A

They can hide I/O latency. While one thread is blocking on an I/O call of some kind [and the I/O time is sufficiently long], you can switch to another thread and do useful work, then switch back.

36
Q

Contrast Processes VS Threads

A

Process:

  • Has its own address space, registers, stack, pc, etc.
  • All contained in a PCB
  • Higher memory requirements
  • Higher IPC costs
  • Higher context switching costs (largest cost from switching the Virtual Address Map)

Threads:

  • An execution context of a process
  • Share the same virtual address space (code, data, files)
  • Each has its own registers, stack, and PC
  • The PCB will be more complex
    • Info about the shared information
    • Info about the contexts of each of the threads in the process
  • Lower memory requirements
  • Cheaper IPC (Inter Thread)
  • Lower context switching time
38
Q

When is it useful to add more threads, when does adding threads lead to pure overhead?

A

Useful:

  • When threads become blocked due to I/O
  • When there are multiple CPUs to be used

Overhead:
* When the workload is purely CPU bound and there are more threads than CPUs

39
Q

What are the possible sources of overhead associated with multithreading?

A
  • Context switching
  • Synchronization

40
Q

What are condition variables?

A

A variable that a thread waits on until it is signaled to continue. Used after acquiring the mutex, when you determine you cannot proceed yet.

pthread_mutex_lock(&mutex);
while (/* predicate over shared state not yet true */) {
    pthread_cond_wait(&condition, &mutex);
}
// Do action with shared data
pthread_mutex_unlock(&mutex);

Waiting on a condition variable automatically releases the mutex; the mutex is reacquired before the wait returns.

41
Q

Can you quickly write the steps/code for entering/existing a critical section for problems such as reader/writer, reader/writer with selective priority (e.g., reader priority vs. writer priority)?

A
pthread_mutex_lock(&mutex);
while (/* shared-state predicate not yet true */) {
    pthread_cond_wait(&condition, &mutex);
}
// Adjust predicate value
pthread_mutex_unlock(&mutex);

// Do critical section

pthread_mutex_lock(&mutex);
// Adjust predicate value again
// possibly pthread_cond_signal / pthread_cond_broadcast
pthread_mutex_unlock(&mutex);

Priority:

  • Reader priority: the standard implementation; a writer doesn’t get access until the number of readers is 0.
  • Writer priority: requires an additional mutex; once a writer requests access, no more readers may enter until the writer has finished. Current readers finish, then the writer goes, then readers can go again.
42
Q

What are spurious wake-ups, how do you avoid them, and can you always avoid them?

A

Waking threads up (broadcast/signal) when they are not able to proceed, for example because you signaled while still holding the mutex. The threads waiting on the condition then just get shifted to waiting on the mutex (because they couldn’t reacquire it).

It can partially be avoided by signaling/broadcasting after you have released the mutex. This doesn’t always work, though: if you need the shared state to decide whether to signal/broadcast, you must still hold the mutex while making that decision.
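
A sketch of the workaround in pthreads (the mutex, condition, and 'ready' flag are hypothetical):

#include <pthread.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
int ready = 0;   /* hypothetical shared predicate */

void make_ready(void) {
    /* Broadcasting while still holding the mutex would shift waiters
       from the condition queue straight onto the mutex queue
       (a wasted wake-up). Update under the lock, release, then notify. */
    pthread_mutex_lock(&mutex);
    ready = 1;
    pthread_mutex_unlock(&mutex);
    pthread_cond_broadcast(&cond);
}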

43
Q

What is a signal/broadcast?

A

A mechanism for notifying the waiters on a condition that one or more of them should attempt to continue. Signal notifies a single thread in the queue while broadcast notifies all threads in the wait queue. Broadcast is safer than signal.

44
Q

What is fine-grained and coarse-grained locking?

A

Fine-grained: using multiple locks, at finer granularity, for specific pieces of state

Coarse-grained: using a single ‘mega lock’ to lock out a long section of code

45
Q

If you need to improve a performance metric like throughput or response time, what could you do in a boss-worker model?

A
  • Add workers on demand (least effective: thread construction time)
  • Static or dynamic pool of workers
    + simplicity
    - ignores locality (since each worker gets random work)
    - thread pool management
  • Specialized workers
    + better locality; QoS management (assign more threads to tasks that are more important)
    - boss must do more work, since it must decide which [set of] workers gets the task (likely offset by improvements in locality and cache hits)
    - load balancing is more difficult
46
Q

What are the limiting factors in improving performance with the boss worker pattern?

A

The processing time of the boss.

Throughput = 1 / boss_time_per_request

47
Q

If you need to improve a performance metric like throughput or response time, what could you do in a pipelined model?

A
  • Assign more threads to the slowest stages of the pipeline
    + allows for highly specialized threads
    - balancing and synchronization overheads
48
Q

What are the limiting factors in improving performance with the pipeline pattern?

A

Can’t go faster than the slowest stage of the pipeline.

49
Q

Describe process and system scope and what they mean to threads.

A

Process Scope: User level library manages threads within a single process. Kernel doesn’t really know about the threads, so it may allocate equal kernel threads among the processes, even if one process has more threads than the other.

System Scope: system wide thread management by the OS level thread managers (eg, cpu scheduler). System will see the threads in each user process and allocate the necessary number of kernel threads.

50
Q

What is the layered MT model?

A
  • Each layer is a group of related subtasks
  • An end-to-end task must pass up and down through all layers
    + specialization
    + less fine-grained than pipeline
    - not suitable for all applications
    - synchronization (must talk to the layer above and below)
51
Q

What is the rationale for having multiple data structures in PCBs?

A
  • smaller data structures
  • easier to share
  • on context switch, only change what is necessary
  • user-level library need only update its portion of the state
52
Q

What are the ‘red zones’ with regard to the stack?

A

An area on either side of the stack that you are not allowed to write to; if you try to write there, you cause a fault.

53
Q

When does the UL Scheduler run?

A
  • ULTs explicitly yield
  • timer set by the UL library expires
  • ULTs call library functions like lock()/unlock()
  • Blocked threads become runnable
  • Response to signals from timer or kernel
54
Q

What is a Zombie thread?

A

A thread that has terminated but whose resources have not yet been cleaned up (e.g., it awaits reaping by a reaper thread or a join).

55
Q

For each of these data structures, list at least two elements they must contain:

  • Process
  • LWP
  • Kernel Threads
  • CPU
  • ULT
A

Process:

  • List of Kernel level threads
  • Virtual Address space
  • User Credentials
  • Signal Handlers

Kernel Threads:

  • Pointers to:
    • LWP, process, and CPU structures
  • Stack Pointer
  • Kernel-Level Registers
  • Schedule info (class, …)
  • NOTE: info needed even when process is not running! eg, not swappable!

CPU:

  • Current Thread
  • list of kernel-level threads
  • dispatching and interrupt handling info

LWP:

  • User Level Registers
  • System Call args
  • Resource usage info
  • signal mask
  • NOTE: Similar to ULT, but visible to kernel, not needed when process is not running!
56
Q

What is a kernel trap and why does it happen?

A

Occurs when a user application is interrupted and control is passed to the kernel. This happens because the application attempted a privileged operation, or deliberately issued a trap instruction (as in a system call).

57
Q

What is a signal? handler?

A

Sent from the CPU or an application. Not generated from external hardware.

OS has default handlers for signals, but each process can set its own handlers and override the defaults.

Each thread can have its own signal mask.
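
A minimal sketch of overriding a default handler with sigaction(2) (the flag variable and handler body are illustrative):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo) {
    (void)signo;
    got_signal = 1;   /* only async-signal-safe work in a handler */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handler;    /* override the default action */
    sigemptyset(&sa.sa_mask);   /* signals blocked while handler runs */
    sigaction(SIGINT, &sa, NULL);

    pause();                    /* sleep until a signal arrives */
    if (got_signal)
        printf("caught SIGINT\n");
    return 0;
}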

58
Q

What are some examples of signals?

A

Synchronous:
SIGSEGV - access to protected memory
SIGFPE - arithmetic exception (divide by zero)
SIGKILL - (kill, pid) can be directed to a specific process/thread

Asynchronous:
SIGALRM - timer expiration

59
Q

What types of signals are there?

A

One-shot signals:
“n signals pending == 1 signal pending”: the handler runs at least once

Real-time signals:
“if n signals are raised, then the handler is called n times”

60
Q

What is the top and bottom half of an interrupt?

A

Top Half:

  • In the event handler
  • fast, non blocking
  • min amount of processing

Bottom Half:

  • Arbitrary complexity allowed (deferred work that should run in a separate thread)

61
Q

How do you calculate the total completion time for a number of requests, and the average time per request, for a boss-worker model?

A

Total time:
processing_time * ceil(num_requests / num_workers)

Average time per request (requests finish in batches of w workers, batch i finishing at time i * pt):
pt * [ w*(1 + 2 + ... + n) + (n + 1)*(R - n*w) ] / R, where n = floor(R/w), w = # of workers, pt = processing time per request, R = # of requests
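
Worked example (hypothetical numbers): w = 3 workers, pt = 10 ms, R = 11 requests.
Total time = 10 * ceil(11/3) = 10 * 4 = 40 ms.
n = floor(11/3) = 3, so avg = 10 * [3*(1+2+3) + 4*(11 - 9)] / 11 = 10 * 26 / 11 ≈ 23.6 ms/request.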
62
Q

How do you calculate the total completion time for a number of requests, and the average time per request, for a pipeline model?

A

Total time:
time_to_finish_first_request + (R - 1) * slowest_stage_time
where R = # of requests

Average time per request:
(T1 + T2 + T3 + … + TR) / R
where Ti is the absolute time from start at which request i completes
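
Worked example (hypothetical numbers): a two-stage pipeline with 10 ms and 20 ms stages, R = 3 requests. The first request finishes at 30 ms, and each later one finishes 20 ms (the slowest stage) after the previous, so total = 30 + 2*20 = 70 ms and average = (30 + 50 + 70) / 3 = 50 ms/request.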

63
Q

What is an adaptive mutex?

A

A mutex that dynamically decides whether a waiting thread should spin or sleep on the queue (spin while the owner is running on another CPU, otherwise sleep). Only applicable to multi-CPU systems.