Midterm 1 Flashcards Preview

CS 6200 > Midterm 1 > Flashcards

Flashcards in Midterm 1 Deck (63):

What are the key roles of an operating system?

* Hides complexity of underlying hardware
* Manages hardware on behalf of applications
* Provides isolation & protection
* Has direct privileged access to the underlying hardware


What are the differences between OS: Abstractions, Mechanisms, and Policies?

Abstractions: Hiding of the hardware complexity behind a simple interface. Eg, process, thread, file, socket, memory page
Mechanisms: How to perform an action on an abstraction. Eg, create/schedule a thread or process, open, write, allocate
Policies: Defined behavior - what should be done. Eg, LRU, earliest deadline first (EDF), least frequently used (LFU), max # of sockets a process has access to

Abstraction: Memory
Mechanism: Allocate, map to a process
Policies: LRU


What does the principle of separation of mechanism and policy mean?

A mechanism should be able to employ and support a range of policies. Understand the common case, then pick a specific policy that makes sense and can be supported given the underlying mechanisms and abstractions the operating system supports.


What does the principle 'optimize for the common case' mean?

Find the most common case that the application is likely to encounter and optimize for that case. Don't sacrifice common case performance to improve an edge case.


What happens during a user-kernel mode crossing?

* Privilege bit is set
* Trap occurs and the OS decides if it should execute the system call
* System call gets executed (or the OS declines the request and terminates the program)
* Results are passed back to the user process
* Privilege bit is cleared
* Control returns to the user process
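The crossing can be observed from user space through any system call wrapper. A minimal sketch in Python (the temp file and the 5-byte payload are arbitrary illustration choices; the course itself works in C):

```python
import os
import tempfile

# os.write() wraps the write() system call: the arguments and syscall
# number are placed in well-defined locations, a trap switches the CPU
# into kernel mode, the kernel performs the privileged I/O, the result
# (bytes written) is stored for the caller, and control returns to us
# in user mode.
fd, path = tempfile.mkstemp()
n = os.write(fd, b"hello")   # user->kernel crossing happens here
os.close(fd)
os.unlink(path)
print(n)  # the kernel reports 5 bytes written
```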


What are some of the reasons why user-kernel mode crossing happens?

Whenever a user process needs access to a piece of the underlying hardware (via a system call):
* Open a file
* Send data on a socket
* Fork a process


What is a system call?

A request to access hardware [or likely a hardware abstraction] on the behalf of a user-level process.


Does user-level mode have access to hardware?

No - hardware is only accessible through kernel-level calls.


Contrast the design decisions and performance tradeoffs among monolithic, modular and microkernel-based OS designs.

Monolithic:
* Every service that could be required is part of the operating system (eg, multiple file system types)
+ Everything is there!
+ inlining, compile-time optimization
- customization, portability, manageability (HUGE), memory footprint, performance

Modular (more commonly used):
* Includes the most basic services and APIs, but you can add specific services (schedulers, FS's, etc)
* Each module programs to the appropriate interface so it can be swapped out
+ maintainability, smaller footprint, less resource needs
- indirection can impact performance (b/c you must go through an interface)
- maintenance can still be an issue

Microkernel:
* Only requires the most basic primitives at the OS level
* IPC, address space, threads
* All other software components (DB, FS, device drivers) run outside the kernel in user mode.
* Requires lots of IPC (since most functionality is implemented at user level)
+ size, verifiability
- portability, complexity of software dev
- cost of user/kernel crossing


Describe the states in a lifetime of a process?

New: Allocate/init PCB and resources
Ready: Ready to execute, waiting to be scheduled/dispatched to the CPU
Running: Actively Running on CPU
Waiting: Waiting for I/O or event completion
Terminated: Exit w/ exit code

New -> Ready -> Running
Running -> Ready
Running -> Terminated
Running -> Waiting
Waiting -> Ready


Describe the lifetime of a thread?

Runnable, active, sleeping, stopped, zombie

Runnable -> Stopped
Runnable -> Active
Active -> Runnable
Active -> Zombie (DONE)
Active -> Stopped
Active -> Sleeping
Sleeping -> Runnable
Sleeping -> Stopped
Stopped -> Runnable


Describe all the steps which take place for a process to transition from a waiting (blocked) state to a running (executing on the CPU) state.

Blocked (wait for I/O event to complete) -> Ready State (waiting to be scheduled) -> eventually running


What are the pros-and-cons of message-based vs. shared-memory-based IPC.

Message Based:
* Establish communication channel (such as a shared buffer)
* Then they can send/receive messages across the channel
+ OS Manages (same standard system calls)
+ Lower setup costs than shared memory, but a per use cost
- Overheads (every msg is copied from user space into kernel memory then back into memory of user process)

Shared Memory IPC:
* OS establishes shared memory and maps it into each process' address space
* Processes can read/write to that memory like normal
+ OS is out of the way!
- Must re-implement communication code; more error-prone
- Setup is expensive, so best if you can amortize cost of setup across large number of messages


What are benefits of multithreading?

* Parallelization (if multiple CPUs)
* Specialization (cache locality, hot cache)
* Hiding I/O latency (even on a single CPU)
* Lower memory requirements and cheaper communication than a multi-process design


Describe the boss-worker multithreading pattern.

You have a single 'boss' thread that sends jobs to the 'worker' threads. The boss often puts work in a queue and then signals to the workers that there is work to be done. The workers will take work off the queue and perform the appropriate tasks.
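A minimal sketch of the pattern in Python (the squaring task, the worker count of 4, and the None sentinel are arbitrary illustration choices):

```python
import queue
import threading

tasks = queue.Queue()          # the shared work queue
results = []
results_lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()     # block until the boss provides work
        if item is None:       # sentinel value: no more work
            break
        with results_lock:
            results.append(item * item)   # "perform the task"

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for i in range(10):            # the boss enqueues jobs...
    tasks.put(i)
for _ in workers:              # ...then one sentinel per worker
    tasks.put(None)
for w in workers:
    w.join()

print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```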


Describe the pipelined multithreading pattern.

There is one or more threads for each stage of a pipeline. After work completes from one stage, it moves onto the next.

Generally limited by the slowest stage of the pipeline. Best if the pipeline stages are equal in processing time.


What are mutexes?

Mutual Exclusion - A 'lock' that only allows a single thread to hold it. If the mutex is held by another thread and someone tries to acquire this mutex, they are put in a queue.
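A sketch of mutual exclusion using Python's threading.Lock (the shared-counter workload and thread counts are arbitrary; the course projects would use pthread_mutex_t in C):

```python
import threading

counter = 0
mutex = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with mutex:        # only one thread can hold the lock;
            counter += 1   # others queue up until it is released

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- without the mutex, increments could be lost
```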


What’s a simple way to prevent deadlocks? Why?

Maintain a global lock order for all mutexes. This ensures that two threads are never each holding a lock that the other is waiting for.
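A sketch of the idea in Python (the two locks and both code paths are hypothetical): because every path acquires the locks in the same global order, the a-before-b vs b-before-a cycle that causes deadlock cannot form.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def path_one():
    # Follows the global order: lock_a before lock_b.
    with lock_a:
        with lock_b:
            pass  # touch both shared resources

def path_two():
    # Even if this code "naturally" needs lock_b's resource first,
    # it still acquires in the global order a -> b, so it can never
    # hold lock_b while waiting on lock_a.
    with lock_a:
        with lock_b:
            pass

t1 = threading.Thread(target=path_one)
t2 = threading.Thread(target=path_two)
t1.start(); t2.start()
t1.join(); t2.join()
print("both paths completed without deadlock")
```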


Can you explain the relationship among kernel vs. user-level threads?

1:1 -
* each user level thread has a kernel level thread associated with it. Pretty much everything is up to the kernel-level manager.
+ OS sees/understands threads, synch, blocking, etc
- Must go to OS for all operations
- OS may have limits on policies, thread #
- portability

M:1 -
* All user level threads in process mapped to a single kernel level thread. Pretty much everything is up to the thread level manager.
+ totally portable; doesn't depend on OS limits and policies in kernel
- OS has no insights into application needs
- OS may block the entire process if one user level thread blocks on I/O

M:N -
* Some can be M:1 or 1:1, etc.
+ can be best of both worlds
+ can have bound or unbound threads
- requires coordination between user and kernel level thread managers


Can you explain why some of the mechanisms described in the Solaris papers (for configuring the degree of concurrency, for signaling, the use of LWPs…) are not used or necessary in the current threads model in Linux?

* Modern Linux uses 1:1 model, kernel sees info for each ULT
* Setting concurrency not really necessary b/c of 1:1 model. Kernel concurrency is always equal to ULT concurrency.
* Kernel Traps are cheaper
* Changing kernel level signal masks isn’t such a big deal
* No real limit on # of threads because of larger amount of memory supported today, large range of IDs, etc


What’s an interrupt? handler?

A hardware event that tells the OS something has happened. The handler is the function that is called when such an event occurs.


What’s the potential issue if an interrupt or signal handler needs to lock a mutex? What’s the workaround described in the Solaris papers?

The process that was interrupted could already have a lock on the mutex needed in the handler. Since the interrupted thread had the lock and is now in the handler routine, it will block forever on the mutex.

* Spawn a separate thread to handle the interrupt.
* Disable appropriate signal handlers prior to acquiring the mutex, then re-enable them once the lock is released.


Contrast the pros-and-cons of a multithreaded (MT) and multiprocess (MP) implementation of a webserver, as described in the Flash paper.

Multiprocess (MP):
+ Simpler to implement b/c lack of synchronization
- More costly context switches
- Larger memory footprint

Multithreaded (MT):
+ Smaller memory footprint
+ Faster context switching
- Must support synchronization
- Kernel and threading library must support MT


What are the benefits of the event-based model described in the Flash paper over MT and MP? What are the limitations? Would you convert the AMPED model into an AMTED (async multi-threaded event-driven) one? How do you think an AMTED version of Flash would compare to the AMPED version of Flash?

+ Single thread of execution that is able to do work even after blocking I/O calls are made
+ Small memory footprint since there is only one thread of execution
+ No synchronization to worry about

- Not necessarily applicable to all software problems
- Some additional complexities w/ event routing in multi-cpu systems

Yes, I would make it AMTED today. I think the MT version would perform a little better because of smaller context switching costs and possibly better cache locality, since everything is in the same virtual address space.

In MT/MP, each thread/process must perform the full request, which leads to a larger memory footprint


There are several sets of experimental results from the Flash paper discussed in the lesson. Do you understand the purpose of each set of experiments (what was the question they wanted to answer)? Do you understand why each experiment was structured in a particular way (why they chose the variables to be varied, the workload parameters, the measured metric…)?



If you ran your server from the class project for two different traces: (i) many requests for a single file, and (ii) many random requests across a very large pool of very large files, what do you think would happen as you add more threads to your server? Can you sketch a hypothetical graph?



How are processes created?

Fork -> Exec


What is the difference between Fork and Exec?

Fork:
* Creates a new PCB
* Copies the parent PCB into the child PCB
* Child continues execution at the instruction after fork

Exec:
* Takes a PCB (created from fork)
* Replaces the child image
* Loads a new program and starts from its first instruction
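A Unix-only sketch of fork-then-exec in Python (the choice of the `true` program, which just exits with status 0, is an arbitrary illustration):

```python
import os

pid = os.fork()                  # child gets a copy of the parent's state
if pid == 0:
    # Child continues here, at the instruction after fork();
    # exec then replaces its image with a new program, which
    # starts from its own first instruction.
    os.execvp("true", ["true"])  # never returns on success
else:
    # Parent: wait for the child and collect its exit status.
    _, status = os.waitpid(pid, 0)
    exit_code = os.waitstatus_to_exitcode(status)
    print("child exited with", exit_code)
```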


What are the steps that take place during a system call?

* Copy args to a well-defined location
* Invoke the specific system call (by number)
* Kernel crossing occurs
* Get results


What is preemption?

Interrupting and saving the current execution context


What happens on a process vs. thread context switch?

Process Context Switch:
* Switch out the full PCB, including virtual address space

Thread Context Switch:
* Switch out thread specific PCB information, but the virtual address space stays the same


Process vs. thread, describe the distinctions.

Process:
* Has its own virtual address space
* Created via a fork (+ exec)

Thread:
* Shares the virtual address space with all threads in the same process
* Has thread-local storage, TID, stack pointer, PC, etc.


What is the role of the CPU scheduler?

Determines which of the currently ready processes will be dispatched to the CPU to start running and determines how long it should be allowed to run for.

Want the scheduler to do as little as possible: time spent scheduling is time not spent doing useful work.


In what ways could a process get back onto the ready queue?

* I/O Request completes
* Time slice expired
* Fork a child (child executes)
* Process waiting for an interrupt (and it occurs)


How are threads useful on a single CPU?

They can hide I/O latency. While one thread is blocking on an I/O call of some kind [and the I/O time is sufficiently long], you can switch to another thread and do useful work, then switch back.


Contrast Processes VS Threads

Processes:
* Has its own address space, registers, stack, PC, etc.
* All contained in a PCB
* Higher memory requirements
* Higher IPC costs
* Higher context switching costs (largest cost from switching the virtual address map)

Threads:
* An execution context of a process
* Share the same virtual address space (code, data, files)
* Each has its own registers, stack, and PC
* The PCB will be more complex:
* Info about the shared state
* Info about the contexts of each of the threads in the process
* Lower memory requirements
* Cheaper IPC (inter-thread)
* Lower context switching time


When is it useful to add more threads, when does adding threads lead to pure overhead?

Useful:
* When threads become blocked due to I/O
* When there are multiple CPUs to be used

Pure overhead:
* When the workload is purely CPU bound and there are more threads than CPUs


What are the possible sources of overhead associated with multithreading?

* Context Switching
* Synchronization


What are condition variables?

A variable that a thread waits on until it is signaled to continue. Used after acquiring the mutex, when you determine the thread cannot proceed yet.

lock(&mutex);
while (predicate_not_met) {
    wait(&condition, &mutex);
}
// Do action with shared data
unlock(&mutex);

When you wait on a condition, it automatically releases the mutex, then reacquires it before the wait returns.
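The same pattern with a real API, using Python's threading.Condition (a single producer/consumer pair; the value 42 is arbitrary):

```python
import threading

buffer = []
cond = threading.Condition()   # bundles a mutex with a wait queue

def consumer():
    with cond:                 # acquire the mutex first
        while not buffer:      # re-check the predicate on every wakeup
            cond.wait()        # atomically releases the mutex while
                               # blocked, reacquires it before returning
        item = buffer.pop()
        print("consumed", item)

def producer():
    with cond:
        buffer.append(42)
        cond.notify()          # signal: wake one waiting thread

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start(); p.start()
c.join(); p.join()
```

Note the consumer re-checks the predicate in a while loop, which is exactly what guards against spurious wake-ups.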


Can you quickly write the steps/code for entering/existing a critical section for problems such as reader/writer, reader/writer with selective priority (e.g., reader priority vs. writer priority)?

lock(&mutex);
while (shared_variable_predicate) {
    wait(&condition, &mutex);
}
// Adjust predicate value
unlock(&mutex);

// Do critical section

lock(&mutex);
// Adjust predicate value again
// possibly signal/broadcast
unlock(&mutex);

- Reader priority (standard implementation): a writer doesn't get access until the number of active readers is 0.
- Writer priority: requires an additional mutex/flag; once a writer requests access, no new readers can enter until the writer has finished. Current readers finish, then the writer goes, then readers can go again.
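A runnable sketch of the reader-priority variant in Python, following the enter/exit pattern above (the state encoding -- num_readers > 0 means readers active, -1 means a writer is active -- and the thread counts are illustration choices):

```python
import threading

mutex = threading.Lock()
cond = threading.Condition(mutex)
num_readers = 0        # the shared predicate state
shared_value = 0
reads = []

def reader():
    global num_readers
    with cond:
        while num_readers == -1:     # wait while a writer is active
            cond.wait()
        num_readers += 1             # adjust predicate, then release
    value = shared_value             # critical section (outside the lock)
    with cond:
        num_readers -= 1             # adjust predicate again
        if num_readers == 0:
            cond.notify_all()        # a waiting writer may now proceed
    reads.append(value)              # list.append is atomic in CPython

def writer():
    global num_readers, shared_value
    with cond:
        while num_readers != 0:      # wait until no readers and no writer
            cond.wait()
        num_readers = -1             # mark writer active
    shared_value += 1                # critical section
    with cond:
        num_readers = 0
        cond.notify_all()            # wake any waiting readers/writers

threads = [threading.Thread(target=writer)]
threads += [threading.Thread(target=reader) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_value, sorted(reads))
```

Each reader observes either 0 or 1 depending on whether it ran before or after the writer, but never a torn/partial update.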


What are spurious wake-ups, how do you avoid them, and can you always avoid them?

Waking a thread up (broadcast/signal) and the threads are not able to proceed, perhaps, because you signaled while you still held the mutex. This could cause the threads waiting on the condition to then get shifted to waiting on the mutex again (because they couldn't reacquire the mutex).

It can partially be avoided if you signal/broadcast after you have released the mutex. This doesn't always work though. If you need access to the shared variable in order to determine if you should signal/broadcast, then you must still hold the mutex while you do this.


What is a signal/broadcast?

A mechanism for notifying the waiters on a condition that one or more of them should attempt to continue. Signal notifies a single thread in the queue while broadcast notifies all threads in the wait queue. Broadcast is safer than signal.


What is fine-grained and coarse-grained locking?

Fine-Grained: using multiple locks at various depths for specific areas

Coarse-Grained: using a single 'mega lock' to lock out a long section of code


If you need to improve a performance metric like throughput or response time, what could you do in a boss-worker model?

* Add workers on demand (least effective: pays thread creation time per request)
* Static or dynamic pool of workers
+ simplicity
- ignores locality (since each worker gets random work)
- thread pool management
* Specialized workers
- Boss must do more work because it must decide which [set] of workers should get the task (likely offset by improvements in locality and cache hits)
+ better locality; QoS management (assign more threads to tasks that are more important)
- load balancing is more difficult


What are the limiting factors in improving performance with the boss worker pattern?

The processing time of the boss.

Throughput = 1 / boss_time_per_request


If you need to improve a performance metric like throughput or response time, what could you do in a pipelined model?

* Assign more threads to the slowest stages of the pipeline
+ allows for highly specialized threads
- balancing and synchronization overheads


What are the limiting factors in improving performance with the pipeline pattern?

Can't go faster than the slowest stage of the pipeline.


Describe process and system scope and what they mean to threads.

Process Scope: User level library manages threads within a single process. Kernel doesn't really know about the threads, so it may allocate equal kernel threads among the _processes_, even if one process has more threads than the other.

System Scope: system wide thread management by the OS level thread managers (eg, cpu scheduler). System will see the threads in each user process and allocate the necessary number of kernel threads.


What is the layered MT model?

* Each layer is a group of related subtasks
* end-to-end task must pass up and down through all layers
+ Specialization
+ less fine-grained than pipeline
- not suitable for all applications
- synchronization (must talk to layer above and below)


What is the rationale for having multiple data structures in PCBs?

* smaller data structures
* easier to share
* on context switch, only change what is necessary
* user-level library need only update a portion of the state


What are the 'red zones' with regard to the stack?

An area on either side of the stack that you are not allowed to write to; if you try to write there, you will cause a fault.


When does the UL Scheduler run?

* ULTs explicitly yield
* Timer set by the UL library expires
* ULTs call library functions like lock()/unlock()
* Blocked threads become runnable
* In response to signals from the timer or kernel


What is a Zombie thread?

When a thread has terminated and is awaiting cleanup (reaping) of its data structures.


For each of these data structures, list at least two elements they must contain:
* Process
* Kernel Threads
* CPU
* LWP

Process:
* List of kernel-level threads
* Virtual address space
* User credentials
* Signal handlers

Kernel Threads:
* Pointers to: LWP, process, CPU structures
* Stack pointer
* Kernel-level registers
* Schedule info (class, ...)
* NOTE: info needed even when the process is not running! eg, not swappable!

CPU:
* Current thread
* List of kernel-level threads
* Dispatching and interrupt handling info

LWP:
* User-level registers
* System call args
* Resource usage info
* Signal mask
* NOTE: Similar to a ULT, but visible to the kernel; not needed when the process is not running!


What is a kernel trap and why does it happen?

Occurs when a user application is interrupted and control is passed to the kernel at a well-defined location. This can happen because of a system call, an interrupt, or an attempt to perform a privileged instruction.


What is a signal? handler?

Sent from the CPU or an application. Not generated from external hardware.

OS has default handlers for signals, but each process can set its own handlers and override the defaults.

Each thread can have its own signal mask.


What are some examples of signals?

SIGSEGV - access to protected memory
SIGFPE - arithmetic exception (eg, divide by zero)
SIGKILL - can be directed to a specific process or thread (kill, pthread_kill)
SIGALRM - timer expiration


What types of signals are there?

One Shot Signals:
"n signals pending == 1 signal pending": at least once

Real Time Signals:
"if n signals raised, then handler called n times"


What is the top and bottom half of an interrupt?

Top Half:
* In the event handler
* fast, non blocking
* min amount of processing

Bottom Half:
* arbitrary complexity (this should be in another thread if you want this arbitrary complexity)


How do you calculate total completion time for a number of requests and the average time per request for a boss-worker model?

Total Time:
processing_time * ceiling(num_requests / num_workers)

Average Time per Request:
Requests complete in batches of num_workers, and batch i (1-indexed) finishes at i * processing_time:

avg = processing_time * SUM over batches of [ i * (# requests in batch i) ] / num_requests
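The formulas can be sanity-checked with a tiny simulation (11 requests, 3 workers, and 10 ms per request are made-up numbers; assumes the boss dispatches instantly and every request costs the same):

```python
import math

def boss_worker_completion_times(num_requests, num_workers, t):
    """Completion time of each request in a boss-worker setup where
    every request takes exactly t to process."""
    # Request i (0-indexed) is handled in batch i // num_workers,
    # and that batch finishes at (batch + 1) * t.
    return [(i // num_workers + 1) * t for i in range(num_requests)]

done = boss_worker_completion_times(11, 3, 10)
total = max(done)
avg = sum(done) / len(done)

# The closed-form total-time formula agrees with the simulation.
assert total == 10 * math.ceil(11 / 3)

print(total)          # 40  == 10 * ceil(11/3)
print(round(avg, 2))  # 23.64 ms per request on average
```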


How do you calculate total completion time for a number of requests and the average time per request for a pipeline model?

Total Time:
time_for_first_request + (R - 1) * slowest_stage_time
Where R = # requests

Avg Time:
(T1 + T2 + T3 + ... + TR) / R
Where Ti is the absolute time from the start at which request i completes
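Likewise for the pipeline, a small simulation matches the formula (stage times of 2, 3, 2 ms and 10 requests are made-up numbers; assumes one thread per stage, each handling one request at a time):

```python
def pipeline_completion_times(num_requests, stage_times):
    """Completion time of each request through a linear pipeline."""
    free_at = [0.0] * len(stage_times)   # when each stage next frees up
    done = []
    for _ in range(num_requests):
        t = 0.0                          # every request arrives at time 0
        for s, st in enumerate(stage_times):
            start = max(t, free_at[s])   # wait for the stage to be free
            t = start + st
            free_at[s] = t               # stage is busy until this request leaves
        done.append(t)
    return done

done = pipeline_completion_times(10, [2, 3, 2])
total = max(done)
print(total)  # 34.0 == 7 (first request) + 9 * 3 (slowest stage)
```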


What is an adaptive mutex?

A mutex whose lock operation dynamically decides whether to spin or to sleep on the queue, based on whether the lock owner is currently running on another CPU. Only applicable to multi-CPU systems.