Practice Questions Flashcards
(13 cards)
Do the following statements refer to an OS mechanism (M), abstraction (A) or policy (P)?
a) The OS represents communication endpoints as sockets.
b) A process can wait for incoming connections on a socket via accept().
c) A socket number may be reused.
d) A process can open up to SOME_MAX_NUMBER sockets.
e) All threads in an address space can access all sockets opened by any thread in the process.
a) A - A socket is an abstraction
b) M - The implementation of the data structure and its supported operations (e.g., accept()) are mechanisms.
c) P
d) P
e) P
- OS limits on the number of sockets, and rules for how socket identifiers are assigned (e.g., whether a number can be reused, the per-process limit, sharing of sockets within an address space), are policies.
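To make the distinction concrete, here is a minimal POSIX sketch (a hedged illustration: error handling omitted, and port 8080 is an arbitrary example). The socket file descriptor is the abstraction; calls such as listen() and accept() are mechanisms that operate on it; the limits and reuse rules above are policies layered on top.

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stddef.h>

int serve_one(void) {
    /* abstraction: the communication endpoint is just a file descriptor */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);              /* arbitrary example port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 16);
    /* mechanism: block until an incoming connection arrives (statement b) */
    return accept(fd, NULL, NULL);
}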
State whether the following pieces of information describing an execution context are process-wide (PW) or thread-specific (TS):
a) stack pointer
b) virtual-to-physical memory mappings
c) signal handlers
d) signal mask
e) open files
a) TS
b) PW
c) PW
d) TS
e) PW
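A small C sketch of the signal-related entries (a hedged illustration; SIGUSR1 is just an example signal): sigaction() installs a handler shared by the whole process, while pthread_sigmask() changes only the calling thread's mask.

#include <pthread.h>
#include <signal.h>

static void handler(int sig) { (void)sig; }

void install_handler(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handler;
    sigaction(SIGUSR1, &sa, NULL);   /* process-wide: one handler table for all threads */
}

void *worker(void *arg) {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL);   /* thread-specific: blocks SIGUSR1 only in this thread */
    return arg;
}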
When can a process enter the RUNNABLE or READY state?
Mark all correct answers:
A. Once it has been created and initialized
B. When it starts waiting on a signal
C. When it issues an I/O call
D. When it is preempted and context switched so that another process can run on the CPU.
E. When it gets scheduled to run on the CPU
A,D
What are the different states of a Process lifecycle?
New -> the process is being created: the OS allocates and initializes a PCB and confirms there are enough resources.
Ready -> the process is admitted and waits to be scheduled on the CPU.
Running -> the process is actually executing on the CPU.
After running, a few things can happen:
1) Process is interrupted for a context switch and moves back to “Ready”
2) Process needs to do a long I/O or event and is moved into “Waiting”. Once the event is over, it is moved back into “Ready”.
3) Process completes and is terminated
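A toy C encoding of the transitions out of Running (the state and event names are made up for illustration; they just mirror the three outcomes above):

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;
typedef enum { EV_PREEMPT, EV_IO_WAIT, EV_EXIT } event;   /* hypothetical events */

proc_state after_running(event e) {
    switch (e) {
    case EV_PREEMPT: return READY;        /* 1) interrupted for a context switch */
    case EV_IO_WAIT: return WAITING;      /* 2) long I/O or event wait */
    case EV_EXIT:    return TERMINATED;   /* 3) process completes */
    }
    return RUNNING;
}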
Consider the various sources of overhead involved during a context switch between two processes (or between threads from two different processes).
Write the terms DIRECT or INDIRECT in the space next to each of the terms below, based on whether the term refers to direct or indirect overhead.
A. save registers
B. read new stack pointer
C. cold cache
D. cache pollution
A. DIRECT
B. DIRECT
C. INDIRECT
D. INDIRECT
Direct: time to save and restore PCB and dispatch new thread (save regs, read SP register)
Indirect: loss of performance due to loss of cache locality (cold cache, cache pollution)
In a file server, the following steps have to be performed on each request: 1) parse request, 2) find and read file, 3) respond with file or error message. Assume steps 1) and 3) take 1 ms each and step 2) takes 2 ms. Assume any delays due to inter-thread communication, synchronization, context switching, etc. are negligible. Assume you have infinite resources.
Consider a boss-worker design and a pipelined design, each with a total of 4 threads. In both cases, you need to process a total of 6 requests.
Answer the questions below. Please try to follow these rules:
- Do not write the units
- Do not use a comma in your answer (e.g., write 1000 instead of 1,000)
- Show the results with up to two decimal places.
For the boss-worker model, where the boss simply accepts requests and passes them onto a shared worker queue:
a) How many requests can be processed concurrently?
b) How long does it take to process a total of 6 requests in ms?
c) What is the average throughput for this workload, in req/ms?
d) What is the average response time for this workload, in ms?
For a balanced implementation of the pipeline model:
e) How many threads are allocated to the first stage?
f) How many threads are allocated to the second stage?
g) How many requests are concurrently being processed?
h) How long does it take to process a total of 6 requests in ms?
i) What is the average throughput for this workload, req/ms?
j) What is the average response time for this workload, in ms?
Notes:
- 4 threads
- Step 1 (1ms), Step 2 (2ms), Step 3 (1ms)
- Process 6 requests
a) 3 worker threads (the 4th thread is the boss) => 3 concurrent requests max
b) 6 requests take two batches of 3 (3 requests at a time), each batch taking 1+2+1 = 4 ms: 2 * 4 = 8 ms
c) throughput: 6 req/8ms = 0.75 req/ms
d) avg. response time = ((3*4 for the first three requests) + (3*8 for the second three requests)) / 6 = 6 ms
e) 1
f) 2
g) Total 4 requests are concurrently processed, 1 in stage 1, 2 in stage 2 and 1 in stage 3
h) The first request completes in 4 ms; after that, the pipeline outputs one finished request per ms (every stage sustains 1 req/ms), so the remaining 5 add 5 ms: 4 + 5*1 = 9 ms
i) Throughput = 6 req / 9 ms = 0.67
j) Response time: first completes in 4 ms, second in 5 ms, … so (4+5+6+7+8+9)/6 = 6.5 ms
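As a sanity check, this small C program recomputes the numbers above under the same assumptions (boss-worker: 3 workers, 4 ms per request, so request i finishes at ceil(i/3)*4 ms; pipeline: 4 ms latency, then one finished request per ms). Setting n = 7 reproduces the next two cards.

#include <stdio.h>

int main(void) {
    int n = 6;   /* number of requests; change to 7 for the next two cards */
    double bw_total = 0, bw_last = 0, pl_total = 0, pl_last = 0;
    for (int i = 1; i <= n; i++) {
        double bw = ((i + 2) / 3) * 4.0;  /* boss-worker: ceil(i/3) * 4 ms */
        double pl = 4.0 + (i - 1);        /* pipeline: 4 ms + 1 ms per predecessor */
        bw_total += bw; bw_last = bw;
        pl_total += pl; pl_last = pl;
    }
    printf("boss-worker: total %.2f ms, avg resp %.2f ms, throughput %.2f req/ms\n",
           bw_last, bw_total / n, n / bw_last);
    printf("pipeline:    total %.2f ms, avg resp %.2f ms, throughput %.2f req/ms\n",
           pl_last, pl_total / n, n / pl_last);
    return 0;
}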
In a boss-worker model with 4 threads, it takes 4 ms to complete each request. What is the average response time to complete 7 requests?
- In a boss-worker model, only 3 worker threads are available to handle requests concurrently.
- 3 requests * 4 ms = 12 ms for the first three requests
- 3 requests * 8 ms = 24 ms for the second three requests
- 1 request * 12 ms = 12 ms for the last request
- Total 48 ms / 7 requests = 6.86 ms avg response time
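- In general, with W identical workers and T ms per request, request i completes at ceil(i/W) * T ms; here ceil(i/3) * 4 gives completion times 4, 4, 4, 8, 8, 8, 12 ms, which average to 48/7 ≈ 6.86 ms.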
It takes 4 ms (1+2+1) for 4 threads to complete an order in a pipeline model. What is the average response time to complete 7 requests?
How long does it take to complete 7 requests?
- Stage 1 (1ms), Stage 2 (2ms), Stage 3 (1ms)
- First request completes in 4 ms, the next in 5 ms, and so on (one per ms in steady state)
- (4+5+6+7+8+9+10)/7 = 7 ms avg response time
- The first is done in 4 ms, plus 1 ms for each remaining request: 4 + 1*6 = 10 ms total
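- In general, a pipeline with end-to-end latency L and a steady-state rate of one finished request per B ms completes request i at L + (i-1)*B; here 4 + (i-1)*1 gives 4 through 10 ms, so the batch takes 10 ms and averages 49/7 = 7 ms.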
A critical section is accessed by three types of threads: red, green, and blue. Red threads have priority over green threads, which in turn have priority over blue threads. The critical section may be accessed by up to 3 red threads at a time, or by only 1 green thread, or by only 1 blue thread.
The listing below shows the pseudo code for the critical section enter and exit for the green threads.
Mark all lines that are incorrect or where something is missing. If there is a line of code that is missing, select the corresponding blank line where you think the code should be inserted.
A. 01. cs_enter_green {
B. 02.   mutex_lock(m);
C. 03.
D. 04.   while (red >= 3 or blue > 0 or green > 0 or red_waiting > 0)
E. 05.     wait(green_cv);
F. 06.   green++;
G. 07.
H. 08.   mutex_unlock(m);
I. 09.
J. 10. } // end of cs_enter_green
K. 11. cs_exit_green {
L. 12.   mutex_lock(m);
M. 13.
N. 14.   green = 0;
O. 15.
P. 16.   if (red_waiting > 0) signal(red_cv);
Q. 17.
R. 18.   else if (blue_waiting > 0) signal(blue_cv);
S. 19.
T. 20. } // end of cs_exit_green
Notes:
- Red > Green > Blue
- Max = 3 Red OR 1 Blue OR 1 Green
In cs_enter:
- missing green_waiting++ before the while loop and green_waiting-- after it (something must track pending green requests so that priority can be implemented)
- while (red >= 3 …) is wrong: any red thread in the CS means green has to wait, so the condition must check red > 0
- wait(green_cv) is missing the mutex; it should be wait(green_cv, m)
In cs_exit:
- if (red_waiting > 0) signal(red_cv): up to 3 red threads can be in the CS at once, so this should be a broadcast
- missing a check for, and signaling of, waiting green threads (green outranks blue)
- missing mutex_unlock(m) at the end of cs_exit_green
Tips:
1) Mutex Lock AND unlock present?
2) Ensure thread priority order
3) Track pending requests for priority thread use cases -> green_waiting++ / green_waiting--
4) wait() call includes both the condition variable AND the mutex
5) If more than 1 thread is waiting, use Broadcast
Example code of priority (writer side of a readers/writers lock):
// writer
lock(&m);
waitingWriter++;
while (status != 0)
    wait(&cond_writer, &m);
status = -1;
waitingWriter--;
unlock(&m);
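Putting the fixes together, here is one plausible corrected version of the green enter/exit as a pthreads sketch (not the official solution; the red and blue entry/exit paths, which update their own counters, are assumed to exist elsewhere):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t red_cv = PTHREAD_COND_INITIALIZER;
static pthread_cond_t green_cv = PTHREAD_COND_INITIALIZER;
static pthread_cond_t blue_cv = PTHREAD_COND_INITIALIZER;
static int red = 0, green = 0, blue = 0;
static int red_waiting = 0, green_waiting = 0, blue_waiting = 0;

void cs_enter_green(void) {
    pthread_mutex_lock(&m);
    green_waiting++;                         /* track pending green requests */
    /* any red in the CS (red > 0), not just a full set of 3, blocks green */
    while (red > 0 || blue > 0 || green > 0 || red_waiting > 0)
        pthread_cond_wait(&green_cv, &m);    /* wait() takes the cv AND the mutex */
    green_waiting--;
    green++;
    pthread_mutex_unlock(&m);
}

void cs_exit_green(void) {
    pthread_mutex_lock(&m);
    green = 0;
    if (red_waiting > 0)
        pthread_cond_broadcast(&red_cv);     /* up to 3 reds may enter: broadcast */
    else if (green_waiting > 0)
        pthread_cond_signal(&green_cv);      /* green outranks blue */
    else if (blue_waiting > 0)
        pthread_cond_signal(&blue_cv);
    pthread_mutex_unlock(&m);                /* the original listing omitted this */
}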
Fill in SPIN or BLOCK in the following sentences:
A. On a uniprocessor system, if thread T1 tries to lock a mutex that has already been locked by T2, by default T1 will:
B. On a multiprocessor system, if thread T1 tries to lock a mutex that has already been locked by T2, by default T1 will:
C. Given the adaptive mutex proposed in the Solaris papers, if thread T1 tries to lock a mutex that has already been locked by T2, T1 will […..] if T2 is currently executing
D. Given the adaptive mutex proposed in the Solaris papers, if thread T1 tries to lock a mutex that has already been locked by T2, T1 will [….] if T2 is not currently scheduled.
A) BLOCK
B) BLOCK
C) SPIN
D) BLOCK
By default, a thread trying to acquire a locked mutex will be blocked, regardless of uni-/multiprocessor.
With an adaptive mutex, the thread will spin if the mutex owner is currently running on another CPU (in hopes of getting the lock in less time than a context switch would take); otherwise it will block.
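A toy sketch of the adaptive idea (not the actual Solaris implementation; mutex_t, try_lock, owner_is_running, and block_on are all hypothetical helpers):

typedef struct mutex mutex_t;
int  try_lock(mutex_t *m);           /* hypothetical: non-blocking acquire */
int  owner_is_running(mutex_t *m);   /* hypothetical: is the holder on a CPU? */
void block_on(mutex_t *m);           /* hypothetical: sleep until released */

void adaptive_lock(mutex_t *m) {
    while (!try_lock(m)) {
        if (!owner_is_running(m))
            block_on(m);   /* holder not scheduled: spinning is pointless */
        /* else: spin, hoping to win the lock in less time than a context switch */
    }
}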
In Vivek Pai’s paper on the Flash webserver, the authors evaluated several webserver designs using two traces - the Owlnet trace, whose dataset size was 100MB, and the CS trace, whose dataset size was 7.8GB.
The webservers they compared include Flash (which follows the AMPED model), SPED (the Zeus single-process event-driven server), MT (a multi-threaded webserver), MP (a multi-process webserver), and Apache (an unoptimized webserver).
The tests were performed using a 333 MHz Pentium II CPU with 128 MB of memory and multiple 100 Mbit/s Ethernet interfaces.
Give the correct answer to the following questions:
- For the CS trace, which of the following will perform better:
a. Flash vs MT
b. Flash vs MP
c. Flash vs SPED
d. SPED vs MT
e. MT vs MP
- For the Owlnet trace, which of the following will perform better:
f. Flash vs MT
g. Flash vs MP
h. SPED vs MT
i. MT vs MP
j. MP vs Apache
- Without modifying the implementation described in the paper, when running on a more recent quad-core platform with 12 GB of memory, which one would you expect to perform better:
k. Flash vs MT
l. MT vs MP
a) FLASH
b) FLASH
c) FLASH
d) MT
e) MT
The CS trace is large (it does not fit in memory), so there will be blocking I/O. Flash performs better than MT/MP/SPED. MT performs better than SPED because it can hide I/O latency. MT performs better than MP because it has a smaller memory footprint, so more memory is available for caching, plus synchronization is cheaper.
f) FLASH
g) FLASH
h) SPED
i) MT
j) MP
The Owlnet trace is small and fits in the cache. Flash and SPED perform better than MT and MP (no context-switching overhead). MT beats MP for the same reason as above. MP beats Apache because it has extra optimizations that the unoptimized Apache lacks.
k) MT
l) MT
If the tests were run on a multiprocessor, MT would be better than Flash, because in Flash only blocking I/O is handed off to a separate helper, so the remaining cores would otherwise sit idle. MT would be better than MP for the same reasons as above.
Operating system signals (e.g., in Linux or UNIX in general) are: (select all that apply)
A. an interface from the kernel to the user-level processes
B. a synchronous notification mechanism from the kernel to a process or a thread within the process
C. an asynchronous notification mechanism from the kernel to a process or a thread within the process
D. a way to notify a user-level thread waiting on a condition variable
E. a way to notify many user-level threads waiting on a condition variable (eg. via broadcast)
F. invented so that the kernel can engage the user-level threading library scheduler
A, B, C
OS signals are an interface from the OS to applications for passing synchronous or asynchronous notifications/information from the kernel to processes, so the first three are correct. Signals are relevant regardless of whether threading is done at the kernel level or the user level; they existed before multithreading was used, and user-level threading libraries just complicated how they are handled, so F is incorrect.
Waking up one or more threads waiting on a condition variable (D, E) is done by the threading package/library (e.g., pthread_cond_signal, pthread_cond_broadcast), not by OS signals.
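To see why D and E conflate two different things, compare an OS signal with a condition-variable "signal" in this minimal sketch (SIGUSR1 is just an example; the two names merely collide):

#include <signal.h>
#include <pthread.h>
#include <unistd.h>

static void on_sigusr1(int sig) { (void)sig; }

int main(void) {
    /* OS signal: a kernel-to-process notification delivered via a handler */
    signal(SIGUSR1, on_sigusr1);
    kill(getpid(), SIGUSR1);

    /* condition-variable "signal": a threading-library primitive; no kernel
       signal is involved despite the similar name */
    static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
    pthread_cond_signal(&cv);   /* wakes one waiting thread, if any */
    return 0;
}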
Select the correct completion for the following sentence:
The Native POSIX Threads Library is an implementation of POSIX threads for Linux, and it follows the…
A. 1x1 model of mapping user-level to kernel-level threads
B. Mx1 model of mapping user-level to kernel-level threads
C. MxN model of mapping user-level to kernel-level threads
A.
The Linux Native POSIX Threads Library follows the 1x1 model.
Adopting a 1x1 model solved all the pains of figuring out how to deliver a signal to the right user-level thread.