Exam h21 Flashcards

The whole thang, not just multiple choice

1
Q

Which one of the following is correct?

(a) Program counter contains the instruction the processor is currently executing
(b) Program counter contains the address of the instruction the processor is currently executing
(c) Program counter is a register
(d) All of the above
(e) None of the above

A

(c) Program counter is a register

2
Q

Which one of the following is incorrect about pipelining?

(a) Pipelining allows executing multiple instructions at the same time on one core
(b) The stages of an instruction cycle in the pipeline require different processing times
(c) Sometimes the prefetched instruction is not the instruction that is executed next
(d) All of the above
(e) None of the above

A

(a) Pipelining allows executing multiple instructions at the same time on one core

3
Q

A page fault occurs when . . .

(a) A requested page is in the backing store
(b) A requested page number is found in translation look-aside buffer (TLB)
(c) A requested page is in memory
(d) All of the above
(e) None of the above

A

(e) None of the above

4
Q

Consider a processor that needs 5 ns to access the cache memory and 200 ns to access the main
memory. Assume the cache hit ratio is 95%, what is the average memory access time of the
processor?

(a) 5 ns
(b) 10 ns
(c) 15 ns
(d) 20 ns
(e) 100 ns
(f) None of the above

A

Solution: 0.95 × 5 + 0.05 × (5 + 200) = 15 ns

5
Q

Consider the instruction mov edx, [hello]. How many operands does this instruction have?

(a) 3
(b) 2
(c) 1
(d) 0
(e) None of the above

A

Solution: (b) 2.
The first operand is the destination and the second specifies the source.

6
Q

Consider an instruction with two operands O1 and O2. If O1 uses PC-relative addressing mode and O2 uses indirect addressing mode, how many memory accesses are required in total to fetch O1 and O2?

(a) 4
(b) 3
(c) 2
(d) 1
(e) 0
(f) None of the above

A

Solution: (b) 3. PC-relative addressing needs one memory access to fetch O1, and indirect addressing needs two to fetch O2 (one for the address, one for the operand): 1 + 2 = 3.

7
Q

PC-Relative Addressing

A

Typically, PC-relative addressing involves fetching the operand from a memory location whose address is calculated as the sum of the Program Counter (PC) and an offset.

One memory access is required to fetch the operand.

8
Q

Indirect Addressing

A

Indirect addressing involves using the contents of a memory location as an address to fetch the actual operand.

Two memory accesses are required for indirect addressing: one to fetch the address from the specified memory location, and another to fetch the actual operand using the obtained address.

9
Q
A

(e) None of the above

10
Q

Which one of the following is a function of a dispatcher?

(a) To context-switch between processes
(b) To give control of the CPU to the process selected by the short-term scheduler
(c) To jump to the proper location in the user program to restart that program
(d) All of the above
(e) None of the above

A

(d) All of the above

11
Q

Which one of the following is incorrect?

(a) Paging allows memory allocated to a process to be noncontiguous
(b) Paging solves the problem of internal fragmentation
(c) Paging solves the problem of external fragmentation
(d) Paging avoids the problem of thrashing
(e) All of the above
(f) None of the above

A

(b) Paging solves the problem of internal fragmentation

12
Q

Which one of the following scheduling algorithms has a starvation problem?

(a) Shortest job first
(b) Multilevel Queue Scheduling
(c) Shortest-remaining-time-first
(d) All of the above
(e) None of the above

A

(d) All of the above

13
Q

Which one of the following is correct?

(a) PCBs are not actually needed by the operating system, but are only maintained in order to provide the user with statistics about the processes
(b) PCBs are the data structure needed for the operating system to implement multi-programming, context switches, and even process forks
(c) A PCB does not contain a program counter
(d) A PCB contains the computer’s host name
(e) All of the above
(f) None of the above

A

(b) PCBs are the data structure needed for the operating system to implement multi-programming, context switches, and even process forks

14
Q

Which one of the following is incorrect?

(a) The fork() system call is used to create a new process on a UNIX computer
(b) A new child process created using the fork() system call is initially identical to the parent
process
(c) A process can use the return value from the fork() system call to decide if it is the parent or
child process
(d) The exec() system call can be used to replace the program content of a child process
(e) All of the above
(f) None of the above

A

(f) None of the above

15
Q

Which one of the following is correct?

(a) Creating a new process hardly requires any resources at all, thus there is no need to limit the
usage of this
(b) A process and a thread are just different names for exactly the same thing
(c) A thread is also called a lightweight process
(d) It is only possible to create threads on a multi-CPU/core system
(e) All of the above
(f) None of the above

A

(c) A thread is also called a lightweight process

16
Q

Which one of the following is incorrect?

(a) A system call is used by a user program to request the operating system to perform a task on its behalf
(b) A system call may perform a task that a user program would not be allowed to perform itself
(c) Usually system calls are used through high-level system libraries
(d) A system call is also called a software interrupt
(e) None of the above

A

(e) None of the above

17
Q

Problem 2:

A

Solution: 0, 1, 2, 3

18
Q

When an application has a need to perform multiple tasks at the same time (in parallel), this can be achieved by either starting additional processes or threads. What are the differences between processes and threads? Give examples of “typical” use cases for both.

A

Solution: In general, starting a new process is significantly more resource demanding than starting a thread. A new thread only requires a new thread control block, while memory and other resources are shared. A new process, on the other hand, requires its own copy of the parent's memory image and resources. Typically, a thread is created when a program needs to perform several tasks concurrently in such a way that memory and other resources may be shared. An example of this may be a media player that needs to download media from a server and decode it at the same time. A process, on the other hand, may be more suitable when a program needs to launch a child that will operate more "independently", in separate memory and possibly with the executable code (text) replaced.

19
Q

Hardware components like CPU, memory, I/O controllers, etc., can communicate either via a shared bus or via a point-to-point connection. What are the differences between these two types of connections, and what are the relative advantages/disadvantages of each? Point-to-point connections are becoming increasingly popular. Why is this?

A

Solution: A shared bus will typically connect several pieces of hardware, for example the CPU, RAM, disk controller, graphics controller, network controller, sound controller, etc. A point-to-point connection, on the other hand, connects exactly two pieces of hardware, for example RAM and CPU, or CPU and hard disk controller. A shared bus needs more advanced methods to prevent several pieces of hardware from utilising the bus at the same time. The overhead of preventing collisions will typically make it difficult to utilise the full bandwidth of the bus. Also, a shared bus is harder to design robustly for higher data rates, as the increased number of connections will distort the electrical signals. For these reasons, point-to-point connections are becoming increasingly popular.

20
Q

What are the differences between a CPU and a core? Are there any potential implications with respect to performance if two cores are physically located on the same chip, or if they are located on two different chips? Why or why not?

A

Solution: A CPU usually refers to a physical die that may contain several cores internally. For some applications it does not matter whether the cores are on the same physical CPU die or on different ones; an example of this is when the processes running on them do not need to exchange data. However, if two processes need to share data, they can do this via the on-chip cache memory if the cores are on the same physical die, but if they run on different CPU dies, they will need to exchange the data via a significantly slower system bus.

21
Q

Early computer systems did not have the ability to store the program (processing instructions) as “data”. Rather, they had to be “hard-coded” using wires. Why is it important to be able to store the program as data? What are the implications if this is not possible?

A

Solution: Being able to store a program as data is one of the main foundations of modern computer systems. Reprogramming a computer using cables and wires is very time consuming, and is not practically possible with the size and complexity of modern programs. One may imagine installing an app on a mobile phone requiring the user to reconnect millions of tiny wires using a microscope…!

22
Q

Problem 3

A
23
Q

Even candidate numbers: 4 KB
Odd candidate numbers: 8 KB

Consider a computer system that uses 64-bit logical addresses and has physical memory that is divided into F frames.

(a) What is the logical address space?
(b) What is the size of each frame in the physical memory?
(c) If the system uses an inverted page table, how many entries does the table have?

A

a) Solution: 2^64 bytes
b) Solution: Even: 4 KB; Odd: 8 KB
c) Solution: F entries

24
Q

In general, a process in a system is in one of these five states: new, ready, running, waiting, terminated. A state transition of a process refers to a process that is moving from one state to
another, e.g., a state transition, ready → running refers to a process switching from ready state to running state.
Write down the state transition(s) where a scheduling decision will happen if the operating system uses a preemptive scheduling algorithm.

A

Solution:
new → ready (optional)
running → ready
running → terminated
running → waiting
waiting → ready

25
Q
A
26
Q

Consider a paging hardware with a translation look-aside buffer (TLB). Assume that the page table and all the pages a process requests are in the physical memory. Suppose the percentage of times that the page number of interest is found in the TLB is 85%, each TLB lookup takes 1 ns, and each memory access takes 200 ns. What is the effective access time (EAT)?

A

Solution: 0.85 × (1 + 200) + 0.15 × (1 + 200 + 200) = 231 ns

27
Q

Consider a process having the page table and some of its requested pages in the physical memory.
Assume that the process tries to access a page and there is a page fault. How many page transfers, i.e., moves of pages in from (or out to) the backing store, are required to fetch this page if

(a) there are free frames in the main memory?
(b) there is no free frame in the main memory, but all frames are clean?

A

a) Solution: one. The requested page only needs to be read in from the backing store into a free frame.
b) Solution: one. A clean frame is identical to its copy in the backing store, so the victim page can be overwritten without first being written back; only the read of the requested page is needed.

28
Q

A single-threaded application needs to process a significant amount of data. The data consists of
values that should be processed in a number of processing operations (steps), in a similar way, and
independently of the other data values. This means that the processing can be performed in two
ways (approaches):
* Same operation first: first perform the same (single) operation on all values, then move on to the next (single) operation.
* Same value first: first perform all operations on the same (single) value, then move on to the
next (single) value.
(Instructions that can work on vectors of values are not available.) To decide which approach is
more efficient, both approaches have been implemented and tested. The tests reveal that the latter
method (same value first) is faster. Based on your knowledge of CPU caches, explain how this can happen.

A

Solution: Using the first method requires the CPU to constantly switch between different data to process. This means that there will constantly be cache misses, and data has to be fetched over the (slow) system bus. The latter method, on the other hand, allows the CPU to process the same data for as long as possible, which leads to fewer cache misses. Note, however, that this assumes that the size of the executable code for processing the data is (much) smaller than the size of the data to be processed.

29
Q

During the testing mentioned above, a significant speed-up is also noticed on a CPU that has
twice as much L1 cache as another CPU that is otherwise identical. What are possible reasons for
this? Is there anything you can try to also improve the performance of the application on the CPU
with less L1 cache (without modifying the hardware)?

A

Solution: This difference in processing speed may be because in one case, all values and executable code for one set of data fit into the L1 cache of the CPU, while in the other case they do not, constantly causing cache misses, with data having to be retrieved over the (slow) system bus. It may be possible to optimise the processing code, for example using assembly programming, to make both values and processing code fit into the L1 cache. Perhaps this will not be entirely possible, but it may at least be possible to reduce the number of cache misses.

30
Q

Between the CPU and the main system memory (RAM), there are often three layers (L1, L2, L3) of cache memory. This memory is decreasingly fast, and increasingly large. Explain the reason
for this, and how these different layers interact. Why is it so “complicated” — why not just use a single layer of cache?

A

Solution: In general, because of the way memory is implemented, a larger memory will also be a slower memory. Further, a fast memory is more complicated to implement than a slow memory, and hence more expensive. Thus, a price/performance trade-off will favour a cache memory with several layers of decreasingly fast and increasingly large memory.