Operating Systems Flashcards

(37 cards)

1
Q

Operating System Structures

A
  1. Monolithic
  2. Layered
  3. Microkernel
  4. Modular
2
Q

Monolithic Structure

A

All OS services are implemented by one large kernel. Any new feature is added to the kernel. “Everything is connected to everything”.

Pros and Cons of the Monolithic Structure
Communication within the kernel is fast. However, the OS may be difficult to understand, difficult to modify, and less secure.

3
Q

Layered Structure

A

An Operating System with a layered structure organizes its services into distinct layers. Each layer can only communicate with adjacent layers.

Pros and Cons of the Layered Structure
The OS is easy to debug. Development can easily be organized into separate units. However, performance may be poor since access to a specific service often requires traversal through multiple layers.

4
Q

Microkernel Structure

A

An OS with a microkernel structure has services implemented by servers, and a small kernel delivering messages between them.

Pros and Cons of the Microkernel Structure
The OS is secure and reliable. Performance may be poor due to increased communication overhead.

5
Q

Modular Structure

A

The operating system begins with a small kernel and adds or removes additional modules as needed.

Pros and Cons of the Modular Structure
The OS is fast since unnecessary services do not need to be loaded and since it allows for direct communication between modules. However, as the number of loaded modules increases, the OS begins to resemble a monolithic structure.

6
Q

Process Control Blocks

A

Process Number
Process State
Process address space
Process I/O

Each process is represented by a process control block (PCB). The process control block is stored in memory (or on disk).
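A minimal C sketch of the kind of information a PCB might hold; the struct and field names below are illustrative, not taken from any real kernel:

```c
/* Illustrative PCB layout; names are hypothetical, not from a real kernel. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int           pid;             /* process number                       */
    proc_state_t  state;           /* process scheduling state             */
    void         *address_space;   /* process address space (page tables)  */
    int          *open_files;      /* process I/O: open file descriptors   */
    /* the saved CPU context (registers, program counter) also lives here,
       so the process can be resumed after a context switch                */
};
```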

7
Q

How does Context Switch work (A,B)

A

Stop running process A and start process B (see the sketch below):
1. change the process scheduling state of process A
2. save the context of process A
3. load the context of process B
4. change the process scheduling state of process B
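A user-space sketch of the save/load steps using the POSIX <ucontext.h> API on Linux/glibc; a real kernel context switch does the same thing in architecture-specific code, plus the scheduling-state updates:

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_b;          /* contexts of A (main) and B   */
static char stack_b[64 * 1024];             /* B needs its own stack        */

static void run_b(void)
{
    printf("now running B\n");              /* returning resumes ctx_main   */
}

int main(void)
{
    getcontext(&ctx_b);                     /* initialise B's context       */
    ctx_b.uc_stack.ss_sp   = stack_b;
    ctx_b.uc_stack.ss_size = sizeof stack_b;
    ctx_b.uc_link          = &ctx_main;     /* where to go when B finishes  */
    makecontext(&ctx_b, run_b, 0);

    printf("running A\n");
    swapcontext(&ctx_main, &ctx_b);         /* save A's context, load B's   */
    printf("back in A\n");
    return 0;
}
```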

8
Q

Context Switches (ctd)

A
  • A context switch is pure overhead
    The time it takes depends on several factors, including the number of registers that must be copied.
9
Q

Sections of Process Address Space

A

A process address space has four sections (see the sketch below):
1. A stack for temporary data
2. A heap for dynamically allocated data
3. A data section for static data (its size is fixed, so the data can change but the section cannot grow beyond its initial size)
4. A text section for program code
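A small C program (illustrative only) showing where each kind of data ends up; the exact addresses printed depend on the platform:

```c
#include <stdio.h>
#include <stdlib.h>

int global_counter = 42;                    /* data section (static data)   */

void where(void)                            /* the function's code: text    */
{
    int local = 0;                          /* stack (temporary data)       */
    int *dynamic = malloc(sizeof *dynamic); /* heap (dynamic allocation)    */

    printf("text : %p\n", (void *)where);
    printf("data : %p\n", (void *)&global_counter);
    printf("heap : %p\n", (void *)dynamic);
    printf("stack: %p\n", (void *)&local);
    free(dynamic);
}

int main(void) { where(); return 0; }
```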

10
Q

The Heap

A

In the heap, blocks of memory are allocated and removed in an arbitrary order. The heap is used for variables that need to persist beyond the scope of a single function or need a flexible amount of memory.
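A small example of why the heap matters: the buffer below outlives the function that allocated it, which a local (stack) array could not do:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *make_greeting(const char *name)
{
    char *buf = malloc(strlen(name) + 8);   /* heap block, sized at run time */
    if (buf == NULL)
        return NULL;
    sprintf(buf, "Hello, %s", name);
    return buf;                             /* still valid after we return   */
}

int main(void)
{
    char *msg = make_greeting("world");
    if (msg != NULL) {
        puts(msg);
        free(msg);                 /* heap blocks are freed explicitly,
                                      in any order                          */
    }
    return 0;
}
```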

11
Q

Process Spawning

A

A process is created at the request of a different process. In Linux there are four system calls for process spawning.

fork, exec, wait, exit
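A minimal sketch of how these calls fit together on Linux; the child program "ls" is just an example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    /* fork: create a child process  */

    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* exec: replace the child's
                                                    image with a new program */
        perror("execlp");                  /* only reached if exec fails    */
        _exit(127);                        /* exit: terminate the child     */
    } else {
        int status;
        waitpid(pid, &status, 0);          /* wait: block until the child exits */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}
```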

12
Q

Interprocess Communication can be implemented using? (2)

A

Interprocess communication may be implemented using either:
1. shared memory
2. message passing

13
Q

Shared Memory

A

Interprocess communication by shared memory involves processes writing data to/reading data from an agreed area of memory.
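A minimal sketch of shared-memory IPC between a parent and child, using an anonymous shared mapping (MAP_ANONYMOUS is a Linux/BSD extension; unrelated processes would use shm_open or a file-backed mapping instead):

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* the agreed area of memory, visible to both parent and child */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;

    if (fork() == 0) {                     /* child: the writer             */
        strcpy(shared, "hello from the child");
        return 0;
    }
    wait(NULL);                            /* parent: wait, then read       */
    printf("parent read: %s\n", shared);
    munmap(shared, 4096);
    return 0;
}
```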

14
Q

Message Passing

A

Interprocess communication by message passing involves processes sending messages to/receiving messages from an agreed mailbox.
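A minimal sketch using a POSIX message queue as the agreed mailbox; the queue name "/demo_mailbox" is made up for the example, and older glibc versions need -lrt when linking:

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };

    /* the agreed mailbox: a named POSIX message queue */
    mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (fork() == 0) {                          /* child: the sender        */
        const char *msg = "hello via the mailbox";
        mq_send(mq, msg, strlen(msg) + 1, 0);
        return 0;
    }

    char buf[64];                               /* parent: the receiver     */
    mq_receive(mq, buf, sizeof buf, NULL);      /* blocks until a message arrives */
    printf("parent received: %s\n", buf);
    wait(NULL);
    mq_close(mq);
    mq_unlink("/demo_mailbox");
    return 0;
}
```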

15
Q

Shared Memory vs Message Passing

A

Message passing requires regular system calls, while shared memory only requires system calls to set up the shared memory space. As a result, shared memory is generally more efficient for large data transfers, whereas message passing is better suited for smaller data exchanges.

16
Q

Controller Registers

A

The CPU communicates with the controller by reading and writing the controller’s registers (data-in, data-out, status, control).

24
Q

Port-based I/O vs Memory-mapped I/O

A

The CPU can access the controller registers using special instructions (port-based I/O) or standard memory instructions (memory-mapped I/O). Memory-mapped I/O is today much more common than port-based I/O.
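A sketch of what memory-mapped I/O looks like from the driver's side; the register layout and base address are entirely hypothetical (on a real system they come from the hardware documentation), so this compiles but is only meaningful on hardware that actually maps a device there. It also shows the polling loop described in the next card:

```c
#include <stdint.h>

/* hypothetical UART controller registers, mapped at a fixed physical address */
struct uart_regs {
    volatile uint32_t data_out;   /* write: byte to transmit                */
    volatile uint32_t data_in;    /* read : last byte received              */
    volatile uint32_t status;     /* busy/ready bits                        */
    volatile uint32_t control;    /* commands to the controller             */
};

#define UART_BASE   ((struct uart_regs *)0x10000000u)   /* hypothetical address */
#define STATUS_BUSY 0x1u

void uart_putc(char c)
{
    while (UART_BASE->status & STATUS_BUSY)   /* poll until the controller is ready */
        ;
    UART_BASE->data_out = (uint32_t)c;        /* an ordinary memory store           */
}
```
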

25
Q

Polling

A

The CPU repeatedly checks the controller's status register to see whether the controller is busy. When the controller is ready and the CPU wants to give new instructions, the CPU writes to the data-out register and signals that it has done so through the control register.

26
Q

Interrupts

A

The CPU regularly senses an interrupt-request line. When the CPU gets an interrupt signal through the interrupt-request line, it stops the current process and initiates a response.

27
Q

Polling vs Interrupt

A

With interrupts, the CPU can use an interrupt controller (IC) to monitor the status of several devices at the same time, and serve them based on priority. With polling, the CPU needs to check each device individually in a round-robin fashion.

28
Q

When may polling be preferable?

A

Polling may be preferable if:
  • the controller and the device are fast
  • the I/O rate is high
  • some I/O data can be ignored
  • the CPU has nothing better to do

29
Q

Direct Memory Access (DMA)

A

A CPU may offload large data transfers to a direct memory access (DMA) controller. The CPU writes a command block into memory, specifying the source and destination of the transfer. The DMA controller can then perform multiple transfers via a single command. When the transfer is complete, the CPU receives an interrupt from the DMA controller.

30
Q

Character I/O (System Calls)

A

Example devices: keyboards, computer mice, microphones, speakers. Characters must be processed in the order in which they arrive in the stream. The interface includes read and write operations. Character devices are low-volume devices.

31
Q

Block I/O (System Calls)

A

Typically non-volatile mass storage devices. Block devices are used to transfer blocks of data. The interface includes read and write operations. Block devices are high-volume devices.

32
Q

Block I/O: Memory Mapped Files

A

A memory-mapped interface provides access to disk storage via a location in main memory: there is a system call that maps a file on the device into memory. The OS deals with transferring data between memory and the device and can perform transfers when needed, thereby reducing the number of mode switches.

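A minimal sketch of the memory-mapped interface: one mmap() call, after which the file's contents are read with ordinary memory accesses (the path is just an example of a readable file):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);           /* example file     */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
        return 1;

    /* map the file into this process's address space */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    fwrite(p, 1, st.st_size, stdout);   /* plain memory access, no read() calls */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```
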
33
Q

Network I/O

A

The interface includes operations for creating and connecting sockets, sending and receiving packets, checking whether a transfer was successful, etc. The main difference between network I/O devices and other I/O devices is that with network I/O devices things routinely go wrong (missing packets, etc.).

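A sketch of the socket interface as a TCP client; the address and port are placeholders, and the connect() failure path illustrates the point that network I/O routinely fails:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);        /* create a socket     */
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(80);                     /* placeholder port    */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* placeholder address */

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");          /* with network I/O, failure is routine */
        close(fd);
        return 1;
    }

    const char *req = "GET / HTTP/1.0\r\n\r\n";
    send(fd, req, strlen(req), 0);                   /* send data           */

    char buf[256];
    ssize_t n = recv(fd, buf, sizeof buf - 1, 0);    /* receive a reply     */
    if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }

    close(fd);
    return 0;
}
```
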
34
Q

Define wear-levelling and its purpose

A

Spreads writes across flash blocks to prevent early cell failure and extend SSD lifespan.

35
Q

What is dynamic wear-levelling?

A

Remaps only rewritten (hot) blocks to fresh blocks; static data stays put.

36
Q

What is static wear-levelling?

A

Moves cold data periodically so all blocks age evenly.

37
Q

Compare dynamic vs static wear-levelling

A

Dynamic = simpler, fewer moves, uneven wear. Static = more moves, controller overhead, uniform wear → longer life.
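A toy sketch of the dynamic policy, assuming a made-up flash translation layer with a handful of blocks: rewrites of a hot logical block are redirected to the least-worn free physical block, while blocks holding cold data are never touched (the uneven wear that static wear-levelling fixes). Real FTLs are far more involved:

```c
#include <limits.h>
#include <stdio.h>

#define NLOGICAL  4
#define NPHYSICAL 8

static int map[NLOGICAL];        /* logical block -> physical block          */
static int erases[NPHYSICAL];    /* wear (erase count) per physical block    */
static int used[NPHYSICAL];      /* 1 if a logical block currently maps here */

static void init(void)
{
    for (int l = 0; l < NLOGICAL; l++) {
        map[l]  = l;             /* start with an identity mapping           */
        used[l] = 1;
    }
}

static int least_worn_free(void)
{
    int best = -1, wear = INT_MAX;
    for (int p = 0; p < NPHYSICAL; p++)
        if (!used[p] && erases[p] < wear) { best = p; wear = erases[p]; }
    return best;
}

static void rewrite(int l)       /* a rewrite of a "hot" logical block       */
{
    int fresh = least_worn_free();
    if (fresh < 0)
        return;                  /* no free block; a real FTL would garbage-collect */
    used[map[l]] = 0;
    erases[map[l]]++;            /* the old copy is erased for reuse          */
    map[l] = fresh;              /* the new copy goes to the least-worn block */
    used[fresh] = 1;
}

int main(void)
{
    init();
    for (int i = 0; i < 100; i++)
        rewrite(0);              /* one hot block, rewritten 100 times        */
    for (int p = 0; p < NPHYSICAL; p++)
        printf("physical block %d: %d erases\n", p, erases[p]);
    return 0;
}
```

The printout shows the erases spread across the free pool while the blocks holding cold data stay at zero, which is why a purely dynamic policy leaves wear uneven.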