Operating Systems Flashcards

1
Q

What is an Operating System?

A

An operating system is a program (the OS kernel) that manages the operation of the machine and runs at the highest privilege level in a protected domain.

  • Manages processes running on the machine.
  • Manages data storage, e.g., RAM and file systems.
  • Manages input/output devices.
  • Manages networks and communication with other systems.
  • Manages the user interface.
  • Manages security and protection issues.

2
Q

What is spooling?

Where are jobs stored?

A

Input spooling - Placing incoming data into buffers or queues so it can be processed by the system when it is ready to do so.

Output spooling - Placing data destined for an output device into a buffer, so the output device can retrieve and process the data from the buffer at its own pace.

Spooling stores these incoming jobs, and the output from jobs, in the system.
- The secondary storage medium is a disk.

3
Q

What are time-sharing operating systems?

A

Time-sharing operating systems allow multiple users or tasks to use a single computer simultaneously. They do this by dividing CPU time into small slices and rapidly switching between users or tasks, giving the illusion of simultaneous operation.

4
Q

What are real-time operating systems?

A

Real-time operating systems attend to jobs/users based on priority, so that time-critical tasks are served within their deadlines.

5
Q

What are process control blocks?

What do they contain?

A

The operating system keeps a record of each process in the system in a data structure called the process control block (PCB), which contains:

  • Process ID number
  • Pointer
  • Process state
  • Program counter
  • Contents of CPU registers
  • Memory management information
  • I/O status information
  • Accounting information
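
A minimal sketch of what a PCB might look like as a C struct; the field names, types and sizes are illustrative only, not from any real kernel.

/* Illustrative sketch of a process control block (not a real kernel's layout). */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int            pid;             /* process ID number                     */
    struct pcb    *next;            /* pointer, e.g. to next PCB in a queue  */
    proc_state_t   state;           /* process state                         */
    uint64_t       program_counter; /* saved program counter                 */
    uint64_t       registers[16];   /* saved contents of CPU registers       */
    void          *page_table;      /* memory-management information         */
    int            open_files[16];  /* I/O status information                */
    uint64_t       cpu_time_used;   /* accounting information                */
};
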
6
Q

How does process scheduling work?

A

Ready queue:
- All the processes which are ready to execute.

Device queue:
- All the processes waiting to use a particular device (one queue per device).

7
Q

Explain system calls.

A

System calls are requests for services from the OS kernel, implemented via special instructions, e.g., SYSCALL, INT or SVC. These instructions cause interrupts (traps) in the CPU.

These instructions are called the 'programmer interface' to the OS.
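
As a hedged illustration (Linux-specific, assuming glibc's generic syscall() wrapper), a program can invoke a system call directly; the C library normally does this on your behalf.

/* Minimal sketch: issuing a system call directly on Linux via syscall(2). */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello from a raw system call\n";
    /* SYS_write traps into the kernel, just like write() would. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}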

8
Q

How do system calls (interrupt) work?

A

The CPU hardware has an ‘interrupt request line’ that can be activated (voltage applied) by system calls or external devices, prompting the OS to respond.

9
Q

How does the OS respond to an interrupt?

A

CPU switches from user to kernel mode, so the control of the CPU is given to the OS.

  • Saves the computing context of the current process.
  • Transfers control to a fixed memory location holding an appropriate 'interrupt-handling routine'.
  • When the routine is finished, it restores the context of the interrupted process.

The ‘interrupt vector’ is an array of locations that hold the addresses of these interrupt-handling routines.

10
Q

How is a Process created?

A

A process is created by another process: we talk of parent and child processes.

The initial process is 'init' (PID 1).

11
Q

Give explanation of shapes used in a diagram for life cycle of processes in OS.

A
  • Rounded rectangles are events.
  • Rectangles are queues where processes wait.
  • Circles are processing resources that serve the events/queues.
12
Q

How are files implemented in file systems?

A

A file is broken into ‘logical blocks’, to make the mapping to disk blocks easier to manage.

13
Q

What are Disk Allocation Methods?
List and describe each one.

A

Disk Allocation Methods determine how files are stored on a storage device by specifying how and where data blocks are allocated.

Contiguous Allocation:
If a file is 100 bytes, it is stored on disk as one contiguous block of 100 bytes.

Linked Allocation:
Each file is a linked list of disk blocks; each block holds a pointer to the next.

File Allocation Table (FAT):
A table at the beginning of each partition has one entry per disk block in the file system; each entry points to the next block of the file.

Indexed Allocation:
Each file has its own index block (inode) holding pointers to its data blocks.

14
Q

What are some disadvantages of Contiguous Allocation?

A

External Fragmentation:
- The free space is fragmented: the total free space is enough for the file, but no single contiguous region is large enough.

Internal Fragmentation:
- The allocated block is larger than what is needed, so space inside the block is wasted.

15
Q

What are some advantages and disadvantages of Linked Allocation?

A
  • No external fragmentation.
  • Only effective for sequential access
  • Pointers take up space
  • Internal fragmentation (which gets worse the larger your blocks get)
16
Q

What are some advantages and disadvantages of the File Allocation Table (FAT).

A

Similar to linked allocation, but with even more disk-head seeks in fact (the table itself must also be read), unless the FAT is cached in memory.

17
Q

What is a Device Driver?

A

A device driver is a special kernel module (software) that controls the operations of a device using device-specific information (i.e., knowledge of the device controller).

In simple terms, a device driver contains information and instructions needed for the operating system to communicate with and control the device. It serves as a bridge between the operating system and device controller.

18
Q

What is a Device Controller?

A

A device controller is a hardware unit on a device that knows and controls the device's status and behaviour. The CPU communicates with the controller via the device driver.

In simple terms, a device controller is responsible for managing and controlling a specific type of hardware device.

19
Q

List the main classes of I/O devices, along with their corresponding functionality.

A

Block Devices:
- Transfer data in fixed-size blocks (e.g., disks); the OS can reorder the read/write requests in order to improve performance, e.g., I/O scheduling.

Character Devices:
- Handle one character at a time, making them suitable for tasks like reading input from keyboards, mice, and serial ports, where data is processed character-by-character.
- Can support multiple preprocessing facilities: Such as reading line by line (buffering), or handling backspaces.

20
Q

What are the three I/O models for system calls?

A

Blocking I/O:
- If a process issues an I/O system call, the process waits until the I/O is completed.

Non-Blocking I/O:
- If a process issues an I/O system call, the process waits for a fixed (but small) interval, and the call returns (possibly with some data still untransferred). The process keeps retrying the system call until the I/O is completed.

Asynchronous I/O:
- If a process issues an I/O system call, it does not wait for the I/O to be completed; instead the OS informs the process with a signal when the I/O is completed.
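
A hedged sketch of the non-blocking model on POSIX systems: standard input is put into O_NONBLOCK mode, and read() returns immediately with EAGAIN/EWOULDBLOCK when no data is ready, so the process keeps retrying.

/* Sketch of non-blocking I/O: read() returns immediately instead of blocking. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[128];

    /* Put standard input into non-blocking mode. */
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

    for (;;) {
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n > 0) {
            printf("got %zd bytes\n", n);
            break;
        } else if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            usleep(100000);   /* no data yet: do other work, then retry */
        } else {
            break;            /* end of file or a real error */
        }
    }
    return 0;
}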

21
Q

What does I/O buffering do?

A

I/O buffering keeps data in memory buffers, caching frequently used data to reduce secondary storage accesses and delaying write operations until the data transfer is complete.

22
Q

What are cooperating processes, and what conditions make processes 'cooperative'?

A

The execution of one process can affect the execution of another, e.g., processes for parallel computing, or threads for multi-threading.

Naturally, any processes which share data are cooperating processes.

23
Q

In terms of cooperating processes, what is the producer / consumer explanation?

A

Producers generate data and place it in a buffer, while consumers retrieve and process this data.

24
Q

Briefly explain creating child processes.

What value does fork() return in the parent process and in the child process?

A

Parent -> fork() (then waits)
fork() -> child process -> exec() -> exit()
Parent continues after the child exits.

fork() returns:
- In the child: 0
- In the parent: the child's PID
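
A minimal sketch of this pattern in C (the "ls -l" command is just an example program for the child to exec).

/* Minimal fork/exec/wait sketch matching the card above. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();             /* returns 0 in the child, child's PID in the parent */

    if (pid == 0) {                 /* child */
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image */
        perror("execlp");           /* only reached if exec fails */
        exit(1);
    } else if (pid > 0) {           /* parent */
        int status;
        waitpid(pid, &status, 0);   /* parent waits for the child to exit */
        printf("child %d finished\n", (int)pid);
    } else {
        perror("fork");
    }
    return 0;
}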

25
Q

What is an ‘address space’ in terms of Parent and Child processes?

A

It provides a logical and isolated memory environment for a process in the computer’s virtual memory system.

The child process initially inherits a copy of the parent's address space, but any changes it makes have no direct impact on the parent's address space.

26
Q

Explain process termination and how it occurs.

A

When a process has executed its last statement, it executes a special function: the exit system call:
- Which may return its status to the parent process.
- All its resources are deallocated by the operating system.

Processes can also be terminated by other processes, but:
- You can only kill your own processes (those created by the same user).
- Orphan process: a process whose parent has terminated (it is inherited by init).
- Zombie process: a process which has terminated, but whose live parent is NOT waiting for it. Since no process has received its exit status, it stays in the process table.

27
Q

What are process signals?

A

Signals are an important communication channel between the OS kernel and processes. For example, a SIGCHLD signal is sent to a parent when a child process terminates.

Signal handlers handle signals; if there is no handler, the default action for most signals is to terminate the process.
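
A hedged POSIX sketch: installing a SIGCHLD handler with sigaction() so the parent reaps terminated children (the handler body and flags are one reasonable choice, not the only one).

/* Sketch: handling SIGCHLD so terminated children are reaped (no zombies). */
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_sigchld(int signo) {
    (void)signo;
    /* Reap any children that have exited. */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGCHLD, &sa, NULL);

    if (fork() == 0)
        _exit(0);            /* child terminates; the kernel sends SIGCHLD */

    pause();                 /* parent sleeps until a signal is handled */
    puts("received SIGCHLD");
    return 0;
}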

28
Q

What is a Pipe?

A

A communication channel for connecting one process’ input with another process’ output.

A pipe is typically used for communication between producer/consumer processes, implemented with a circular buffer.
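
A minimal POSIX sketch of a pipe between a parent (producer) and a child (consumer).

/* Sketch: parent writes into a pipe, child reads from it. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {               /* child: consumer */
        char buf[64];
        close(fd[1]);                /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        _exit(0);
    }

    close(fd[0]);                    /* parent: producer; close unused read end */
    const char *msg = "data through the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}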

29
Q

Briefly explain shared memory (mmap).

A

It enables two or more processes to share memory regions, allowing them to exchange data or communicate with each other through it.

It can be achieved by two processes mapping to the same file, or using fork() after mmap().

A private mapping is often used to set up new memory sections whose modifications are not shared with other processes (changes are not visible even to a child).
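
A hedged sketch of the fork()-after-mmap() approach, using a shared anonymous mapping (MAP_ANONYMOUS is a Linux/BSD extension); with MAP_PRIVATE instead, the child's writes would not be visible to the parent.

/* Sketch: shared memory via mmap() + fork(). */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int *counter = mmap(NULL, sizeof *counter, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *counter = 0;

    if (fork() == 0) {       /* child writes into the shared region */
        *counter = 42;
        _exit(0);
    }

    wait(NULL);              /* parent sees the child's update */
    printf("counter = %d\n", *counter);   /* prints 42 */
    munmap(counter, sizeof *counter);
    return 0;
}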

30
Q

How does private mapping improve performance?

A

Map the file into memory, as memory access is faster than hard-drive access.

So we can just use a pointer into memory, with NO system call needed per access.

31
Q

What are threads? Describe what they are.

A

Operating systems frequently support a special kind of process called a thread to allow more convenient data sharing.

  • Threads within a single process share the same information (address space, resources), except that each has its own program counter, registers and stack space.

This makes threads more lightweight than processes.

32
Q

What are threads, and why use them?

A

Thread:
- A unit of execution within a process.
- Has its own stack, registers and program counter.
- Shares the process's memory space (heap, global data), through which threads can communicate.
- Switching between threads does not require a full process context switch (which is expensive).
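
A minimal pthreads sketch (compile with -pthread): two threads share a global counter, while each keeps a local variable on its own stack, and a mutex protects the shared update.

/* Sketch: two threads share the process's globals but have their own stacks. */
#include <pthread.h>
#include <stdio.h>

static int shared = 0;                     /* visible to all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    int local = *(int *)arg;               /* 'local' lives on this thread's stack */
    pthread_mutex_lock(&lock);
    shared += local;                       /* shared data needs synchronisation */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);       /* prints 3 */
    return 0;
}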

33
Q

Servers as threads? Explain this concept.

A

A web server is a process that provides data over the web to requesting clients.

The web server creates a thread to deal with each separate request that arrives.

34
Q

Explain process states

A

Processes move through different states, such as the 'ready' state and the 'running' state, depending on certain conditions. These conditions (transition triggers) include interrupts and waits, e.g., for I/O.

35
Q

Briefly explain the scheduling algorithm: First Come First Serve.

A

The order in which the jobs arrive is the order in which they are executed.

36
Q

Briefly explain the scheduling algorithm: Shortest job first.

A

The job with the shortest estimated execution time will run first, and so on.

Disadvantages include:
- You never know in advance what the length of the next CPU burst is going to be.

37
Q

Briefly explain the scheduling algorithm: Priority scheduling.

A

Jobs have a value attached which reflects their priority.

Disadvantages include:
- 'Fairness': low-priority jobs may starve.

38
Q

Briefly explain the scheduling algorithm: Round robin.

A

Run each job for a set amount of time (a time quantum) before moving on to the next, looping around until all jobs have been completed.

Disadvantages include:
- Some jobs DO have higher priority.
- If the time quantum is too large, we get FCFS problems.
- If the time quantum is too small, we get faster response times but lower throughput. Why? Because we spend more time on context switching.

39
Q

Why do we want a scheduler (scheduling algorithm for processes)?

A

To keep the CPU busy (CPU utilisation):
- Processes tend to exhibit CPU burst cycles.

So processes don't spend TOO much time waiting for the CPU (fairness):
- Even if the CPU is always busy, executing processes in different orders can change the average amount of time a process spends queueing for the CPU.

So that interactive processes are responsive:
- Take into account that for non-interactive processes the time spent on each job can be increased (which gives less overhead, for example by reducing context-switching time).

40
Q

List the criteria for scheduling algorithms.

(Aspects that Scheduling Algorithms need to take into account)

A

CPU utilisation:
- The percentage of time that the CPU is busy.

Throughput:
- The number of processes that are completed per unit time.

Turnaround time (for a single process):
- The length of time from when the process was submitted (arrived) and when it is completed.

Waiting time (for a single process):
- The total amount of time the process spends waiting for the CPU.

Response time (for a single process):
- The time from the submission of a request to a process until the first response is produced.

41
Q

What are the 4 situations for scheduling decisions to take place?

State which of these can be run by non-preemptive, and preemptive scheduling systems.

A

Non-preemptive & Preemptive:
- A process switches from running to waiting.
- A process terminates.

Preemptive:
- A process switches from running to ready (due to an interrupt).
- A process switches from waiting to ready state (due to completion of I/O)

42
Q

How do you calculate waiting time?
Do so for the following (arrival time, burst time):

P1 (0, 24)
P2 (1, 3)
P3 (2, 3)

and the following:
P1 (2, 24)
P2 (0, 3)
P3 (1, 3)

A

Assuming FCFS, waiting time = start time - arrival time:

P1 = 0 ms
P2 = 23 ms (24 - 1)
P3 = 25 ms (27 - 2)

P1 = 4 ms (6 - 2)
P2 = 0 ms
P3 = 2 ms (3 - 1)
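
A small sketch (my own, assuming FCFS as above) that reproduces the first set of numbers.

/* Sketch: FCFS waiting time = start time - arrival time.
 * Reproduces the first example: P1(0,24), P2(1,3), P3(2,3). */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};
    int burst[]   = {24, 3, 3};
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++) {                    /* already sorted by arrival */
        if (clock < arrival[i]) clock = arrival[i];  /* CPU idle until arrival */
        int waiting = clock - arrival[i];
        printf("P%d waits %d ms\n", i + 1, waiting); /* 0, 23, 25 */
        clock += burst[i];                           /* run the job to completion */
    }
    return 0;
}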

43
Q

Priority scheduling comes with a problem called starvation, explain this briefly and explain a fix.

A

Starvation is when higher-priority jobs prevent lower-priority jobs from ever running.

To fix this we use 'aging', i.e., increase a job's priority over time.

44
Q

In the round-robin scheduling algorithm it's possible that a process re-enters the ready queue at the same time as another process arrives in the queue. How do you solve this?

A

Use FCFS to tie-break it.

45
Q

What is a mutex?

What is a semaphore?

A

Mutex:
Ensures only one thread/process can access a shared resource or critical section at a time.

Semaphore:
Used to manage access to a specified number of resources, or to coordinate multiple threads based on available resources.

(Note: some semaphore formulations allow negative values, which tell us how many processes are waiting.)

46
Q

What is the ‘Data Race Problem’?

A

A data race is where multiple processes/threads access the same data/memory location and at least one of them modifies the data.

The code regions where a data race can occur are called critical sections, and access to them must be protected by a lock.

47
Q

What functions does a semaphore have?

A

Wait(S):
If S <= 0, the process waits; otherwise it decreases S and proceeds.

Signal(S):
S = S + 1.
(Done when a process has finished with the resource or has terminated.)
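
A hedged sketch of wait/signal using POSIX semaphores (sem_wait blocks while the count is 0, then decrements; sem_post increments); compile with -pthread.

/* Sketch: wait()/signal() as sem_wait()/sem_post(). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t slots;                       /* counts available resources */

static void *user(void *arg) {
    sem_wait(&slots);                     /* wait(S): acquire a resource */
    printf("thread %ld using a resource\n", (long)arg);
    sem_post(&slots);                     /* signal(S): release it       */
    return NULL;
}

int main(void) {
    pthread_t t[3];
    sem_init(&slots, 0, 2);               /* 2 resources available initially */
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, user, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}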

48
Q

What is this called?

wait(e_space)
wait(lock)
line++;
signal(lock)
signal(f_space)
_______________________

wait(f_space)
wait(lock)
line--;
signal(lock)
signal(e_space)

A

This is the producer/consumer (bounded-buffer) pattern: e_space counts empty slots, f_space counts full slots, and lock is a mutex protecting the critical section.

Semaphores are synchronization primitives used to control access to shared resources and manage concurrency in multithreaded or multiprocessing environments.

A mutex (short for "mutual exclusion") is a synchronization mechanism that ensures only one thread at a time can access a shared resource or critical section, preventing concurrent access and potential data corruption.

49
Q

What is a ‘CAS’ lock?

A

Compare-and-Swap (CAS) Lock:
A lock built on the atomic, uninterruptible compare-and-swap instruction; its atomicity is guaranteed by the hardware (e.g., by locking the memory bus).
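
A hedged sketch of a busy-waiting spinlock built on C11's atomic compare-and-exchange (one common way to implement a CAS lock; a real implementation would also handle back-off).

/* Sketch: a spinlock built on an atomic compare-and-swap (C11 <stdatomic.h>). */
#include <stdatomic.h>

static atomic_int lock = 0;                /* 0 = free, 1 = held */

static void cas_lock(void) {
    int expected = 0;
    /* Atomically: if lock == 0, set it to 1; otherwise spin and retry. */
    while (!atomic_compare_exchange_weak(&lock, &expected, 1))
        expected = 0;                      /* on failure, CAS wrote the current value here */
}

static void cas_unlock(void) {
    atomic_store(&lock, 0);
}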

50
Q

How does a ‘deadlock’ occur?

A

Process 1:
acquire_lock(1)
acquire_lock(2)

Process 2:
acquire_lock(2)
acquire_lock(1)

Overall explanation:
Process 1 holds lock 1 and waits for lock 2, while Process 2 holds lock 2 and waits for lock 1. Neither process can progress, as they are waiting on each other.

51
Q

Necessary conditions for a deadlock to occur.

A

Mutual exclusion:
- At least one of the held resources must be non-sharable (locked).

Hold and Wait:
- There must be at least one process that’s holding a resource and waiting on another.

No preemption:
- A resource can only be released by a process that’s holding it.

Circular wait:
- There is a circular chain of processes, each waiting for a resource held by the next.

52
Q

Explain the storage hierarchy briefly.

A

Primary storage (RAM):
- Can be referenced directly by the CPU.

Secondary storage (hard disk):
- Must be loaded into memory before being referenced.

53
Q

What is a page table?

A

A page table is used to map a logical/virtual address (as referred to in a piece of executable code) to the corresponding physical address in main memory.

54
Q

What are three different strategies for contiguous allocation?

A

First-fit:
- Allocate the first free block that’s big enough.

Best-fit:
- Find the free block that leaves the smallest leftover.

Worst-fit:
- Find the free block that leaves the biggest leftover.

55
Q

What is paging in terms of memory mapping?

(NEED TO STUDY MORE)

A

A process's logical address space is broken into fixed-size units called pages; a logical address consists of a page number (identifying a page) and an offset (the address within that page). Main memory is broken into units of the same size, called frames.

Represented as such:
Page | Frame
P3 | F2
P2 | F1
P1 | F3
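
A hedged sketch of the translation step, assuming a 4 KiB page size and using the illustrative mapping above (page 1 -> frame 3, page 2 -> frame 1, page 3 -> frame 2).

/* Sketch: logical -> physical address translation with a simple page table. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

static const uint32_t page_table[] = {0, 3, 1, 2};   /* index = page, value = frame */

static uint32_t translate(uint32_t logical) {
    uint32_t page   = logical / PAGE_SIZE;            /* which page            */
    uint32_t offset = logical % PAGE_SIZE;            /* address within page   */
    uint32_t frame  = page_table[page];               /* page-table lookup     */
    return frame * PAGE_SIZE + offset;                /* physical address      */
}

int main(void) {
    uint32_t logical = 1 * PAGE_SIZE + 100;           /* page 1, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}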

56
Q

Can you access non-mapped memory?

A

No, it will give a segmentation fault.

57
Q

What is swapping in terms of memory?

A

If there is not enough physical memory, we need to swap part of or the whole process out of the main memory to the secondary memory (e.g., disk).

58
Q

What are dynamic loading & demand paging (discuss their aspects)?

A

Processes load specific modules or components into memory only when they are required during the program’s execution, thus conserving memory resources and optimising program efficiency.

Page Tables: Each process running on the system has its own page table, which is a data structure used by the operating system to track the mapping between the process’s virtual memory addresses and the corresponding physical memory locations (frames).

Demand paging uses valid/invalid bits to show whether a mapping is valid. A bit can be invalid because the memory address itself is invalid, or because the address is valid but the page is not currently loaded into a frame.

Demand paging also has a protection bit:
Read-write (0) / read-only (1).

59
Q

What does the OS do when there is a page fault?

A

If the memory address is invalid, terminate the process.

Otherwise,
- Find free frame in memory.
- Read the page from the disk into the free frame.
- Modify the table appropriately.
- Restart the instruction.

60
Q

What happens if the main memory is full when a page fault is generated?

A

One of the pages currently held in memory needs to be replaced (swapped out).

'Reference strings' are often used to evaluate the page-replacement algorithms that decide which page to swap out.

61
Q

Trace the page replacements for the following reference string using each of these algorithms (and briefly explain them, including advantages and disadvantages), with 3 frames:
- First In First Out
- Optimal Page Replacement
- Least Recently Used

Using this reference string of pages:
7, 2, 5, 3, 6, 2, 1

A

FIFO disadvantage: the page loaded first (and therefore evicted first) may still be referenced often.

Optimal Page Replacement disadvantage: You can’t predict the future reference string (Similar to SJF).

Least Recently Used disadvantage: It’s time-consuming to keep a record of LRU.
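
A small sketch (my own) that counts FIFO page faults for the reference string above with 3 frames; the other two policies would differ only in how the victim page is chosen.

/* Sketch: page-fault count for FIFO replacement, 3 frames,
 * reference string 7, 2, 5, 3, 6, 2, 1. */
#include <stdio.h>

int main(void) {
    int refs[] = {7, 2, 5, 3, 6, 2, 1};
    int frames[3] = {-1, -1, -1};
    int nrefs = 7, next = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == refs[i]) hit = 1;
        if (!hit) {
            frames[next] = refs[i];        /* evict the oldest-loaded page */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);  /* 7 for this string */
    return 0;
}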

62
Q

What assumption do we make with LRU (Least Recently Used) page replacement?

A

Future memory references will resemble recent ones.

E.g., the pages most recently used will be used again in the near future.

63
Q

Why is there a minimum number of frames that must be allocated to a process?

A
  • For a process to run, it needs a minimum amount of memory to store its code, data and stack.
  • Too few frames means frequent page faults, due to constant swapping.
  • Prevent deadlocks
64
Q

What are some methods to distribute frames among n processes, in a multitasking scenario?

A

Equal allocation:
- Gives each process an equal number of physical pages.
- This is NOT fair, as processes come in different sizes and have different demands.

Proportional allocation:
- Gives each process a number of physical pages proportional to its size.

Local vs Global replacement schemes:
Local allocation is when each process manages its own memory allocation independently, while global allocation involves the operating system making system-wide decisions regarding memory allocation and sharing among processes.

  • Maybe do it proportionally, but adjust page allocation according to page faults. (E.g., a process which keeps getting page faults may take some page slots from a process that is getting none.)
65
Q

What is thrashing?

A

Thrashing is when a process is not allocated sufficient frames.

A process is thrashing when it spends more time paging than executing.

To avoid this we find the size of the working set, which is the number of pages a process needs in order to execute without causing page faults for a reasonable amount of time.

66
Q

Give a brief explanation of I/O hardware.

A

An I/O device is linked to a machine via a port.
- The link is usually a set of wires called a bus.
- At each end of the link there is a device controller (basically a small processor).
- A signal on a bus is basically a temporal sequence of electrical voltages on each wire.
- The port is a memory address that can address registers inside the device controller.

67
Q

L23 (NEED TO STUDY)

A
68
Q

What is Direct Memory Access (DMA)?

A

Allows external devices or hardware components to directly access the system’s memory without involving the CPU, thereby improving data transfer efficiency and reducing CPU overhead.

69
Q

Explain ‘Device Control Registers’, including how the CPU gives commands to device via the bus, and how the CPU can specify which device it’s giving its commands to.

A

The device controller has a number of registers.
- CPU can write and read these.
- Some registers hold data, and some hold control signals.

Each different port in the system has an address range.
- The CPU can issue I/O instructions to particular addresses.
- Communication can either be 1 byte at a time, 4 bytes at a time, or via ‘direct memory access’

70
Q

What is polling?

A

The CPU cycles waiting for the busy bit to be clear, which is known as ‘polling’ (busy waiting)
- Which can be a waste of CPU time.
- If the CPU is polling for input from devices too, then all devices have to be monitored.

An alternative to polling is interrupts.

71
Q

What are interrupts in terms of I/O systems?

A

The CPU hardware contains a wire called the ‘interrupt request line’.
- The CPU senses this line after every instruction it executes.
- If a signal is detected, the CPU saves its current location and jumps to the address in the ‘interrupt vector’ indicated by the signal.

If devices can raise interrupts to communicate with the CPU, the CPU doesn’t need to poll.

72
Q

How do we deal with different degrees of urgency with I/O interrupts?

A

Maskable / non-maskable interrupts.

Maskable:
Device-generated interrupts and traps. These can be disabled (masked) when the OS is in the middle of processing a more urgent interrupt.

Non-maskable:
For signalling critical errors; these are never disabled.

73
Q

What measures are taken to increase I/O performance?

A
  • Reduce the number of context switches (e.g., by using threads).
  • Reduce the number of data copies in memory.
  • Reduce the number of interrupts (e.g., by using polling).
  • Increase concurrency (e.g., DMA).
  • Move data processing into hardware (e.g., a network interface card).
  • Balance CPU, memory system, bus and I/O performance.
74
Q

What type of accesses can a user have to files?

A

Read
Write
Execute
Delete
List

75
Q

Users are identified as classes for each file (In Linux/Unix), what are these groups?

A

Owner:
User who created the file.

Group:
A set of users who are sharing the file; a group is defined by the system.

Others:
All other users in the system.

(Note: when the file is created, it has an owner id and a group id to identify the users of the file).

76
Q

What are permission bits?

A

For each class of users (owner, group, others) there are three bits, rwx, representing the permissions:
- r: readable
- w: writable
- x: executable

The order of the bit groups is as follows:
owner-group-others

(Note: '-' represents a permission that is not granted.)
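
A hedged sketch of setting these bits from C with chmod(); the file name "example.txt" and the mode 0754 (rwx r-x r--) are only illustrative.

/* Sketch: permission bits are three rwx triples (owner, group, others).
 * 0754 octal = rwx r-x r-- . */
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "example.txt";             /* hypothetical file */
    mode_t mode = S_IRWXU                         /* owner: rwx        */
                | S_IRGRP | S_IXGRP               /* group: r-x        */
                | S_IROTH;                        /* others: r--       */
    if (chmod(path, mode) != 0)
        perror("chmod");
    else
        printf("set %s to rwxr-xr-- (0754)\n", path);
    return 0;
}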

77
Q

There are also advanced file attributes (bits), list and explain these.

A

Setuid:
If this bit is set, then regardless of who runs the program, the process runs with the privileges of the file's owner.

Setgid:
If this bit is set, then regardless of who runs the program, the process runs with the privileges of the file's group.

Sticky:
A directory with this bit restricts deletion of files within, so that only the owner and superuser can delete those files.

78
Q

List and explain the access control models

A

Discretionary Access Control (DAC): DAC restricts object access based on subject identity and allows subjects to grant permissions to others. It’s flexible but can be less secure.

Mandatory Access Control (MAC): MAC restricts access based on object sensitivity and subject clearance, providing strong security but potentially costly and complex.

Access Control List (ACL): ACL is a list of permissions for objects (e.g., files), specifying who can access them and what actions are allowed, offering fine-grained control.

Role-Based Access Control (RBAC): RBAC groups users into roles with similar privileges, simplifying access control management for large user groups and supporting both DAC and MAC.