OSC Flashcards

1
Q

Can a user program write to a specific location in physical memory?

A

No. User programs work with virtual addresses; the OS and MMU prevent direct writes to physical memory because of the significant risk to system security and to other processes' memory.

2
Q

Can a user find out where a variable lives in physical memory?

A

No; modern operating systems and programming languages use virtual memory addresses as an abstraction layer.

3
Q

How can a system call allow a user process to run kernel-space code?

A

A system call acts as a controlled bridge: when a user process needs to perform a privileged task or access resources managed by the OS, it traps into kernel mode, the kernel executes the requested code on its behalf, and control then returns to user mode.

4
Q

Why use C for OS implementation?

A

Adaptability, functionality, efficiency, portability, and predictability.

5
Q

What would be the pros/cons of a small kernel with a limited collection of system calls?

A

Pros: reduced complexity; resource efficiency (fewer system calls to implement and maintain); less overhead.

Cons: limited access to privileged resources and privileged tasks; development constraints; lack of functionality.

6
Q

What are the 5 process states?

A

New = the process is being created and initialized by the operating system
Ready = waiting for the CPU to become available
Running = being executed by the CPU
Blocked = waiting for I/O or an event
Terminated = no longer executable

7
Q

Why might you run fork() without a subsequent exec()?

A

If there is a need for parallel execution of different tasks within the same program, e.g. a server forking worker processes that run the same code.

8
Q

Do you always need to call exit() to end a process?

A

No; when a process reaches the end of its execution (e.g. returns from main), it is terminated by the OS.

9
Q

Why does a PCB contain data about register contents?

A

For three main reasons: interrupt handling (when an interrupt occurs, the state of the process must be saved so it can later be resumed); process scheduling decisions; and context switching (the OS must restore the saved register state of each process when it runs again).

10
Q

Why might it be useful to retain a PCB for a terminated process?

A

Parental notification (allowing the parent to learn that it has finished and collect its exit status), and process accounting (tracking metrics, etc.).

11
Q

Why is round robin said to favour CPU-bound processes over I/O-bound processes?

A

Because round robin gives each process a fixed time slice: CPU-bound processes usually use the entire slice (or more) for their task, whereas I/O-bound processes often give up the CPU early to wait for an event, so they receive less of the CPU overall.

12
Q

Using the non-preemptive shortest job first scheduler, does the shortest job run on the CPU until it is completed?

A

Yes: non-preemptive means no interruption by the scheduler until the job finishes its execution or enters a waiting state.

13
Q

What is a non-preemptive scheduler?

A

Processes are only interrupted voluntarily (e.g. I/O waiting or system calls)

14
Q

What is a preemptive scheduler?

A

Processes can be interrupted forcefully or voluntarily, typically driven by interrupts from a system clock at regular intervals.

15
Q

What is defined as average response time?

A

The average of the time taken for all processes to start.

16
Q

What is defined as turnaround time?

A

The average of the time taken for all processes to finish.

17
Q

Explain the First Come First Served scheduling algorithm

A

Non-preemptive scheduling that operates as a strict queue: processes are scheduled in the same order they were added to the queue. It favours long processes over short ones and can compromise resource utilisation.

18
Q

Explain Shortest job first

A

Non-preemptive scheduling that runs the job with the shortest processing time first, using a known estimate of the processing time. Processing times have to be known beforehand, long jobs may starve, and predictability may be compromised.

19
Q

Explain Round Robin

A

Round robin is a preemptive version of FCFS that forces context switches at periodic intervals (time slices); processes run in the order they were added to the queue. Its advantage is improved response time; its disadvantages are increased context switching (and thus overhead) and favouring CPU-bound processes over I/O-bound processes.

20
Q

Explain the priority queues scheduling algorithm

A

A preemptive algorithm that schedules processes by priority; round robin is used between processes of the same priority.

21
Q

What is the main difference between threads and processes?

A

Threads share memory and processes don't. This gives threads lower resource overhead, but it makes them much less fault-tolerant, since synchronization has to be handled very carefully.

22
Q

Why use threads?

A

Processes often contain multiple tasks that block (e.g. waiting for I/O), and certain activities should be carried out in parallel/concurrently. Multiple related activities also operate on the same resources, so these resources should be shared and accessible.

23
Q

How are user threads managed in user space?

A

All thread management (creating, destroying, scheduling, thread control block manipulation) is done using a user-level library.

24
Q

What are the advantages and disadvantages of user threads?

A

Advantages: no kernel context switch required since threads are managed in user space; full control over the thread scheduler; OS independent.

Disadvantages: no true parallelism; no clock interrupts, so threads are non-preemptive; a page fault blocks the whole process.

25
Q

What are the advantages and disadvantages of kernel threads?

A

Advantages: true parallelism; no run-time system needed.

Disadvantages: frequent context switches, resulting in lower performance.

(Windows and Linux take this approach.)

26
Q

If the threads in a process share the same memory, why do they have independent stacks?

A

Because each thread needs its own function-call information: an independent stack provides the isolation and control that lets each thread manage its own function calls, local variables, and errors if it encounters one.

27
Q

Is it always necessary to call pthread_exit when ending a thread?

A

No, it is not always necessary. A thread can terminate in other ways, such as returning from its start function (e.g. return NULL), and returning from main ends the whole process together with its threads.

28
Q

What is the minimum number of threads a process can have?

A

One: the main thread, which is created automatically and represents the primary flow of control within the program.

29
Q

Can user threads make good use of concurrent hardware?

A

User threads can make use of concurrent hardware to some extent, but their ability to fully exploit the benefits of multiple processor cores is limited compared to kernel threads.

30
Q

Why is it efficient to schedule threads that communicate with each other at the same time?

A

Because they share the same memory and access the same resources; with good synchronisation it is better to have them executing at the same time. This gives faster communication and can also lead to better overall throughput.

31
Q

What does throughput mean?

A

Throughput is the amount of work a system completes per unit of time; in essence, it is an indicator of the system's efficiency in executing workloads.

32
Q

How does the CFS avoid starvation?

A

CFS (the Completely Fair Scheduler) uses time slicing so that every process gets a fair share of CPU time, and it dynamically adjusts the priority of processes based on their CPU usage history.

33
Q

If a thread is interrupted by I/O where is the process state saved?

A

In the PCB (process control block).

34
Q

Why might the outcome of programs be unpredictable?

A

The outcome of executing might depend on the order in which code gets to run on the CPU: shared data can become inconsistent, and a thread can be interrupted part-way through doing something.

35
Q

What is a race condition?

A

A race condition typically occurs when multiple threads access shared data and the result depends on the order in which the instructions are run.

36
Q

What is meant by critical section?

A

A critical section is a section of code that can only be executed by one thread at a time.

37
Q

What is meant by mutual exclusion?

A

Mutual exclusion is a concept involving techniques or mechanisms (e.g. locks) that ensure only one thread or process at a time can access the critical section.

38
Q

What is a deadlock?

A

A set of threads is deadlocked if each thread in the set is waiting for an event that only another thread in the set can cause.

39
Q

The code x != y doesn’t modify anything. Is it certain to occur atomically?

A

No. Even though it doesn't modify anything, the comparison compiles to several instructions (load x, load y, compare); if x and y are shared variables, another thread may modify them between those loads, so without a lock the result can be inconsistent (a race condition).

40
Q

Can race conditions or deadlocks occur in practice on a machine with a single hardware thread?

A

Yes. With preemptive multitasking, multiple software threads are interleaved on the single hardware thread, so a thread can still be interrupted part-way through a critical section; race conditions and deadlocks can therefore still occur in practice.

41
Q

Can two threads running the same function deadlock against each other?

A

Yes, deadlocks occur when each thread holds a resource and waits for another resource acquired by a different thread, creating a circular dependency.

42
Q

What is meant by atomic in processes/threads?

A

Uninterruptible: the operation appears to execute as a single, indivisible step.

43
Q

Would you need to use mutexes / a critical region to protect code that is
only reading from variables?

A

Not if the data is truly read-only. But there are situations where you might still need synchronization, especially if writes can happen concurrently with the reads; in such cases a read-write lock or another synchronization mechanism might be more appropriate.

44
Q

What will happen if we call wait() and the internal counter is not positive?

A

It will block the calling process: its state is changed from running to blocked, and control is transferred to the process scheduler.

45
Q

Is a binary semaphore the same thing as a mutex?

A

Not exactly, due to differences in ownership and intended use: a mutex has an owner (only the thread that locked it should unlock it) and is meant for mutual exclusion, while a binary semaphore has no owner and can also be used for signalling between threads.

46
Q

What is priority inversion?

A

Priority inversion is a situation in concurrent systems where a higher-priority task is delayed by a lower-priority task that holds a resource the higher-priority task needs. The inversion becomes worse when a medium-priority task preempts the lower-priority task, further delaying the release of the resource.

47
Q

Briefly explain what is a context switch?

A

A context switch is the process of saving and restoring the state of a CPU for one task and switching to another task.

48
Q

List two examples where it is important for the operating system to take the architecture of the
hardware into account.

A

Device Drivers and Memory Management

49
Q

What are the OS's responsibilities in memory management?

A

Allocate/Deallocate memory when requested by processes and simulating infinite memory
Keep track of used/unused memory
Move data from memory (RAM) to disk (SSD/HDD) and vice versa

50
Q

What is the difference between contiguous and non-contiguous memory management models?

A

Contiguous allocates memory in one single block without any gaps

Non-contiguous allocates memory anywhere in the physical memory / in multiple blocks or segments

51
Q

How is partitioning applied to mono-programming?

A

One single partition for user processes meaning only one single user process is in memory/executed at any point in time. A fixed region of memory is allocated to the OS/kernel and the remaining memory is reserved for a single process. This single process has direct access to physical memory

52
Q

How is partitioning applied to multi-programming?

A

It can be used with fixed equal-sized partitions, fixed non-equal-sized partitions, or dynamic partitions.

53
Q

What is a disadvantage of mono-programming?

A

Low utilisation of hardware resources (CPU, I/O devices, etc.)
Since a process has direct access to the physical memory, it may have access to OS memory posing security risks

54
Q

Could we simulate multi-programming in mono-programming? how?

A

Yes, by swapping: a process is swapped out to disk and a new one is loaded in its place (though context switches become time-consuming).

55
Q

Advantages and Disadvantages of fixed sized partitions?

A

Advantages: no overhead for storing partition sizes; facilitates multi-programming.

Disadvantages: internal fragmentation (a partition may be unnecessarily large); limited flexibility, as the partitions are fixed in size.

56
Q

The compiler allocates memory addresses
What is the issue?

A

The compiler does not know where in physical memory the process will be loaded, so the addresses it generates (which assume the program starts at address 0) must be relocated at load time or runtime, and memory must be protected against out-of-partition accesses.

57
Q

Explain relocation and protection

A

Relocation adjusts addresses when a program is loaded into memory; it is done with physical address = base address + offset. Protection ensures that certain areas of memory have restricted access.

58
Q

Relocation at load time vs Relocation at runtime

A

Load time — advantages: efficiency during execution, and early error detection (errors surface before runtime). Disadvantages: slows down the loading of a process; doesn't account for swapping.

Runtime — advantages: adaptability and reduced load times. Disadvantages: potential for delayed errors.

59
Q

Which two special-purpose registers are maintained in the CPU (by the MMU)?

A

The base register, which stores the start address of a memory block (partition), and the bound register, which stores the size of that memory block (partition).

60
Q

How would dynamic partitioning work? And what are the reasons for, and issues with, using swapping?

A

A variable number of partitions: each process is allocated the exact amount of contiguous memory it needs, which reduces internal fragmentation.

However, we need to consider swapping. Swapping may happen because a process only runs occasionally, because the total amount of memory required exceeds the available memory, because memory requirements change, or because there are more processes than partitions (assuming fixed partitions). A swapped-out process can be reloaded into a different memory location (its base register changes).

Swapping introduces another problem: external fragmentation. A new process may not use an entire freed block, leaving a small unused hole, or it may be too large for the block.

61
Q

Why is the gap between the stack and the heap a problem? and what is the solution?

A

Although the gap is placed there so the stack and heap can grow without conflicting with each other, if they don't grow that memory is wasted, causing fragmentation.

The solution is segmentation: split the logical address space into separate contiguous segments (code, data, heap, stack), each with a base and bound pair, accessed using a segment table.

62
Q

Explain how first fit finds the space for a new process

A

It scans the list until it finds the first hole the new process fits in. If the hole is exactly the right size, all of that space is allocated; otherwise the hole is split: the first entry is allocated with the requested size and marked used, and the second entry keeps the remaining size and is set to free.

63
Q

Explain how next fit finds the space for a new process

A

It keeps track of where it got to, and then it restarts its search from where it stopped last time.

64
Q

Explain how best fit finds the space for a new process

A

Best fit looks through the entire linked list and finds the smallest hole big enough to satisfy the request. It is slower than first fit and still results in small leftover holes, which wastes memory.

65
Q

Explain how worst fit finds the space for a new process

A

It chooses the largest possible block (hole) and splits it just like first fit.

66
Q

Explain Paging briefly

A

Paging combines ideas from fixed partitioning and code relocation into a non-contiguous management scheme: memory is split into much smaller fixed-size blocks, and a process's memory is allocated in one or more of these blocks (pages), which need not be adjacent.

67
Q

Benefits of Paging

A

Internal fragmentation reduced to the last block only. and No external fragmentation

68
Q

What is a page?

A

A page is a small block of contiguous memory in the logical address space, i.e. as seen by the process

69
Q

What is a frame?

A

A frame is a small contiguous block in physical memory; frames are the slots into which pages are placed.

70
Q

What do pages and frames have in common?

A

Pages and frames (commonly) have the same size:
The size is usually a power of 2
Sizes range between 512 bytes and 1 GB

71
Q

What are the main benefits of paging?

A

Because of virtual memory, not all pages have to be loaded in memory at the same time; loading all of a program's pages at once would be wasteful.

72
Q

When does a page fault happen? What is it?

A

A page fault is generated when the processor tries to access a page that is not in memory. It is handled as an interrupt: the process goes to the blocked state, an I/O operation is started to bring the missing page into main memory, a context switch may happen, and an interrupt later signals that the I/O operation is complete.

73
Q

Benefits of Virtual Memory

A

It improves CPU utilisation and makes more efficient use of memory, which means less internal fragmentation and no external fragmentation.

74
Q

How is memory organised with multi-level page tables?

A

The root page table is always maintained in physical memory and page tables themselves are maintained in virtual memory due to their size.

75
Q

What are translation lookaside buffers (TLBs)?

A

(Usually) located inside the memory management unit, TLBs cache the most frequently used page table entries. Their entries can be searched in parallel.

76
Q

Explain Demand paging

A

Demand paging is a memory management technique where the operating system brings pages into memory only when they are needed, so even the first instruction immediately causes a page fault.

77
Q

Explain pre-paging

A

When the process is started, all pages expected to be used (i.e. the working set) could be brought into memory at once, drastically reducing the page fault rate.

78
Q

What is the formula to calculate effective access time?

A

The effective access time is given by:

Ta = effective access time
p = page fault rate
pft = page fault time (time taken to service a page fault)
ma = memory access time (time taken to access memory when no page fault occurs)

Ta = (1 − p) × ma + p × pft

79
Q

What are page replacement concepts

A

The OS must choose which page to unload when a new page is loaded and all frames are occupied.

It chooses based on two factors: when the page was last used, and whether it has been modified.

80
Q

What is thrashing?

A

Thrashing happens when there is excessive paging activity (pages are constantly being swapped in and out) and system performance degrades severely.

81
Q

Explain the optimal page replacement

A

It replaces the page that will not be used for the longest time in the future. It is not really used in practice, however, because predicting future memory accesses is not feasible.

82
Q

Explain the FIFO Page Replacement

A

The FIFO algorithm keeps a linked list: new pages are added to the end of the list, and the oldest page (at the head of the list) is removed when a page fault occurs.

Easy to understand, but heavily used pages are just as likely to be evicted as lightly used ones.

83
Q

Explain the Second Chance FIFO Page replacement

A

If the page at the front of the list has not been referenced, it is removed; if its reference bit is set, the page is moved to the end of the list and its reference bit is cleared.

Costly to implement because the list is constantly changing. It can be improved by implementing a circular list (the clock algorithm), but it is still slow for long lists.

84
Q

Explain the Not Recently Used (NRU) Page replacement

A

The referenced and modified bits of each page are kept in the page table; reference bits start at 0 and are reset periodically (by a system clock tick or while scanning the list).

This gives four classes:
class 0: not referenced, not modified
class 1: not referenced, modified
class 2: referenced, not modified
class 3: referenced, modified

On every page fault, the page table entries are inspected to find a class 0 page to remove. If none is found, the list is scanned again for a class 1 page, clearing the reference bit of every page visited; if no class 1 page is found either, the scan for class 0 is repeated (pages that were in classes 2 and 3 will now appear in classes 0 and 1, since their reference bits were cleared).

Good performance, easy to understand and implement.

85
Q

Explain the Least Recently Used (LRU) page replacement

A

It removes the page that has not been used for the longest time.

The OS keeps track of when each page was last used; this information can be found in the page table.

Costly to implement, as it needs a list of pages sorted from least to most recently used.

86
Q

How to calculate context switch time

A

Presuming both context switches and time slices take 1 ms each, the time before the last of 100 processes first runs is:

99 × (1 + 1) = 198 ms

87
Q

How to calculate CPU utilisation (multi-programming)?

A

n = number of processes in memory
p = fraction of time each process spends waiting for I/O

CPU utilisation = 1 − p^n

88
Q

How to calculate the total number of addresses for an n-bit machine?

A

2^n: an n-bit address can take 2^n distinct values, so there are 2^n addressable locations.

89
Q

How to calculate a TLB hit? And a miss?

A

Hit = TLB lookup time + memory access time

Miss = TLB lookup time + (n + 1) × memory access time, where n is the number of page table levels (n accesses to walk the page table, plus one for the data itself)

90
Q

How do you calculate page size?

A

Page size = total memory size / total number of pages

91
Q

How to calculate in which page number a virtual address falls?

A

Page number = virtual address / page size (integer division)

92
Q

How to calculate the offset?

A

Offset = virtual address % page size

93
Q

What are the 4 conditions that must hold for deadlocks to occur (known as the Coffman conditions)?

A

Mutual Exclusion -> a resource can be assigned to at most one process at a time
Hold and Wait -> a resource can be held whilst requesting new resources
No Preemption -> resources cannot be forcefully taken away from a process
Circular wait -> there is a circular chain of two or more processes, each waiting for a resource held by the next one in the chain

94
Q

What are the three conditions for a solution to the critical section problem?

A

mutual exclusion: only one thread can enter the critical section at a time
progress: a thread waiting to enter the critical section must eventually enter (threads not trying to enter must not prevent others from entering)
fairness/bounded waiting: waiting times are fairly distributed