DAT320 Flashcards

1
Q

What is an operating system?

A

An operating system is the software layer that manages a computer’s resources for its users and their applications.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

What happens when you run a program?

A

CPU:

  • Fetches instructions from the executable file
  • Loads data from the executable file
  • Decodes and executes the instructions
  • Stores results to memory
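The cycle above can be sketched as a toy interpreter. This is a minimal illustration only: the opcode names (`LOAD`, `ADD`, `STORE`) and the single-accumulator design are invented for the example, not a real instruction set.

```python
# Toy fetch-decode-execute loop (hypothetical instruction set).
def run(program, memory):
    pc, acc = 0, 0  # program counter and accumulator register
    while pc < len(program):
        opcode, operand = program[pc]   # fetch + decode
        if opcode == "LOAD":            # load data into the register
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":         # store result back to memory
            memory[operand] = acc
        pc += 1                         # move to the next instruction
    return memory

mem = {0: 2, 1: 3, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)  # mem[2] becomes 5
```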
3
Q

What are the jobs of an OS?

A
  • Allocates memory
  • Manages memory
  • Manages the CPU
  • Provides the process abstraction
  • Manages devices
  • Basically, it manages the hardware and provides the process abstraction.
  • The OS also lets you communicate with the computer without knowing how to speak the computer’s language.
4
Q

What are the design goals of an operating system?

A
  • Convenience
  • Efficiency of usage of CPU, memory, etc.

5
Q

Explain short: What is the goal of process abstraction?

A

The goal of process abstraction is to run multiple processes concurrently by time-sharing the CPU. We then need a mechanism (context switch) and a policy (scheduler). This creates the illusion that each program has its own isolated machine.

6
Q

What is virtualization of the CPU?

A
  • A single CPU acts as if it were multiple separate CPUs
  • Enables users to run many concurrent programs
7
Q

What can a program read and update when it is running?

A
  • Memory (data that the running program reads/writes is in memory)
  • Registers (because many instructions read/update the registers)
8
Q

What is I/O?

A

I/O stands for input/output and refers to the devices connected to a computer, such as a mouse or a keyboard.

9
Q

Which states can a process be in? Explain them.

Give the possible transitions between the three states.

A
  • Running: the process is running on a processor, executing instructions
  • Ready: the process is ready to run, but the OS has chosen not to run it at this given moment
  • Blocked: the process has performed some operation that makes it not ready to run until some other event takes place.
    For example: when a process initiates an I/O request to disk or network, it becomes blocked and some other process can use the processor.

Running to ready: OS preempts the running process.

Running to blocked: process makes a system call to initiate I/O.

Ready to running: OS schedules the process to run next.

Blocked to ready: the I/O requested by the process is done.

Additional states beyond the main three:
  • New
  • Dead
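The three-state model and its legal transitions can be sketched as a tiny table-driven state machine. The transition table matches the card above; the function name is just illustrative.

```python
# Minimal sketch of the three-state process model.
TRANSITIONS = {
    ("Running", "Ready"):   "OS preempts the running process",
    ("Running", "Blocked"): "process makes a system call to initiate I/O",
    ("Ready", "Running"):   "OS schedules the process to run next",
    ("Blocked", "Ready"):   "I/O requested by the process is done",
}

def transition(state, new_state):
    # Reject transitions that are not in the model (e.g. Blocked -> Running).
    if (state, new_state) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "Ready"
s = transition(s, "Running")   # scheduled
s = transition(s, "Blocked")   # initiates I/O
s = transition(s, "Ready")     # I/O completes
```

Note that a blocked process can never go directly back to running: it must first become ready and be scheduled again.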
10
Q

In which data structure does the operating system keep the program counter, stack pointer, and other information about a process?

A

PCB (process control block)

11
Q

What does a PCB (process control block) contain?

A
  • Process identifier
  • Process state
  • Pointers to other related processes (parent)
  • CPU context of the process (saved when the process is suspended)
  • Pointers to memory locations
  • Pointers to open files
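The fields above can be sketched as a small data structure. The field names here are illustrative, not taken from any particular kernel.

```python
# Sketch of a process control block (PCB); field names are invented
# for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    pid: int                        # process identifier
    state: str                      # Ready / Running / Blocked
    parent_pid: Optional[int]       # pointer to related (parent) process
    cpu_context: dict = field(default_factory=dict)   # saved registers, PC, SP
    memory_base: int = 0            # pointer to memory location
    open_files: list = field(default_factory=list)    # pointers to open files

pcb = PCB(pid=42, state="Ready", parent_pid=1)
```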
12
Q

What does API stand for?

A

Application programming interface (API)

13
Q

Why do we have an API in the OS?

A
  • The API contains sets of system calls, including create, wait, and destroy.
  • A system call is a function call into OS code that runs at a higher privilege level of the CPU.
  • There are usually interfaces to get status information about a process as well, such as how long it has run for, or what state it is in.
  • It makes it possible for user programs to request services from the operating system.

14
Q

What does fork() do?

A
  • Creates a new child process
    • All processes are created by forking from a parent
    • The init process is ancestor of all processes
15
Q

What does exec() do?

A

In computing, exec() is a functionality of an operating system that runs an executable file in the context of an already existing process, replacing the previous executable.

16
Q

What does exit() do?

A

Exit() terminates a process

17
Q

What does wait() do?

A

Wait() causes a parent to block until child terminates/exits or a signal is received.

18
Q

What happens during fork()?

A
  • A new process is created by making a copy of the parent’s memory image
  • The new process is added to the OS process list and scheduled
  • Parent and child start execution just after fork (with different return values)
  • Parent and child execute and modify their memory data independently
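The "different return values" point can be demonstrated directly with Python's `os.fork()` (POSIX only); the child exits with a distinctive status and the parent reads it back with `waitpid()`. The function name and exit code 7 are arbitrary choices for the example.

```python
# POSIX-only sketch: parent and child resume just after fork() with
# different return values (the child sees 0, the parent sees the child's pid).
import os

def fork_demo():
    pid = os.fork()
    if pid == 0:                 # child: a copy of the parent's memory image
        os._exit(7)              # terminate with a distinctive exit status
    # parent: block until the child terminates, then read its status
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

Calling `fork_demo()` should return 7 in the parent, showing that the two processes took different paths after the same `fork()` call.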
19
Q

What happens during wait()?

A
  • A terminated process exists as a zombie.
  • The parent process is then supposed to execute the wait() system call to read the dead process’s exit status and other information.
  • This allows the parent process to get information from the dead process. After wait() is called, the zombie process is completely removed from memory.
  • When a parent calls wait(), the zombie child is cleaned up or “reaped”.
  • wait() blocks in the parent until the child terminates (there are also non-blocking ways to invoke wait()).
  • What if the parent terminates before the child? The init process adopts orphans and reaps them.
20
Q

What happens during exec()?

A
  • A process can run exec() to load another executable into its memory image
  • So a child can run a different program from its parent
  • There are variants of exec() to pass command-line arguments to the new executable
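The fork-then-exec pattern can be sketched with Python's `os.execv()` (POSIX only). Here the child replaces itself with a fresh interpreter that exits with status 5; the executable and exit code are arbitrary choices for the example.

```python
# Sketch of fork() + exec(): the child replaces its memory image with a
# new executable, using an exec() variant that passes command-line arguments.
import os
import sys

def exec_demo():
    pid = os.fork()
    if pid == 0:
        # Replaces this process image; code after execv never runs on success.
        os.execv(sys.executable,
                 [sys.executable, "-c", "import sys; sys.exit(5)"])
        os._exit(1)   # only reached if exec itself fails
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```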
21
Q

Which two modes does the OS have for running an executable?

A

User and kernel mode

22
Q

What is kernel mode?

A
  • It enables the OS to run system calls
  • Kernel does not trust user stack
  • Kernel does not trust user provided addresses

In Kernel mode, the executing code has complete and unrestricted access to the underlying hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system. Crashes in kernel mode are catastrophic; they will halt the entire PC.

23
Q

When is the trap function called?

A
  • System call (program needs an OS service)
  • Program fault (program does something illegal, e.g. accesses memory it doesn’t have access to)
  • Interrupt, when an external device needs the attention of the OS (e.g. a network packet has arrived on the network card)
24
Q

What is a trap function?

A

It is a software interrupt, generated by the user program or by an error, that transfers control to the operating system so it can perform a system call or another operation.

25
Q

What are the two types of scheduler?

A

Preemptive and non-preemptive

26
Q

What is the difference between preemptive and non-preemptive schedulers?

A

Non-preemptive schedulers are polite:

- Switch only if the process blocked or terminated
- Once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.

Preemptive (non-cooperative) schedulers can switch even when the process is ready to continue:
- The CPU generates a periodic timer interrupt, and resources are allocated to a process only for a limited time
- After servicing the interrupt, the OS checks if the process has run for too long

27
Q

What is a context switch?

A

A context switch is when the OS suspends the currently running process and resumes another, for example after a timer interrupt or when a process switches from user to kernel mode.
The OS saves the context (PC, registers, stack pointer) of the process on its kernel stack, and restores the saved context of the next process.

28
Q

What is SJF (shortest-job-first)?

A

It runs the shortest available job first; it is non-preemptive.

29
Q

What is FIFO (first-in first-out)?

A

It runs processes in order of arrival: the process with the earliest arrival time runs first.

30
Q

What is RR (round-robin), and how does it work?

A

A scheduling algorithm.

It runs each process for a given time quantum, e.g. 2 ms, before switching to the next process.
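Round-robin can be sketched with a queue: run the process at the head for one quantum, and if it is not finished, put the remainder at the back. The process names and burst lengths are invented for the example.

```python
# Sketch of round-robin scheduling with a fixed time quantum.
from collections import deque

def round_robin(bursts, quantum=2):
    queue = deque(bursts.items())   # (name, remaining burst) pairs
    order = []                      # which process runs in each slot
    while queue:
        name, left = queue.popleft()
        order.append(name)
        if left > quantum:          # not done: requeue the remainder
            queue.append((name, left - quantum))
    return order

# A needs 4 ms, B needs 2 ms, C needs 3 ms, quantum of 2 ms:
order = round_robin({"A": 4, "B": 2, "C": 3})   # A B C A C
```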

31
Q

What is a CPU burst?

A

The CPU time used by a process in a continuous stretch

32
Q

Why do we want to have a scheduler?

A
  • Maximize utilization (fraction of time the CPU is used)
  • Minimize average turnaround time (time from process arrival to completion)
  • Minimize average response time (time from process arrival to first scheduling)
  • Fairness: all processes must be treated equally
  • Minimize overhead: run a process long enough to amortize the cost of a context switch (~1 microsecond)
33
Q

What is stride scheduling?

A
  • Every process gets a number of tickets
    stride = a large number (e.g. 1000 or 10000) / tickets
    pass += stride
  • The OS runs the process with the lowest pass value, then advances its pass by its stride
  • If the pass values of multiple processes are equal, the one with the lowest stride value goes first.
  • If both the pass value AND the stride value are the same, the process that entered the scheduling queue first gets to run first.
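The stride mechanism can be sketched in a few lines; with tickets 100 vs 50, process A should get twice as many time slices as B. The constant 10,000 and the tie-breaking by name are arbitrary choices for the example.

```python
# Sketch of stride scheduling: stride = constant / tickets; always run the
# process with the lowest pass value, then advance its pass by its stride.
STRIDE_CONSTANT = 10_000

def stride_schedule(tickets, slices):
    stride = {p: STRIDE_CONSTANT // t for p, t in tickets.items()}
    passes = {p: 0 for p in tickets}
    order = []
    for _ in range(slices):
        # Ties broken by name here, just to make the run deterministic.
        p = min(passes, key=lambda q: (passes[q], q))
        order.append(p)
        passes[p] += stride[p]
    return order

# A has twice the tickets of B, so over 6 slices A runs 4 times, B twice.
order = stride_schedule({"A": 100, "B": 50}, slices=6)
```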
34
Q

What is the formula on turnaround time?

A

T_turnaround = T_completion - T_arrival

35
Q

What is turnaround time?

A

Turnaround time is a performance metric for scheduling policies. Often, the better this metric is, the less fair the scheduler is.

Turnaround time is the time from when the process arrives in the queue until it is done/finished.

36
Q

What is the formula on response time?

A

T_response = T_firstrun - T_arrival
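Both metrics are simple differences; the trace values below (arrival 0, first run 10, completion 30) are invented for the example.

```python
# The two scheduling metrics computed for a tiny invented trace.
def turnaround(completion, arrival):
    return completion - arrival   # T_turnaround = T_completion - T_arrival

def response(first_run, arrival):
    return first_run - arrival    # T_response = T_firstrun - T_arrival

# A process arrives at t=0, is first scheduled at t=10, finishes at t=30:
t_turn = turnaround(30, 0)   # 30
t_resp = response(10, 0)     # 10
```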

37
Q

Positives and negatives of SJF

A
  • Optimal for turnaround time
  • Can get stuck behind a large job (convoy effect)
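The convoy effect can be shown numerically: with all jobs arriving at t=0, running a long job first inflates the average turnaround, while sorting by burst length (SJF) minimizes it. The burst lengths are invented for the example.

```python
# Sketch of the convoy effect: average turnaround for jobs that all
# arrive at t=0, run in the given order vs. shortest-job-first.
def avg_turnaround(bursts):
    t, total = 0, 0
    for b in bursts:      # with arrival at 0, turnaround = completion time
        t += b
        total += t
    return total / len(bursts)

fifo = avg_turnaround([100, 10, 10])           # completions 100, 110, 120
sjf = avg_turnaround(sorted([100, 10, 10]))    # completions 10, 20, 120
```

Here FIFO gives an average of 110 while SJF gives 50: the two short jobs no longer wait behind the convoy-causing long job.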

38
Q

Positives and negatives of RR

A
  • Good for response time and fairness
  • Bad for turnaround time
  • The time quantum must be much larger than the context-switch cost
39
Q

Why is it difficult to create a good scheduler?

A

Turnaround time and response time:

Each is bad where the other is good, so optimizing one tends to hurt the other.

40
Q

What is multi level feedback queue(MLFQ)?

A

It contains different priority levels to which processes are assigned, and processes within each level are run using RR. A process first runs at the highest priority, and if it is not finished it moves down a level. This continues until the process is done.

If priority A > priority B, A runs.
If priority A = priority B, they run in round-robin.

41
Q

What are some of the problems with multi level feedback queue(MLFQ)?

A

Starvation: if there are too many interactive jobs, they will combine to consume a lot of CPU time, preventing long-running jobs from getting any CPU time, so they starve.

42
Q

What can be done in MLFQ to prevent starvation?

A

Periodically boost the priority level of all jobs. For a given time slice we can move all jobs to the highest priority queue.

43
Q

What is lottery scheduling?

A

Each process gets a different number of tickets, and the OS draws a ticket at random. The process holding that ticket can run for a given time.

44
Q

How do we calculate fairness in a scheduler?

A

U = tCompletionA / tCompletionB (1 = perfect)

45
Q

What is the difference between a lottery scheduler and a stride scheduler?

A

The stride scheduler is deterministic, while the lottery scheduler is randomized.

46
Q

What are the advantages and disadvantages of using randomness in a scheduler?

A

Negative:

  • May not deliver the exact desired proportions
  • A major drawback of lottery scheduling is that it is inherently unpredictable. Due to the random nature of the scheduler, OS designers cannot know for sure what will happen, possibly producing undesirable results.

Positive:

  • Fairness
  • Avoid corner cases
  • Lightweight for the OS
47
Q

Why do we virtualize memory?

A

Virtual memory serves two purposes:

  • First, it allows us to extend the use of physical memory by using disk.
  • Second, it allows us to have memory protection, because each virtual address is translated to a physical address.
48
Q

What does virtual address space contain?

A

Contains program code, heap and stack

49
Q

Why do we use virtual address spaces?

A

To be able to locate where in physical memory the data is stored. The virtual address is translated to a physical address.

50
Q

Which component performs the translation of virtual to physical addresses?

A

MMU (memory-management unit)

51
Q

What is the goal of virtualization?

A
  • Transparency: user programs should not be aware of the messy details
  • Efficiency: minimize overhead and waste in terms of memory space and access time
  • Isolation and protection: a user process should not be able to access anything outside its address space
52
Q

Does the OS have its own address space? If not where is it?

A
  • The OS is not a separate process with its own address space
  • Instead, OS code is part of the address space of every process
  • A process sees the OS as part of its code (like a library)
  • Page tables map the OS addresses to OS code
53
Q

What does the stack contain?

A
  • Keep track of where the process is in the function call chain
  • Allocate local variables of functions
  • Pass parameters to functions
  • The return value from functions
  • Return address: Where to continue executions after a function call
54
Q

What does the heap contain?

A
  • Global variables
  • malloc() and free() in C
  • new in Java/C++/Go
55
Q

What does malloc() do?

A

Allocates the requested amount of memory on the heap and returns a pointer to it.

56
Q

What does free() do?

A

free(x), where x is the pointer to the memory returned by malloc(). It frees the allocated memory.

57
Q

What are the procedures during address translation?

A
  • The OS sets the base (starting address) and bounds (limit) values
  • The MMU does the calculation
  • If the address is out of bounds, a trap is raised
58
Q

What is the role of OS in address translation?

A
  • OS maintains free list of memory
  • Allocates space to process during creation and cleans up when done
  • Maintains information of where space is allocated to each process (in PCB)
  • Sets address translation information in hardware
  • Updates this information upon context switch
  • Handles traps due to illegal memory access
59
Q

What is the role of hardware in address translation?

A

The CPU provides a privileged mode of execution, so the instruction set has privileged instructions to set the translation information (base and bounds).
The hardware (MMU) then uses this information to perform translation on every memory access (instruction fetch, load, or store).
If the translation is illegal, the MMU generates a trap to the OS (e.g. the VA is out of bounds).

60
Q

What is the formula for physical address

A

PA = base register + VA

61
Q

What are the problems with base and bounds?

A

If the stack and heap are small, the space between them is wasted. This is called internal fragmentation.
It is caused by fixed-size slots (each process is assumed to get an address space of the same size).
- It also requires a big chunk of free space in the middle.

62
Q

How does segmentation differ from a basic (base-and-bounds) address space?

A
  • It gives us flexibility with multiple base/bounds pairs
  • The idea is one pair for each logical segment of the address space: code, stack, and heap
    There is no internal fragmentation with segmentation.
63
Q

How do we know which segment a virtual address refers to (segmentation)?

A
Using the two most significant bits of the virtual address:
"00" =  Code
"01" = Heap
"10" = Stack
"11" = Unused
64
Q

How can processes share memory (segmentation)?

A

Protection:

  • A few extra bits per segment
  • Read, write, execute
  • Each process still thinks that it is accessing its own private memory
  • While the OS is secretly sharing memory between processes
  • The hardware must then check whether a particular access is permissible
65
Q

Why do we use segmentation?

A
  • Unused space between the stack and heap need not be allocated in physical memory
  • Segmentation therefore allows more address spaces to fit in physical memory
66
Q

What are the problems with free space management within segmentation?

A

Physical memory becomes full of holes of free space (external fragmentation)

67
Q

What is external fragmentation?

A

Physical memory becomes full of small holes of free space between allocated segments.

68
Q

What is a solution to external fragmentation within segmentation?

A

One solution:

  • Compact physical memory by rearranging segments
  • Stop processes one by one, move their segments, and update the segment registers (to point to the new location)
  • Very memory intensive and costly in processor time
69
Q

Is there a better approach to avoid external fragmentation with segmentation?

A
  • Use a free-list management algorithm
  • Keep large extents of memory available for allocation
  • Example: the best-fit algorithm keeps a list of free spaces and returns the one closest in size that satisfies the desired allocation to the requester

Best fit:
Search the free list for the memory chunk closest to the requested size.

70
Q

What are the benefits of segmentation?

A
  • Support sparse address spaces (address spaces with a lot of empty space)
  • Avoid wasting memory between logical segments of an address space
  • Fast translation
  • Code sharing
  • No internal fragmentation
71
Q

What are the problems with segmentation?

A
  • Free memory gets left as odd sized pieces (external fragmentation) (does not happen with paging)
  • Memory allocation difficult (many smart algorithms)
  • Fundamentally hard to avoid external fragmentation
  • Not flexible enough, to support fully generalized sparse address space
  • If heap is sparsely used - must keep entire heap in memory
72
Q

How do we manage free space when satisfying variable-sized requests?

A
  • Splitting (split a free chunk based on the request)
    and coalescing (merging adjacent free chunks): if neighbouring chunks are free, the allocator will coalesce them into one.
    With 3 chunks of free space of 10 bytes each, a request for 15 bytes will fail even though we have 30 bytes of free space, unless adjacent chunks are coalesced.
  • Tracking the size of allocated regions
  • Embedding a free list
73
Q

What is splitting?

A

Example: trying to allocate 1 byte when we have two 10-byte segments available:
- Find a free chunk that satisfies the request and split it into two
- Return the first chunk to the caller (1 byte)
- The second chunk remains on the list (9 bytes)

74
Q

What is coalescing?

A

It allows us to combine free segments that are next to each other in memory.

75
Q

Why would we need coalescing?

A

Suppose we have three free segments next to each other, each of size 5. If we try to allocate 7 bytes, it will fail. That is why we coalesce them, to get 15 bytes available in one segment.
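Splitting and coalescing can be sketched over a free list of (start, size) pairs; the representation and sizes below are invented for the example.

```python
# Sketch of a free list with splitting and coalescing; each free chunk
# is a (start, size) pair kept sorted by address.
def split(free_list, size):
    for i, (start, chunk) in enumerate(free_list):
        if chunk >= size:                    # first chunk that satisfies it
            if chunk == size:
                free_list.pop(i)
            else:                            # keep the remainder on the list
                free_list[i] = (start + size, chunk - size)
            return start
    return None                              # allocation fails

def coalesce(free_list):
    free_list.sort()
    merged = [free_list[0]]
    for start, size in free_list[1:]:
        last_start, last_size = merged[-1]
        if last_start + last_size == start:  # adjacent chunks: merge them
            merged[-1] = (last_start, last_size + size)
        else:
            merged.append((start, size))
    return merged

free = [(0, 5), (5, 5), (10, 5)]     # three adjacent 5-byte chunks
assert split(free, 7) is None        # 7 bytes fails without coalescing
free = coalesce(free)                # -> one 15-byte chunk
addr = split(free, 7)                # now the request succeeds
```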

76
Q

In free space management, how do we track the size of allocated regions?

A

We create a header field in the allocated region with information about its size. It may also contain pointers to speed up freeing.

77
Q

What are some of the strategies to manage free space? (Free space management)

A

  • best fit
    search the free list for the chunk closest to the requested size
    high cost: the full search hurts performance
  • worst fit
    opposite of best fit: search for the biggest chunk
    worst fit tries to leave big chunks free instead of lots of small chunks
  • first fit
    finds the first chunk with enough space
    advantage of speed
    may pollute the beginning of the free list with small objects
    may use address-based ordering:
    keeping the list ordered by the address of the free space makes coalescing easier
  • next fit
    instead of starting at the start of the free list, keep an extra pointer to the last searched location and start there
78
Q

What is buddy allocation?

A
Given the example that we want to allocate 7 bytes and have 64 bytes of free space:
- We repeatedly split the space in two, until the next split would no longer fit the request, and use the smallest block that still fits.
- First: 64
- Second: 32+32
- Third: 16+16
- Fourth: 8+8 (target; splitting further to 4+4 would be too small for 7 bytes)
Therefore we use the 8-byte block.

Basically: split memory into halves until the block is as small as possible while still satisfying the request, and use that block. Freed buddy blocks get coalesced back together.
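The size calculation above can be sketched as a halving loop; the function name is illustrative, and this only shows how the block size is chosen, not the full allocator bookkeeping.

```python
# Sketch of buddy allocation sizing: repeatedly halve the free space
# until the next split would be too small for the request.
def buddy_block_size(total, request):
    size = total
    while size // 2 >= request:   # stop before the block no longer fits
        size //= 2
    return size

block = buddy_block_size(64, 7)   # 64 -> 32 -> 16 -> 8 (4 would not fit)
```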

79
Q

How does paging differ from segmentation?

A

Instead of allocation of variable sized chunks we use fixed size chunks

80
Q

What is a page table?

A

A page table is a per-process data structure to help with address translation.
It keeps track of where each virtual page is in physical memory.

81
Q

What does a page table store?

A

Stores address translations for each virtual page of the address space.

Basically mappings from virtual page number to physical frame number

82
Q

What does a page table entry (PTE) contain?

A

Each PTE contains:

  • PFN (physical frame number)
  • Valid bit: is this page used by process
  • Protection bit: read/write permissions
  • Present bit: is this page in memory
  • Dirty bit: Has this page been modified
  • Accesses bit: has this page recently accesses
83
Q

How does address translation happen in paging?

A
  • The most significant bits of the VA give the VPN
  • The page table maps the VPN to a PFN
  • The PA is obtained from the PFN and the offset within the page
  • The MMU stores the (physical) address of the start of the page table, not all entries
  • It walks the page table to get the relevant PTE


84
Q

Why is paging slow?

A

Each memory access under paging requires one extra memory access, to read the page table entry.

85
Q

How do we determine the size of VPN?

A
The number of VPN bits is log2(number of pages).
Example:
- A 64-byte address space needs 6-bit addresses (2^6 = 64)
- If the page size is 16 bytes, we must be able to address 4 pages
- Therefore we need 2 bits for the VPN, and the remaining 4 bits are the offset
86
Q

How do we calculate the VPN and offset if we have the virtual address 21?

A

21 in binary: 010101
VPN : 01
Offset: 0101
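The same split can be done with bit operations, assuming the layout from the card above (6-bit addresses, 16-byte pages, so 2 VPN bits and 4 offset bits):

```python
# Extracting VPN and offset from a virtual address with bit operations
# (16-byte pages, so the low 4 bits are the offset).
OFFSET_BITS = 4

def split_va(va):
    vpn = va >> OFFSET_BITS                  # top bits
    offset = va & ((1 << OFFSET_BITS) - 1)   # bottom bits
    return vpn, offset

vpn, offset = split_va(21)   # 21 = 0b010101 -> VPN 0b01, offset 0b0101
```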

87
Q

Explain how a page table is used

A

The OS indexes the array by the virtual page number (VPN) and looks up the page table entry (PTE) at that index in order to find the desired physical frame number (PFN).

88
Q

What does the Translation Lookaside Buffer (TLB) contain?

A

  • A cache of recent VA-to-PA mappings

89
Q

Why do we use TLB’s?

A

To speed up translation

90
Q

How does the TLB work(Translation Lookaside Buffer)?

A
  • The HW checks if the desired virtual-to-physical address mapping is in the TLB (cache)
  • If so (TLB hit), use the TLB translation (without consulting the page table in memory)
  • Else (TLB miss), consult the page table (which has all the translations)
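The hit/miss logic can be sketched with two dictionaries standing in for the TLB and the page table (the mappings are invented for the example):

```python
# Sketch of TLB lookup: consult the cache first, fall back to a page
# table walk on a miss, and remember the translation for next time.
def translate(vpn, tlb, page_table):
    if vpn in tlb:                 # TLB hit: no page-table access needed
        return tlb[vpn], "hit"
    pfn = page_table[vpn]          # TLB miss: walk the page table
    tlb[vpn] = pfn                 # cache the translation
    return pfn, "miss"

tlb, page_table = {}, {0: 3, 1: 7}
first = translate(1, tlb, page_table)    # miss: walks the page table
second = translate(1, tlb, page_table)   # hit: served from the TLB
```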
91
Q

What are the problems with TLB’s?

A
  • If two pages map to the same TLB entry, only one of them can be remembered at once.
  • It only remembers a small number of entries
  • Context switches:
  • The TLB contains translations only valid for the running process
  • When switching between processes, the HW/OS must ensure that the next process does not use translations from the previous process. The HW cannot distinguish which entry is meant for which process.
92
Q

What TLB replacement policies can we use to keep the TLB up to date?

A

Two approaches:

  • Evict entries at random
  • Evict the least recently used entry
93
Q

Why can’t we just increase the page size to reduce the size of a page table?

A
  • Leads to waste within each page (internal fragmentation)
  • Therefore most systems use relatively small pages
  • So our problem will not be solved that easily
94
Q

How can we solve the problem of page tables being too big?

A
  • Combine paging and segmentation
  • Multi-level page table
  • Inverted page table (only one page table, covering all physical frames)
95
Q

What are the problems of using segmentation and paging together (hybrid)?

A
  • External fragmentation
  • Not flexible
  • If we use large page table we might still get a lot of waste
96
Q

What is a multi-level page table?

A

A multi-level page table consists of multiple page tables mapped to each other.

An example:

  1. An application accesses the top-level page table
  2. The first 10 bits map the address to the second-level page table
  3. The next 10 bits map the address to the actual physical frame
  4. As always, the offset goes straight through.
97
Q

What are the advantages with multi level page table?

A

(a) Page number lookups are faster.
(b) The page table can consume much less space if there are large regions of unused memory.
(c) Each segment (code, data, stack) can be managed separately.

98
Q

What are the disadvantages with multi level page table?

A

Extra memory references to access address translation tables can slow programs down by a factor of two or more.

99
Q

What is an inverted page table (don’t say an inverted page table, i know you tried!)

A
  • Instead of having many page tables (one per process of the system) we keep a single page table that has an entry for each physical page of the system
  • The entry tells us which process is using this page and which virtual page of the process maps to this physical page
  • Finding the correct entry is now a matter of searching through this data structure

Only one page table with all the physical addresses <3

100
Q

What is swapping, within the page table section?

A

Usually a page table is big and will be stored on disk.

The goal of swapping is to have the illusion of large virtual memory for multiple concurrently running processes

101
Q

Explain the mechanism of swapping within the page table section?

A
  • When translating a VA to a PA, the MMU reads the present bit
  • The present bit in the page table entry indicates whether a page of a process resides in memory or not
  • If the page is present in memory, it is accessed directly
  • If the page is not in memory, the MMU raises a trap to the OS: a page fault
102
Q

How do we handle a page fault exception?

A
  • A page fault traps to the OS and moves the CPU to kernel mode
  • The OS chooses a page to evict from RAM and write to disk
  • If that page is DIRTY, it needs to be written back to disk first
  • The OS reads the faulting page from disk and puts it in RAM
  • The OS then changes the page table to map the new page
  • The OS jumps back to the instruction that caused the page fault and retries it
103
Q

What are some of the problems that can occur with swapping mechanism during a page fault?

A
  • When servicing a page fault, what if the OS finds that there is no free frame to swap the faulting page into?
  • The OS must first swap out an existing page and then swap in the faulting page: too much work!
104
Q

What page replacement policies do we have?

A
  • Optimal: replace the page not needed for the longest time in the future (not practical, as we cannot predict the future)
  • FIFO: replace the page that was brought into memory earliest (it may be a popular page)
  • LRU/LFU: replace the page that was least recently (or least frequently) used in the past
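LRU can be sketched with an ordered dictionary: hits move a page to the most-recently-used end, and on a fault with full frames the page at the other end is evicted. The reference string and frame count are invented for the example.

```python
# Sketch of LRU page replacement over a reference string.
from collections import OrderedDict

def lru_faults(references, frames=3):
    cache, faults = OrderedDict(), 0
    for page in references:
        if page in cache:
            cache.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)    # evict the least recently used
            cache[page] = True
    return faults

# 1, 2, 3 fault; 1 hits; 4 evicts 2; 5 evicts 3 -> 5 faults in total.
faults = lru_faults([1, 2, 3, 1, 4, 5])
```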
105
Q

What is the LRU policy?

A

The page that was used least recently will be replaced.

106
Q

What is LFU policy?

A

LFU (least frequently used) is a cache eviction algorithm that replaces the page with the fewest accesses.

107
Q

What roles does an operating system have?

A
  1. Referee: Operating systems manage resources shared between different applications running on the same physical machine. Examples: Memory protection, scheduler
  2. Illusionist: Operating systems provide an abstraction of physical hardware to simplify application design. Examples: Virtual memory, file system
  3. Glue: Operating systems provide a set of common services that facilitate sharing among applications. Examples: HAL, system calls, copy-paste
108
Q

What is the purpose of dual-mode operation on a CPU?

A

There are two modes of operation on a CPU: Kernel mode and user mode. The purpose of the two modes is to limit the privileges of a user program, so that it cannot cause harm to the machine, operating system, or other user processes running on the system.

109
Q

What is a cache?

A
  • A copy of a result of computation that can be accessed quickly
  • A partial copy of data that can be accessed quickly

110
Q

Give two examples of virtualization?

A
  • CPU
  • Memory

111
Q

Explain the software design principle of separating mechanism from policy

A

By separating mechanism and policy, we can provide a generic low-level mechanism (machinery) that can be reused across multiple high-level policy decisions. Such policy decisions can more easily be tuned and adapted to different environments and workloads, without changing the underlying mechanism.

  • Policies are ways to choose which activities to perform. Mechanisms are the implementations that enforce policies.
112
Q

Give examples of policies

A
  • Scheduling
  • Allocation of resources
  • Page fault handling
  • Segmentation handling
113
Q

Give examples of mechanisms

A
  • Context switching
  • Swapping
  • Authorization
114
Q

Give the three main states that a process can take on

A
  • Ready
  • Running
  • Blocked (waiting)
115
Q

Explain why a process has state?

A

A process can be in one of several states at a time. This is a way for the OS to manage the processes and to facilitate efficient resource utilization. In particular, a Blocked process will not consume CPU resources while waiting for IO to finish, which would otherwise be extremely wasteful.

116
Q

Explain what a process is, and how the operating system handles processes with respect to memory and execution

A

A process is an instance of a computer program that is being executed by one or more threads, running with limited privileges.
A process is divided into two static components: code and data,
and two dynamic components: heap and stack.
There are two parts to a process:
- Thread
- Address space

117
Q

What is a thread?

A
  • A thread is a path of execution within a process. A process can contain multiple threads.
  • a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler
118
Q

Why do we want to use threads?

A
  • Parallelism
  • Concurrency
  • Even if no parallelism, concurrency of threads ensures effective use of CPU when one of the threads blocks (for I/O for example)
  • Better average response time
  • Better performance
119
Q

What is parallelism?

A
  • Running multiple processes simultaneously on multiple CPUs
120
Q

What is concurrency?

A
  • Running multiple threads/process at the same time, even on a single CPU core, by interleaving their execution
121
Q

What is the main difference between parallelism and concurrency?

A

Parallelism is about running multiple processes simultaneously across multiple CPUs, while concurrency is about interleaving multiple threads so they make progress at the same time, even on a single CPU

122
Q

How does the OS schedule threads?

A
  • OS schedules threads that are ready to run independently, much like processes
123
Q

What is TCB?

A

Thread Control Block (TCB) is a data structure in the operating system kernel which contains thread-specific information needed to manage it.

124
Q

What can happen with threads that share some data?

A

Race condition

125
Q

What is a race condition?

A
  • When multiple threads access the same data without synchronization, the result depends on the timing of their execution
  • Example: a counter increment is actually three assembly steps (load, add, store). If two threads both load the value 30 before either stores, both compute 31, and the final value is 31 instead of the expected 32 — one update is lost.
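The lost-update interleaving can be made concrete. This Go sketch simulates the three machine steps of `counter++` for two threads sequentially, in the worst-case order (real threads interleave nondeterministically):

```go
package main

import "fmt"

func main() {
	// The single line `counter++` compiles to three machine steps:
	//   1. load counter into a register
	//   2. add 1 to the register
	//   3. store the register back to memory
	// Simulate two threads that both load before either stores:
	counter := 30
	reg1 := counter // thread 1: load (sees 30)
	reg2 := counter // thread 2: load (also sees 30)
	reg1++          // thread 1: add
	reg2++          // thread 2: add
	counter = reg1  // thread 1: store 31
	counter = reg2  // thread 2: store 31 — thread 1's update is lost
	fmt.Println(counter) // 31, not the expected 32
}
```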
126
Q

What is a critical section?

A
  • Portion of code that can lead to race condition
127
Q

What is mutual exclusion?

A
  • Only one thread should be executing critical section at any time
128
Q

How can we avoid race condition?

A

Mutual exclusion

129
Q

How can we get mutual exclusion?

A

Atomicity of the critical section

130
Q

How is atomicity achieved?

A
  • Locks
  • Condition variables
  • Channels, etc.
131
Q

What are some of the problems with atomicity/mutual exclusion?

A

Beyond atomicity, threads often must wait for one another: one thread must wait for another to complete some action before it continues. This arises, for example, when a process performs a disk I/O and is put to sleep; when the I/O completes, the process needs to be roused from its slumber so it can continue.

132
Q

How does the lock mechanism work?

A
  • We can use a special lock variable to protect data
  • All threads accessing a critical section share a lock
  • One thread succeeds in acquiring the lock and becomes the owner of the lock
  • Other threads that try to lock cannot proceed further until lock is released by the owner
133
Q

What are the goals of locks?

A
  • Mutual exclusion
  • Fairness: all threads should eventually get the lock and no thread should starve
  • Low overhead: acquiring, releasing and waiting for lock should not consume too many resources
134
Q

What are some of the different types of locks?

A
  • Spin lock (the thread waits in a loop, spinning until it gets access to the lock)
  • Sleep lock (the thread sleeps until the lock is free; sleeping results in more idle time)
135
Q

What is a spin lock?

A
  • Spins in a while loop until the lock is acquired

136
Q

What is a sleep lock?

A
  • Instead of spinning for a lock, a contending thread could simply give up the CPU and check back later
  • Yield() moves thread from running to ready state
137
Q

When should we use locks?

A
  • A lock should be acquired before accessing any variable or data structure that is shared between multiple threads of a process
138
Q

What is the difference between fine-grained and coarse-grained locks?

A

One big lock for all shared data (coarse-grained locks) vs. a separate lock for each piece of shared data (fine-grained locks)

139
Q

What are the positives and negatives of fine-grained locks?

A
  • Fine-grained locks allow more parallelism
  • Multiple fine-grained locks may be harder to manage

140
Q

What are the advantages and disadvantages with coarse grained locks?

A
  • Slower (less parallelism)
  • Easier to manage

141
Q

Why do we want to use sleeping locks?

A
  • CPU is wasted by spinning contending threads
  • Even more so if a thread holds a spinlock and blocks for long

142
Q

What are the advantages and disadvantages with spin locks?

A
  • Provides mutual exclusion
  • Spin locks don’t provide any fairness guarantees
  • On a single CPU, the performance overhead of spinning can be quite painful
143
Q

How does Compare-and-swap work for locking?

A

It checks whether the flag is 0 and, if so, atomically swaps in a 1, thus acquiring the lock

144
Q

How do Load-Linked and Store-Conditional work for locking?

A
  • The load-linked operates much like a typical load instruction, and simply fetches a value from memory and places it in a register
  • The key difference comes with the store-conditional, which only succeeds (and updates the value stored at the address just load-linked from) if no intervening store to the address has taken place
145
Q

How does Fetch-And-Add work for locking?

A
  • Fetch-and-add instruction atomically increments a value while returning the old value at a particular address.
  • Unlock is accomplished simply by incrementing the turn such that the next waiting thread (if there is one) can now enter the critical section
146
Q

What is a two phase lock?

A
  • A two-phase lock realizes that spinning can be useful, particularly if the lock is about to be released. So in the first phase, the lock spins for a while, hoping that it can acquire the lock. However, if the lock is not acquired during the first spin phase, a second phase is entered, where the caller is put to sleep, and only woken up when the lock becomes free later
  • In other words, a hybrid between a spinlock and a sleeping lock
147
Q

Why are condition variables used?

A

To achieve synchronization

148
Q

How do condition variables work?

A
  • A condition variable (CV) is a queue that a thread can put itself into while waiting for some condition
  • Another thread that makes the condition true can signal the CV to wake up a waiting thread
  • Signal wakes up one thread, signal broadcast wakes up all waiting threads
149
Q

How do we check a condition using condition variables?

A

In a while loop

We do this to avoid corner cases where a thread is woken up even though the condition is not true

150
Q

What is a semaphore?

A
  • Semaphore is a variable with an underlying counter
  • Synchronization primitive like condition variable
  • Acts like a lock
  • Shared between threads
151
Q

How does a semaphore work?

A

If a semaphore is going to work like a mutex lock, it has an initial value of 1:
Thread 1 decrements the counter (acquiring the lock) and runs the code in the critical section. When finished, it calls post(), which increments the counter by 1 (releasing the lock).
- Now thread 2 can run, and so on.

152
Q

Why are semaphores special?

A
  • Can be used to set order of execution between threads like Condition Variables
153
Q

What are some of the concurrency bugs?

A

We have two:

  • Deadlock bugs (threads sharing access to the same resources stop making progress)
  • Non deadlock bugs
154
Q

What is a deadlock bug?

A
  • threads cannot execute any further and wait for each other
155
Q

What is a non deadlock bug?

A

No deadlock occurs, but the threads produce an incorrect result when they execute

- For example, atomicity bugs

156
Q

How can we fix a non deadlock bug?

A

Use locks:

  • Check that each critical section is protected by a lock
  • Verify that the locking really makes the shared access safe
157
Q

How can a deadlock occur?

A
  • Mutual exclusion: a thread claims exclusive control of a resource
  • Hold-and-wait: thread holds a resource and is waiting for another
  • No preemption: thread cannot be made to give up its resource (cannot take back a lock)
  • Circular wait: there exists a cycle in the resource dependency graph
  • All four of the above conditions must hold for a deadlock to occur
158
Q

How can we prevent circular wait (deadlock bug)?

A
  • Acquire locks in a certain fixed order
  • The total ordering must be followed by all threads

159
Q

How can we prevent hold-and-wait (deadlock bug)?

A
  • Require that all processes request all resources at one time.
  • Require that processes holding resources must release them before requesting new resources, and then re-acquire the released resources along with the new ones in a single new request.
  • But this method may reduce concurrent execution and performance gains
160
Q

How can we generally avoid deadlock bugs, and what do we do if one occurs?

A
  • Deadlock avoidance: if the OS knows which process needs which lock, it can schedule the processes so that deadlock does not occur.
  • Detect and recover: reboot system or kill deadlock process.
161
Q

How do we calculate average response time?

A

Steps:

  • Calculate each process response time
  • Add them all together and divide by the number of processes.
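The steps above can be sketched as a small Go helper (the FIFO job numbers in the example are hypothetical):

```go
package main

import "fmt"

// avgResponseTime averages per-process response times
// (response time = time of first run − arrival time).
func avgResponseTime(times []float64) float64 {
	sum := 0.0
	for _, t := range times {
		sum += t
	}
	return sum / float64(len(times))
}

func main() {
	// Hypothetical FIFO example: three jobs of length 10 arrive at t=0
	// and start at t=0, 10, 20, so their response times are 0, 10, 20.
	fmt.Println(avgResponseTime([]float64{0, 10, 20})) // 10
}
```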
162
Q

Explain the term temporal locality?

A

Temporal locality: programs tend to reference the same instructions and data that they have recently accessed (time dimension).

For example, instructions inside a loop or a data structure that is repeatedly accessed.

By caching these memory values, we can improve performance.

163
Q

Explain the term spatial locality?

A

Programs tend to reference data near other data that has been recently
referenced (space dimension).

For example, the next instruction to execute is usually near to the previous one, and different fields in the same data structure tend to be referenced at
nearly the same time.

To exploit this, caches are often designed to load a block of data at
the same time, instead of a single location.

164
Q

Belady’s anomaly occurs when?

A

The number of page faults increases despite adding more page frames.

Will ONLY happen with the FIFO page replacement algorithm.

Will NOT happen with the LRU, LFU, or Optimal page replacement algorithms.

165
Q

What is the virtualization of memory represented by?

And what is the virtualization of the cpu represented by?

A

Memory: the address space.

CPU: processes.

166
Q

What is virtualization?

A

Virtualization is the general technique for sharing a physical resource (processor, memory, disk,etc) by transforming it to a more general and easy-to-use virtual form.

167
Q

Which signal is used to synchronize the CPU with an I/O device when the device wishes to notify that it is ready?

A

Interrupts, because an interrupt is used to signal the CPU that some external event has occurred that may require attention.
As the CPU executes instructions, it checks whether an interrupt has occurred. If so, it completes the current instruction and then handles the interrupt.

168
Q

Consider an interrupt while a processor is executing an instruction. What happens?

A

The processor finishes the execution of the instruction before handling the interrupt.

169
Q

Why is memory protection important? Name a simple technique we can use and how it works.

A

Because it restricts the user from accessing kernel memory or other processes’ memory.

Base and bounds technique:
Base: start of the process’ memory.
Bounds: the process’ length.
The registers define a variable-sized chunk of memory and can only be changed in kernel mode.

When the CPU fetches an instruction or data, the PC or address reference is checked against the range (base, base+bounds). If it is in range, the access proceeds; otherwise, an exception is raised.

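The check described above can be sketched as a small Go function (the register values in the example are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// translate implements the base-and-bounds check: a virtual address is
// valid if it is below bounds; the physical address is base + virtual.
func translate(virtual, base, bounds uint32) (uint32, error) {
	if virtual >= bounds {
		return 0, errors.New("segmentation fault: address out of bounds")
	}
	return base + virtual, nil
}

func main() {
	phys, err := translate(100, 0x4000, 0x1000) // in range
	fmt.Println(phys, err)                      // 16484 <nil>
	_, err = translate(0x2000, 0x4000, 0x1000)  // out of range
	fmt.Println(err)                            // segmentation fault: address out of bounds
}
```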
170
Q

What is the base and bounds technique?

A

Virtual memory where access to computer memory is controlled by
two CPU registers called the base and bounds registers.

It allows us to place address space anywhere we’d like in physical memory, and do so while ensuring that the process can only access its own address space.

171
Q

What translates from virtual address to physical address?

A

The memory management unit (MMU).

172
Q

Why does increasing the RAM of a computer typically improve performance?

A

A page fault is the act of accessing a page that is not in physical memory. Adding more RAM means more of a program’s pages can be held in physical memory at once, and therefore fewer page faults occur.

173
Q

Why does the translation look-aside buffer not need to be saved on a context switch between processes?

A

A Translation lookaside buffer (TLB) is a CPU cache that memory management hardware uses to improve virtual address translation speed. A TLB has a fixed number of slots that contain page table entries, which map virtual addresses to physical addresses. On a context switch, some TLB entries can become invalid, since the virtual-to-physical mapping is different. The simplest strategy to deal with this is to completely flush the TLB

174
Q

Classify each of the following as mechanism/policy/neither

Scheduling
Context switching
Limited direct execution
Page replacement
Fragmentation
Address translation
Best/worst fit
Locality
A
Scheduling - Policy
Context switching - Mechanism
Limited direct execution - Mechanism
Page replacement - Policy
Fragmentation - Neither
Address translation - Mechanism
Best/worst fit - Policy
Locality - Neither
175
Q

What is the purpose of the free list in paged memory?

A

Keeping track of unused physical page frames.

176
Q

Explain the software design principle of separating mechanism from policy.

A

By separating them, we can provide machinery that can be reused across multiple high-level policy decisions. Such policy decisions can more easily be tuned and adapted to different environments and workloads, without changing the underlying mechanism.

Mechanism: what the computer does.
Policy: how it uses the mechanisms.

We can easily change the policy without having to change the mechanism.

177
Q

What is the working set model attempting to model?

A

The working set model attempts to model the critical mass of a program’s memory pages, such that most memory references will result in a cache hit, and thus improve the program’s performance.
