Week 8 - Further virtualisation Flashcards

(27 cards)

1
Q

How do page faults occur in a normal paging system

A

In a paging system, some pages may not be mapped to physical frames (unused). When we request such a virtual address and the MMU tries to translate it into a physical address, a page fault occurs

2
Q

What happens when a page fault occurs

A

. A hardware interrupt (trap) occurs
. The OS regains control and, in the simplest case, kills the process

3
Q

Lazy allocation

A

Run the process as if the memory is there.

We may have pages with no physical frames mapped; when one is accessed, a page fault occurs, and at that point BLANK physical memory is allocated. This allows efficient memory usage, as memory is only allocated when a page is actually accessed.

KEY WORD: BLANK PHYSICAL MEMORY IS ALLOCATED
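The lazy-allocation idea can be sketched as a toy simulation; all names here (`PageTable`, `access`, etc.) are illustrative, not a real OS API:

```python
# Toy model of lazy allocation: a frame is only allocated on first access.

class PageTable:
    def __init__(self):
        self.mapping = {}       # virtual page -> physical frame
        self.next_frame = 0
        self.faults = 0

    def access(self, vpage):
        if vpage not in self.mapping:              # no frame mapped yet
            self.faults += 1                       # page fault: trap to the "OS"
            self.mapping[vpage] = self.next_frame  # allocate blank frame on demand
            self.next_frame += 1
        return self.mapping[vpage]

pt = PageTable()
pt.access(7)       # first touch: fault, frame allocated
pt.access(7)       # already mapped: no fault
print(pt.faults)   # 1
```

Note the second access hits the existing mapping: the fault only happens once per page, which is exactly why untouched pages cost nothing.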

4
Q

Lazy loading

A

Run a process as if memory is there

When a page fault occurs, the OS doesn't just allocate blank physical memory: it also fetches the actual data (e.g. from a file on disk) and loads it into the allocated memory, only when that page is first accessed.

5
Q

Just read

A

A simple solution would be to make all allocation lazy.

I.e. remember we have a stack and a heap that are initially empty and grow towards each other (as functions/local variables are declared, or objects are created dynamically). We could then use page faults: when a region needs more space, we allocate physical memory on demand.

NOTE: THE HEAP DOES NOT GROW LAZILY; WE EXPLICITLY USE SYSTEM CALLS (SEE LATER IN THE COURSE)

6
Q

How does lazy zeroing work

A

Lazy zeroing maps all the UNUSED virtual pages in a process (read-only at first) to a single zeroed physical frame.

This works fine as long as you only read, but the second you try to write to such a page, a page fault occurs (we mapped the virtual pages as read-only). When this happens we allocate a frame that is zero-initialised (so nothing that previously used that frame, like another process, interferes) and map the page to it with write permission.

AFTER THIS WE RESUME THE PROCESS AT THE FAULTING INSTRUCTION (now that a frame is allocated it of course succeeds: this is where we overwrite the zeroes in the new frame)

IF YOU SEE COPY ON WRITE: THIS IS JUST THE STAGE WHERE YOU COPY THE ZEROES (from the shared frame) INTO THE NEWLY ALLOCATED FRAME (so that no interference occurs)
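A minimal sketch of lazy zeroing with copy-on-write, under the simplifying assumption of tiny 4-word pages, with a Python list standing in for a physical frame (all names illustrative):

```python
ZERO_FRAME = [0] * 4   # the single shared zeroed frame (4-word pages for brevity)

class CowPageTable:
    def __init__(self, npages):
        # every page initially maps read-only to the shared zero frame
        self.frames = {p: ZERO_FRAME for p in range(npages)}
        self.writable = {p: False for p in range(npages)}

    def read(self, page, offset):
        return self.frames[page][offset]           # reads never need a fault

    def write(self, page, offset, value):
        if not self.writable[page]:                # write to read-only page: fault
            self.frames[page] = list(ZERO_FRAME)   # copy-on-write: private zeroed frame
            self.writable[page] = True
        self.frames[page][offset] = value          # "resume": the write now succeeds

pt = CowPageTable(1000)      # 1000 virtual pages, but only ONE frame backs them
pt.write(3, 0, 42)           # faults once, then the write goes through
```

Only the written page gets a private frame; the other 999 pages still share the one zero frame.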

7
Q

advantages of lazy zeroing

A

Take the example of an array: say we declared one of size 1,000,000 (initially empty)

“FAST ALLOCATION”

When you request (say) a huge array or large zero-initialized block,

The OS doesn’t need to allocate and zero out tons of physical memory right away.

Instead, it just sets up mappings to the shared zero page → this is a fast, lightweight operation.

SPARSE USE OF MEMORY (SAVES LOTS OF MEMORY):

You can reserve (declare) a huge virtual memory region,

But the OS only backs it with real physical memory when you actually write to a part of it,

So untouched regions stay mapped to the shared zero page and cost almost nothing.

8
Q

Lazy zeroing is an instance of copy on write

copy on write is more general

A

Pages not yet backed by their own physical memory share the same physical data (not just zeroes now).
When we want to write to such a page, we allocate a new frame and copy the shared data into it -> we resume the process' faulting instruction, which now succeeds

9
Q

Lazy swapping to and from disk

A

The amount of physical memory is scarce, so we don't want pages that haven't been accessed in a while to stay mapped to frames.

So we can write (swap) them to disk.

This means:
when we try to access such pages (they are on disk now, not in memory) a page fault occurs,
the OS brings the page from disk back into physical memory and we resume the process

10
Q

recap of the storage hierarchy

Top - small but fast
bottom - big but slow

A

Registers
Cache
Main memory
Disk / other backing storage devices like SSDs

11
Q

Recap

A

We can create the illusion of having more memory than is actually in physical memory by using disk:

i.e. pages that aren't being used are swapped to disk and brought back on a page fault (when accessed)

12
Q

why does swapping (to disk) need us to extend the page table

A

It is no longer enough for a page table entry to store just a frame number or "invalid".

Our page table now needs to be extended to know:
where a page maps to on disk

and, in the case of lazy loading, that a page should eventually map to a block on disk (remember: we try to access a page not in memory -> page fault -> retrieve it from disk)

13
Q

What are the two solutions (in extending a page table) to support swapping to disk

Hint :cheap and nasty option
Hint: realistic

A

Cheap and nasty option:
Remember a page table entry stores not just a frame number; we also have a valid bit, permission bits, a dirty bit, etc.
If a page table entry shows that a page has no physical mapping (valid bit set to 0), then the rest of the information is useless, so we can store the information we need (i.e. where on disk the page is stored, etc.) in those unused bits.

Realistic option: use a separate data structure to store the extra info, like where pages are stored on disk.
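The cheap-and-nasty option can be sketched with an assumed PTE layout (bit 0 = valid, remaining bits = frame number or disk block; real layouts differ and this encoding is purely illustrative):

```python
VALID = 1 << 0   # assumed layout: bit 0 = valid, upper bits = frame / disk info

def make_present(frame):
    # page is in memory: valid bit set, frame number in the upper bits
    return (frame << 1) | VALID

def make_swapped(disk_block):
    # valid bit clear: the hardware ignores the rest of the entry,
    # so the OS can stash the disk block number in those unused bits
    return disk_block << 1

def decode(pte):
    if pte & VALID:
        return ("in memory, frame", pte >> 1)
    return ("on disk, block", pte >> 1)

print(decode(make_present(9)))     # ('in memory, frame', 9)
print(decode(make_swapped(123)))   # ('on disk, block', 123)
```

The point is that the same word does double duty: the valid bit tells both the MMU and the OS how to interpret the remaining bits.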

14
Q

problems with demand paging (being lazy)

A

Slow:
Each read pauses a process and we are forced to wait on slow I/O (reading from disk).

Therefore constant swapping leads to horrible performance

15
Q

solution to demand paging being very slow

A

Combine prefetching with demand paging, so that pages we are confident will be used a lot are already in physical memory (avoiding fetching them from the slow disk on a page fault). To decide what to prefetch we can use:

simple heuristics (like "if you accessed page N, you'll probably need page N+1"), or

models to predict access patterns

16
Q

Why is a block cache useful

A

REMEMBER DISK IS MUCH SLOWER THAN MAIN MEMORY ACCESS

We want the pages on disk that are recently or commonly accessed to be in the cache. WE MUST AVOID CACHING PAGES THAT ARE NOT USED, as that defeats the purpose (it wastes physical memory)

17
Q

page replacement

A

What if physical memory gets full

We need to make decisions:
page replacement - which pages are best suited to swap out to disk when physical memory is full

18
Q

frame allocation

A

How many physical frames do we assign to each process

19
Q

How page replacement works

A

When a process has the maximum frames assigned to it (based on frame allocation) and needs further memory, we need to pick which page to replace based on a policy

20
Q

different policies for page replacement

A

FIFO → evict the page that was loaded longest ago.

Random → pick any page at random.

LRU → evict the page that hasn’t been used for the longest time.
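The FIFO and LRU policies above can be compared with a small simulation (illustrative, not any real OS's code):

```python
def count_faults(refs, nframes, policy):
    """Count page faults for a reference string under FIFO or LRU."""
    frames = []                        # list order encodes eviction priority
    faults = 0
    for page in refs:
        if page in frames:
            if policy == "LRU":        # hit: most recently used moves to the back
                frames.remove(page)
                frames.append(page)
        else:                          # miss: page fault
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict front: oldest (FIFO) / least recent (LRU)
            frames.append(page)
    return faults

refs = [1, 2, 3, 1, 4, 1, 2]
print(count_faults(refs, 3, "FIFO"))   # 6
print(count_faults(refs, 3, "LRU"))    # 5
```

On this reference string LRU beats FIFO because the repeated page 1 stays resident; FIFO evicts it even though it was just used.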

21
Q

why cant we implement true LRU

A

We can't directly implement a true LRU policy since the hardware doesn't keep timestamps (there is no way to directly know which page is least recently used relative to the others), so we use a workaround -> the hardware sets a page's used bit whenever the page is accessed, and the OS periodically clears the used bits. If a page's used bit is still 0 when we come to check, we know it hasn't been used recently (and is a good candidate to evict).

22
Q

How we replace pages in our workaround LRU

A

The way we replace pages with our workaround "LRU" is:
We order the list FIFO (oldest first, newest last)
We move through the list: if the used bit is 0 we replace that page; if the used bit is 1 we set it to 0 and send the page to the back of the list

THIS WAY WE GUARANTEE THE PAGE WE EVICT IS ONE NOT RECENTLY TOUCHED

TO OPTIMISE:
Sending pages to the back of the list is computationally expensive, so we can use a circular list with a pointer:
this way we just cycle through, and if the bit is 0 we replace, if the bit is 1 we set it to 0 and move on.

No need to send to the back
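The circular-list optimisation is the classic clock (second-chance) algorithm. A minimal sketch, with illustrative names:

```python
def clock_evict(pages, used, hand):
    """Pick a victim with the clock (second-chance) algorithm.
    pages: circular list of resident pages; used: page -> used bit;
    hand: current clock-hand index. Returns (victim, new hand position)."""
    while True:
        page = pages[hand]
        if used[page] == 0:                    # not recently used: evict it
            return page, (hand + 1) % len(pages)
        used[page] = 0                         # second chance: clear bit, move on
        hand = (hand + 1) % len(pages)

pages = ["A", "B", "C", "D"]
used = {"A": 1, "B": 1, "C": 0, "D": 1}
victim, hand = clock_evict(pages, used, 0)
print(victim)   # C: the first page the hand finds with its used bit at 0
```

Note the hand sweeps past A and B, clearing their bits rather than moving them to the back of a list: that is the whole saving over the FIFO-reordering version.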

23
Q

working set

A

The working set is the set of pages that a process is currently using (where "currently" means over a small time interval, like 100 ms)

24
Q

How to estimate WSS (working set size)

A

Clear the used bits
Wait a time interval (e.g. 100 ms)
See how many pages have their used bits set

This gives you the size of the current working set
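The three steps above can be sketched directly; the dict of used bits is a stand-in for what the hardware maintains:

```python
def estimate_wss(used_bits):
    # After clearing all used bits and waiting ~100 ms, the working set
    # size is simply the number of pages whose used bit was set again.
    return sum(used_bits.values())

used = {p: 0 for p in range(8)}        # step 1: OS clears all used bits
for p in [1, 3, 4]:                    # step 2: during the interval, hardware
    used[p] = 1                        #         sets bits on accessed pages
print(estimate_wss(used))              # step 3: count them -> 3
```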

25
Q

What happens when a process is given fewer frames than its WSS

A

We have fewer pages mapped to frames than we need, so we are constantly swapping to and from disk; performance is horrendous
26
Q

Thrashing

A

Many or all processes are allocated fewer frames than their WSS. This means they are constantly swapping to and from disk, which is slow, as we spend huge amounts of time on I/O. Throughput is almost zero
27
Q

How to avoid thrashing

A

It is better to give some processes frames equal to their WSS and sacrifice the others (have them wait) rather than have all processes below their WSS