Virtual Memory Flashcards
(28 cards)
Virtual Memory
- On-Demand Loading:
- Only the necessary parts of a program are loaded into physical memory.
- The rest remains on disk until needed, conserving RAM.
- Support for More Concurrent Programs:
- By efficiently managing memory, virtual memory allows more programs to run simultaneously.
- Inactive parts of programs can be swapped out to disk, freeing up memory for active processes.
- Two possible implementations:
- Demand segmentation
- Demand paging
Demand Paging
- Regular swapping (old) approach is to load the entire process into memory at load time
- and swap out the entire process
- Alternative (new): Demand Paging loads a page only when it is needed
- Less I/O required (unused parts of program never loaded)
- Less memory required
- Faster response
- If a page is not in memory and the program tries to access it
- A page fault occurs
- The OS loads the required page from disk into RAM.
- A lazy swapper never swaps a page into memory until it is needed
Old Method: Regular Swapping
- Full Process Loading:
- In traditional swapping, the entire process is loaded into memory at the start.
- High Overhead:
- This approach required significant memory and I/O operations, even if large parts of the program were never used.
- Inefficiency:
- Especially problematic for large applications with rarely accessed code
paths (e.g., error handling routines).
New Method: Demand Paging
Concept:
* On-Demand Loading:
* Instead of loading the entire process, only the needed pages are loaded
into memory.
* Lazy Swapping:
* The OS only swaps pages into memory when the process accesses them.
* Page-by-Page Management:
* The program is divided into fixed-size pages, which are loaded individually
as required.
Pages
- A page is a fixed-size block of data in a program’s logical (virtual) address space
- Pages allow the logical address space of a process to be divided into manageable chunks.
Frames
- A frame is a fixed-size block of physical memory (RAM).
- Frames provide physical storage for pages when they are loaded into memory.
Sizes
- Page and frame sizes vary depending on the OS and hardware architecture; typically 4 KB
- Frame size is identical to Page size
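Because pages and frames share one fixed size, translating a virtual address is just splitting off the low-order offset bits. A minimal sketch in Python, assuming a 4 KB page (so 12 offset bits):

```python
PAGE_SIZE = 4096   # 4 KB, so the offset needs 12 bits (2**12 == 4096)
OFFSET_BITS = 12

def split_address(vaddr):
    """Split a virtual address into (page number, offset within the page)."""
    page = vaddr >> OFFSET_BITS        # high bits select the page
    offset = vaddr & (PAGE_SIZE - 1)   # low 12 bits locate the byte in the page
    return page, offset

print(split_address(0x12345))  # (0x12, 0x345) i.e. page 18, offset 837
```

The same split works for physical addresses, with the frame number in place of the page number.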
Page table
* Page table stores the mapping from logical addresses to locations in physical memory
* If the page is not in physical memory, the page table needs to record this:
* Done with a valid/invalid bit
* In case of lazy swapping, invalid bit is initially set on all entries
* During address translation, invalid bit leads to a page fault
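The valid/invalid check can be sketched as a lookup that traps when the entry is invalid. A toy model, assuming lazy swapping (all entries start invalid); names like `PageFault` and `translate` are illustrative, not a real OS API:

```python
class PageFault(Exception):
    """Raised when the accessed page is not in physical memory."""

# page table: page number -> (frame number, valid bit)
# with lazy swapping, every entry is initially invalid
page_table = {0: (None, False), 1: (None, False), 2: (None, False)}

def translate(page, offset, page_size=4096):
    frame, valid = page_table[page]
    if not valid:
        raise PageFault(page)          # hardware traps to the OS
    return frame * page_size + offset  # physical address

try:
    translate(1, 100)
except PageFault:
    # the OS handler would now load page 1 from disk
    page_table[1] = (7, True)          # loaded into frame 7, marked valid

print(translate(1, 100))  # 7*4096 + 100 = 28772
```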
Page Fault
- Check an internal table (kept with the process control block) to decide if it is an invalid reference or the page is simply not in memory
- Invalid reference → generate error
- Segmentation fault
- Access violation error
- If not in memory
- Get empty frame
- Swap: retrieve the page from disk and place it into the empty frame
- Set the valid bit in the page table
- Restart operation that caused page fault
Demand Paging Challenges
Pure demand paging starts with no pages in memory
* Slow to get started, better to load at least first page needed
Single instruction can access multiple pages, causing multiple page
faults
* Fortunately, exceedingly rare – Locality of reference
* Temporal Locality: If a page is accessed, it is likely to be accessed again soon. (Loops)
* Spatial Locality: Pages near a recently accessed page are likely to be used soon. (Arrays)
Hardware support required
* Modified page table – valid/invalid
* Swap space (secondary memory)
* Instruction restart - crucial requirement is the ability to restart any instruction
after a page fault.
Optimisation: Pre-paging
- Pre-paging is an attempt to prevent the high level of page faults a process encounters at startup.
- Load the first page or a small set of essential pages initially to avoid immediate page faults.
- Reduces the large number of page faults at startup
- If pre-paged pages are not used – I/O and memory wasted
- Heuristic Loading: The OS may predict which pages will be
needed next and load them proactively
Optimisation: Copy-on-Write
- Copy-on-Write is an optimisation strategy to efficiently manage memory by delaying the copying of data until absolutely necessary.
- If processes use the same pages:
- When a parent creates a child process (fork() system call)
- Can share these pages until one changes the data
- Reduces swapping
- Pages must be marked as shared
- When process wants to modify the page – it takes a copy
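The mechanism can be simulated with reference counts: fork shares pages, and a write to a shared page takes a private copy first. A toy sketch; the `Page`/`Process` classes are made up for illustration:

```python
class Page:
    def __init__(self, data):
        self.data = bytearray(data)
        self.refs = 1  # number of processes sharing this page

class Process:
    def __init__(self, pages):
        self.pages = pages

    def fork(self):
        # child shares the parent's pages; nothing is copied yet
        for p in self.pages:
            p.refs += 1
        return Process(list(self.pages))

    def write(self, idx, pos, value):
        page = self.pages[idx]
        if page.refs > 1:          # shared: take a private copy first
            page.refs -= 1
            page = Page(page.data)
            self.pages[idx] = page
        page.data[pos] = value

parent = Process([Page(b"hello")])
child = parent.fork()
child.write(0, 0, ord("j"))           # triggers the copy
print(parent.pages[0].data.decode())  # hello (parent unaffected)
print(child.pages[0].data.decode())   # jello
```

Reads never trigger a copy; only the first write to a shared page pays the copying cost.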
No Free Frame
- If all frames in physical memory are in use, must free one to load a page from secondary storage
- Some pages in memory are not currently active
- Which one gets paged out?
- Want to minimise number of page faults
- Don’t page out one that will be needed again by the very next operation
- If a page hasn’t been modified, no need to save it to disk (can use the copy already on disk)
Basic Page Replacement
- Find the location of the desired page on disk
- Find a free frame:
* If there is a free frame, use it
* If there is no free frame, select a victim frame
* Write victim frame to disk if “dirty” i.e. modified
- Bring the desired page into the (newly) free frame; update the page and frame tables
- Continue the process by restarting the instruction that caused the trap
Note: if no frames are free, a page fault may require two page transfers (one out, one in), increasing the Effective Access Time
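The steps above can be sketched as a single fault handler with the replacement policy passed in; every name here is hypothetical:

```python
def handle_page_fault(page, free_frames, memory, choose_victim, disk):
    """memory maps frame -> (page, dirty bit); choose_victim is any
    replacement policy (FIFO, LRU, ...)."""
    if free_frames:                       # a free frame exists: use it
        frame = free_frames.pop()
    else:                                 # no free frame: select a victim
        frame = choose_victim(memory)
        victim, dirty = memory.pop(frame)
        if dirty:                         # write back only if modified
            disk[victim] = "written back"
    memory[frame] = (page, False)         # bring in the desired page, clean
    return frame                          # caller restarts the instruction

memory, disk = {0: ("A", True)}, {}
handle_page_fault("B", [], memory, lambda m: 0, disk)
print(memory, disk)  # {0: ('B', False)} {'A': 'written back'}
```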
Page and Frame Replacement
- Page replacement algorithm
- Used to decide which page to replace when there is no free frame
- Want lowest page faults
- Frame allocation algorithm determines
- How many frames to give each process
- Which frames to replace
- In general, more frames → fewer page faults
First-in-first-out (FIFO) page replacement
- Replace oldest page in memory
- Don’t need to record times, just maintain a FIFO queue
- Easy to understand and implement
- Does not always perform well
- First page may contain a heavily referenced variable that will soon be needed again
- In some circumstances, page faults may increase with more page frames (Belady’s anomaly)
Conditions for Belady’s Anomaly
- Algorithm-Specific: Belady’s Anomaly primarily occurs with algorithms like FIFO
- Specific Access Patterns: The anomaly depends on the sequence of memory accesses (page reference string) and how the algorithm evicts pages.
- Explanation: Evicting the “oldest” page without considering how frequently or recently it was used can lead to:
- Inefficient replacements
- Increased page faults
- Increasing memory does not always improve performance
- Highlights the limitations of algorithms like FIFO and emphasises the importance of smarter page replacement strategies
Optimal Page Replacement
- Belady’s anomaly led to the search for an optimal page replacement algorithm
- One with the lowest possible page fault rate
- That should never suffer from Belady’s anomaly
- Such an algorithm does exist!
- Evicts the page that will not be used for the longest time in the future
- Unfortunately, ideal but impractical: very difficult to know which page this is, as it requires future knowledge
- Need to find something suboptimal, but is there something better than FIFO?
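Although impractical online, the optimal policy (Belady's MIN) is easy to simulate offline, since the whole reference string is known. A sketch:

```python
def optimal_faults(refs, nframes):
    """Count faults under optimal replacement: evict the page whose
    next use lies farthest in the future."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                future = refs[i + 1:]
                # pages never referenced again get an infinite "next use"
                victim = max(frames, key=lambda p: future.index(p)
                             if p in future else float("inf"))
                frames.remove(victim)
            frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))  # 7 faults, vs 9 for FIFO on this string
```

This is the lower bound any real algorithm is measured against.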
Least-recently-used (LRU)
- Instead of replacing oldest page (in terms of residence)
- Evict the page that has not been used for the longest time.
- Recent past treated as an approximation of the near future
- Implementation not so simple
- Option A - Counter: the CPU maintains a counter which is incremented for every memory reference. When a page is referenced, the counter is copied to a field on the page. Remove the page with the smallest value in this field.
- Option B - Stack: the stack contains page numbers. When a page is referenced, it is moved to the top of the stack. Remove the page from the bottom of the stack.
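The stack option maps naturally onto Python's `OrderedDict`, which keeps insertion order and can move entries to the end in O(1). A sketch of LRU fault counting:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count faults under LRU: evict the page unused for the longest time.
    The OrderedDict plays the role of the stack described above."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # referenced: move to the "top"
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict from the "bottom" (LRU)
            frames[page] = None
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10 faults on this particular string
```

Note that LRU is a stack algorithm, so unlike FIFO it can never exhibit Belady's anomaly, even when a specific string (like this one) happens to favour FIFO.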
Hardware Support
- Exact LRU is not practical without hardware support because of the high computational and memory overhead involved in tracking every reference
- Use of a reference bit (set when a page is referenced) allows determination of whether a page has been used
- although not when it was used (the bit is periodically reset)
- If no hardware support is available, modern operating systems typically use approximations like:
- Clock Algorithm
- Aging
- which achieve similar performance with much less complexity
Counting-based Page Replacement
- Least frequently used (LFU)
- Not good if a process has different phases of operation
- Most frequently used (MFU)
- On the principle that the most frequently used page might no longer be needed
- Neither algorithm commonly used
- Implementation expensive
- Do not approximate optimal solution well
Fixed Allocation
- Distribute total available frames evenly amongst the number of processes
- Distribute total available frames proportionately according to the size of processes
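The proportional scheme reduces to a one-line formula: each process gets total_frames × (its size / total size). A sketch; process names and sizes are made up, and real systems also enforce a minimum allocation per process:

```python
def proportional_allocation(sizes, total_frames):
    """Allocate frames to each process in proportion to its size."""
    total_size = sum(sizes.values())
    return {pid: total_frames * size // total_size
            for pid, size in sizes.items()}

# e.g. 62 free frames shared by a 10-page and a 127-page process
print(proportional_allocation({"p1": 10, "p2": 127}, 62))
# {'p1': 4, 'p2': 57} (the remaining frame is lost to rounding)
```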
Global vs Local - Frame Allocation
- Global replacement: replacement frame can come from the set of all frames
- One process can “steal” from another
- Execution time of an individual process can vary greatly
- Greater throughput
- Local replacement: replacement frame can only come from frames of the same process
- More consistent per-process performance
- More likely to under-utilise memory
- Modern OS often use a hybrid approach
- Combine aspects of both policies to balance flexibility and fairness
Thrashing
- If a process does not have “enough” pages, the page-fault rate is very high
- Page fault to get page
- Replace existing page
- Quickly need replaced frame back
- This leads to low CPU utilisation – the system is more focused on disk I/O
- Seeing low CPU utilisation, the OS may increase the degree of multiprogramming, making things worse!
- Thrashing:
- A process busy swapping pages in and out
- More time swapping pages between memory and disk than executing
actual instructions