Memory Management Flashcards

Week 2.9 (75 cards)

1
Q

define swapping

A
  • moves inactive processes from memory to disk
  • OS swaps processes in/out to maximise CPU utilisation
  • introduces I/O overhead - if overused, performance degrades
  • allows more processes to be managed in limited memory
2
Q

describe fixed-size partitioning

A
  • divided into fixed-size partitions
  • unequal-sized partitions can be used to reduce waste
  • process is placed in the smallest available partition that fits
  • wasted memory occurs when a process doesn’t fully utilise its allocated partition
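The placement rule above can be sketched in Python (the partition sizes and the `best_fit` name are made-up examples, not from the cards):

```python
# Best-fit placement into fixed-size partitions (illustrative sketch).

def best_fit(partitions, process_size):
    """Return index of the smallest free partition that fits, else None."""
    best = None
    for i, (size, free) in enumerate(partitions):
        if free and size >= process_size:
            if best is None or size < partitions[best][0]:
                best = i
    return best

# partitions: list of (size_in_KB, is_free)
parts = [(100, True), (500, True), (200, True), (300, True)]
idx = best_fit(parts, 212)        # smallest free partition >= 212 KB
waste = parts[idx][0] - 212       # internal fragmentation: 300 - 212 = 88 KB
```

The leftover 88 KB inside the chosen partition is exactly the "wasted memory" the card describes.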
3
Q

describe variable-size partitioning

A
  • each process gets the exact amount of memory it needs
  • allocated dynamically - reducing wasted space
  • leads to fragmentation over time
4
Q

what is compaction

A

solves fragmentation by shifting processes to group free memory together into large blocks
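The shifting step can be sketched as follows (the block layout and names here are invented for illustration):

```python
# Compaction sketch: slide all processes to low addresses so free memory
# forms one contiguous block at the top.

def compact(blocks):
    """blocks: list of (pid_or_None, size); None marks a hole.
    Returns (compacted_blocks, new_base_addresses)."""
    addr, out, bases = 0, [], {}
    for pid, size in blocks:
        if pid is not None:             # keep processes, in order
            bases[pid] = addr           # each process gets a new base address
            out.append((pid, size))
            addr += size
    total = sum(size for _, size in blocks)
    out.append((None, total - addr))    # one merged hole at the top
    return out, bases

mem = [("A", 100), (None, 50), ("B", 200), (None, 150), ("C", 80)]
compacted, bases = compact(mem)
# the two holes (50 + 150) merge into a single 200-unit free block;
# note B's base address changed, which is the challenge on the next card
```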

5
Q

what is the challenge with compaction

A

moving processes means changing their memory addresses

6
Q

solution to compaction challenge

A
  • OS uses logical addresses, which the processor converts to physical addresses at run time
7
Q

what is address translation

A

processor uses a base address register to map logical addresses dynamically
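A minimal sketch of base-register translation (register values are illustrative):

```python
# Dynamic relocation: logical address + base register, with a limit check.

def translate(logical, base, limit):
    """Map a logical address to a physical one via the base register."""
    if logical >= limit:
        raise MemoryError("address outside process bounds")
    return base + logical

# After compaction moves the process, only the base register changes;
# the program's logical addresses stay the same.
phys = translate(0x0040, base=0x5000, limit=0x1000)   # 0x5040
```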

8
Q

what is memory divided into in paging

A

fixed-size frames

9
Q

what are programs divided into in paging

A

fixed-size pages

10
Q

what is paging

A

OS assigns pages to available frames

11
Q

what are page tables

A
  • pages do not need to be stored in contiguous frames
  • OS maintains a page table for each process
  • logical addresses = page number + offset
  • processor uses the page table to translate logical to physical addresses
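The page-number + offset translation can be sketched like this (4 KB pages and the frame numbers are assumptions for the example):

```python
# Paging translation sketch: logical address = page number + offset.

PAGE_SIZE = 4096                       # 2**12, i.e. a 12-bit offset

def paged_translate(logical, page_table):
    page = logical // PAGE_SIZE        # page number (high bits)
    offset = logical % PAGE_SIZE       # offset within the page (low bits)
    frame = page_table[page]           # frames need not be contiguous
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 7}        # per-process mapping: page -> frame
pa = paged_translate(1 * PAGE_SIZE + 100, page_table)  # page 1 -> frame 2
```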
12
Q

describe demand paging

A
  • loads process pages into memory only when they are needed
  • principle of locality = programs use a small subset of their code and data at any time
13
Q

what is a page fault

A

occurs when a required page is not in memory - OS loads it

14
Q

what is a page replacement

A

necessary when memory is full and a new page must be brought in - another page must be evicted to make room
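The fault-and-evict cycle can be sketched with a FIFO replacement policy (the reference string and frame counts below are invented for illustration):

```python
# Demand paging with FIFO replacement: count page faults as pages
# are demanded; evict the oldest resident page when memory is full.
from collections import deque

def fifo_faults(refs, n_frames):
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                    # page already resident: no fault
        faults += 1                     # page fault: OS must load the page
        if len(frames) == n_frames:     # memory full: evict oldest page
            frames.discard(order.popleft())
        frames.add(page)
        order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
faults3 = fifo_faults(refs, 3)          # 9 faults with 3 frames
```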

15
Q

what results from excessive page faults

A

thrashing - system spends more time swapping pages than executing processes

16
Q

what does virtual memory add to demand paging

A
  • demand paging removes the need to load entire processes into main memory
  • a process can be larger than physical memory
  • OS manages memory allocation dynamically - large programs easier to execute
  • real memory = actual physical memory where execution occurs
  • virtual memory - perceived memory space, extended onto disk storage
  • enables efficient multiprogramming
17
Q

what is the page table size problem

A
  • each process has its own page table mapping virtual pages to physical frames
  • large virtual memory space leads to huge page tables
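The scale of the problem as a back-of-envelope calculation (32-bit addresses, 4 KB pages and 4-byte entries are assumed textbook values, not from the cards):

```python
# How big does a flat page table get?

VA_BITS, PAGE_BITS, ENTRY_BYTES = 32, 12, 4
entries = 2 ** (VA_BITS - PAGE_BITS)      # one entry per virtual page
table_bytes = entries * ENTRY_BYTES       # per process!
# entries = 1,048,576 and table_bytes = 4 MiB for EVERY process;
# with 64-bit addresses a flat table becomes completely impractical
```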
18
Q

describe page tables in virtual memory

A
  • page tables are kept in virtual memory
  • when a process runs, at least part of its page table must be in memory
  • the page table entries for the currently executing pages must always be accessible
19
Q

what are the 2 solutions to the page table & virtual memory problem

A
  1. two-level paging
  2. inverted paging
20
Q

describe two-level paging

A
  • a single large page table is split into smaller page tables to save memory
  • Page Directory (Level 1) stores pointers to these smaller page tables
  • each Page Table (Level 2) fits within a single memory page
21
Q

what 3 things is the virtual address made up of in two-level paging

A
  1. page directory index - to find the correct page table
  2. page table index - to locate page inside that table
  3. offset - to get the exact byte
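The three-way split can be sketched in Python; a 10/10/12-bit split is assumed here (the classic 32-bit x86 layout):

```python
# Splitting a 32-bit virtual address for two-level paging.

def split_va(va):
    directory = (va >> 22) & 0x3FF     # top 10 bits: page directory index
    table     = (va >> 12) & 0x3FF     # next 10 bits: page table index
    offset    = va & 0xFFF             # low 12 bits: byte within the page
    return directory, table, offset

d, t, off = split_va(0x00403025)       # directory 1, table 3, offset 0x25
```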
22
Q

describe an inverted page table

A
  • virtual page number is mapped into a hash value using a hash function
  • hash value points to an entry in the inverted page table
  • there is one IPT entry per physical memory frame, not per virtual page
  • a fixed amount of memory is allocated for IPT, independent of the number of processes
  • if multiple virtual pages map to the same hash value, a chaining method resolves conflicts
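The hash-and-chain lookup can be sketched like this (the table size and mappings are invented; Python's built-in `hash` stands in for the hardware hash function):

```python
# Inverted page table sketch: one entry per physical frame, keyed by
# (pid, virtual page), with chaining to resolve hash collisions.

N_FRAMES = 8                            # IPT size = number of frames, fixed

def ipt_build(mappings):
    """mappings: {(pid, vpage): frame}. Build hash buckets with chains."""
    buckets = [[] for _ in range(N_FRAMES)]
    for (pid, vpage), frame in mappings.items():
        h = hash((pid, vpage)) % N_FRAMES
        buckets[h].append((pid, vpage, frame))   # chain handles collisions
    return buckets

def ipt_lookup(buckets, pid, vpage):
    h = hash((pid, vpage)) % N_FRAMES
    for p, v, frame in buckets[h]:      # walk the chain
        if (p, v) == (pid, vpage):
            return frame
    return None                         # not resident: page fault

table = ipt_build({(1, 0): 3, (1, 7): 5, (2, 0): 6})
```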
23
Q

what is the performance issue with virtual memory & paging

A
  • every memory access requires two physical accesses: one for translation, one for the data
  • doubles memory access time, reduces system performance
24
Q

what is fetched in the first access using virtual memory

A

fetch the page table entry to translate the virtual address

25
Q

what is fetched in the second access using virtual memory

A

retrieve the actual data from memory

26
Q

what is the solution to virtual memory's performance issue

A

translation lookaside buffer (TLB)

27
Q

describe a TLB

A
  • caches recently used page table entries - principle of locality
  • reduces the need to access the full page table
  • a TLB hit = fewer memory accesses = better performance
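The TLB behaviour can be sketched as a small LRU cache in front of the page table (capacity and table contents are illustrative):

```python
# TLB sketch: a tiny cache of recent page -> frame translations.
from collections import OrderedDict

class TLB:
    def __init__(self, capacity=4):
        self.cache = OrderedDict()      # page -> frame
        self.capacity = capacity

    def lookup(self, page, page_table):
        if page in self.cache:          # TLB hit: skip the page-table walk
            self.cache.move_to_end(page)
            return self.cache[page], True
        frame = page_table[page]        # TLB miss: extra memory access
        self.cache[page] = frame
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return frame, False

tlb = TLB()
pt = {0: 9, 1: 4, 2: 7}
_, hit1 = tlb.lookup(1, pt)             # miss: first access
_, hit2 = tlb.lookup(1, pt)             # hit: locality pays off
```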
28
Q

describe the 4 step process from code to virtual address

A
  1. program code
  2. compilation & virtual address assignment
  3. OS memory management - assigns virtual memory to the process
  4. CPU fetches instruction - receives instruction containing virtual address

29
Q

describe the 3 step process of virtual to physical address translation

A
  1. address translation - CPU checks TLB for the virtual address - hit = retrieves physical address, else continues to the page table in memory
  2. cache check - CPU has the physical address & checks the cache - hit = loads/stores data, else fetches from RAM
  3. memory access - if data is not in cache, CPU accesses RAM & retrieves/stores the value
30
Q

why do we need paging

A
  • fixed & variable-size partitioning do not work well
  • flexibility in swapping comes at the price of address changes
  • virtual addresses - virtual to physical address translation needed
  • demand paging - pages loaded only when needed
  • 2-level paging & IPT keep page tables manageable
  • TLB reduces translation time & overhead

31
Q

describe segmentation

A
  • memory = variable-size logical parts = segments
  • logical address = segment number + offset
  • each segment has a base address & size limit, stored in a segment table
  • protection through access permissions
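The base-and-limit translation above can be sketched like this (segment table contents are invented for the example):

```python
# Segmentation sketch: logical address = segment number + offset,
# checked against each segment's base and limit.

def seg_translate(segment, offset, seg_table):
    base, limit = seg_table[segment]
    if offset >= limit:                 # limit check = basic protection
        raise MemoryError("offset beyond segment limit")
    return base + offset

seg_table = {0: (0x1000, 0x400),        # code: base 0x1000, limit 1 KB
             1: (0x8000, 0x200)}        # data: base 0x8000, limit 512 B
phys_addr = seg_translate(1, 0x10, seg_table)   # 0x8010
```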
32
Q

3 benefits of segmentation

A
  1. logical memory organisation
  2. independent compilation of program modules
  3. supports basic protection & modular sharing

33
Q

disadvantage of segmentation

A

less flexible than paging

34
Q

what are the 4 possible memory views in intel x86 memory management

A
  1. unsegmented unpaged memory
  2. unsegmented paged memory
  3. segmented unpaged memory
  4. segmented paged memory
35
Q

describe unsegmented unpaged memory

A
  • CPU uses physical addresses directly from instructions
  • no address translation or TLB
  • no segmentation or paging

36
Q

advantages of unsegmented unpaged memory

A
  1. minimal hardware complexity
  2. fast memory access
  3. useful in early boot stages or simple embedded systems

37
Q

limitations of unsegmented unpaged memory

A
  1. no memory protection or isolation
  2. manual memory management required by software
38
Q

describe unsegmented paged memory

A
  • memory divided into fixed-size pages
  • virtual to physical address translation through the page table
  • protection & memory management handled via paging

39
Q

advantages of unsegmented paged memory

A
  1. efficient memory use
  2. supports virtual memory & dynamic allocation
  3. common in modern OSes using a flat memory model

40
Q

limitation of unsegmented paged memory

A

no logical separation between code, data & stack
41
Q

describe segmented unpaged memory

A
  • memory divided into logical segments of variable size
  • segment = base address, limit & access rights
  • virtual to physical address translation through the segment table

42
Q

advantages of segmented unpaged memory

A
  1. logical separation of memory
  2. logical access control at segment level
  3. each segment can grow dynamically

43
Q

limitations of segmented unpaged memory

A
  1. no demand paging or full virtual memory support
  2. external fragmentation due to variable-size segments
  3. less flexible for modern multitasking OS needs
44
Q

describe segmented paged memory

A
  • memory divided into logical segments
  • each segment divided into fixed-size pages
  • both segment & page tables used for virtual address translation

45
Q

advantages of segmented paged memory

A
  1. combines logical memory organisation with efficient physical memory use
  2. enables fine-grained protection at both segment and page levels
  3. supports large, modular programs & demand paging

46
Q

limitations of segmented paged memory

A
  1. more complex hardware & translation steps
  2. slower address translation compared to flat paged memory
  3. rarely used in modern 64-bit systems

47
Q

what are the 2 forms of segmentation protection

A
  1. privilege levels
  2. access attributes
48
Q

what are privilege levels

A

control access rights

49
Q

what are the privilege levels

A
  • 0: memory management, protection and access control
  • 1: most of the OS
  • 2: reserved or specialised applications (e.g. databases)
  • 3: user applications

50
Q

what are access attributes

A
  • define read/write or execute permissions
  • data segments - read/write or read-only
  • program segments - read/execute or read-only
51
Q

what is a virtual address made of (bits)

A
  • 16-bit segment reference
  • 32-bit offset

52
Q

what 3 things are used to reference a segment in memory

A
  1. index - entry in a segment table
  2. table indicator - determines whether the segment is in the GDT or LDT
  3. requestor privilege level
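Those three fields live in the 16-bit segment selector and can be pulled apart with bit masks (the selector value `0x002B` is just an example):

```python
# Decoding a 16-bit x86 segment selector: bits 0-1 are the requestor
# privilege level (RPL), bit 2 the table indicator (0 = GDT, 1 = LDT),
# bits 3-15 the descriptor-table index.

def decode_selector(sel):
    rpl   = sel & 0x3
    ti    = (sel >> 2) & 0x1
    index = sel >> 3
    return index, ti, rpl

idx, ti, rpl = decode_selector(0x002B)   # index 5, GDT, RPL 3
```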
53
Q

what is in a 64-bit entry in a segment table

A
  • base address
  • segment limit
  • type
  • descriptor privilege
  • granularity
  • present
  • default operation size

54
Q

describe how address translation works using segmentation in x86

A
  • virtual address = segment selector + offset
  • selector used to index the GDT - descriptor provides the base address
  • linear address (after segmentation) = base + offset - with no paging this becomes the physical address
  • paging (if enabled): linear address = directory, table & offset
  • directory identifies the page table - page table entry gives the physical frame - offset = byte within the page
55
Q

where is intel x86 most commonly applied

A

desktops, laptops & servers

56
Q

where is ARM commonly applied

A

mobile devices, tablets & power-efficient systems

57
Q

what is ARM memory management

A
  • does not support segmentation
  • cache organised as either logical/virtual or physical

58
Q

what is the address translation process in ARM

A
  1. CPU receives a virtual address
  2. TLB checks if a translation exists
  3. TLB miss = page table consulted to translate virtual to physical address
  4. physical address is then checked in the cache
59
Q

describe memory access control in ARM

A
  • when the CPU requests an address, the TLB checks both the address mapping & access permissions
  • the processor's access request is checked against access control bits in the TLB; if not found, the check is performed in memory
  • if a processor attempts unauthorised access, the ACH generates a signal to the processor

60
Q

what are 2 supported types of memory mapping in ARM

A
  1. sections
  2. pages

61
Q

what are sections

A

large memory blocks for fast translation

62
Q

what are pages in ARM

A

smaller blocks for finer access control
63
Q

with what 3 reasons did we move from sequential execution to parallel processing

A
  1. micro-operations
  2. pipelining
  3. superscalar processors

64
Q

why did micro-operations help move to parallel processing

A

multiple control signals execute simultaneously within an instruction

65
Q

why did pipelining help move to parallel processing

A

overlapping fetch, decode & execute stages

66
Q

how did superscalar processors help move to parallel processing

A

multiple execution units within a single CPU handling different instructions in parallel

67
Q

what are the 4 types of parallel processor systems

A
  1. SISD
  2. SIMD
  3. MISD
  4. MIMD
68
Q

describe SISD

A
  • a single processor executes one instruction at a time on one piece of data
  • sequential execution, no parallelism

69
Q

describe SIMD

A
  • one instruction controls multiple processing units
  • parallel execution of the same operation on multiple data points
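The SIMD idea in miniature: one operation applied across every lane of a vector at once (a pure-Python sketch; real SIMD happens in hardware lanes, and the pixel values are made up):

```python
# One instruction, multiple data: the same ADD applied to every element.

def simd_add(a, b):
    """Apply the same operation to every lane of two equal-length vectors."""
    return [x + y for x, y in zip(a, b)]

pixels_a = [10, 20, 30, 40]
pixels_b = [1, 2, 3, 4]
result = simd_add(pixels_a, pixels_b)   # [11, 22, 33, 44]
```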
70
Q

use of SIMD

A

GPU image rendering

71
Q

describe MISD

A

multiple processors execute instructions on the same piece of data

72
Q

use of MISD

A
  • mainly found in specialised applications
  • fault-tolerant systems in a spacecraft

73
Q

describe MIMD

A
  • multiple processors execute different instructions on different data simultaneously
  • high flexibility & efficiency

74
Q

use of MIMD

A

used in most modern multiprocessor systems