Week 7: Memory Management Flashcards

1
Q

describe memory management

A

Management of a limited resource:
(Memory hunger of applications increases with capacity!)
⇒ Sophisticated algorithms needed, together with support from
HW and from compiler and loader.

2
Q

what is logical address

A

Key point: the program's view of memory is a set of memory cells starting
at address 0x0 and finishing at some value (the logical address)

3
Q

what is physical address

A

Hardware view: a set of memory cells starting at address 0x0 and
finishing at some value (the physical address)

4
Q

goal of memory management

A

Want to be able to store memory of several programs in main memory at the same time

5
Q

how can we manage memory

A

Need a suitable mapping from logical addresses to physical addresses
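
As a concrete illustration of such a mapping, here is a minimal sketch of execution-time binding through a base (relocation) register and a limit register; the base/limit values and function name are made up for illustration:

```python
def relocate(logical_addr, base, limit):
    """Map a logical address to a physical one; trap if out of range."""
    if logical_addr >= limit:
        raise MemoryError("trap: address beyond limit register")
    return base + logical_addr

# A process loaded at physical address 0x4000 and granted 0x1000 bytes:
print(hex(relocate(0x0042, base=0x4000, limit=0x1000)))  # → 0x4042
```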

6
Q

what happens at compile time

A

absolute references are generated (eg MS-DOS .com-files)

7
Q

what happens at load time

A

address binding can be done by a special program (a relocating loader)

8
Q

what happens at execution time

A

address binding at execution time needs HW support

9
Q

what can we use to take address mapping one step further

A

dynamic linking

10
Q

what is dynamic linking

A

only one copy of a system library is kept; linking to it is deferred until execution time

11
Q

how does os help memory management

A

by making the same code accessible to more than one process (shared code)

12
Q

why does swapping happen

A

memory demand is too high

13
Q

what happens in swapping

A

If memory demand is too high, the memory of some processes is
transferred to disk
Usually combined with scheduling: low-priority processes are
swapped out
Problems:
Big transfer time
What to do with pending I/O?
The first point is the reason why swapping is not the principal
memory management technique

14
Q

what are problems with swapping

A

Big transfer time
What to do with pending I/O?

15
Q

what is swapping usually combined with

A

scheduling

16
Q

is swapping a primary memory management technique

A

no ⇒ seen as a last resort

17
Q

what is fragmentation caused by

A

swapping

18
Q

what are the two types of fragmentation

A

internal and external

19
Q

describe internal fragmentation

A

program is only a little smaller than its hole ⇒ leftover too small to qualify as a hole

20
Q

describe external fragmentation

A

over time, many small holes appear in memory

21
Q

what are the 4 strategies to choose holes in fragmentation

A

first fit
rotating first fit
best fit
buddy system
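
The first and third strategies can be sketched as follows; the free list of (start, size) holes is invented for illustration:

```python
# Hypothetical free list of (start, size) holes, in address order.
holes = [(0, 100), (300, 40), (500, 400), (1200, 60)]

def first_fit(holes, size):
    """Scan from the beginning; take the first hole that is big enough."""
    for start, hole_size in holes:
        if hole_size >= size:
            return start
    return None

def best_fit(holes, size):
    """Take the smallest hole that is still big enough."""
    usable = [(hole_size, start) for start, hole_size in holes if hole_size >= size]
    return min(usable)[1] if usable else None

print(first_fit(holes, 50))  # → 0 (first hole large enough)
print(best_fit(holes, 50))   # → 1200 (size 60 is the tightest fit)
```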

22
Q

what is first fit

A

Start from beginning and use first available hole

23
Q

what is rotating first fit

A

start searching after the last assigned part of memory (wrapping around)

24
Q

what is best fit

A

find smallest usable space

25
Q

what is buddy system

A

Holes come in sizes that are powers of 2
The smallest possible hole is used, split in two (repeatedly) if necessary
Two adjacent holes ("buddies") of the same size are recombined when freed

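Two of the buddy-system calculations can be sketched like this (sizes and addresses invented; a real allocator also tracks free lists per size):

```python
def round_up_pow2(n):
    """Smallest power of two >= n (the hole size actually used)."""
    p = 1
    while p < n:
        p *= 2
    return p

def buddy_of(addr, size):
    """Address of the adjacent same-sized hole to recombine with:
    flip the bit corresponding to the block size."""
    return addr ^ size

print(round_up_pow2(100))             # → 128
print(hex(buddy_of(0x2000, 0x1000)))  # → 0x3000
```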
26
Q

how can we avoid external fragmentation

A

paging

27
Q

describe paging

A

Alternative approach: assign memory in chunks of a fixed size (pages)
⇒ avoids external fragmentation
Translation of logical address to physical address done via page table

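A sketch of page-table translation, assuming 4 KiB pages and a made-up page table (frame numbers are illustrative only):

```python
PAGE_SIZE = 4096

page_table = {0: 7, 1: 3, 2: 11}   # page number -> frame number

def translate(logical_addr):
    """Split the logical address into page number and offset, then map."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]       # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1042)))  # page 1, offset 0x42 → 0x3042
```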
28
Q

describe hardware use in paging

A

Hardware support mandatory for paging:
If page table small, use fast registers
Store large page tables in main memory, but cache most recently used entries

29
Q

what do we use if page table is small

A

fast registers

30
Q

what do we use if page table is large

A

Store large page tables in main memory, but cache most recently used entries

31
Q

general principle in paging

A

Whenever large lookup tables are required, use a cache (small but fast storage) to store the most recently used entries

32
Q

describe memory protection in paging

A

Memory protection easily added to paging: protection information stored in page table

33
Q

describe idea of segmentation

A

Divide memory according to its usage by programs:
Data: mutable, different for each instance
Program Code: immutable, same for each instance
Symbol Table: immutable, same for each instance, only necessary for debugging

34
Q

describe hardware in segmentation

A

Again requires HW support: same principle as paging, but an overflow check on the offset is also needed

35
Q

describe difference between paging and segmentation

A

Paging motivated by ease of allocation, segmentation by use of memory
⇒ combination of both works well (eg 80386)

36
Q

describe virtual memory

A

Idea: complete separation of logical and physical memory
⇒ program can have an extremely large amount of virtual memory
Works because most programs use only a small fraction of their memory intensively
Efficient implementation tricky
Reason: enormous difference between
- memory access speed (ca. 60 ns)
- disk access speed (ca. 6 ms for hard disks, 60 μs for SSDs)

37
Q

what is virtual memory implemented as

A

demand paging

38
Q

describe demand paging

A

Two strategic decisions to be made:
which process to "swap out" (move whole memory to disk and block the process): swapper
which pages to move to disk when an additional page is required: pager
Minimisation of the rate of page faults (page has to be fetched from disk) is crucial
If we want at most 10% slowdown due to page faults, require fault rate p < 10^-6 for HDD and p < 10^-4 for SSD

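The fault-rate bounds can be checked with the effective-access-time formula (1-p)·t_mem + p·t_fault, using the 60 ns / 6 ms / 60 μs figures from the card:

```python
MEM_NS = 60     # memory access time
HDD_NS = 6e6    # 6 ms page-fault service time on a hard disk
SSD_NS = 6e4    # 60 us page-fault service time on an SSD

def max_fault_rate(service_ns, slowdown=0.10):
    """Largest p with (1-p)*MEM_NS + p*service_ns <= (1+slowdown)*MEM_NS."""
    return slowdown * MEM_NS / (service_ns - MEM_NS)

print(max_fault_rate(HDD_NS))  # ≈ 1e-6
print(max_fault_rate(SSD_NS))  # ≈ 1e-4
```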
39
Q

what are the 3 page replacement algorithms

A

FIFO
Optimal algorithm
Least recently used

40
Q

describe FIFO

A

easy to implement, but does not take locality into account
Further problem: an increase in the number of frames can cause an increase in the number of page faults (Belady's anomaly)

41
Q

describe optimal algorithm

A

select the page which will be re-used at the latest time (or not at all)
⇒ not implementable, but good for comparisons

42
Q

describe least recently used algorithm

A

use the past as a guide for the future and replace the page which has been unused for the longest time

43
Q

problems with least recently used algorithm

A

Requires a lot of HW support

44
Q

how to fix least recently used algorithm

A

Stack in microcode
Approximation using a reference bit: HW sets the bit to 1 when the page is referenced
Now use the FIFO algorithm, but skip pages with reference bit 1, resetting it to 0
⇒ second-chance algorithm

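The second-chance scan can be sketched like this (the frame names and reference bits are invented for illustration):

```python
from collections import deque

def second_chance(frames, referenced):
    """Pick a victim frame: skip frames whose reference bit is set,
    clearing the bit as we pass (their 'second chance')."""
    queue = deque(frames)
    while True:
        frame = queue.popleft()
        if referenced.get(frame, 0):
            referenced[frame] = 0   # clear the bit, give a second chance
            queue.append(frame)
        else:
            return frame            # victim: not recently referenced

ref_bits = {"A": 1, "B": 1, "C": 0, "D": 1}
print(second_chance(["A", "B", "C", "D"], ref_bits))  # → C
```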
45
Q

describe thrashing

A

if a process lacks frames it uses constantly, the page-fault rate is very high
⇒ CPU throughput decreases dramatically
⇒ disastrous effect on performance

46
Q

what are the 2 solutions to thrashing

A

working set model
page fault frequency

47
Q

what is the working set model

A

Working-set model (based on locality):
define the working set as the set of pages used in the most recent ∆ page references
keep only the working set in main memory
⇒ achieves high CPU utilisation and prevents thrashing
Difficulty: determining the working set!
Approximation: use reference bits; copy them every 10,000 references and define the working set as the pages with reference bit set

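The working-set definition itself is easy to state in code (the reference string below is invented):

```python
def working_set(reference_string, delta):
    """Distinct pages among the most recent `delta` page references."""
    return set(reference_string[-delta:])

refs = [1, 2, 1, 3, 4, 4, 4, 2]
print(sorted(working_set(refs, 4)))  # → [2, 4]
```

The hard part, as the card notes, is that a real OS cannot keep the full reference string and must approximate it with reference bits.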
48
Q

what is the page fault frequency solution

A

Takes a direct approach:
- give a process additional frames if its page-fault rate is high
- remove a frame from a process if its page-fault rate is low

49
Q

four segments in linux memory layout

A

Kernel Code
Kernel Data
User Code
User Data

50
Q

kernel address and user address in 32 bit architecture

A

Have separate logical addresses for kernel and user memory
For 32-bit architectures (4 GB virtual memory):
kernel space addresses are the upper 1 GB of address space (≥ 0xC0000000)
user space addresses are 0x0 to 0xBFFFFFFF (lower 3 GB)
kernel memory always mapped but protected against access by user processes

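The 32-bit split above can be checked with a tiny predicate (the constant and function name are ours, not a kernel API):

```python
KERNEL_BASE_32 = 0xC0000000  # start of the upper 1 GB on 32-bit Linux

def is_kernel_addr(addr):
    """True for addresses in the kernel's upper 1 GB of the address space."""
    return addr >= KERNEL_BASE_32

print(is_kernel_addr(0xBFFFFFFF))  # → False (top of user space)
print(is_kernel_addr(0xC0000000))  # → True  (start of kernel space)
```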
51
Q

kernel address and user address in 64 bit architecture

A

For 64-bit architectures:
kernel space addresses are the upper half of the address space (≥ 0x8000 0000 0000 0000)
user space addresses are 0x0 to 0x7fff ffff ffff ffff
kernel memory always mapped but protected against access by user processes

52
Q

describe page caches in linux

A

Experience shows: have repeated cycles of allocating and freeing the same kind of objects (eg inodes, dentries)
can have a pool of pages used as a cache for these objects (so-called slab cache)
cache maintained by the application (eg file system)
kmalloc uses slab caches for commonly used sizes

53
Q

what are caches made of

A

pool of pages

54
Q

what are caches maintained by

A

the application (eg file system)

55
Q

what is kmalloc used for

A

uses slab caches for commonly used sizes