Exam 2 Flashcards

1
Q

What is Memory Management?

A

It allows virtual address spaces to be mapped to physical memory. It is supported by the OS.

Typically, virtual memory is too big to store in physical memory, so we can only store a portion in there.

Memory management also maps virtual addresses that cannot fit in physical memory over to the disk.

2
Q

What are Page Tables?

A

A data structure used by a virtual memory system in the OS to map between virtual addresses and physical addresses.

3
Q

Computer memory is NOT managed simplistically, where all programs fit into memory and each program occupies a contiguous part of memory. Why not?

A
  1. This would make memory allocation very difficult.
  2. It would be hard to make sure apps didn’t write over/run into each other.
  3. It would be hard to efficiently and consistently find the free space you need for an app.
4
Q

What is the Fixed Partition Scheme for memory management and why doesn’t it work?

A

IDEA: memory is divided into partitions, and one process is loaded into each partition. Processes are assigned a partition at load time, and the loader rewrites their memory addresses. New processes wait in a queue until a sufficiently large partition becomes available. When you compile a program, all you need to do is load it into memory and change all virtual addresses to physical addresses by adding a fixed offset.

ADVANTAGES:

  1. Simple
  2. Relatively little hardware support needed

DISADVANTAGES:

  1. Have to decide partition boundaries in advance.
  2. Inflexible.
  3. Inefficient - maybe the program doesn’t need all its content at the start but it’s still loaded in and taking space.
  4. You need to know memory requirements in advance, but this is often hard to know. You can’t use dynamic memory allocation if you do this.
5
Q

What is the Base & Limit Registers solution for memory management, and why doesn’t it work?

A

Basic idea: hardware support - two special registers (for user-mode processes)

  1. A base register, automatically added to each address as it is used. We add each virtual address to the base register to get the actual physical address.
  2. A limit register, against which all accesses are tested. This is the upper-bound address that a program cannot go beyond, which prevents programs from running over each other (see the translation sketch below).

DISADVANTAGES:

  1. Requires hardware support to do an “add” and “compare” on every access.
  2. It leads to increased overhead.
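
CODE SKETCH (not from the lecture): a minimal illustration of the "add" and "compare" the hardware performs on every access. The register values and the fault handling here are made up.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical base & limit registers for the running user-mode process. */
typedef struct {
    unsigned base;   /* start of the process's partition in physical memory */
    unsigned limit;  /* partition size; every virtual address must be < limit */
} mmu_regs;

unsigned translate(mmu_regs r, unsigned vaddr) {
    if (vaddr >= r.limit) {          /* "compare" on every access */
        fprintf(stderr, "protection fault at %u\n", vaddr);
        exit(1);
    }
    return r.base + vaddr;           /* "add" on every access */
}

int main(void) {
    mmu_regs r = { .base = 40000, .limit = 10000 };
    printf("virtual 1234 -> physical %u\n", translate(r, 1234)); /* 41234 */
    return 0;
}
```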
6
Q

What is the biggest challenge that Dynamic Memory Allocation presents?

A

It can create holes in physical memory. If one process ties up a block of memory and then leaves, another process may be able to take that block if it’s the same size or smaller, but if it’s bigger it can’t use any of that space.

To track the holes, we use Bitmaps and Linked Lists.

7
Q

What are Bitmaps and how do they work?

A

Suppose memory is divided into chunks of 10kb. We maintain a vector of 0’s and 1’s that specifies availability. The i’th bit tells us whether the i’th chunk is free.

For example, look at 20 bits:
00000011 00111110 0011

The OS occupies the first 20kb (the first two chunks). Reading each group of bits from right to left, this is why the two rightmost bits above are set to 1 and 1 (1 = allocated, 0 = free).

The next 30kb are free: the three 0's following the OS's 11.

If a new app comes in and requests 20kb of memory, we just look at the bitmap and find 2 consecutive 0’s. Then we can put the app there.
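
CODE SKETCH (not from the lecture): searching the bitmap for a run of free chunks. find_free_run and the example bitmap are made up for illustration; the linear scan is exactly what makes bitmaps slow (next card).

```c
#include <stdio.h>

/* bitmap[i] == 1 means chunk i (10kb each here) is allocated, 0 means free.
 * Returns the first chunk of a run of k free chunks, or -1 if none exists. */
int find_free_run(const unsigned char *bitmap, int nchunks, int k) {
    int run = 0;
    for (int i = 0; i < nchunks; i++) {
        run = bitmap[i] ? 0 : run + 1;   /* extend or reset the run of 0's */
        if (run == k)
            return i - k + 1;            /* start index of the run */
    }
    return -1;                           /* no hole big enough */
}

int main(void) {
    /* chunks 0-1 taken by the OS, then a mix of free and taken chunks */
    unsigned char bm[20] = {1,1,0,0,0, 0,0,0,0,1, 1,1,1,1,0, 0,0,0,1,1};
    printf("20kb request -> chunk %d\n", find_free_run(bm, 20, 2)); /* 2 */
    return 0;
}
```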

8
Q

What are the advantages and disadvantages of using Bitmaps?

A

ADVANTAGE: simple to implement.

DISADVANTAGE: Slow to search for free space. Allocating k units for a process requires scanning the map for k consecutive 0's.

9
Q

How are Linked Lists used to manage memory?

A

Every record in the linked list has a tag saying whether it is a hole (H) or which process occupies it, a starting location, a size, and a pointer to the next record. Say the current state is:

(H,2,3), (B,5,5), (H,10,2), (D,12,2), (H,14,6)

In the first record above, H marks it as a HOLE. The 2 is an offset of 2 * 10kb = 20kb, the starting location of this free hole. The 3 is its size, meaning the hole is 30kb wide.

When an app needs to be loaded into physical memory, we just look for a hole that is large enough to take the new entry.

The PCB of a process points to the record storing its location in memory. Example: process B has a pointer to (B,5,5).

You also need to use doubly linked lists because when a process terminates, neighboring blocks need to be examined. You may have to merge multiple adjacent blocks into 1 contiguous large hole.
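
CODE SKETCH (not from the lecture): a possible record layout and the merge-on-free step that motivates the doubly linked list. free_block is a hypothetical helper, not real kernel code.

```c
#include <stdio.h>
#include <stdlib.h>

struct mem_record {
    char tag;                  /* 'H' = hole, otherwise a process ID like 'B' */
    int  start, size;          /* in 10kb chunks, as in the example above */
    struct mem_record *prev, *next;
};

/* Free a process's block, merging with adjacent holes. The prev pointer is
 * what lets us examine and absorb the left neighbor. */
void free_block(struct mem_record *r) {
    r->tag = 'H';
    if (r->next && r->next->tag == 'H') {        /* merge right neighbor */
        struct mem_record *n = r->next;
        r->size += n->size;
        r->next = n->next;
        if (n->next) n->next->prev = r;
        free(n);
    }
    if (r->prev && r->prev->tag == 'H') {        /* merge left neighbor */
        struct mem_record *p = r->prev;
        p->size += r->size;
        p->next = r->next;
        if (r->next) r->next->prev = p;
        free(r);
    }
}

int main(void) {
    struct mem_record *a = malloc(sizeof *a), *b = malloc(sizeof *b),
                      *c = malloc(sizeof *c);
    *a = (struct mem_record){ 'H', 2, 3, NULL, b };
    *b = (struct mem_record){ 'B', 5, 5, a, c };
    *c = (struct mem_record){ 'H', 10, 2, b, NULL };
    free_block(b);   /* B terminates: three records merge into one big hole */
    printf("(%c,%d,%d)\n", a->tag, a->start, a->size);  /* (H,2,10) */
    free(a);
    return 0;
}
```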

10
Q

What are the three strategies for seeing which block of open memory is the most appropriate fit for a new process?

Consider the scenario where you have a 15kb program to store and you have holes of sizes:

  • 30kb
  • 20kb
  • 60kb
A
  • 30kb: First-Fit
  • 20kb: Best-Fit
  • 60kb: Worst-Fit

First, Best, and Worst are the three kinds of fits.
Let {H_i | i = 1,…,n} be the unused blocks and k be the size of a requested block.

First Fit: Select the first H_i such that size(H_i) >= k. Good because we don't need to traverse the whole list to find a hole.

Best Fit: Select H_j such that size(H_j) >= k and size(H_j) <= size(H_i) for every other H_i with size(H_i) >= k. In other words, select the smallest block that is big enough. This gets the tightest fit but will likely be slower, since the whole list must be scanned.

Worst-Fit: Select the largest block (reduce size of biggest hole).

Your choice of strategy affects memory fragmentation. There is no consensus on which is best.
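
CODE SKETCH (not from the lecture): the three strategies over a plain array of hole sizes, using the 15kb request from the question. A real allocator would walk the linked list from the previous cards instead.

```c
#include <stdio.h>

/* Each function returns the index of the chosen hole, or -1 if none fits. */
int first_fit(const int *h, int n, int k) {
    for (int i = 0; i < n; i++)
        if (h[i] >= k) return i;       /* stop at the first big-enough hole */
    return -1;
}

int best_fit(const int *h, int n, int k) {
    int best = -1;
    for (int i = 0; i < n; i++)        /* must scan the entire list */
        if (h[i] >= k && (best < 0 || h[i] < h[best])) best = i;
    return best;                       /* smallest hole that still fits */
}

int worst_fit(const int *h, int n, int k) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (h[i] >= k && (worst < 0 || h[i] > h[worst])) worst = i;
    return worst;                      /* largest hole available */
}

int main(void) {
    int holes[] = {30, 20, 60};        /* the scenario from this card */
    printf("first: %dkb\n", holes[first_fit(holes, 3, 15)]);  /* 30 */
    printf("best:  %dkb\n", holes[best_fit(holes, 3, 15)]);   /* 20 */
    printf("worst: %dkb\n", holes[worst_fit(holes, 3, 15)]);  /* 60 */
    return 0;
}
```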

11
Q

What is the External Memory Fragmentation Problem?

A

This is space the OS is aware of but is too small to give to new programs.

You have small holes in memory that are left behind as processes are added to and removed from memory. The holes are too small to store a contiguous program.

Example: We use best-fit to put the 15kb program in the 20kb spot. This leaves 5kb open for another process to take, but no other process can use it because it’s too small a slot.

12
Q

What is Internal Memory Fragmentation?

A

This is when we allocate more kb for a program than it ends up using, thus wasting the space it doesn’t use.

Example: We thought process B needs 50kb of memory. When it runs, it ends up only needing 40kb. 10kb is unused and the OS cannot reallocate it to another process.

13
Q

What is the Buddy Algorithm (brief overview)?

A

Used by Linux to allocate memory.

In the Buddy Algorithm, Linux maintains blocks of pages in power-of-2 sizes. Doing this makes it easy to search for a block of the size you want. Think of a page as a discrete, fixed-size unit of memory that can be allocated to a process; pages are typically 4kb (this is tunable).

14
Q

Describe the Buddy Algorithm in depth.

A

When a request for memory comes in, it is rounded up to a power of 2 (e.g. 2, 4, 8, etc.).

For a request of 8 pages, we repeatedly halve a block of memory until we reach the size we need. Say we started with a 64-page block. We cut it into 32 & 32, then cut one of the 32's into 16 & 16, and then cut one of the 16's into 8 & 8. If we get another request for 8 pages, it can take the other 8 we got from breaking up that 16.

If we then get a request for 4 pages, we take the unused 16, break it into two 8's, and break one of the 8's into two 4's, giving us a block of 4 pages.

The reverse also happens. Say we release 8 pages of memory in step h and then another 8 in step i. If the two blocks are buddies (split from the same 16-page block), they are merged back into a block of size 16.

Because the buddy algorithm allocates in powers of 2, we can maintain a fixed-size array to track all the memory chunks, ordered by size for searching. For example, if you are interested in a chunk of 8 pages, you go to the entry for chunks of size 2^3; there you find the beginning offset (in that example, 24), which locates the chunk as a number of pages from the beginning of memory. Likewise, if the user asks for a chunk of size 32, you go to the array and look in the entry for chunks of size 2^5.

*See lecture slides for visuals - that’s much clearer.
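
CODE SKETCH (not from the lecture): two building blocks of the idea, rounding a request up to a power of 2 and repeatedly halving a block. This illustrates the splitting rule, not Linux's actual implementation.

```c
#include <stdio.h>

/* Round a request (in pages) up to the next power of 2: 65 -> 128. */
unsigned round_up_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n) p <<= 1;
    return p;
}

/* Halve a block until it matches the request: 64 -> 32 -> 16 -> 8.
 * Each unused half (the "buddy") would go back on a free list. */
unsigned split_to_fit(unsigned block, unsigned want) {
    while (block / 2 >= want) {
        printf("split %u pages into %u + %u\n", block, block / 2, block / 2);
        block /= 2;
    }
    return block;
}

int main(void) {
    split_to_fit(64, round_up_pow2(8));                      /* request: 8 pages */
    printf("65 pages rounds up to %u\n", round_up_pow2(65)); /* 128 */
    return 0;
}
```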

15
Q

What is one of the biggest downsides of the Buddy Algorithm?

A

It can cause internal fragmentation.

Example: If an app asks for a 65-page chunk, we have to round up to the next power of two above 64, which is 128. That means 63 pages are essentially wasted, since the OS allocates the entire 128-page chunk even though it knows only 65 will be used.

16
Q

What is Slab Allocation?

A

This is a Linux approach to address the internal fragmentation problem of the Buddy Algorithm.

This is where we take chunks using the buddy algorithm, but carve out slabs (smaller units) from them, and manage smaller units separately.

Linux also mitigates issues with internal fragmentation by using object caches.

17
Q

What is an Object Cache?

A

Object Caches deal with the creation/destruction of objects of certain types. An object cache is essentially a pointer to one or more slabs storing objects of the same type. If you allocate and deallocate objects of some type frequently, then when you free one, rather than returning the memory for allocation to other processes, the OS may cache it. If you later allocate the same kind of object again, the OS can hand it back from the cache much more easily (and faster).

18
Q

What is Virtual Memory?

A

Virtual memory is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine," which "creates the illusion to users of a very large memory."

However, many addresses in the virtual space may not have a mapping to physical memory. This happens if they have not yet been allocated for use. If some of the virtual memory is not frequently used, it gets mapped to disk using swapping/paging.

You may have 2^64 bytes of addressable virtual memory space, but physical memory (RAM) may only be around 4-8GB (much less).

19
Q

What is a Memory Management Unit (MMU)?

A

The hardware component that translates virtual addresses to physical addresses. It takes care of relocation and of protecting one process from another.

20
Q

What is Paging?

A

This is a memory management technique where address space is divided into fixed size pages.

Physical memory is divided into chunks of memory called “page-frames” or “frames”. On Linux, each page is 4kb by default.

Virtual memory is divided into chunks called “virtual pages” or “pages”. The size of a page is equal to the size of a page frame.

In a 32-bit machine with 4kb pages, there are 2^32 / 2^12 = 2^20 pages in virtual memory.

The OS is responsible for maintaining the mapping from pages to page frames.

21
Q

What is Segmentation?

A

Address space is divided into variable-size segments (logical addressable units).

22
Q

What is a Swap File?

A

The Swap File is where memory pages go when they can't fit into physical memory. Pages are moved to and from it via the disk controller.

23
Q

Convert the below N-bit addresses to their size in memory and the number of addresses they can hold:

10-bit address
20-bit address
30-bit address

A

10-bit address: 1KB of memory; 2^10 = 1,024 addresses

20-bit address: 1MB of memory; 2^20 addresses (about 1 million)

30-bit address: 1GB of memory; 2^30 addresses (about 1 billion).

24
Q

You are interested in designing a new processor with 16-bit addresses. What is the total size of the virtual address space?

A

2^16 = 65,536 bytes (64KB)

25
Q

What does an actual virtual address look like?

A

It is a pair (P,O). P corresponds to the page number and O corresponds to the offset within the page. The lower-order bits give the offset O; this part stays the same after translation to a physical address.

The high order bits specify the page P. This is the part that gets translated.

EXAMPLE: If each page is 1kb and the virtual address is 16 bits, then low-order 10 bits give the offset, and high-order 6 bits give the page number. 10 bits is the offset since 1kb of memory requires 10 bits.
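
CODE SKETCH (not from the lecture): splitting a 16-bit virtual address into (P,O) with 1kb pages, as in the example above. The address value is arbitrary.

```c
#include <stdio.h>

int main(void) {
    unsigned va = 0x1ABC;          /* arbitrary 16-bit virtual address */
    unsigned page   = va >> 10;    /* high-order 6 bits: page number P */
    unsigned offset = va & 0x3FF;  /* low-order 10 bits: offset O (2^10 - 1) */
    printf("VA 0x%04X -> page %u, offset %u\n", va, page, offset);
    return 0;
}
```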

26
Q

What do we have fewer of, page frames or virtual pages?

A

We have fewer page frames. As a result, only some of the virtual pages map to a physical page frame. The first virtual page points to page frame 2 (see diagram on page 133 of notes).

27
Q

Explain the mappings between pages and frames.

A

Within a single page or frame, virtual addresses map to a contiguous section of physical memory. BUT, the frames themselves can be anywhere in physical memory: pages that are adjacent in virtual memory need not map to adjacent frames. See diagram on page 133 of notes.

28
Q

What is a Page Table?

A

The OS maintains a Page Table which is a mapping from page number to page frame. Think of it as two columns. One for page number, one for page frame.

Page and frame have exactly the same size. The goal for the MMU is to translate virtual page “P” (left column) into a physical frame “F” (right column). Because they have the same size though, the byte offset is actually retained (see image on page 134 of notes).

Each process/app gets its own page table.

29
Q

Describe the Paging Process.

A

The MMU is a hardware component that handles the steps below.

  1. During paging, the virtual page number is extracted from the virtual address. The virtual address is a pair (p,o).
  2. This is used to index into a page table which will contain a pointer to the actual frame.
  3. The physical frame is then retrieved and the offset is used to point to the specific word in memory that we are trying to read that corresponds to the virtual address. In this process, (p,o) is translated to (f,o) where “o” is the same for physical and virtual memory.
  4. The request for (f,o) is then sent on the memory bus to fetch data from memory.
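
CODE SKETCH (not from the lecture): the (p,o) -> (f,o) translation through a single-level page table. The table contents are made up, and a real MMU does this in hardware.

```c
#include <stdio.h>

#define PAGE_BITS 10               /* 1kb pages, as in the earlier example */
#define NPAGES    64               /* 6-bit page number */

unsigned page_table[NPAGES];       /* page number -> frame number */

unsigned translate(unsigned va) {
    unsigned p = va >> PAGE_BITS;                 /* extract page number */
    unsigned o = va & ((1u << PAGE_BITS) - 1);    /* offset is unchanged */
    unsigned f = page_table[p];                   /* index the page table */
    return (f << PAGE_BITS) | o;                  /* physical address (f,o) */
}

int main(void) {
    page_table[2] = 5;                     /* pretend page 2 lives in frame 5 */
    unsigned va = (2u << PAGE_BITS) | 123; /* page 2, offset 123 */
    printf("VA %u -> PA %u\n", va, translate(va));
    return 0;
}
```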
30
Q

Example of Paging

A

Note: Review image on page 135 while reviewing this.

P is an index into the array for translation.

In the image, the binary at the bottom is a VA which is input to the MMU. Assume the first 4 bits correspond to the page number, and the rest correspond to the page offset.

We can see the page number is 2 (0010). Page-table entry 2 holds the value "110" (i.e. 6). Page 2 is therefore stored in frame 6 of physical memory.

The MMU then outputs a sequence of bits (the outgoing physical address) whose first 3 MSB are 110, followed by the 12 offset bits copied unchanged from the incoming virtual address.

31
Q

Paging with Multiple Processes

A

Say we have 3 user processes - each has its own page table, and each process holds a pointer to its table.

In order to retrieve a memory value, we sometimes need two memory accesses: one to fetch the page-table entry and one to fetch the value of interest.

32
Q

Define the contents of a Page Table.

A
  1. Validity Bit: Set to 0 if the page is not in memory and 1 otherwise. A page may not be in memory because (A) it was never allocated, or (B) it was allocated but currently lives in the swap file. (C) If it is currently in use and in physical memory, we have a valid frame number.
  2. Frame Number: The number of bits required depends on the size of physical memory.
  3. Block Number: If the page is not in memory but on disk, this number points to its disk block in the swap file.
  4. Protection Bits: Indicate the allowed access (e.g. read-only vs. read/write) depending on the segment. If you are reading CODE, for example, pages are read-only.
  5. Referenced Bit: Set to 1 by hardware when the page is accessed. It is used by the page replacement policy: it lets us detect that some pages are used frequently, which helps the OS decide which pages to keep and which are lower priority and can be evicted to the swap file (if needed).
  6. Modified Bit (dirty bit): Set to 1 by hardware whenever the page is written; it stays 0 if you only read. A set dirty bit means that when you swap the page out, you must copy the modified frame contents back to disk. The OS tries to de-prioritize dirty pages when evicting, because writing them back is more expensive than simply discarding a clean page.
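
CODE SKETCH (not from the lecture): one way to picture a page-table entry holding these fields. The field widths are illustrative, not any real architecture's layout.

```c
#include <stdio.h>

struct pte {
    unsigned valid      : 1;  /* 1 = page is in physical memory */
    unsigned referenced : 1;  /* set by hardware on access; guides eviction */
    unsigned modified   : 1;  /* dirty bit: set on write; dirty pages must be
                                 copied back to the swap file when evicted */
    unsigned protection : 2;  /* allowed access, e.g. read-only for code */
    unsigned frame      : 20; /* frame number when valid; otherwise the page
                                 is located via its disk block number */
};

int main(void) {
    struct pte e = { .valid = 1, .frame = 5 };
    e.referenced = 1;         /* hardware would set this on an access */
    printf("page resident in frame %u\n", e.frame);
    return 0;
}
```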
33
Q

What is a Swap File?

A

The swap file starts off empty and is created on demand as pages are spilled to disk. Each page being swapped is dynamically assigned a block number in the swap file.

When pages are freed, we get holes in the swap file. A bitmap is used to identify holes so swap blocks can be recycled.

EXAMPLE: (see image on page 138).

Say we have two processes, A0 and A1. Each process has 4 virtual pages, and we have 6 memory frames in the middle. In A0's page table, page 0 is stored in memory, in frame 0. Page 1 is not in memory; it is stored in block 1 on disk. In A1, virtual page 0 is stored in frame 5, but virtual page 1 is stored in block 3 on disk. Looking at the physical frames, we see that a portion of one frame is used to store the page tables themselves. Two pages of A0 are stored in physical frames, as well as 3 pages of A1. On the right, we see that 3 blocks are used to store pages that have been swapped to disk, and there is one free block. So the bitmap has a value of 1101, indicating: taken, taken, free, taken.

34
Q

What are the 4 major design issues with swap files?

A
  1. INTERNAL FRAGMENTATION: We have been referring to page frames as 4kb, but page sizes actually range from 1kb up to 16kb, and there are tradeoffs with size. The larger the page frame, the smaller the page table. But this increases the likelihood of internal fragmentation if we are not using the full page frame.
  2. MEMORY RESTRICTIONS: Say we have a 32-bit computer. If we have 20 bits to represent the page number, that is over 1 million pages. A page table of a million entries is unlikely to fit into memory. Even if you can fit one app's table, there can be multiple apps running at the same time. Multi-level paging and inverted page tables can help with this.
  3. SLOWER PERFORMANCE: Reading page tables from memory requires an additional memory lookup. If every memory access requires one access to read the page table and another to read the memory location itself, it drastically impacts the computer's performance, because going from CPU to memory means going over the memory bus. TLBs (Translation Lookaside Buffers) help with this.
  4. PAGE FAULTS: Reading pages that are not actually in physical memory is also a problem. This is called a Page Fault, and the access traps to the kernel. One of the important things the OS needs to do is come up with a good page replacement policy: try to pick a victim page that is least likely to be used in the future.
35
Q

What is Multi-level Paging?

A

A page table can have 2^20 entries, which is too big to keep in memory, and most architectures require one page table per process.

Multi-level paging is where we break the virtual page number into 10-bit chunks. The first 10 bits index into a top-level page table T1, which has 1024 entries. Each entry in T1 points to a second-level page table with its own 1k entries (covering 4MB of memory, since each page is 4KB). The second 10 bits of the virtual address index into the second-level page table selected by the first 10 bits.

ADVANTAGE: each level requires a page table that is MUCH smaller. If a process has many virtual pages that are unused, there is no need to create the second- and third-level tables unless you need them. It is memory efficient.

DOWNSIDE: the top and middle levels can sometimes themselves be stored on disk, adding extra lookups (though if they are used frequently, there is a high likelihood they will have been swapped into physical memory).

See image on page 141 of notes.

36
Q

Example of Multi-paging.

A

Using a 32-bit machine, if a process only takes 16MB of virtual memory, it only needs 4 entries in the top-level table (each top-level entry covers 4MB, so the rest are marked unused), and only 4 second-level tables are created.

The MMU takes the virtual address at the bottom. Instead of using the whole page number portion of the VA to look up the page table, we instead break up the page number into 3 parts:

  1. The directory
  2. The middle
  3. The actual page number itself

The MMU uses the first 10 bits to index the global directory, the next 10 bits to index the middle directory, and so on, until it reaches the page and then the physical frame.
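
CODE SKETCH (not from the lecture): splitting a 32-bit virtual address into the 10-bit top-level (directory) index, 10-bit second-level index, and 12-bit offset from the previous card's scheme. The address value is arbitrary.

```c
#include <stdio.h>

int main(void) {
    unsigned va = 0x01234ABC;               /* arbitrary 32-bit address */
    unsigned dir    = va >> 22;             /* top 10 bits: directory index */
    unsigned table  = (va >> 12) & 0x3FF;   /* next 10 bits: 2nd-level index */
    unsigned offset = va & 0xFFF;           /* low 12 bits: within a 4kb page */
    printf("directory %u, table %u, offset %u\n", dir, table, offset);
    return 0;
}
```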

37
Q

What are Translation Lookaside Buffers (TLB’s)?

A

Page tables are stored in memory and accessing memory is up to 10x slower than the time taken to run instructions on the CPU.

We use a TLB in the MMU to solve this. The TLB stores a small cache (say, 64 entries) of page-table entries to avoid the usual page-table lookup.

TLB is associative memory (content addressable memory) that contains basically pairs of the form (page-#, page-frame).

Special hardware compares incoming page # in parallel with the entries in the TLB to retrieve page-frame.

If no match is found in the TLB, we do standard lookup via main memory.
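
CODE SKETCH (not from the lecture): a TLB probe written as a loop. Real hardware compares all entries in parallel, and the modified/protection bits are omitted here.

```c
#include <stdio.h>

#define TLB_SIZE 64   /* a small cache, as this card suggests */

struct tlb_entry { unsigned page, frame; int valid; };
struct tlb_entry tlb[TLB_SIZE];

/* Returns 1 on a hit (frame written to *f), 0 on a miss, in which case
 * the MMU falls back to the page-table lookup in main memory. */
int tlb_lookup(unsigned page, unsigned *f) {
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *f = tlb[i].frame;
            return 1;          /* hit: no page-table access needed */
        }
    }
    return 0;                  /* miss: do the standard lookup */
}

int main(void) {
    tlb[0] = (struct tlb_entry){ .page = 2, .frame = 5, .valid = 1 };
    unsigned f = 0;
    if (tlb_lookup(2, &f))
        printf("hit: frame %u\n", f);
    else
        printf("miss: walk the page table\n");
    return 0;
}
```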

38
Q

How do TLB’s Work?

A

When the CPU gives the MMU the VA, the MMU queries the TLB cache. If the page number is there, it takes the frame number and we avoid reading the page table.

The TLB not only maintains the page to frame number mapping, it also holds the modify bits and protection bits. The point of this is to store as much info as needed in the TLB so we can avoid going to the page table.

The TLB needs to stay in sync with the page table. The modified/referenced bits can be updated in the TLB for performance and later propagated to the page table in memory. Changes to the page table also need to be propagated to the TLB.

39
Q

What is an Inverted Page Table?

A

When virtual memory is much larger than physical memory, the overhead of storing the page table is high.

Since virtual memory is so much bigger than physical memory, rather than keeping one page-table entry per virtual page, we index physical memory, which is smaller.

So, Inverted Page Tables store entries of the form (page frame, process ID, page #), and require at most 64k entries (one per physical frame).

Given a page, we find its page frame by using a hash table. This is especially useful when the virtual address space is significantly larger than the physical address space.
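
CODE SKETCH (not from the lecture): an inverted table indexed by physical frame. find_frame is a hypothetical helper, and its linear scan is exactly what the hash table is meant to avoid.

```c
#include <stdio.h>

#define NFRAMES 64   /* one entry per physical frame, not per virtual page */

/* Each entry records which (process, page) currently occupies that frame. */
struct ipte { int used; unsigned pid, page; };
struct ipte ipt[NFRAMES];

int find_frame(unsigned pid, unsigned page) {
    for (int f = 0; f < NFRAMES; f++)
        if (ipt[f].used && ipt[f].pid == pid && ipt[f].page == page)
            return f;
    return -1;       /* not resident: page fault / swap-file lookup */
}

int main(void) {
    ipt[5] = (struct ipte){ .used = 1, .pid = 7, .page = 42 };
    printf("frame = %d\n", find_frame(7, 42));   /* 5 */
    return 0;
}
```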

40
Q

Describe the process of using an Inverted Page Table.

A
  1. Input to MMU: virtual address = (page p, offset o).
  2. Check if there is a frame f with (p,f) in the TLB.
  3. If so, physical address is (f,o).
  4. If not, lookup page-table in main memory (requires several slow accesses due to multi-level paging).
  5. If the page is present in the page table, compute the physical address.
  6. If not, issue a page fault interrupt to the OS kernel (even slower).
  7. If the memory access modifies the contents of the frame, we have to update the TLB/page-table entries (e.g. the modified bit).
41
Q

What do we do when we page fault?

A

This might happen if a process is allocating new memory or accessing memory currently not in physical memory but is swapped to disk.

If there is space in physical memory, all we need to do is bring the page from disk to physical memory and we continue our execution.

If physical memory is full, we need to write a page that is currently in physical memory out to disk. This is called Page Replacement.

42
Q

What is Page Replacement?

A

Happens when we page fault and need to remove one page from physical memory to make room for a new one that was previously on disk.

If there are no free frames in physical memory, the OS needs to select a frame in physical memory and remove its page (the victim) to disk.

In doing this, we consult the page replacement policy to select a victim. Remember, the victim might belong to the current process or to another.

Once the victim is selected, if its dirty bit is set, we copy the victim page back to the swap file. If the dirty bit is not set, we just evict the page and free up the spot.

If it's the first time the victim has been written to disk, we need to find free block(s) in the swap file to copy the page to, and note the assigned block number in the process's page table.

When the frame in physical memory is freed up, we locate the block number for the incoming page and load it from the swap file by scheduling a read transfer request. We block the current process until this is complete.

If a swapped page is used again in the near future, it will be copied back and forth to/from physical memory.

43
Q

What improves performance for Page Replacement?

A
  • You want to avoid swapping actively/recently used pages to disk.
  • Identify inactive pages not likely to be used in the future.