Memory Mgmt Flashcards

Flashcards in Memory Mgmt Deck (42):

What's wrong with this WLP4 code?
int *arr = new int[100] ;

You must initialize all pointers before using them in WLP4.
int *arr = NULL ;
arr = new int[100] ;


What are the three subsystems supported by alloc.merl (the system that allocates memory in WLP4)?

1) init
2) new
3) delete


How are nested procedures called and returned in WLP4? What are the implications of this?

In WLP4, nested procedure calls use stack frames, which are pushed and popped in LIFO order.

This means that procedure arguments, return values and local variables can all be stored elegantly on the system stack.


What is the problem with dynamic memory allocation in WLP4?

(Hint: why is it unpredictable?)

Since new and delete can be called in an unpredictable order, allocations don't necessarily follow a LIFO pattern. Dynamically allocated memory therefore cannot be stored properly in a stack, which makes it harder to manage than procedure calls.


For memory management, what are the key differences between local variables and dynamically allocated memory?

What are the consequences of these differences?

What is the solution to these problems?

1) Dynamically allocated arrays can remain even after the function that created them returns
2) Many dynamically allocated data structures can grow and shrink, meaning their size is not known at compile time.

It is not efficient to store dynamically allocated memory in the system stack.

The solution is to use the Heap for dynamic memory allocation. In our case, the Heap lives in RAM.


What is the typical memory layout?

1) Code
2) Read-only data
3) Global Data
4) Heap (grows towards the Stack)
5) Stack (starts at the highest addresses and grows down towards the Heap)


When does the size of the Heap change?

When does the size of the Stack change?

Heap - changes as memory is dynamically allocated and freed

Stack - changes as functions are called and return


What are the three tasks in memory management?

What are their equivalents in Stack/Heap?

Initialization, Allocation, Reclamation

Stack: initialization is done by the OS; allocation is push(), reclamation is pop()

Heap: init, new, delete


How can Heap memory allocation vary?

How can Heap memory reclamation vary?

Allocation can be done explicitly or implicitly, and in fixed-size or variable-size blocks.

Reclamation can be done explicitly or implicitly


True/False: Pointers can be relocated to fill spaces in between allocated memory

True, just not in C++ or WLP4


What does automatic memory allocation mean?

Some languages run a procedure in the background that determines when memory used by an object can be freed (i.e. when it is no longer being used)

This is commonly referred to as garbage collection


What does manual memory allocation mean?

The programmer manually calls delete when memory is no longer needed.


What are the risks associated with manual memory allocation?

1) Dangling pointers (deleting too early)
2) Memory Leaks (deleting too late)
3) Double deleting (deleting too often)


What is the problem with this code? What are the associated risks?

int* arr = NULL ;
arr = new int[10] ;
arr = NULL ;
delete [] arr ;

This is a memory leak.

Delete was called too late causing the program to use more and more memory.

The longer the code runs, the higher the risk of memory exhaustion.


What is the problem with this code? What are the associated risks?

int* arr = NULL ;
int x ;
arr = new int[10] ;
delete [] arr ;
x = 1 + 3 ;
delete [] arr ;

This is deleting twice.

This can cause the system to crash :(


What is the problem with this code? What are the associated risks?

int* arr = NULL ;
arr = new int[10] ;
int x ;
delete [] arr ;
x = 1 + 1 ;
arr[0] = x ;

This is a dangling pointer.

This happens when delete is called too early.

If the memory location that you tried to write to is being used by another data structure, it can modify that structure in an unpredictable way. That makes me feel some type of way :/


What are the cons of automatic memory management?

1) Uses more resources (i.e. it takes time to track memory usage)
2) Can reduce performance
3) Can stall program execution (bad for real-time applications)


What are some commonalities between manual and automatic memory management systems?

You still need to track
1) Which locations in RAM are being used
2) Which are free


What is the basic idea around explicit memory management?

Carve out an arena of memory from RAM. This arena is a large contiguous area of memory that is handed out in blocks when new or malloc is called.


What are the three explicit memory management approaches?

1) No reclamation
2) Explicit Reclamation
3) Variable-sized blocks


What are the features of No Reclamation?

Explain this approach.

What are its limitations?

Features: Fixed-size blocks, explicit allocation, no reclamation, no relocation.

1) When initialized, ptrs for current and end are set up
2) As blocks in the arena/heap get used, the current ptr moves up
3) A memory exhaustion error is thrown when current == end

Limitations: Can exhaust memory quickly, since you can't reuse memory
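
The scheme above can be sketched as a bump allocator. (C++ for illustration; the 16-byte block size and 64-byte arena are made-up numbers, and WLP4's alloc.merl would do this in assembly.)

```cpp
#include <cassert>
#include <cstddef>

// No-reclamation sketch: fixed-size blocks handed out of a tiny arena.
const std::size_t BLOCK = 16;          // fixed block size (assumed)
const std::size_t ARENA = 64;          // arena holds 4 blocks (assumed)
char arena[ARENA];
char *current = arena;                 // init: current starts at the arena
char *end_ptr = arena + ARENA;         // init: end marks the arena's limit

// Hand out the next block, or nullptr when current == end (exhaustion).
char *allocate() {
    if (current == end_ptr) return nullptr;  // memory exhausted
    char *block = current;
    current += BLOCK;                        // the current ptr moves up
    return block;
}
```

Note there is no free(): once the current ptr passes a block, that memory is gone for good, which is exactly the limitation above.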


What are the features of Explicit Reclamation?

Explain this approach.

Features: Fixed block size, explicit allocation, explicit reclamation, no relocation

1) When initialized, ptrs for current and end are set up
2) A free list is set up that keeps track of any free blocks between the start and the current ptr
3) If the free list is empty, a block from the current ptr is then used


What are the features of Variable Sized Blocks?

Explain this approach.

Features: variable-sized blocks, explicit allocation, explicit reclamation, no relocation.

1) A free list is initialized that consists of just one giant block (along with the size of that block)
2) If a block is requested, allocate the # of bytes requested + 4 bytes
- The first 4 bytes store the size of the block
- The rest stores the data
- A ptr to the start of the data part is returned
- The free list remains, but is now smaller by the size handed out
3) When delete is called, p[-1] is checked to determine how much memory is now free
4) The reclaimed block is then added to the beginning of the free list
5) If the free list is sorted by address, adjacent free blocks can be coalesced, i.e. merged into one bigger block
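
The size-header trick in step 2 can be sketched like this (C++ for illustration; malloc stands in for carving a block out of the arena, and a 4-byte int is assumed):

```cpp
#include <cassert>
#include <cstdlib>

// Each allocation is prefixed with a 4-byte header holding the total block
// size; p[-1] recovers that size later, at delete time.
int *alloc_with_header(int nbytes) {
    // request nbytes + 4: the extra 4 bytes are the size header
    int *block = static_cast<int *>(std::malloc(nbytes + 4));
    block[0] = nbytes + 4;   // first 4 bytes: size of the whole block
    return block + 1;        // the caller only sees the data part
}

// Step 3: step back over the header to read the block size.
int block_size(int *p) {
    return p[-1];
}
```

The caller never sees the header, which is why delete (here, the reclamation step) can figure out how much memory to put back on the free list from the pointer alone.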


Write a pseudocode program for the Allocate() function in Explicit Reclamation

if (free != NULL)          // i.e. the free list is not empty
    remove first block from free list
    return that block
else if (current != end)
    block = current
    increment current
    return block
else
    ERROR: memory is tired (exhausted)


What is memory fragmentation?

When, even though there is enough free memory in total, you cannot allocate a new block because no contiguous free region is big enough.


What are the three types of memory allocation strategies?

First fit - find the first hole that the block fits in
- Generally works well, fast
Best fit - find the hole with the least amount of leftover space
Worst fit - find the biggest hole, so that the leftover space is still big enough to store another block
- most fragmentation and least utilization
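
First fit can be sketched in a few lines (C++ for illustration; the free list is reduced to a list of hole sizes, which is enough to show the strategy):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// First fit: scan the free list in order and take the first hole that is
// big enough. Returns the index of that hole, or -1 if none fits.
int first_fit(const std::vector<int> &holes, int request) {
    for (std::size_t i = 0; i < holes.size(); ++i)
        if (holes[i] >= request) return static_cast<int>(i);
    return -1;
}
```

Best fit and worst fit only differ in scanning the whole list and picking the minimum or maximum hole that still fits, instead of stopping at the first match.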


What is the efficiency of the allocation strats?

All the allocation strategies are O(log n), where n is the number of free blocks. This assumes the free blocks are kept in some sort of search tree.


What are the different types of fragmentation?

External fragmentation - unused blocks between allocated blocks. Only happens in variable-sized blocks

Internal fragmentation - if a block has a pre-determined size and not all of the block is used, the rest of the block is wasted


What is the general idea around dlmalloc?

Distinguish between small, medium and large requests (0-512B, 512B-256KB, 256KB or more)
- Smallbins have sizes which are multiples of 16, starting at 32B
- Each bin carries 8 bytes of bookkeeping info: status at the beginning, size at both the beginning and the end
- Note: this means a 32-byte bin can only store 1-24 bytes of data

The idea is to have multiple free lists for holes of different sizes
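
The bin arithmetic can be checked with a hypothetical helper (C++ for illustration; smallbin_size is not a real dlmalloc function, just the rule from the card):

```cpp
#include <cassert>

// Round a request up to the smallest bin that can hold it: bins are
// multiples of 16 starting at 32B, and 8 of those bytes are bookkeeping,
// so the usable space in a bin is its size minus 8.
int smallbin_size(int nbytes) {
    int bin = 32;
    while (bin - 8 < nbytes) bin += 16;
    return bin;
}
```

This matches the note above: a 24-byte request still fits in a 32-byte bin, but 25 bytes spills into the next bin up.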


What are some basic notes for deallocating/allocating in dlmalloc

Deallocating - You can check your neighbour to see if you can coalesce

Allocating - If there isn't a smallbin available, break up a larger block


Explain how Coalescing works in dlmalloc

1) Check the previous block's size, stored in its last 4 bytes (just below your header)
2) Use that size to go to the start of that block
3) Check the status bit at the start of that block
4) If it is not allocated, coalesce with that neighbour: the new block's size is your block's size plus the previous block's
5) Then use your own size to go to the next neighbour's status and do the same

You have officially coalesced with your neighbors!
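
The pointer arithmetic in steps 1-3 can be sketched with boundary tags (C++ for illustration; the exact layout here is an assumption, not dlmalloc's real format):

```cpp
#include <cassert>

// The heap is an int array. Each block stores (size << 1 | status) in its
// first word and its plain size (in words) in its last word.
int heap[8] = {
    4 << 1 | 0, 0, 0, 4,   // block at index 0: 4 words, status 0 = free
    4 << 1 | 1, 0, 0, 4,   // block at index 4: 4 words, status 1 = allocated
};

// Steps 1-2: the previous block's end tag sits just below our header and
// holds its size, so subtracting it lands on the previous block's header.
int prev_header(int hdr) {
    return hdr - heap[hdr - 1];
}

// Step 3: check the status bit in a block's header word.
bool is_free(int hdr) {
    return (heap[hdr] & 1) == 0;
}
```

The whole point of the duplicated size at the end of each block is that prev_header() works in O(1) without walking the heap from the start.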


Is line 7 an error?

int * ia = NULL ;
int* ip = NULL ;
ia = new int[100] ;
ip = ia ;
delete [] ip ;

Depends on what happens in lines 4, 6 and 8.

What happens in those lines can be determined by user input, so we don't know statically whether the delete is an error or not.

SO: The compiler should not try to determine whether calls to new and delete are properly paired, since pointer values can be modified or copied.


What is the challenge faced by automatic memory management?

It is difficult to identify all pointers and determine what parts of the heap to free.


What is the first solution to this problem?

What are its limitations?

Monitor memory accesses and pointer values at runtime

Slows down the program
Hard to guarantee you've caught all the errors


What is garbage collecting?

Automatically deciding when to free up memory


What are the three approaches to garbage collecting?

1) Reference counting
2) Mark and sweep
3) Copying collectors


Explain reference counting

For each block, keep track of its reference count (how many pointers point to that block)
1) Every time a pointer is defined or redefined, update the reference count
2) If a block's reference count is 0, reclaim it
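
A toy version for a single block (C++ for illustration; a real implementation hooks these updates into every pointer assignment):

```cpp
#include <cassert>

// One heap block with its reference count and a flag recording whether
// the collector has reclaimed it.
struct Block {
    int refcount = 0;
    bool reclaimed = false;
};

// A pointer was just set to this block: bump its count.
void add_ref(Block &b) { ++b.refcount; }

// A pointer to this block was redefined: drop the count, and reclaim
// the block as soon as nothing points to it any more.
void drop_ref(Block &b) {
    if (--b.refcount == 0) b.reclaimed = true;
}
```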


What is the problem with reference counting?

If two blocks point to each other but nothing else points to them, they are inaccessible and should be freed, but reference counting won't free them (each block's count is stuck at 1).


Explain Mark and Sweep

1) Scan global variables and the entire stack for pointers
2) For each non-null pointer, mark the corresponding block in the heap
3) If the heap object has pointers, follow them as well
4) Scan the heap, reclaim any blocks not marked
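
The steps above are a graph traversal; a sketch (C++ for illustration, with blocks as nodes, `edges[i]` as the pointers stored in block i, and roots standing for the pointers found in globals and on the stack):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Heap {
    std::vector<std::vector<int>> edges;  // edges[i]: blocks that block i points to
    std::vector<bool> marked;
};

// Mark phase: mark block i, then follow its pointers (steps 2 and 3).
void mark(Heap &h, int i) {
    if (h.marked[i]) return;          // already visited, avoid cycles
    h.marked[i] = true;
    for (int j : h.edges[i]) mark(h, j);
}

// Sweep phase: mark everything reachable from the roots, then return
// the indices of the unmarked (reclaimable) blocks (step 4).
std::vector<int> sweep(Heap &h, const std::vector<int> &roots) {
    h.marked.assign(h.edges.size(), false);
    for (int r : roots) mark(h, r);
    std::vector<int> garbage;
    for (std::size_t i = 0; i < h.edges.size(); ++i)
        if (!h.marked[i]) garbage.push_back(static_cast<int>(i));
    return garbage;
}
```

Unlike reference counting, an unreachable block that points to itself is still collected, since marking only ever starts from the roots.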


What are the problems with mark and sweep?

Since heap blocks can themselves contain pointers, marking is really a graph traversal, so you need mad graph theory algos to do this >:(


Explain copying collector

1) Divide your heap into 2 parts, R1 and R2
2) Only allocate from R1
3) When R1 fills up, scan for all non-null pointers and copy the reachable data from R1 into R2, updating the pointers
4) Then swap roles: allocate from R2, and when it fills up copy back into R1
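
Step 3 can be sketched as follows (C++ for illustration; live[i] stands for "some non-null pointer still reaches value i", and the pointer updating is omitted):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Copy only the live values from R1 into R2, packed side by side,
// so no holes survive the copy (this is why there is no fragmentation).
std::vector<int> evacuate(const std::vector<int> &r1,
                          const std::vector<bool> &live) {
    std::vector<int> r2;
    for (std::size_t i = 0; i < r1.size(); ++i)
        if (live[i]) r2.push_back(r1[i]);  // compact as we copy
    return r2;
}
```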


What are the pros and cons of copying collector

Pros:
1) After copying, there is no fragmentation
2) new and delete are super speedy

Con: Only half the heap is usable at any time