Memory Management Flashcards
(31 cards)
What is the CPU Instruction Cycle?
- Fetch: Getting the instruction from memory.
- Decode: Understanding what the instruction does.
- Execute: Performing the instruction’s operation.
- Store: Writing back results to memory if needed.
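The four steps above can be sketched as a toy interpreter for a hypothetical three-instruction machine (not any real ISA; `LOAD`/`ADD`/`STORE` and the single accumulator are invented for illustration):

```python
# Toy machine: program memory holds (opcode, operand) pairs; one accumulator register.
memory = [("LOAD", 5), ("ADD", 3), ("STORE", 0)]
data = [0]   # data memory, target of STORE
acc = 0      # accumulator register
pc = 0       # program counter

while pc < len(memory):
    instr = memory[pc]        # Fetch: get the instruction from memory
    op, arg = instr           # Decode: work out what it does
    if op == "LOAD":          # Execute: perform the operation
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "STORE":       # Store: write the result back to memory
        data[arg] = acc
    pc += 1

# After the loop: acc == 8 and data == [8]
```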
What are registers?
Small, fast storage locations inside the CPU; they can be accessed within a single CPU cycle (extremely fast).
How is the main memory accessed?
Accessed over a memory bus; typically slower than registers.
What is cache memory?
Fast memory between the CPU and main memory to reduce access
times. It stores frequently used data and instructions to avoid slow
main memory access.
How is the Memory Hierarchy organized?
Balances speed, size, and cost; typically organized as:
Registers > Cache > Main Memory (RAM) > Secondary Storage (Disk)
L1 Cache
Fastest and smallest, located inside the CPU core.
L2 Cache
Larger but slower, still on the CPU chip.
L3 Cache
Shared among cores; significantly larger but slower than L1 and L2.
Memory Isolation
Ensures that a process cannot access memory outside its
allocated space, protecting the OS and other processes.
Security Enhancement
Prevents malicious or faulty programs from causing memory
corruption.
Dynamic Relocation
Allows the OS to move processes in memory by updating only
the base register, without altering the program code.
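The two cards above (isolation and relocation) can be sketched with a base/limit register pair; the numeric values are hypothetical:

```python
base = 14000    # relocation (base) register: where the process sits in RAM
limit = 12000   # limit register: size of the process's logical address space

def translate(logical):
    """Hardware-style check + relocation for one logical address."""
    if logical < 0 or logical >= limit:
        # Memory isolation: out-of-range access traps to the OS
        raise MemoryError("addressing error: trap to OS")
    return base + logical

p1 = translate(346)   # 14000 + 346 = 14346
base = 30000          # Dynamic relocation: OS moves the process,
p2 = translate(346)   # updating only base; same code now yields 30346
```

Note that the program itself never changes: the same logical address 346 maps to a different physical address after the OS updates `base`.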
What is address binding?
Address binding transforms logical addresses (generated
by programs) into physical addresses (used by hardware).
What is a logical address?
A logical address (also known as a virtual address) is an
abstract address that a process uses to access memory.
What is a physical address?
A physical address refers to the actual location in the
physical memory (RAM).
At what three times can address binding of instructions and data to memory addresses happen?
- Compile time
- Load time
- Execution time
Load Time Binding
- Used when the load address is not known at compile time.
- The compiler generates relocatable code, meaning addresses are relative to a base address.
- The loader calculates the absolute addresses when the program is loaded into memory.
- Flexibility: the program can be loaded at different memory locations without recompiling.
Execution Time Binding
- The most dynamic form of address binding.
- Binding occurs when instructions are executed.
- Requires special hardware support, typically through the MMU.
- Allows processes to move freely between memory locations during execution, as logical addresses are dynamically translated to physical addresses.
- Essential for modern operating systems that use paging or segmentation.
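A minimal sketch of the MMU translation this card describes, using paging (the page size and page-table contents are hypothetical):

```python
PAGE_SIZE = 4096
# Page table: logical page number -> physical frame number (hypothetical mapping)
page_table = {0: 5, 1: 9}

def mmu_translate(logical):
    """Execution-time binding: translate a logical address on every access."""
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]          # real MMUs trap on a missing entry
    return frame * PAGE_SIZE + offset

phys = mmu_translate(4100)   # page 1, offset 4 -> frame 9 -> 9*4096 + 4 = 36868
```

Because the page table can change while the process runs, the same logical address can bind to different physical frames over time, which is what makes this the most dynamic form of binding.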
How is a Logical Address (Virtual Address) created?
- Generated by the CPU during program execution
- The address seen by the program
Physical Address
- The actual address in the memory unit
- The address used by the memory hardware to access memory
cells
Static Binding
- Address binding is completed before execution.
- Occurs at compile time or load time.
- Suitable for simple systems with predictable memory usage.
Dynamic Binding
- Binding occurs during execution.
- Allows for dynamic memory allocation, swapping, and relocation.
- Enables virtual memory systems, allowing processes to use more memory than physically available.
What is dynamic loading?
Dynamic loading is a technique where a program loads a module or
routine into memory only when it is needed.
- Modules can include libraries, functions, or data segments that are not immediately required when the program starts.
- Memory Efficiency
- Faster Start-Up
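Dynamic loading as described above can be demonstrated with `ctypes`, which loads a shared library only when this code runs (assumes a Unix-like system where the C math library can be located):

```python
import ctypes
import ctypes.util

# Locate and load the C math library at run time, not at program start.
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# Declare the signature of the routine we want from the loaded module.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

result = libm.cos(0.0)   # cos(0) == 1.0
```

Until `CDLL` is called, the library occupies no memory in this process, which is the memory-efficiency and faster-start-up benefit the card lists.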
Dynamic Linking
- When a program is written, it typically contains references to external libraries.
- A program that has these libraries incorporated into its binary at compile time is said to be statically linked.
- Dynamic linking postpones the linking until runtime.
- Reduces the memory footprint of running programs.
- Allows multiple processes to share the same code, particularly with shared libraries.
Standard Swapping
- A process may be swapped out of memory to a backing store, then brought back when needed.
- Allows the total memory used by processes to exceed physical memory size.
- Often combined with priority-based scheduling: a low-priority process is more likely to be swapped out.
- Greatly increases the cost of context switching:
- Reading/writing secondary storage is relatively slow.
- Time to swap is proportional to the memory usage of the process.
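A worked estimate of why swap time is proportional to process size (both numbers below are hypothetical, chosen only to make the arithmetic concrete):

```python
process_size_mib = 100        # resident size of the process being swapped
transfer_rate_mib_s = 50      # hypothetical backing-store transfer rate

swap_out_s = process_size_mib / transfer_rate_mib_s   # 100 / 50 = 2.0 s
round_trip_s = 2 * swap_out_s                         # swap out + swap back in = 4.0 s
```

Doubling the process size doubles both figures, which is why swapping makes context switches involving swapped processes so expensive.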