Is flash memory suitable for use as main memory? Explain your answer.
No, for several reasons:
Too slow: RAM answers in nanoseconds; flash takes microseconds → way slower for random access.
Wears out: Flash cells can be rewritten only a limited number of times; RAM doesn’t have this limit.
Awkward writes: Flash must erase big blocks before writing; RAM can change any byte directly.
Uneven speed: Flash reads fast but writes much slower and needs extra housekeeping (like garbage collection).
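The "awkward writes" point can be made concrete with a toy C sketch (the block size and the single-block model are illustrative assumptions, not real device parameters): programming flash can only clear bits, so overwriting data requires erasing the whole block first.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 16   /* toy block size; real flash uses KB-sized erase blocks */

/* Programming flash can only flip bits 1 -> 0. Returns 0 on success, -1 if
   the write would need a 0 -> 1 transition (which requires a block erase). */
int flash_program(uint8_t *block, int offset, uint8_t value) {
    if ((block[offset] & value) != value)  /* would need to set a cleared bit */
        return -1;
    block[offset] &= value;                /* only clears bits */
    return 0;
}

/* Erase resets the WHOLE block to 0xFF; you cannot erase a single byte. */
void flash_erase(uint8_t *block) {
    memset(block, 0xFF, BLOCK_SIZE);
}
```

RAM has no such restriction: any byte can be rewritten in place at any time.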
What are the benefits of dividing the fetch-execution cycle into different fetch and execution units in modern processors?
Do more at once: While one instruction runs, fetch the next → higher throughput.
Faster clocks: Smaller, focused steps make it easier to run the chip faster.
Hide waiting time: Queues between front end and back end let work continue even if memory is slow.
More per cycle: Wider front end can issue several instructions each tick (superscalar).
Right tool for the job: Specialized units (branch, decode, ALU, load/store) use power better.
Out-of-order ready: Front end renames registers and builds a pool of work so the core runs whatever is ready.
Smarter guessing: A tuned front end improves branch prediction and prefetch, cutting stalls.
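A rough way to see the "do more at once" benefit is to compare ideal cycle counts for n instructions on a k-stage pipeline, assuming no stalls (an idealized model, not a real CPU's behavior):

```c
#include <assert.h>

/* Without pipelining, each instruction takes all k stages before the next starts. */
long cycles_unpipelined(long n, long k) { return n * k; }

/* With pipelining, the first instruction takes k cycles to drain the pipe,
   then one instruction completes every cycle after that. */
long cycles_pipelined(long n, long k) { return k + (n - 1); }
```

For 100 instructions on a 5-stage pipeline this gives 500 cycles versus 104, close to a 5x throughput gain.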
How does the cache controller know when to use cache or go to main memory?
Look-up: CPU makes an address. Cache checks a tag and a valid bit for that spot.
Hit: Tag matches → use cache data right away.
Miss: No match → get data from next level/main memory, put a copy in cache, then use it.
Writes:
Write-back: Change cache now, save to memory later.
Write-through: Change cache and memory now.
Many cores: A coherence rule keeps everyone’s caches in sync.
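The look-up/hit/miss flow above can be sketched as a toy direct-mapped cache in C; the sizes and bit widths here are arbitrary illustrative choices:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS   16  /* toy cache: 16 lines of 16-byte blocks */
#define BLOCK_BITS 4

typedef struct { bool valid; uint32_t tag; } cache_line;

/* Returns true on a hit. On a miss it simulates the fill: the line's tag
   and valid bit are updated as if the block were fetched from memory. */
bool cache_access(cache_line cache[NUM_SETS], uint32_t addr) {
    uint32_t index = (addr >> BLOCK_BITS) % NUM_SETS;
    uint32_t tag   = addr >> (BLOCK_BITS + 4);   /* 4 = log2(NUM_SETS) */
    if (cache[index].valid && cache[index].tag == tag)
        return true;                             /* hit: use cached copy */
    cache[index].valid = true;                   /* miss: fill from memory */
    cache[index].tag   = tag;
    return false;
}
```

The first access to an address misses; a second access to the same block hits because the tag now matches and the valid bit is set.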
Explain what happens in each phase of the Instruction Cycle.
Fetch (IF): The CPU gets the instruction from memory.
Decode (ID): The CPU figures out what the instruction means.
Execute (EX): The CPU carries out the instruction (like adding numbers or moving data).
Store: The CPU saves the result back into memory or a register.
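The four phases can be mirrored in a minimal instruction-loop interpreter (the opcodes and encoding below are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };  /* toy opcodes */

/* Runs a toy program where each instruction is (opcode << 8) | operand.
   Returns the accumulator after HALT. */
int run(const uint16_t *program) {
    int pc = 0, acc = 0;
    for (;;) {
        uint16_t instr = program[pc++];   /* fetch: read instruction, bump PC */
        int opcode  = instr >> 8;         /* decode: split opcode and operand */
        int operand = instr & 0xFF;
        switch (opcode) {                 /* execute, then store the result */
        case OP_LOAD: acc = operand;  break;
        case OP_ADD:  acc += operand; break;
        case OP_HALT: return acc;
        }
    }
}
```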
Describe Static Memory
What: Memory with fixed size/lifetime determined before (or at start of) program execution; doesn’t resize at runtime.
Common forms
SRAM (hardware): “Static RAM,” used for CPU caches; fast, no refresh, low density, pricey.
Static allocation (software): Global and static variables with program-long (or file/function-scope) lifetime.
Key traits (software)
When allocated: At load time/startup (not during execution).
Lifetime: Persists for entire run (or across calls if static in a function).
Where: Static/data segments (not the heap).
Speed/overhead: No alloc/free calls; predictable; faster access than dynamic in many cases.
Used for:
Configuration tables, buffers with known max size, counters/state that must persist across calls.
Example:
Cache memory built from SRAM (hardware).
static int counter; inside a C function keeps value between calls.
Note on stack
Stack = automatic storage (created/destroyed per call). It’s fixed-pattern and fast, but not “static allocation” in the language sense.
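The `static int counter;` example mentioned above, written out (the function name is hypothetical):

```c
#include <assert.h>

/* The counter lives in the static data segment, not on the stack,
   so its value survives between calls. */
int next_id(void) {
    static int counter = 0;  /* initialized once, at program start */
    return ++counter;
}
```

A plain local `int counter = 0;` would be re-created on the stack for every call and always return 1.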
Describe Dynamic Memory
What: Memory requested at runtime so programs can size things based on current needs.
Two common forms
DRAM (hardware): Physical main memory; cheap, dense, needs refresh, volatile.
Dynamic allocation (software): Heap-based memory you request/release while the program runs.
Key traits (software/heap)
Flexible size/lifetime: Decide during execution (grow/shrink via new blocks).
Where: Heap (vs. stack for automatic, fixed-size frames).
Costs: Allocation/deallocation overhead; potential fragmentation; slower than stack use.
APIs: C malloc/calloc/realloc → free; C++ new/delete; Java/C# new with garbage collection (no manual free).
Risks: Leaks, double free, use-after-free (GC reduces but doesn’t remove memory pressure issues).
Quick compare
Stack: Fast, automatic, fixed lifetime/scope.
Heap (dynamic): Flexible; you must manage lifetime manually (or rely on garbage collection).
Example
Allocate a resizable array/list at runtime (e.g., realloc in C, std::vector in C++, ArrayList in Java).
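A minimal sketch of that realloc-grown array in C (the struct and function names are illustrative, not a standard API):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { int *data; size_t len, cap; } int_vec;

/* Appends a value, doubling the heap buffer when full (like std::vector).
   Returns 0 on success, -1 if allocation fails. */
int vec_push(int_vec *v, int value) {
    if (v->len == v->cap) {
        size_t new_cap = v->cap ? v->cap * 2 : 4;
        int *p = realloc(v->data, new_cap * sizeof *p);
        if (!p) return -1;   /* old buffer is still valid on failure */
        v->data = p;
        v->cap  = new_cap;
    }
    v->data[v->len++] = value;
    return 0;
}
```

Note the classic dynamic-memory risks listed above: forgetting `free(v.data)` leaks, and freeing twice is undefined behavior.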
What are Special Purpose Registers?
Special Purpose Registers are dedicated CPU registers used to control and support the operation of the processor.
What are some examples of Special Purpose Registers?
Program Counter (PC) / Instruction Pointer (IP)
It holds the address of the next instruction the CPU will run.
Instruction Register (IR)
Stores the current instruction being decoded/executed.
Memory Address Register (MAR)
Holds the address in memory to read from or write to.
Memory Data Register (MDR/MBR)
It holds the data being transferred between the CPU and memory (the value just read, or about to be written).
Accumulator (ACC)
It’s a main register the CPU uses for calculations.
Status / Flags Register (PSW/FLAGS)
It shows info about the last operation (like zero or carry) and controls CPU settings.
Stack Pointer (SP)
It points to the top of the stack (used in function calls and storing data temporarily).
Base/Frame Pointer (BP/FP)
It helps keep track of variables in a function (in the stack).
Link Register (LR) / Return Address (RA)
It saves where to return after a function call.
Index Registers (IX/IY, etc.)
They help the CPU loop through arrays and data lists.
Control Registers (CRs)
They control system settings like memory protection and interrupt handling.
Segment Registers (x86: CS/DS/SS/ES/FS/GS)
They help divide memory into sections for code, data, stack, etc.
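The stack pointer's role can be illustrated with a toy machine in C (the memory size and the downward-growing stack are assumptions typical of many ISAs, not from the text):

```c
#include <assert.h>
#include <stdint.h>

#define MEM_WORDS 64

/* Toy register file: a stack pointer (SP) indexing into a small memory.
   The stack grows downward from the top of memory, as on x86 and ARM. */
typedef struct { uint32_t mem[MEM_WORDS]; int sp; } machine;

void     push(machine *m, uint32_t value) { m->mem[--m->sp] = value; }
uint32_t pop(machine *m)                  { return m->mem[m->sp++]; }
```

Pushing decrements SP then stores; popping loads then increments, so the last value pushed is the first popped, which is exactly what function calls and returns rely on.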
What is the function of the Computer Clock?
The computer clock (or system clock) generates a consistent, repeating signal used to synchronize all operations within the computer.
It ensures that all instructions and data move through the processor and other components in a coordinated way.
It produces clock pulses (or ticks) at a regular rate, measured in Hertz (Hz).
Each pulse serves as a timing signal that tells the CPU when to perform a specific action.
What happens in a computer clock cycle?
A clock cycle is one complete tick of the clock signal. What happens during a single cycle depends on the instruction and CPU architecture; a cycle may involve fetching an instruction, decoding it, executing an operation, accessing memory, or writing a result back to a register.
On modern CPUs these steps are pipelined, so multiple instructions are in different stages of execution simultaneously.
Describe the L1 cache and how it’s used by the computer.
Built directly into the CPU
Very small (typically 16 KB to 128 KB per core)
Extremely fast; faster than the L2 cache
Function:
Stores the most frequently accessed data and instructions.
It’s often split into two parts:
Instruction Cache (L1i) - holds instructions
Data Cache (L1d) - holds data
How it’s used:
When the CPU needs data, it first checks the L1 cache.
If the data is found (a cache hit), it’s used immediately.
If not found (a cache miss), the CPU checks the L2 cache next.
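This L1-then-L2 fallback determines the average memory access time (AMAT). A small sketch of the standard formula, with illustrative (not measured) latencies in the test values:

```c
#include <assert.h>

/* AMAT for an L1 that falls back to L2, which falls back to main memory:
   AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 miss rate * memory time).
   All times in the same unit (e.g., cycles or nanoseconds). */
double amat(double l1_hit, double l1_miss_rate,
            double l2_hit, double l2_miss_rate, double mem_time) {
    return l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_time);
}
```

For example, a 1-cycle L1 with a 5% miss rate, a 10-cycle L2 with a 20% miss rate, and 100-cycle memory averages 2.5 cycles per access, which is why high hit rates in the small, fast caches matter so much.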
Describe the L2 cache and how it’s used by the computer.
Either on the CPU chip or very close to it.
Slower than L1 but still much faster than RAM.
Acts as a backup for the L1 cache.
Describe Primary Memory and give an example
The computer’s main memory used for temporarily storing data and instructions that the CPU is actively using.
Characteristics:
Volatile: Data is lost when the power is turned off.
Fast: Much faster than secondary storage
Directly accessible by the CPU
Examples:
RAM (Random Access Memory) - used to hold active programs and data.
ROM (Read Only Memory) - Stores permanent instructions like the system BIOS.
Describe Secondary Storage and give an example
Used for long-term data storage. It is not directly accessed by the CPU; instead, data is first brought into primary memory before processing.
Characteristics:
Slower compared to primary memory
Non-volatile: data is retained even when power is turned off.
Much larger capacity than primary memory
Examples: Hard Disk Drive (HDD) or Solid State Drive (SSD). These drives store files like documents, videos, and operating system data.
What is a memory controller and where is it located?
A memory controller is a digital circuit (or part of a chip) that manages the flow of data between the CPU and the computer’s main memory (such as RAM).
It serves a role similar to a traffic manager, making sure that data is read from and written to memory efficiently and correctly.
In modern systems, the memory controller is usually integrated directly into the CPU, which improves speed and reduces latency because the CPU doesn’t have to communicate with a separate chip.
What types of operations does the memory controller manage?