Lecture 2 - Memory Flashcards

(8 cards)

1
Q

Memory Hierarchy Overview (5 Points)

A
  • Programmers desire unlimited memory with low latency, but faster memory is more expensive.
  • The solution is a memory hierarchy: a large, slow memory for the entire addressable space, with smaller, faster memories containing subsets of the memory below them.
  • Temporal and spatial locality ensure that most references are found in the smaller, faster memories, giving the illusion of a large, fast memory.
  • The memory hierarchy typically includes L1 and L2 caches per core, a shared L3 cache on the chip, and main memory.
  • Design considerations become crucial with multicore processors, as total peak bandwidth grows with the number of cores.
2
Q

Memory Hierarchy Basics
(Explain how caching works in memory)
(Mention 3 types of Caching)

A
  • When a word isn’t found in the cache (a miss), it’s fetched from a lower level, along with other words in the cache line, taking advantage of spatial locality. The cache line is placed in a location within its set, determined by the line address.
  • n lines per set imply n-way set associativity. A direct mapped cache has one line per set, while a fully associative cache has a single set containing all lines.
  • Modern CPUs often use 4-way or 8-way associative caches.
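The placement rule above can be sketched in a few lines: the line (block) address modulo the number of sets picks the set. This is a minimal sketch with a hypothetical 8-set cache; `NUM_SETS` and `set_index` are illustrative names, not from the lecture.

```python
# Sketch: mapping a cache line address to a set (hypothetical 8-set cache).
# With n lines per set the cache is n-way set associative; 1 line per set
# is direct mapped; a single set holding every line is fully associative.
NUM_SETS = 8

def set_index(line_address: int) -> int:
    # Line (block) address modulo the number of sets selects the set.
    return line_address % NUM_SETS

print(set_index(13))  # line 13 maps to set 5 in an 8-set cache
```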
3
Q

Direct Mapped Cache (Explain mechanism, Advantages and Disadvantages)

A
  • In a direct mapped cache, each memory block maps to a specific cache line.
  • The address is divided into tag, index, and offset. The index selects the cache line, and the tag is compared to determine a hit.
  • Advantages include speed, power efficiency, simplicity, and low cost.
  • The disadvantage is a lower hit rate: each memory block has only one possible cache line, so blocks that map to the same line evict each other even when the rest of the cache is free.
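The tag/index/offset split can be made concrete with bit arithmetic. This is a sketch for a hypothetical direct mapped cache with 64-byte lines and 256 lines; the sizes and the `decompose` helper are assumptions for illustration.

```python
# Sketch: splitting a byte address into tag / index / offset for a
# hypothetical direct mapped cache: 64-byte lines (6 offset bits),
# 256 lines (8 index bits).
OFFSET_BITS = 6
INDEX_BITS = 8

def decompose(addr: int) -> tuple[int, int, int]:
    offset = addr & ((1 << OFFSET_BITS) - 1)            # byte within the line
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)  # selects the line
    tag = addr >> (OFFSET_BITS + INDEX_BITS)            # compared on lookup
    return tag, index, offset

print(decompose(0x12345))  # → (4, 141, 5)
```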
4
Q

Fully Associative Cache (Explain mechanism, Advantages and Disadvantages)

A
  • In a fully associative cache, a memory block can be placed in any cache line.
  • The tag identifies the memory block. All tags must be searched to find a match.
  • Advantages include flexible memory block placement and a better cache hit rate.
  • Disadvantages include slower lookups, higher power consumption, and higher hardware cost (one tag comparator per line).
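The "all tags must be searched" point can be sketched directly. Hardware does this comparison in parallel with one comparator per line; the sequential loop below is only a software model, and `fa_lookup` is an illustrative name.

```python
# Sketch: lookup in a fully associative cache — every stored tag must be
# checked (hardware compares all tags in parallel; modelled sequentially here).
def fa_lookup(stored_tags: list[int], tag: int) -> bool:
    return any(stored == tag for stored in stored_tags)  # hit iff any tag matches

print(fa_lookup([3, 7, 9], 7))  # → True
```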
5
Q

Set Associative Cache (Explain mechanism, Advantages and Disadvantages)

A
  • Set associative cache is a compromise between direct mapped and fully associative caches.
  • The cache is divided into sets, and each set contains multiple cache lines. A memory block can be placed in any line within its set.
  • The address is divided into tag, set index, and offset. The set index selects the set, and the tags within the set are compared for a hit.
  • Advantages include a trade-off between direct-mapped and fully associative placement and flexibility in using replacement algorithms.
  • A disadvantage is that placement policy may not effectively use all available cache lines.
6
Q

Writing to Cache (Two types)

A

Two strategies exist:
  • Write through (WT): immediately update lower levels on every write.
  • Write back (WB): update lower levels only when a modified (dirty) line is replaced.

Both strategies use write buffers to make writes asynchronous.
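The two policies can be contrasted with a small model: a dict stands in for the cache and a list for the level below. This is a deliberately simplified sketch (write buffers and reads omitted); `write`, `evict`, and the dirty-bit representation are illustrative, not from the lecture.

```python
# Sketch: write-through vs write-back, modelled with a dict "cache"
# over a list "memory" (the lower level). Write buffers are omitted.
memory = [0] * 16
cache = {}  # addr -> [value, dirty]

def write(addr: int, value: int, write_back: bool = True) -> None:
    cache[addr] = [value, write_back]  # dirty bit is set only under WB
    if not write_back:
        memory[addr] = value           # WT: lower level updated immediately

def evict(addr: int) -> None:
    value, dirty = cache.pop(addr)
    if dirty:
        memory[addr] = value           # WB: lower level updated on replacement
```

Under write through the lower level is always current; under write back it is stale until the dirty line is evicted.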

7
Q

Cache Performance (4 Metrics)
(Causes of Misses)

A
  • Miss rate is the fraction of cache accesses that result in a miss.
  • Causes of Misses:
    • Compulsory (first reference to a particular memory block),
    • Capacity (working set of a program exceeds cache size),
    • Conflict (a line was evicted because another block mapped to the same set, even though other sets still had room).
  • Hit time is the time to return a value on a cache hit.
  • Miss time is the time to report a miss.
  • Miss penalty is the time to fetch a line from a lower level.
  • Higher associativity reduces conflict misses but increases hit and miss times.
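These metrics combine into the standard average memory access time formula, AMAT = hit time + miss rate × miss penalty. The numbers below are illustrative, not from the lecture.

```python
# Average memory access time from the metrics above, with made-up numbers:
# 1-cycle hit time, 5% miss rate, 100-cycle miss penalty.
hit_time, miss_rate, miss_penalty = 1.0, 0.05, 100.0

amat = hit_time + miss_rate * miss_penalty
print(amat)  # → 6.0 cycles
```

Halving the miss rate to 2.5% would bring AMAT down to 3.5 cycles, which is why miss-rate reductions dominate cache design discussions.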
8
Q

Cache Design Issues

A
  • Caches are organized into lines or blocks. At a fixed cache capacity, larger line sizes reduce compulsory misses but leave fewer lines, which can increase conflict and capacity misses and raises the miss penalty.
  • Multi-level caches involve more design decisions, including inclusive, exclusive, or non-inclusive/non-exclusive policies.