Cache Flashcards

1
Q

Describe 1) what cache memory can improve in a computing system, and 2) how this is achieved.

A
  • Cache memory can improve the average performance of a microprocessor system by keeping a partial copy of the data from main memory in a local, faster-to-access cache memory.
  • Caching works because programs exhibit the spatial locality and temporal locality properties.
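
To make "average performance" concrete, here is a minimal C sketch of the standard average memory access time (AMAT) calculation; the hit rate and latencies used are illustrative assumptions, not values from the card.

    /* Minimal sketch: average memory access time with and without a cache.
       The latencies and hit rate below are illustrative assumptions only. */
    #include <stdio.h>

    int main(void) {
        double hit_time     = 1.0;    /* assumed cache hit latency, cycles     */
        double miss_penalty = 100.0;  /* assumed main-memory latency, cycles   */
        double hit_rate     = 0.95;   /* assumed fraction of accesses that hit */

        /* AMAT = hit_time + miss_rate * miss_penalty */
        double amat = hit_time + (1.0 - hit_rate) * miss_penalty;

        printf("Without cache: %.1f cycles per access\n", miss_penalty);
        printf("With cache:    %.1f cycles per access (average)\n", amat);
        return 0;
    }
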
2
Q

What is spatial locality?

A

Data or code you access now most likely has data or code you will access soon in its vicinity.
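
As an illustration, the following minimal C sketch shows an access pattern that exhibits spatial locality; the array size is an arbitrary assumption.

    /* Sketch of spatial locality: sequential array accesses touch neighboring
       addresses, so one fetched cache line serves several subsequent reads. */
    #include <stdio.h>

    #define N 1024

    int main(void) {
        static int a[N];
        long sum = 0;

        /* a[i] and a[i+1] sit next to each other in memory; after a[0] misses
           and its cache line is loaded, the following elements in that line hit. */
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %ld\n", sum);
        return 0;
    }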

3
Q

What is temporal locality?

A

Data or code you access now will likely be accessed again soon.
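
As an illustration, the following minimal C sketch shows an access pattern that exhibits temporal locality; the table size and iteration count are arbitrary assumptions.

    /* Sketch of temporal locality: the same small lookup table and accumulator
       are touched over and over, so after the first access they stay in the
       cache and later accesses hit. */
    #include <stdio.h>

    #define TABLE       16
    #define ITERATIONS  100000

    int main(void) {
        static int table[TABLE];
        long acc = 0;

        for (int i = 0; i < ITERATIONS; i++)
            acc += table[i % TABLE];   /* the 16 entries are reused constantly */

        printf("acc = %ld\n", acc);
        return 0;
    }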

4
Q

Compare and contrast the direct-mapped and associative cache mapping policies.

A

Direct mapping
Advantages:
- Temporal behavior can be analyzed
- Simple implementation
Disadvantages:
- Inefficient utilization of cache resources: if the program frequently accesses multiple addresses that map to the same cache line (conflict misses), many cache misses occur and slow down execution (see the index sketch below)
- Many unnecessary read/write operations

Associative mapping
Advantages:
- Better use of cache space, so less on-chip capacity is wasted
- Simple for timing analysis
Disadvantages:
- More complex cache control
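
The conflict-miss behavior of direct mapping comes from the line index being derived purely from address bits. Below is a minimal C sketch of that index calculation; the cache geometry (256 lines of 64 bytes) is an assumption for illustration.

    /* Minimal sketch of direct mapping: the cache line is selected purely by
       address bits, so two addresses with the same index bits always evict
       each other. Cache geometry below is an assumption. */
    #include <stdio.h>

    #define LINE_SIZE  64u   /* bytes per cache line (assumed) */
    #define NUM_LINES 256u   /* lines in the cache (assumed)   */

    static unsigned direct_index(unsigned addr) {
        return (addr / LINE_SIZE) % NUM_LINES;   /* index bits select the line */
    }

    int main(void) {
        unsigned a = 0x00001000u;                /* some address                 */
        unsigned b = a + LINE_SIZE * NUM_LINES;  /* same index bits, another tag */

        printf("index(a) = %u, index(b) = %u\n", direct_index(a), direct_index(b));
        /* Equal indices: accessing a and b alternately evicts each other in a
           direct-mapped cache (conflict misses); an associative cache could
           keep both lines at the same time. */
        return 0;
    }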

5
Q

What is write-through?

A

Writes are done synchronously to both the cache and main memory (or the next-level cache).

6
Q

What is write-back?

A

Data is initially written only to the cache; it is written to main memory (or the next-level cache) only when the cache line is replaced by new content.
- More complex to implement

7
Q

Pros and cons of a fully associative cache

A

Advantages
- Simple and consumes less chip area
- Reasonably mimics LRU and hence performs better than direct mapped
Drawbacks
- Slightly more complicated to predict
- Slightly lower performance compared to LRU

8
Q

Set associative cache

A

A hybrid of fully associative and direct mapping:
- Cache lines are grouped into sets
- Data can go to any cache line within its set
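
Below is a minimal C sketch of set-associative placement, showing how an address splits into tag, set index, and line offset; the geometry (4-way, 64 sets, 64-byte lines) is an assumption for illustration.

    /* Minimal sketch of set-associative placement: the set is chosen by
       address bits, and the data may occupy any way within that set.
       The geometry is an assumption. */
    #include <stdio.h>

    #define LINE_SIZE 64u   /* bytes per line (assumed)    */
    #define NUM_SETS  64u   /* sets in the cache (assumed) */
    #define NUM_WAYS   4u   /* lines per set (assumed)     */

    int main(void) {
        unsigned addr   = 0x0001A2C4u;
        unsigned offset = addr % LINE_SIZE;
        unsigned set    = (addr / LINE_SIZE) % NUM_SETS;
        unsigned tag    = addr / (LINE_SIZE * NUM_SETS);

        printf("addr 0x%08X -> tag 0x%X, set %u, offset %u\n",
               addr, tag, set, offset);
        printf("the data may be stored in any of the %u ways of set %u\n",
               NUM_WAYS, set);
        return 0;
    }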

9
Q

Compare and contrast write-through and write-back.

A

Write-through
- The lower-level memory is also updated on every write access
- The number of writes is higher (illustrated in the sketch below)
- Simple, but consumes more energy
- Depending on the application, execution time may be longer
- Generally employed if the lower-level memory is fast enough

Write-back
- The lower-level memory is updated only when a dirty line is evicted
- Fewer writes
- More complex, but more efficient
- Execution time will mostly be shorter
- Generally employed if the lower-level memory is slow (last-level cache)
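
Below is a minimal C sketch contrasting the two policies on a single cache line, counting how many writes reach the lower-level memory; the structures and counters are illustrative assumptions, not a full cache model.

    /* Minimal sketch: ten stores to one cache line under each write policy. */
    #include <stdbool.h>
    #include <stdio.h>

    static unsigned mem_writes_wt = 0;  /* writes reaching memory, write-through */
    static unsigned mem_writes_wb = 0;  /* writes reaching memory, write-back    */

    struct line { int data; bool dirty; };

    /* Write-through: every store also goes to the next level immediately. */
    static void store_wt(struct line *l, int value) {
        l->data = value;
        mem_writes_wt++;                /* lower level updated on every write */
    }

    /* Write-back: the store only marks the line dirty; memory sees the data
       when the line is evicted. */
    static void store_wb(struct line *l, int value) {
        l->data = value;
        l->dirty = true;
    }

    static void evict_wb(struct line *l) {
        if (l->dirty) {                 /* dirty line written back once */
            mem_writes_wb++;
            l->dirty = false;
        }
    }

    int main(void) {
        struct line wt = {0, false}, wb = {0, false};

        for (int i = 0; i < 10; i++) {  /* ten stores to the same line */
            store_wt(&wt, i);
            store_wb(&wb, i);
        }
        evict_wb(&wb);

        printf("write-through: %u memory writes\n", mem_writes_wt);  /* 10 */
        printf("write-back:    %u memory writes\n", mem_writes_wb);  /*  1 */
        return 0;
    }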
