Cache & Internal Memory Flashcards
Week 2.6 (35 cards)
name the 7 elements of cache design
- mapping function
- write policy
- replacement algorithms
- number of caches
- line size
- cache size
- cache addresses
what are the three cache write policies
- cache replacement
- write through policy
- write back policy
what is the cache replacement write policy
- unaltered blocks can be overwritten without updating main memory
- modified blocks must be written back to main memory before replacement
what is the write through policy
- updates cache & main memory for consistency
- the word being written is updated in cache & memory
advantages of write through policy
simplifies consistency in multi-device systems
disadvantages of write through policy
high memory traffic, potential bottlenecks
what is the cache write back policy
- updates only when the block is replaced
- the entire block is updated even if one single word has changed
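The two policies above can be sketched as a toy Python model (a dict stands in for main memory; class & method names are illustrative, not from the source):

```python
class WriteThroughCache:
    """Every write updates both the cache and main memory."""
    def __init__(self, memory):
        self.memory = memory   # dict standing in for main memory
        self.lines = {}

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value  # memory stays consistent, at the cost of traffic


class WriteBackCache:
    """Writes touch only the cache; memory is updated on replacement."""
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}
        self.dirty = set()     # blocks modified since they were loaded

    def write(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)   # main memory is now stale for this block

    def replace(self, addr):
        if addr in self.dirty:             # only modified blocks are written back
            self.memory[addr] = self.lines[addr]
            self.dirty.discard(addr)
        self.lines.pop(addr, None)
```

The sketch shows the traffic trade-off directly: write-through touches memory on every write, write-back only on replacement of a dirty block.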
advantages of the cache write back policy
v memory traffic (words are written to memory only when a block is replaced)
disadvantages of the cache write back policy
complex circuitry needed to avoid inconsistent memory states (main memory can hold stale data until write back)
what types of mapping need replacement algorithms
- associative
- set-associative
what are the 3 replacement algorithms
- least recently used
- FIFO
- least frequently used
how does the least recently used algorithm work
replaces the block unused for the longest time
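A minimal LRU sketch in Python, assuming a fully associative cache of `capacity` blocks (names are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the block unused for the longest time."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # insertion order tracks recency of use

    def access(self, tag):
        if tag in self.blocks:               # hit: mark block most-recently used
            self.blocks.move_to_end(tag)
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least-recently used block
        self.blocks[tag] = True
        return "miss"
```

An `OrderedDict` makes both the hit update and the eviction O(1), which is why it is a common way to model LRU in software.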
how does the least frequently used replacement algorithm work
replaces block with the fewest references
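LFU can be sketched the same way, tracking a reference count per block (a toy model; real caches approximate this in hardware with counters):

```python
class LFUCache:
    """Evicts the block with the fewest references."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}  # tag -> reference count

    def access(self, tag):
        if tag in self.counts:
            self.counts[tag] += 1
            return "hit"
        if len(self.counts) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # fewest references
            del self.counts[victim]
        self.counts[tag] = 1
        return "miss"
```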
what is split cache
separate caches for instructions & data
advantage of split cache
- avoids contention between the instruction fetch unit & the execution unit
- essential for pipelined / superscalar machines
advantage of unified cache
- ^ hit ratio (balances instruction & data fetches automatically)
- simpler implementation & design
describe how changing the block size affects cache
- ^ block size fetches adjacent words = ^ hit ratio (locality of reference)
- large blocks
= v number of blocks that fit in cache
= additional words are farther from the requested word, so less likely to be used
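The block-count trade-off is simple arithmetic; with a fixed cache size, doubling the block size halves the number of blocks that fit (illustrative numbers only):

```python
cache_size = 64 * 1024                  # 64 KiB cache (assumed for illustration)
for block_size in (16, 64, 256):
    num_blocks = cache_size // block_size
    print(f"{block_size:>4}-byte blocks -> {num_blocks} blocks fit")
```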
what is the problem with cache coherency in multi-processor systems
altering data in one cache can invalidate corresponding data in main memory & copies in other caches
what are the 3 solutions to cache coherency in multi-processor systems
- bus watching
- hardware transparency
- noncacheable memory
what is bus watching
- cache controllers monitor writes to shared memory
- invalidate matching cache entries when changes are detected
- on its next access, the CPU must fetch the updated value from main memory
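The three steps above amount to a write-invalidate protocol; a toy Python sketch (a list stands in for the shared bus, a dict for shared memory; names are illustrative, not from the source):

```python
class SnoopingCache:
    """Each controller watches writes on a shared bus (write-invalidate)."""
    def __init__(self, bus, memory):
        self.bus = bus
        self.memory = memory
        self.lines = {}
        bus.append(self)                 # register on the shared bus

    def read(self, addr):
        if addr not in self.lines:       # miss: fetch current value from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value        # assume write-through to shared memory
        for other in self.bus:           # snoop: others invalidate matching entries
            if other is not self:
                other.lines.pop(addr, None)
```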
what is hardware transparency
if a processor modifies a word in its cache, the update is written through to memory & propagated to any other cache holding that word
what is non cacheable memory
- shared memory is never stored in cache, so all accesses result in cache misses
- special hardware marks shared memory as noncacheable
structure of static RAM
- uses flip-flop logic-gate configurations
- no need for refresh as long as power is supplied
define operations of SRAM
- WRITE - bit value is applied to force transistors into the correct state
- READ - bit value is read from the designated line