Caches (PPT 2 - 4) Flashcards

1
Q

What are the four goals we wish to achieve?

A
  • Fast processing
  • Fast memory
  • Large memory
  • Cheap
2
Q

What is the Von Neumann Bottleneck?

A

It takes a much longer time to read data from memory than it does to actually process it

3
Q

What is a cache?

A

A small storage area located close to a processing unit, which stores recently used data taken from a larger, but more distant and slower, storage area.

4
Q

What is the payload of a cache?

A

The payload is the data and instructions fetched from memory

5
Q

What are the four traits of a computer cache?

A
  • Physically very close to the processor for fast access
  • Is very small
  • Is accessed instead of main memory when doing a “main memory access”
  • Functions automatically without software control
6
Q

What is locality of reference?

A
  • When an item is used, it will probably be used again soon (temporal locality)
  • When an item is used, the items around it will likely be used too (spatial locality)

7
Q

How does a cache work?

A

It uses locality of reference, meaning it keeps a local copy of the original data and possibly the material around it to avoid going back to the original source

8
Q

Examples of Locality of Reference in Computing?

A

  • ZIP compression exploits repeated sequences in text
  • The same applications are used regularly
  • Program loops use the same code repeatedly

9
Q

How small is the cache compared to main memory usually?

A

It is usually about 1% of main memory

10
Q

Why does a cache work?

A

A cache works because, if the same data is needed again, it takes only 1 clock cycle to access it from the cache rather than the full 8 cycles it would take to fetch it from main memory
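
A back-of-envelope Python sketch of why this pays off, using the cycle counts above and an assumed 95% hit ratio (the ratio is an illustrative figure, not from the slides):

    hit_cycles, miss_cycles, hit_ratio = 1, 8, 0.95   # 95% hit ratio is an assumed example figure
    average = hit_ratio * hit_cycles + (1 - hit_ratio) * miss_cycles
    print(round(average, 2))   # 1.35 cycles on average, instead of 8 cycles for every access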

11
Q

How does the CPU use a cache?

A

When the CPU needs to fetch, it first checks the cache to see if the data is there. If it is, it gets a cache “hit”. If it isn’t, it gets a cache “miss” and has to read the data from main memory
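
A minimal Python sketch of that hit/miss check, assuming a toy cache keyed by block number and a made-up 16-byte block size (illustrative, not the hardware mechanism itself):

    # Hypothetical sketch: the cache is modelled as a dictionary of block number -> block data.
    def cpu_read(address, cache, main_memory, block_size=16):
        block_no, offset = divmod(address, block_size)
        if block_no in cache:                      # cache "hit": data is already local
            return cache[block_no][offset]
        start = block_no * block_size              # cache "miss": fetch the block from main memory
        cache[block_no] = main_memory[start:start + block_size]
        return cache[block_no][offset]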

12
Q

Is the processor aware a cache exists?

A

No, it treats the cache in the same way it treats main memory, meaning memory addressing is the same

13
Q

Name another type of caching

A

Disk caching, where main memory acts as a cache for the slower hard disk

14
Q

What are the four possible issues with using a cache?

A
  • Usage: Must function automatically
  • Mapping: Cache is small so must be able to map main memory to it
  • Coherence: Cache contains a copy of memory locations and integrity must be maintained during use
  • Updating: Cache is normally full so needs to be able to handle new data
15
Q

What is Cache Overhead?

A

The cache needs additional space for housekeeping and decoding information, so its physical storage is always larger than the payload size; however, the quoted cache size is always the payload size only
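
A rough Python illustration of the idea, with assumed example figures (payload size, line size and housekeeping bits are not from the slides):

    # Assumed example: 8 KB payload, 32-byte lines, 20-bit address tag plus 2 status bits per line.
    lines = (8 * 1024) // 32                  # 256 cache lines
    overhead_bytes = lines * (20 + 2) // 8    # 704 bytes of housekeeping
    print(overhead_bytes, "bytes on top of the quoted 8 KB payload")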

16
Q

What are the three main cache mapping methods?

A
  • Direct mapping
  • Fully associative mapping
  • N-way set associative mapping
17
Q

What is direct mapping?

A

Each main memory address maps to one specific cache address, which is found by calculation:
cache set no. = block no. mod cache size (where the cache size is measured in blocks/lines)
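
A minimal Python sketch of the calculation, assuming an example cache of 256 lines:

    NUM_CACHE_LINES = 256                    # assumed cache size, measured in lines/blocks

    def direct_mapped_line(block_no):
        return block_no % NUM_CACHE_LINES

    # Blocks 3, 259 and 515 all collide on cache line 3 (the weakness noted in the next card):
    print(direct_mapped_line(3), direct_mapped_line(259), direct_mapped_line(515))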

18
Q

What are the advantages and disadvantages of using Direct Mapping?

A

Adv: The cache stores only the data and no other information, so it is efficient in its use of silicon
Disadv: It can be inefficient: if two frequently referenced blocks map to the same cache location, they will be continually swapped and the hit ratio will be low

19
Q

What is Fully-Associative Mapping?

A

An item can be placed anywhere in the cache and to find it we search every stored address

20
Q

What are the advantages and disadvantages of using Fully-Associative Mapping?

A

Adv: Does not require a calculation to place the payload
Disadv:
  • Must store the original address along with the data, meaning more space is taken up on the silicon
  • Must search every stored address to find data

21
Q

What is Set Associative Mapping?

A

Combines the best features of Direct and Fully Associative Mapping. The cache is divided into sets (“rows”) and the main memory address is divided up in order to calculate the location: some bits are used for the set, the others for a partial address or tag

22
Q

What do “Ways” in a Set Associative map do?

A

They allow multiple blocks whose addresses map to the same set to be stored in the cache at the same time.

23
Q

How do you map an item using the Set Associative method?

A

We split the address into three fields:
(Tag | Set | Byte)
The number of Byte bits is determined by the way (line) size: way size = 2^(byte bits)
The number of Set bits is determined by the number of sets: sets = 2^(set bits)
The Tag is whatever is left of the address bits
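
A hedged Python sketch of that split; the function name and arguments are illustrative, not from the slides:

    def split_address(addr, byte_bits, set_bits):
        byte = addr & ((1 << byte_bits) - 1)                    # lowest bits: byte within the line
        set_no = (addr >> byte_bits) & ((1 << set_bits) - 1)    # middle bits: which set
        tag = addr >> (byte_bits + set_bits)                    # remaining bits: the tag
        return tag, set_no, byte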

24
Q

If the bytes per cache line is 16 bytes and there are 256 sets, how many bits are needed for each part of the (Tag, Set, Byte) format?

A

2^4 = 16, so 4 bits for the Byte portion
2^8 = 256, so 8 bits for the Set portion
The Tag is the remainder: for a 32-bit address, 32 - 4 - 8 = 20 bits
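
Using the split_address sketch from the previous card with these figures (and assuming a 32-bit address):

    # 16 bytes per line -> 4 Byte bits; 256 sets -> 8 Set bits; 32 - 4 - 8 = 20 Tag bits
    tag, set_no, byte = split_address(0xDEADBEEF, byte_bits=4, set_bits=8)
    print(hex(tag), hex(set_no), hex(byte))   # 0xdeadb (20-bit tag), 0xee (set 238), 0xf (byte 15)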

25
Q

What is hexadecimal E in binary?

A

1110 (14)

26
Q

What are the three cache update strategies?

A
  • Least Recently Used (LRU): the most effective strategy (a minimal sketch follows below)
  • First In, First Out (FIFO): implemented with a circular buffer on each set
  • Least Frequently Used (LFU): needs a hit counter on each way of each set
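
A minimal LRU sketch for a single set, assuming a 4-way set and using Python's OrderedDict; this is an illustration of the policy, not the slides' hardware implementation:

    from collections import OrderedDict

    class LRUSet:
        """One cache set with a fixed number of ways, evicting the least recently used line."""
        def __init__(self, ways=4):
            self.ways = ways
            self.lines = OrderedDict()          # tag -> data, ordered by recency of use

        def access(self, tag, fetch):
            if tag in self.lines:               # hit: mark as most recently used
                self.lines.move_to_end(tag)
                return self.lines[tag]
            if len(self.lines) >= self.ways:    # set full: evict the least recently used line
                self.lines.popitem(last=False)
            self.lines[tag] = fetch(tag)        # miss: fetch the data and store it
            return self.lines[tag]
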
27
Q

How do we determine which Set and Way will be used?

A

The set is determined by the address being accessed and is therefore fixed

The way to be used is determined at run time and therefore dynamic.

28
Q

Does the cache need any extra housekeeping bits to determine which way to overwrite?

A

Yes, it needs to keep track of some extra bits. These are updated dynamically during cache usage

29
Q

What is part of the physical size of the cache which is not stated in the quoted size?

A

Any housekeeping data, such as the address tag

30
Q

What are the four levels of cache?

A

Level 1 is the normal cache
Level 2 is a slightly larger but slower cache
Level 3 is for multi-core CPUs and is shared between the cores
Level 4 is for multi-CPU computers and can be shared between CPUs

31
Q

What are the two types of multi-level cache?

A

Inclusive: Level 2 holds all the data in Level 1 plus the data L1 discards. On a hit in L2, the data is copied into L1. On a miss in both, the data is fetched and copied into both.

Exclusive: L2 holds only data discarded from Level 1. On a hit in L2, the data is swapped between L1 and L2. On a miss in both, the data is fetched into L1, the displaced L1 data moves to L2, and older L2 data is discarded

32
Q

What is cache coherence?

A

Keeping the copy of data held in the cache consistent with the original in main memory. If one copy is updated and the other is not, there are two different versions of the same data.

33
Q

How can coherence be achieved?

A

-Ensure the cache and main memory are always coherent (contain the same data)
or
-Use appropriate dynamic techniques to ensure coherence differences don’t cause problems.

34
Q

What is a write-through?

A

In a write-through cache, a data write updates both the cache and the main memory. It ensures the cache is always coherent, but it is slow because a main memory write is needed for every data write

35
Q

What is a write-back?

A

In a write-back cache, a data write updates only the cache, so the cache and main memory are not coherent. Main memory is updated only when the data is about to be overwritten in the cache, which means a main memory write cycle happens only when necessary. The cache must keep track of which data has been changed (typically with a "dirty" bit per line).
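
A hedged Python sketch contrasting the two write policies from the last two cards; the function names and the dirty-set bookkeeping are illustrative assumptions:

    # `cache` maps address -> value; `dirty` records cached addresses main memory has not yet seen.
    def write(address, value, cache, main_memory, dirty, write_back=True):
        cache[address] = value
        if write_back:
            dirty.add(address)               # write-back: main memory updated later, on eviction
        else:
            main_memory[address] = value     # write-through: update main memory immediately

    def evict(address, cache, main_memory, dirty):
        if address in dirty:                 # write-back only: flush changed data before discarding
            main_memory[address] = cache[address]
            dirty.discard(address)
        del cache[address]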

36
Q

Do we need to do more for coherence if there are multiple processors?

A

Yes, we may want to use software and hardware protocols to monitor coherence. We may also want to allow each processor to eavesdrop (snoop) on the memory accesses made by the other processors to ensure each cache is up to date.

37
Q

What is the MESI Protocol?

A

Two bits are stored for each cache way, indicating one of four possible states:
Modified: Data in this way is newer than main memory. It must be written back if another processor tries to read that address
Exclusive: Data in this way matches main memory and is the only copy
Shared: Data in this way matches main memory but there are other copies
Invalid: Data is invalid or empty.
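
A hedged Python sketch of just the per-way state bookkeeping; the bit encodings and the simplified write transition are assumptions, and the full bus protocol is omitted:

    from enum import Enum

    class MESI(Enum):            # the two stored bits encode one of four states (bit values assumed)
        MODIFIED = 0b11          # newer than main memory; write back if another processor reads it
        EXCLUSIVE = 0b10         # matches main memory; the only cached copy
        SHARED = 0b01            # matches main memory; other caches hold copies too
        INVALID = 0b00           # empty or stale

    def on_local_write(state):
        # Simplified: any successful local write leaves this copy as the newest version.
        # (Invalidating other processors' Shared copies is omitted here.)
        if state is MESI.INVALID:
            raise LookupError("write miss: the line must be fetched first")
        return MESI.MODIFIED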

38
Q

How do you calculate Cache size?

A

Cache Size = Line size x no. of ways x no. of sets

39
Q

Calculate the size of a cache which is 2 way set associative with 128 sets, each with a way size of 32 bytes?

A

2 x 128 x 32 = 8192 bytes (8 KB)
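
As a one-line check in Python (the function name is illustrative):

    def cache_size(line_size_bytes, ways, sets):
        return line_size_bytes * ways * sets

    print(cache_size(32, 2, 128))   # 8192 bytes = 8 KB, matching the worked answer above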

40
Q

What does a cache speed up?

A

The cache increases the overall speed of the fetch execute cycle by automatically reducing the number of main memory reads/writes that are required