Computer Memory & Caching Flashcards

Week 2.6 (63 cards)

1
Q

what are the 8 key characteristics of a computer memory system

A
  1. location
  2. performance parameters
  3. physical characteristics
  4. physical type
  5. organisation
  6. access method
  7. unit of transfer
  8. capacity
2
Q

what are the 2 memory locations

A
  1. internal - small, fast & directly accessible
  2. external - large, slower & accessed through I/O devices
3
Q

examples of internal location memory

A
  • registers
  • cache
  • main memory
4
Q

examples of external location memory

A
  • hard drives
  • tapes
5
Q

what are the 3 performance parameters

A
  1. access time
  2. cycle time
  3. transfer rate
6
Q

what is the access time

A

time to R/W from a memory location

7
Q

what is the cycle time

A

total time needed for one memory operation, including setup for the next operation

8
Q

what is the transfer rate

A

speed at which data is transferred between memory & the processor, including fetching the data & transferring it

9
Q

non-random-access memory transfer formula

A

Tn = Ta + n/R
where:
- Ta = access time
- n = number of bits
- R = transfer rate in bps
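For example, with assumed values Ta = 1 ms, n = 8,000 bits & R = 1,000,000 bps: Tn = 0.001 + 8,000 / 1,000,000 = 0.009 s.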

10
Q

what are the 2 physical characteristics

A
  1. volatile
  2. non-volatile
11
Q

what are the 4 physical types

A
  1. semiconductor
  2. magnetic
  3. optical
  4. magneto-optical
12
Q

what are the 4 access methods

A
  1. sequential
  2. direct
  3. random
  4. associative
13
Q

how does sequential access work

A

memory is accessed in a fixed linear sequence from the beginning (e.g. tape)

14
Q

how does direct access work

A

blocks have unique addresses based on physical location; access jumps to the general vicinity, then searches sequentially (e.g. hard disk)

15
Q

how does random access work

A

any memory location can be accessed directly, with constant access time

16
Q

how does associative access work

A

based on content not address

17
Q

what is a unit of transfer

A

number of bits transferred between memory & the processor at one time
- internal: words or bytes
- external: larger blocks

18
Q

what are the design constraints & dilemma of memory

A

how much? how fast? how expensive?
- designers want large-capacity, low-cost memory, but performance requires fast memory, which tends to be low-capacity & costly

19
Q

solution to memory’s dilemma

A

memory hierarchy - smaller, faster, more expensive memory is supplemented by larger, slower & cheaper memory

20
Q

how is cache optimised

A

levels

21
Q

what is level 1 cache

A

small, fast. stores frequently accessed data

22
Q

what is level 2 cache

A

larger, acts as a buffer between L1 and memory

23
Q

how do cache levels optimise cache

A
  • reduces latency
  • minimises CPU stalls
24
Q

how is data organised across the memory hierarchy

A
  • high-level memory stores frequently accessed data
  • clusters in L1 are periodically swapped with L2 clusters to manage space
25
Q

define single-level cache

A
  • combines the speed of fast, small memory with the size of slower, large memory
  • holds copies of frequently accessed main memory blocks
26
Q

define multi-level cache

A
  1. L1 - fastest & smallest, stores frequently used data & instructions for rapid access
  2. L2 - larger & slower than L1, acts as a middle layer
  3. L3 - largest & slowest, shared across cores in the CPU
27
Q

what is the main memory structure

A

2^n addressable words, divided into M = 2^n / k blocks of k words each
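For example, with assumed values n = 16 & k = 4: main memory holds 2^16 = 65,536 words, giving M = 65,536 / 4 = 16,384 blocks.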
28
Q

how is cache memory structured

A
  • each cache line holds one block of data plus a tag & control bits
  • the tag identifies which main memory block is currently stored in that line
29
Q

what are the 7 elements of cache design

A
  1. write policy
  2. replacement algorithms
  3. number of caches
  4. line size
  5. cache size
  6. cache addresses
  7. mapping
30
Q

describe cache size

A
  • balance cost & performance
  • larger caches = more gates & greater chip/board area requirements
  • performance is sensitive to the workload
31
Q

how are cache addresses made

A
  • virtual memory allows programs to address memory logically, independent of physical availability
  • the MMU translates virtual addresses to physical addresses during operation
32
Q

why do we need mapping

A
  • cache has fewer lines than main memory has blocks
  • a mapping algorithm determines how main memory blocks are assigned to cache lines
33
Q

what are the 3 mapping techniques

A
  1. direct
  2. associative
  3. set-associative
34
Q

define direct mapping

A

assigns each main memory block to a unique cache line

35
Q

what is the direct mapping formula

A

i = j modulo m
where:
- i = cache line number
- j = main memory block number
- m = number of cache lines
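For example, with an assumed m = 4 cache lines: block j = 9 maps to line i = 9 mod 4 = 1, & block j = 13 also maps to line 1, so the two blocks conflict.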
36
Q

how is direct mapping organised

A

main memory address divided into 3 fields:
  1. word (w bits)
  2. line (r bits)
  3. tag (s - r bits)

37
Q

what do the word bits define in a main memory address

A

specifies a unique word or byte within a block

38
Q

what do the line bits define in the main memory address

A

specifies one of the m = 2^r cache lines

39
Q

what do the tag bits define in the main memory address

A

identifies which of the blocks that map to the same line is currently stored

40
Q

address length formula

A

s + w bits

41
Q

block size formula

A

block size = line size = 2^w words/bytes

42
Q

number of blocks in main memory

A

2^s

43
Q

number of lines in cache

A

m = 2^r

44
Q

tag size formula

A

s - r bits

45
Q

what is the direct mapping lookup

A
  1. extract the line bits from the address to select the cache line
  2. retrieve the tag stored in that cache line
  3. compare the stored tag with the tag from the memory address
  4. if they match, use the word bits to find the offset within the block
  5. if not, fetch the block from main memory
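A minimal C sketch of this lookup, assuming illustrative field widths (W = 2 word bits, R = 3 line bits) & a simplified line structure, not values from the cards:

/* Sketch of a direct-mapped cache lookup (illustrative assumptions only). */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define W 2                         /* word bits: 2^2 = 4 words per block (assumed) */
#define R 3                         /* line bits: 2^3 = 8 cache lines (assumed)     */

typedef struct {
    bool     valid;                 /* line currently holds a block          */
    uint32_t tag;                   /* identifies which block is stored here */
    uint32_t words[1u << W];        /* the cached block of data              */
} cache_line;

static cache_line cache[1u << R];

/* Steps 1-5 from the card: returns true on a hit & writes the word to *out. */
bool lookup(uint32_t addr, uint32_t *out) {
    uint32_t word = addr & ((1u << W) - 1);          /* offset within the block */
    uint32_t line = (addr >> W) & ((1u << R) - 1);   /* 1. cache line index     */
    uint32_t tag  = addr >> (W + R);                 /* tag field of the address */

    if (cache[line].valid && cache[line].tag == tag) {   /* 2-3. compare tags        */
        *out = cache[line].words[word];                  /* 4. hit: use word bits    */
        return true;
    }
    return false;                                        /* 5. miss: fetch from main memory */
}

int main(void) {
    uint32_t addr = 0x1ACC;                          /* arbitrary example address        */
    uint32_t line = (addr >> W) & ((1u << R) - 1);
    cache[line].valid = true;                        /* pretend the block was loaded earlier */
    cache[line].tag   = addr >> (W + R);
    cache[line].words[addr & ((1u << W) - 1)] = 42;

    uint32_t v;
    if (lookup(addr, &v)) printf("hit: word = %u\n", v);
    else                  printf("miss\n");
    return 0;
}

The shift & mask amounts correspond directly to the word/line/tag split described in cards 36-44.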
46
Q

what is the direct mapping miss penalty

A
  • fixed cache locations lead to conflict misses
  • thrashing = 2 blocks repeatedly map to the same cache line, lowering the hit ratio
47
Q

what is the solution to direct mapping miss penalties

A

a victim cache:
  • stores recently evicted cache lines
  • reduces conflict misses

48
Q

define associative mapping

A
  • overcomes the limitation of direct mapping by allowing any main memory block to be loaded into any cache line
  • provides flexibility at the cost of more complex control logic
49
Q

how is associative cache organised

A

memory address divided into 2 fields:
  1. tag: uniquely identifies a block in main memory
  2. word: specifies the data within the block

50
Q

how does associative mapping check if a block is in cache

A

control logic compares the address's tag with the tag of every cache line simultaneously

51
Q

pros of associative mapping

A

flexibility in block replacement:
  • any block can be placed in any cache line
  • enables advanced replacement algorithms designed to optimise the hit ratio

52
Q

define set associative mapping

A
  • combines direct & associative mapping
  • cache divided into sets, each containing multiple lines
53
Q

what is the set associative mapping relationship

A

m = v × k, i = j mod v
where:
- m = number of lines in the cache
- v = number of sets
- k = number of lines per set
- i = cache set number
- j = main memory block number
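For example, with an assumed cache of m = 8 lines & k = 2 lines per set: v = 8 / 2 = 4 sets, & block j = 13 maps to set i = 13 mod 4 = 1, where it may occupy either line of that set.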
54
Q

how can set associative mapping be implemented as direct mapping

A

can be implemented as k direct-mapped caches, known as WAYS

55
Q

what are ways

A

each way consists of v lines, where:
  • the first v lines of main memory are mapped into the v lines of each way
  • subsequent groups of v lines are similarly mapped

56
Q

what is the implementation preference of set associative mapping

A
  • more sets & fewer lines per set = lower associativity, closer to direct mapping
  • fewer sets & more lines per set = higher associativity, closer to fully associative mapping
57
Q

how are memory addresses formed in k-way set associative mapping

A
  1. tag (s - d bits)
  2. set (d bits, specifying one of v = 2^d sets)
  3. word (w bits)
58
Q

number of sets formula

A

v = 2^d

59
Q

number of lines in a set formula

A

k

60
Q

total lines in cache formula

A

m = k × v = k × 2^d

61
Q

tag size formula in k way set associative mapping

A

s - d bits

62
Q

k way set associative mapping lookup

A
  1. extract the set index from the address to determine which set to search
  2. compare the tag from the address with the stored tags in all k lines of the selected set
  3. if a tag matches, it is a cache hit; if not, it is a cache miss & the data is fetched from main memory
  4. use the word offset to locate the exact word within the matched cache block
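A minimal C sketch of this lookup, assuming illustrative parameters (W = 2 word bits, D = 2 set bits, K = 2 ways) & a simplified line structure, not values from the cards:

/* Sketch of a 2-way set-associative cache lookup (illustrative assumptions only). */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define W 2                         /* word bits: 4 words per block (assumed) */
#define D 2                         /* set bits: v = 2^2 = 4 sets (assumed)   */
#define K 2                         /* k = lines (ways) per set (assumed)     */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t words[1u << W];
} cache_line;

static cache_line cache[1u << D][K];                 /* v sets of k lines each */

bool lookup(uint32_t addr, uint32_t *out) {
    uint32_t word = addr & ((1u << W) - 1);          /* 4. offset within the block       */
    uint32_t set  = (addr >> W) & ((1u << D) - 1);   /* 1. set index                      */
    uint32_t tag  = addr >> (W + D);

    for (int way = 0; way < K; way++) {              /* 2. compare all k tags in the set */
        if (cache[set][way].valid && cache[set][way].tag == tag) {
            *out = cache[set][way].words[word];      /* 3. hit                            */
            return true;
        }
    }
    return false;                                    /* 3. miss: fetch from main memory   */
}

int main(void) {
    uint32_t addr = 0x2F5;                           /* arbitrary example address        */
    uint32_t set  = (addr >> W) & ((1u << D) - 1);
    cache[set][1].valid = true;                      /* pretend the block sits in way 1  */
    cache[set][1].tag   = addr >> (W + D);
    cache[set][1].words[addr & ((1u << W) - 1)] = 7;

    uint32_t v;
    if (lookup(addr, &v)) printf("hit: word = %u\n", v);
    else                  printf("miss\n");
    return 0;
}

The only difference from the direct-mapped sketch is the loop that compares the tags of all k ways within the selected set.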
63
Q

performance degradation of k-way set associative mapping

A
  • v = m, k = 1 → reduces to direct mapping
  • v = 1, k = m → reduces to fully associative mapping