Virtual memory prefetching Flashcards

1
Q

Prefetching

A

Predict what data might be needed soon and request it ahead of time. Done well, prefetching reduces memory access latency.

2
Q

Prefetching is only effective under certain conditions. What are they?

A

There is spare memory bandwidth, since prefetching increases the number of memory accesses.

The prefetched data will actually be used soon, i.e. the prefetch prediction is accurate.

The prefetch is done in a timely manner, not so late that the data arrives after it is needed.

Prefetching doesn’t evict useful data from the cache.

3
Q

How can software help prefetching?

A

Arrange data structures so that values are accessed in sequential order or at fixed strides, which makes the access pattern easy to predict.

4
Q

Does a compiler add prefetch instructions?

A

Yes. A compiler can optimise by inserting prefetch instructions ahead of predictable accesses; programmers can also insert them by hand.

5
Q

What is the structure for virtual memory?

A

The virtual address goes into the MMU, which looks it up in a mapping table and returns the corresponding physical address in main memory. Pages that are not resident in main memory live on disk and must be paged in.

6
Q

MMU

A

memory management unit

7
Q

Page tables

A

The mapping of virtual memory to physical memory is stored in page tables, because there are far too many mappings to hold in registers. Page tables live in main memory; recently used entries are cached in the TLB (and, like other memory, may also sit in the data caches).

8
Q

TLB (Translation Lookaside Buffer)

A

a hardware cache for page table entries to reduce access to the page table in memory.

9
Q

What happens when there is a miss at the page table?

A

The MMU triggers a page fault and the OS page-fault handler takes over. It selects a victim page and evicts it; if the victim is dirty (i.e. it has been modified), it is first paged out to disk. The requested page is then read in from disk and the page table is updated to map it.
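The eviction steps on this card can be sketched as a toy handler in C (all structures and the disk functions are illustrative inventions, with trivial round-robin replacement standing in for a real policy):

```c
#include <stdbool.h>
#include <string.h>

/* Toy page-fault handler mirroring the card's steps; not real OS code. */
#define NUM_FRAMES 4
#define PAGE_SIZE  16

typedef struct {
    int  vpn;                 /* virtual page held in this frame (-1 = free) */
    bool dirty;               /* modified since it was paged in?             */
    char data[PAGE_SIZE];
} frame_t;

static frame_t frames[NUM_FRAMES] =
    {{-1, false, {0}}, {-1, false, {0}}, {-1, false, {0}}, {-1, false, {0}}};
static int next_victim = 0;   /* trivial round-robin replacement policy */

/* Stand-ins for real disk I/O. */
static void disk_write(int vpn, const char *data) { (void)vpn; (void)data; }
static void disk_read(int vpn, char *data) { (void)vpn; memset(data, 0, PAGE_SIZE); }

/* Returns the frame number that now holds virtual page vpn. */
int handle_page_fault(int vpn) {
    frame_t *victim = &frames[next_victim];
    int frame_no = next_victim;
    next_victim = (next_victim + 1) % NUM_FRAMES;

    if (victim->vpn != -1 && victim->dirty)    /* dirty victim:       */
        disk_write(victim->vpn, victim->data); /* page out to disk    */

    disk_read(vpn, victim->data);              /* page in new page    */
    victim->vpn = vpn;                         /* update the mapping  */
    victim->dirty = false;
    return frame_no;
}
```

Clean victims can simply be dropped, which is why only dirty pages cost an extra disk write.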

10
Q

Why add a TLB when page table entries (PTEs) already exist?

A

Without a TLB, every memory access first needs an extra memory access to fetch the PTE, roughly doubling the cost of each access. The TLB caches recent translations so most accesses skip the page-table lookup.

11
Q

TLB reach

A

The maximum amount of memory the TLB can map:
TLB reach = (number of TLB entries) × (page size).
This is the memory covered when every TLB entry holds a valid translation.

12
Q

Working set

A

The set of virtual pages a program is actively using at a given time.

13
Q

What happens when the working set > main memory size?

A

If the working set is larger than main memory, pages are continually evicted and fetched back from disk. This is called thrashing, and it drastically increases memory access latency.

14
Q

No. of working set pages > No. of TLB entries

A

The TLB cannot cache translations for all the active pages at once, so TLB misses occur even within the working set.
