Operating Systems Flashcards
(15 cards)
How many Data-TLB entries for 4KB pages in Intel Ice Lake x86_64? What about for 2MB? For 1GB?
Same question for Instr-TLB.
Data TLB
- 4KB : 64
- 2MB : 32
- 1GB : 4
Instr TLB
- 4KB : 128
- 2MB : 8
- 1GB : 8
What does 4-way, 8-way, 16-way, fully-associative TLB signify?
Trade-offs with more/fewer “ways”?
- the “N” in N-way is the number of entries (ways) per set; number of sets = total entries / N
- the set index is pulled from the low bits of the VPN
- every way in the selected set is compared in parallel (see the lookup sketch after this list)
- fully associative = a single set holding all entries, so every entry is compared
Higher (e.g., 8-way, 16-way):
- Fewer collisions, better hit rate
- More complex, slightly slower
Lower (e.g., 2-way, 4-way):
- Faster lookup
- More collisions (conflict misses)
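A minimal C sketch of the lookup, assuming a toy 64-entry, 4-way TLB; the names (`tlb_way_t`, `tlb_lookup`) and sizes are invented for illustration, and the software loop stands in for comparisons the hardware performs in parallel.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WAYS 4
#define SETS 16   /* 64 entries / 4 ways */

typedef struct {
    bool     valid;
    uint64_t vpn;   /* tag: the full virtual page number stored in the entry */
    uint64_t pfn;   /* translation result: physical frame number             */
} tlb_way_t;

static tlb_way_t tlb[SETS][WAYS];

/* Returns true on a hit and writes the frame number to *pfn_out. */
static bool tlb_lookup(uint64_t vpn, uint64_t *pfn_out) {
    uint64_t set = vpn % SETS;           /* set index comes from low VPN bits */
    for (int w = 0; w < WAYS; w++) {     /* hardware checks all ways at once  */
        if (tlb[set][w].valid && tlb[set][w].vpn == vpn) {
            *pfn_out = tlb[set][w].pfn;
            return true;
        }
    }
    return false;                        /* miss: fall back to a page-table walk */
}

int main(void) {
    uint64_t vpn = 0x12345, pfn = 0;
    tlb[vpn % SETS][0] = (tlb_way_t){ .valid = true, .vpn = vpn, .pfn = 0x777 };
    if (tlb_lookup(vpn, &pfn))
        printf("hit: pfn=0x%llx\n", (unsigned long long)pfn);
    return 0;
}
```

A fully associative TLB is the degenerate case SETS == 1: no index bits, every entry is a candidate.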
What is STLB in Intel Ice Lake?
The second-level (L2) TLB: unified (shared by instruction and data translations), much larger than the L1 TLBs, but slower.
fetch time for L1 TLB vs L2 TLB
0.5-1 cycle vs 5-10 cycles
What are some free space management policies?
When/How to use them?
- free space management policies are used inside memory allocators (e.g., malloc and the allocators that tools like AddressSanitizer substitute for it)
- API: void *malloc(size_t size), void free(void *ptr) — note that free() receives only a pointer, not a size
- common policies for picking a block from the free list: first fit, best fit, worst fit, next fit
- the allocator splits a free block on allocation and coalesces adjacent free blocks on free
- a header placed just before each allocated block stores its size (and often a magic value); that is how free(ptr) knows how much memory is being returned (see the sketch below)
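A minimal first-fit free-list sketch; everything here (`header_t`, `my_malloc`, `my_free`, the static 4 KB `heap`) is invented for illustration, and coalescing, thread safety, and error checks are omitted.

```c
#include <stddef.h>
#include <stdio.h>

typedef struct header {
    size_t size;          /* usable bytes that follow this header          */
    struct header *next;  /* next block in the list                        */
    int free;             /* 1 = available, 0 = handed out to the caller   */
} header_t;

static _Alignas(header_t) unsigned char heap[4096];  /* pretend heap        */
static header_t *head = NULL;                        /* start of block list */

static void init_heap(void) {
    head = (header_t *)heap;
    head->size = sizeof(heap) - sizeof(header_t);
    head->next = NULL;
    head->free = 1;
}

/* First-fit allocation with splitting. */
static void *my_malloc(size_t size) {
    size = (size + 7) & ~(size_t)7;          /* keep blocks 8-byte aligned */
    for (header_t *h = head; h != NULL; h = h->next) {
        if (!h->free || h->size < size)
            continue;
        /* Split off the remainder if it can hold another header + data.  */
        if (h->size >= size + sizeof(header_t) + 8) {
            header_t *rest = (header_t *)((unsigned char *)(h + 1) + size);
            rest->size = h->size - size - sizeof(header_t);
            rest->next = h->next;
            rest->free = 1;
            h->size = size;
            h->next = rest;
        }
        h->free = 0;
        return (void *)(h + 1);              /* user memory starts right after the header */
    }
    return NULL;                             /* no block big enough        */
}

/* free() gets only the pointer; the header right before it has the size. */
static void my_free(void *ptr) {
    header_t *h = (header_t *)ptr - 1;
    h->free = 1;      /* a real allocator would also coalesce with neighbours */
}

int main(void) {
    init_heap();
    void *a = my_malloc(100);
    void *b = my_malloc(200);
    printf("a=%p b=%p\n", a, b);
    my_free(a);
    my_free(b);
    return 0;
}
```

The detail the card is after: `my_free` receives only a pointer and recovers the block size from the header sitting immediately before it.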
How can you get infinite TLB misses? How to prevent that?
- if the translations for the TLB-miss handler’s own code and data can themselves miss, handling one miss triggers another miss, recursively, forever
- prevention: the handler’s translations are pinned (wired) into the TLB, or the handler lives in unmapped / physically addressed memory so it needs no translation at all
Metadata in a TLB entry
- Address space identifier (ASID) / global bit
- valid
- dirty
- caching info (e.g., memory type; see the entry sketch below)
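As a sketch, the metadata above could be laid out like this; the field names and widths are made up, since real TLB entry formats are architecture specific.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t vpn;       /* virtual page number: the tag that gets matched      */
    uint64_t pfn;       /* physical frame number the page maps to              */
    uint16_t asid;      /* address space identifier: which process owns it     */
    uint8_t  global;    /* 1 = match regardless of ASID (shared kernel pages)  */
    uint8_t  valid;     /* 1 = entry holds a usable translation                */
    uint8_t  dirty;     /* 1 = a write has gone through this translation       */
    uint8_t  prot;      /* protection bits (read/write/execute)                */
    uint8_t  memtype;   /* caching info, e.g., write-back vs. uncacheable      */
} tlb_entry_t;

int main(void) {
    printf("sizeof(tlb_entry_t) = %zu bytes\n", sizeof(tlb_entry_t));
    return 0;
}
```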
Why is TLB so fast
- locality (spatial/temporal) keeps the hit rate high, but that is not the expected answer here
- the expected answer is “parallel search”: the hardware compares all ways of the indexed set (or every entry, if fully associative) simultaneously, CAM-style, rather than scanning them one by one
What are inverted page tables
mapping from physical page frame to virtual page, instead of the other way around
one entry per physical frame, so the table scales with physical memory rather than with each process’s virtual address space
not how Intel x86_64 does it (x86_64 walks a multi-level forward page table); a sketch follows below
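A toy inverted page table to make the direction of the mapping concrete; `ipt_entry_t` and `ipt_translate` are invented names, and the linear scan stands in for the hashing a real design would use.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 1024

typedef struct {
    bool     used;
    uint16_t asid;   /* which address space owns this frame   */
    uint64_t vpn;    /* which virtual page is stored in it    */
} ipt_entry_t;

static ipt_entry_t ipt[NUM_FRAMES];   /* indexed by physical frame number */

/* Returns the frame holding (asid, vpn), or -1 if the page is not resident. */
static int ipt_translate(uint16_t asid, uint64_t vpn) {
    for (int frame = 0; frame < NUM_FRAMES; frame++)
        if (ipt[frame].used && ipt[frame].asid == asid && ipt[frame].vpn == vpn)
            return frame;
    return -1;
}

int main(void) {
    ipt[7] = (ipt_entry_t){ .used = true, .asid = 1, .vpn = 0x42 };
    printf("frame = %d\n", ipt_translate(1, 0x42));   /* prints 7 */
    return 0;
}
```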
What is the difference between swap space and page cache
- swap space backs anonymous memory (heap, stack, private pages with no file behind them)
- the page cache holds file-backed pages, which can be dropped or written back to their file instead of going to swap
Give an example of when LRU misbehaves
- looping repeatedly over a working set that is one page larger than the cache: LRU always evicts exactly the page the loop needs next, so after the first pass every access is a miss (see the simulation below)
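A tiny simulation of that workload, assuming a 4-entry cache and a loop over 5 pages (all names are invented): after 100 passes the hit count is still zero.

```c
#include <stdio.h>

#define CACHE_SIZE 4
#define PAGES      (CACHE_SIZE + 1)   /* working set is one page too big */

static int cache[CACHE_SIZE];   /* which page each slot holds (-1 = empty) */
static int stamp[CACHE_SIZE];   /* last-use time of each slot              */

int main(void) {
    int hits = 0, misses = 0, now = 0;
    for (int i = 0; i < CACHE_SIZE; i++) cache[i] = -1;

    for (int pass = 0; pass < 100; pass++) {
        for (int page = 0; page < PAGES; page++, now++) {
            int hit = -1, empty = -1, lru = 0;
            for (int i = 0; i < CACHE_SIZE; i++) {
                if (cache[i] == page) hit = i;                 /* already cached?    */
                if (cache[i] == -1 && empty == -1) empty = i;  /* free slot?         */
                if (stamp[i] < stamp[lru]) lru = i;            /* oldest slot so far */
            }
            if (hit >= 0) {
                hits++; stamp[hit] = now;
            } else {
                int victim = (empty >= 0) ? empty : lru;       /* fill empties first */
                misses++; cache[victim] = page; stamp[victim] = now;
            }
        }
    }
    printf("hits=%d misses=%d\n", hits, misses);   /* prints hits=0 misses=500 */
    return 0;
}
```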
In Linux, when does the swap daemon run? What is the swap daemon, what does it do, and why is it needed?
- kswapd: kernel swap daemon
- runs when free memory drops below a watermark (i.e., the system is under memory pressure)
- kswapd picks a page using LRU, avoiding active/pinned ones.
- Dirty CPU cache lines for that page are flushed to RAM by hardware (cache coherence).
- TLB entry is invalidated, and the page is marked non-present in the page tables.
- Page is written to swap (if dirty) and removed from physical memory.
- Future access triggers a page fault → kernel reloads it from swap, updates page tables & TLB.
How does Linux’s page cache work
- an approximation of the 2Q policy: two LRU lists (inactive and active); a page enters the inactive list on first touch and is promoted to the active list on a second touch (see the sketch below)
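A rough two-list sketch in the spirit of 2Q (list names, sizes, and the array-shifting implementation are all invented): a first touch lands a page on the inactive list, a second touch promotes it to the active list, so a one-shot streaming scan cannot flush the frequently reused pages.

```c
#include <stdio.h>
#include <string.h>

#define ACTIVE_CAP   4
#define INACTIVE_CAP 4

static int active[ACTIVE_CAP], inactive[INACTIVE_CAP];
static int n_active = 0, n_inactive = 0;

/* Find `page` in list[0..n); returns its index or -1. */
static int find(const int *list, int n, int page) {
    for (int i = 0; i < n; i++)
        if (list[i] == page) return i;
    return -1;
}

/* Remove the element at index i from a list of length *n. */
static void remove_at(int *list, int *n, int i) {
    memmove(&list[i], &list[i + 1], (size_t)(*n - i - 1) * sizeof(int));
    (*n)--;
}

/* Insert page at the head (most-recent end); returns the evicted tail or -1. */
static int push_head(int *list, int *n, int cap, int page) {
    int evicted = -1;
    if (*n == cap) { evicted = list[cap - 1]; (*n)--; }   /* drop the tail */
    memmove(&list[1], &list[0], (size_t)(*n) * sizeof(int));
    list[0] = page;
    (*n)++;
    return evicted;
}

static void access_page(int page) {
    int i;
    if ((i = find(active, n_active, page)) >= 0) {
        remove_at(active, &n_active, i);               /* refresh recency       */
        push_head(active, &n_active, ACTIVE_CAP, page);
    } else if ((i = find(inactive, n_inactive, page)) >= 0) {
        remove_at(inactive, &n_inactive, i);           /* second touch: promote */
        int demoted = push_head(active, &n_active, ACTIVE_CAP, page);
        if (demoted >= 0)                              /* active overflow falls */
            push_head(inactive, &n_inactive, INACTIVE_CAP, demoted);
    } else {
        push_head(inactive, &n_inactive, INACTIVE_CAP, page);  /* first touch  */
    }
}

int main(void) {
    int hot[] = {1, 2, 1, 2, 1, 2};                    /* reused pages          */
    for (int i = 0; i < 6; i++) access_page(hot[i]);
    for (int p = 100; p < 110; p++) access_page(p);    /* one-shot streaming    */
    printf("active list still holds: ");
    for (int i = 0; i < n_active; i++) printf("%d ", active[i]);
    printf("\n");                                      /* pages 1 and 2 survive */
    return 0;
}
```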