Virtual memory prefetching Flashcards
Prefetching
Predict what data will be needed soon and request it ahead of time. If done well, it reduces memory access latency.
Prefetching is only effective under certain conditions. What are they?
There is spare memory bandwidth, so the extra prefetch requests do not compete with demand accesses.
The prefetched data will actually be used soon, i.e. the prefetch prediction is accurate.
The prefetch is issued in a timely manner: early enough to hide the latency, but not too late (or so early that the data is evicted before it is used).
The prefetch does not evict data from the cache that is still needed.
Software examples to perform prefetching
Arrange data structures so that values are accessed sequentially or at predictable strides, which hardware prefetchers handle well (see the sketch below).
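A minimal sketch in C of access-order-friendly layout; the matrix, its size, and the summation are made-up illustrations, not from the card. The first loop walks memory in the order it is laid out, so caches and prefetchers help; the second does the same work but strides across memory.

```c
#include <stdio.h>

#define N 1024

static double a[N][N];      /* stored row-major: a[i][0], a[i][1], ... are adjacent */

int main(void) {
    double sum = 0.0;

    /* Prefetch-friendly: sequential, layout-order traversal. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Prefetch-unfriendly: same work, but each access jumps N * sizeof(double) bytes. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("%f\n", sum);
    return 0;
}
```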
Does a compiler add prefetch instructions?
Yes. A compiler can insert prefetch instructions into the generated code when it can predict the access pattern, and programmers can also request prefetches explicitly (see the sketch below).
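A hedged sketch of an explicit software prefetch using the GCC/Clang builtin __builtin_prefetch; the array, the prefetch distance of 16 elements, and process() are illustrative assumptions.

```c
#include <stddef.h>

static long process(long x) { return x * 2; }   /* stand-in for real per-element work */

long sum_with_prefetch(const long *data, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        /* Request data[i + 16] ahead of time: 0 = read, 1 = low temporal locality. */
        if (i + 16 < n)
            __builtin_prefetch(&data[i + 16], 0, 1);
        sum += process(data[i]);
    }
    return sum;
}
```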
What is the overall structure of virtual memory?
A virtual address goes into the MMU, which looks it up in a mapping table (the page table) and returns the corresponding physical address in main memory; pages that are not currently resident are kept on disk (see the sketch below).
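A minimal single-level translation sketch in C; the page size, table size, and mapping are assumptions for illustration, and real MMUs do this in hardware with multi-level tables.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                          /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  256                         /* tiny virtual address space */

/* page_table[vpn] holds the physical frame number of each virtual page. */
static uint32_t page_table[NUM_PAGES];

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;     /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* offset within the page */
    uint32_t frame  = page_table[vpn];         /* look up the mapping */
    return (frame << PAGE_SHIFT) | offset;     /* physical address */
}

int main(void) {
    page_table[3] = 7;                         /* map virtual page 3 to frame 7 */
    uint32_t va = 3u * PAGE_SIZE + 0x10;
    printf("virtual 0x%x -> physical 0x%x\n", va, translate(va));
    return 0;
}
```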
MMU
memory management unit
Page tables
The mapping from virtual pages to physical frames is stored in page tables because there are far too many mappings to hold directly in hardware. Page tables live in main memory; recently used entries are cached (in the data caches and, above all, in the TLB).
TLB (Translation Lookaside Buffer)
a hardware cache for page table entries to reduce access to the page table in memory.
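A toy software model of a direct-mapped TLB, just to illustrate "check the TLB first, walk the page table only on a miss"; the entry count, the indexing scheme, and walk_page_table() are made-up assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16

typedef struct {
    bool     valid;
    uint32_t vpn;     /* tag: which virtual page this entry maps */
    uint32_t frame;   /* the cached translation */
} TlbEntry;

static TlbEntry tlb[TLB_ENTRIES];

/* Stand-in for the slow path (reading the PTE from memory);
   here it just identity-maps pages so the sketch compiles. */
static uint32_t walk_page_table(uint32_t vpn) { return vpn; }

uint32_t lookup_frame(uint32_t vpn) {
    TlbEntry *e = &tlb[vpn % TLB_ENTRIES];       /* direct-mapped index */
    if (e->valid && e->vpn == vpn)
        return e->frame;                         /* TLB hit: no extra memory access */
    uint32_t frame = walk_page_table(vpn);       /* TLB miss: walk the page table */
    *e = (TlbEntry){ .valid = true, .vpn = vpn, .frame = frame };
    return frame;
}
```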
What happens when there is a miss at the page table?
The MMU raises a page fault and the OS page fault handler takes over. It picks a victim page to evict; if the victim is dirty (i.e. it has been modified), it is first written back to disk. The required page is then read from disk into the freed frame and the page table entry is updated (see the sketch below).
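A hedged, toy simulation of the evict / write-back-if-dirty / page-in sequence; the frame counts, the round-robin victim choice, and all names are made up for illustration, and a real OS handler is far more involved.

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define NUM_PAGES   8      /* virtual pages */
#define NUM_FRAMES  4      /* physical frames */
#define PAGE_SIZE   16     /* tiny pages for the demo */

typedef struct {
    bool valid;            /* is the page resident in a frame? */
    bool dirty;            /* has it been modified since it was loaded? */
    int  frame;            /* which physical frame holds it */
} PageTableEntry;

static PageTableEntry page_table[NUM_PAGES];
static char memory[NUM_FRAMES][PAGE_SIZE];   /* "physical memory" */
static char disk[NUM_PAGES][PAGE_SIZE];      /* "backing store" */
static int  frame_owner[NUM_FRAMES];         /* which page owns each frame, -1 = free */
static int  next_victim = 0;                 /* trivial round-robin replacement */

/* Handle a fault on virtual page vpn: find or free a frame, write back
   the victim if it is dirty, then page the requested page in from disk. */
static void page_fault_handler(int vpn) {
    int frame = -1;
    for (int f = 0; f < NUM_FRAMES; f++)     /* look for a free frame first */
        if (frame_owner[f] == -1) { frame = f; break; }

    if (frame == -1) {                       /* no free frame: evict a victim */
        frame = next_victim;
        next_victim = (next_victim + 1) % NUM_FRAMES;
        int victim = frame_owner[frame];
        if (page_table[victim].dirty)        /* dirty page: write it back to disk */
            memcpy(disk[victim], memory[frame], PAGE_SIZE);
        page_table[victim].valid = false;
    }

    memcpy(memory[frame], disk[vpn], PAGE_SIZE);  /* page in the requested page */
    page_table[vpn] = (PageTableEntry){ .valid = true, .dirty = false, .frame = frame };
    frame_owner[frame] = vpn;
}

int main(void) {
    for (int f = 0; f < NUM_FRAMES; f++) frame_owner[f] = -1;
    for (int p = 0; p < NUM_PAGES; p++) disk[p][0] = 'A' + p;

    /* Touch more pages than there are frames to force evictions. */
    for (int p = 0; p < NUM_PAGES; p++) {
        if (!page_table[p].valid) page_fault_handler(p);
        printf("page %d -> frame %d, data '%c'\n",
               p, page_table[p].frame, memory[page_table[p].frame][0]);
    }
    return 0;
}
```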
Why add a TLB when we already have page tables (PTEs)?
Without a TLB, every memory reference needs an extra memory access to fetch the PTE before the data itself can be accessed, so the cost of each reference roughly doubles. The TLB caches recent translations so most references skip the page-table lookup.
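A worked illustration with assumed numbers (not from the card): suppose a memory access takes 100 ns and the TLB hits 99% of the time. Without a TLB, every reference costs two memory accesses (one for the PTE, one for the data), about 200 ns. With the TLB, the average cost is roughly 0.99 × 100 ns + 0.01 × 200 ns ≈ 101 ns, ignoring the (small) TLB lookup time itself.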
TLB reach
The maximum amount of memory the TLB can map at once:
TLB reach = (number of TLB entries) × (page size),
assuming every TLB entry holds a valid translation.
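As an illustrative calculation with assumed sizes: a 64-entry TLB with 4 KiB pages has a reach of 64 × 4 KiB = 256 KiB; with 2 MiB huge pages the same TLB would reach 64 × 2 MiB = 128 MiB.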
Working set
The set of virtual pages a process is actively using.
What happens when the working set is larger than main memory?
Main memory acts as a cache for pages kept on disk. If the working set exceeds main memory, pages are continually evicted and re-fetched from disk. This is called thrashing, and it dramatically increases memory access latency.
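For a rough sense of scale (order-of-magnitude figures, not from the card): a DRAM access takes on the order of 100 ns, while fetching a page from an SSD or disk takes tens of microseconds to milliseconds, so each page brought in during thrashing costs thousands to millions of times a normal memory access.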
No. of working set pages > No. of TLB entries
The TLB cannot hold translations for all of the active pages, so TLB misses occur frequently and extra page-table lookups are needed.