Paging Flashcards
A page table stores mappings between…
virtual page numbers (VPNs) and page frame numbers (PFNs)
What is the purpose of the valid/present bit?
It indicates whether a virtual page is currently mapped to physical memory or not
=> bit is not set: page fault is raised
=> bit is set: PTE can be used for deriving the physical address of the page
What is a page fault?
If a process issues an instruction that accesses a virtual address that isn’t currently mapped, the MMU raises a page-fault exception and the OS brings the data into memory
Virtual address is divided into:
-virtual page number: index into the page table, which contains the base address of each page in physical memory
-page offset: concatenated with that base address, this yields the physical address
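A minimal sketch of that split and re-combination, assuming 32-bit virtual addresses, 4 KiB pages and a made-up VPN→PFN mapping:

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed parameters: 4 KiB pages -> 12 offset bits, 20 VPN bits. */
#define PAGE_SHIFT  12
#define PAGE_SIZE   (1u << PAGE_SHIFT)
#define OFFSET_MASK (PAGE_SIZE - 1)

int main(void) {
    uint32_t vaddr  = 0x00403a7c;            /* example virtual address */
    uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* index into the page table */
    uint32_t offset = vaddr & OFFSET_MASK;   /* unchanged by translation */

    /* Pretend the page table maps this VPN to page frame number 0x1a2b. */
    uint32_t pfn   = 0x1a2b;
    uint32_t paddr = (pfn << PAGE_SHIFT) | offset;  /* frame base | offset */

    printf("vpn=0x%x offset=0x%x paddr=0x%x\n", vpn, offset, paddr);
    return 0;
}
```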
The OS’s involvement in paging
- page allocation/bringing data into memory
- page replacement
- context switching
Page table entry content
-valid bit: aka present bit
-page frame number: if the page is present, the physical frame in which the page is currently located
-write bit: whether the page may be written to
-caching: whether the page should be cached at all and with which policy
-accessed bit: set by the MMU if page was touched since the bit was last cleared by the OS
-dirty bit: set by the MMU if this page was modified since the bit was last cleared by the OS
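A sketch of how those fields might be packed into a single 32-bit PTE; the field widths and positions are assumptions, real layouts are architecture-specific:

```c
#include <stdint.h>

/* Illustrative 32-bit PTE with a 20-bit PFN; x86, RISC-V, ARM, ...
 * place and size these bits differently. */
typedef struct {
    uint32_t present  : 1;   /* valid bit: mapping may be used for translation */
    uint32_t writable : 1;   /* write bit: page may be written to */
    uint32_t cache    : 2;   /* caching policy for this page */
    uint32_t accessed : 1;   /* set by the MMU when the page is touched */
    uint32_t dirty    : 1;   /* set by the MMU when the page is modified */
    uint32_t unused   : 6;
    uint32_t pfn      : 20;  /* page frame number, meaningful only if present */
} pte_t;
```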
Disadvantages of linear page tables
- for every address space, need to keep complete page table in memory that can map all virtual page numbers
-most virtual addresses are not used by the process & unused vpns don’t have a valid mapping, yet a linear page table still stores an (invalid) entry for every one of them
What kind of fragmentation occurs?
=> eliminates external fragmentation due to its fixed-size blocks
- internal fragmentation becomes a problem: memory can only be allocated at the coarse granularity of whole page frames, so the unused rest of the last allocated page can’t be used by other allocations and is wasted
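A tiny worked example of that waste, assuming 4 KiB page frames:

```c
#include <stdio.h>

#define PAGE_SIZE 4096u  /* assumed 4 KiB page frames */

int main(void) {
    unsigned request = 10000;                                  /* bytes requested */
    unsigned pages   = (request + PAGE_SIZE - 1) / PAGE_SIZE;  /* round up: 3 pages */
    unsigned wasted  = pages * PAGE_SIZE - request;            /* 2288 bytes lost */
    printf("%u pages allocated, %u bytes of internal fragmentation\n", pages, wasted);
    return 0;
}
```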
Page size trade-offs (consider fragmentation, table size and I/O)
Fragmentation:
- larger page => more memory wasted due to internal fragmentation for every allocation
- smaller page => less waste; on average only half a page is wasted per allocation
Table size:
- larger page => fewer bits needed for pfn, fewer PTEs
- smaller page => more and larger PTEs
I/O:
-larger => more data needs to be loaded from disk to make page valid
-smaller => need to trap to OS more often when loading large program
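A rough sketch of the table-size argument, assuming a 32-bit virtual address space and 4-byte PTEs (both are assumptions chosen for easy numbers):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t address_space = 1ull << 32;               /* assumed 32-bit address space */
    uint64_t pte_size      = 4;                        /* assumed 4-byte PTEs */
    uint64_t page_sizes[]  = { 1u << 12, 1u << 22 };   /* 4 KiB vs 4 MiB pages */

    for (int i = 0; i < 2; i++) {
        uint64_t entries = address_space / page_sizes[i];
        printf("page size %llu B -> %llu PTEs, %llu KiB of page table\n",
               (unsigned long long)page_sizes[i],
               (unsigned long long)entries,
               (unsigned long long)(entries * pte_size / 1024));
    }
    return 0;   /* 4 KiB pages: ~1M PTEs (4 MiB); 4 MiB pages: 1024 PTEs (4 KiB) */
}
```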
Linear inverted page table + and -
+ less overhead for page table metadata
- increases time needed to search the table when a page reference occurs
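A minimal sketch of why lookups are slow, using hypothetical types and a naive linear scan (real designs add a hash table on top): one entry per physical frame, searched by (pid, vpn).

```c
#include <stdint.h>

/* One entry per physical page frame; the array index IS the PFN. */
typedef struct {
    int      valid;
    uint32_t pid;   /* owning process */
    uint32_t vpn;   /* virtual page currently mapped into this frame */
} ipte_t;

#define NUM_FRAMES 4096  /* assumed number of physical frames */
static ipte_t inverted_table[NUM_FRAMES];

/* Returns the PFN for (pid, vpn), or -1 if unmapped -> page fault.
 * In the worst case the whole table is scanned on every reference. */
long lookup(uint32_t pid, uint32_t vpn) {
    for (long pfn = 0; pfn < NUM_FRAMES; pfn++) {
        if (inverted_table[pfn].valid &&
            inverted_table[pfn].pid == pid &&
            inverted_table[pfn].vpn == vpn)
            return pfn;
    }
    return -1;
}
```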
TLB name & idea
Translation lookaside buffer
Idea: add a cache that stores recent memory translations
TLB maps <vpn> to <pfn>
TLB hit
On every load/store, check whether the translation result is already cached in the TLB => TLB hit if it is
TLB miss
On every load/store, if the result isn’t already cached in the TLB, walk the page tables and insert the result into the TLB
Need to evict an entry from the TLB on a TLB miss (if the TLB is full)
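A minimal sketch of the hit/miss path, assuming a tiny fully associative TLB, round-robin eviction, and a hypothetical walk_page_table() helper (stubbed out here):

```c
#include <stdint.h>

#define TLB_ENTRIES 16
#define PAGE_SHIFT  12
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

typedef struct { int valid; uint32_t vpn, pfn; } tlb_entry_t;
static tlb_entry_t tlb[TLB_ENTRIES];
static int next_victim;                 /* trivial round-robin eviction policy */

/* Hypothetical stub: a real walk would follow the page table in memory. */
static uint32_t walk_page_table(uint32_t vpn) { return vpn; }

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;

    /* TLB hit: translation already cached, no page table walk needed */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].pfn << PAGE_SHIFT) | (vaddr & OFFSET_MASK);

    /* TLB miss: walk the page tables, evict an entry, cache the result */
    uint32_t pfn = walk_page_table(vpn);
    tlb[next_victim] = (tlb_entry_t){ .valid = 1, .vpn = vpn, .pfn = pfn };
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return (pfn << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
}
```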
Software vs hardware managed TLBs
software:
- OS receives TLB miss exception
- OS decides which entry to drop from TLB
- OS walks page tables to fill new TLB entry
- TLB entry format specified in instruction set architecture
hardware:
-Drop a TLB entry based on a policy encoded in hardware without involving the OS
- Walk page table in hardware to resolve address mapping
TLB reach (aka TLB coverage)
increase page size
provide multiple page sizes
increase tlb size
TLB reach = (number of TLB entries) x (page size)
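Worked example (assuming a 64-entry TLB and 4 KiB pages): TLB reach = 64 x 4 KiB = 256 KiB of memory reachable without a single TLB miss.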
Increase the page size:
+ fewer TLB entries needed to cover the same amount of memory
- increases internal fragmentation
Provide multiple page sizes:
+ allows apps that map larger memory areas to increase TLB coverage with minimal increase in fragmentation
Increase TLB size
- expensive