Paging Flashcards
A page table stores the mapping between…
virtual page numbers and page frame numbers
What is the purpose of the valid/present bit?
It indicates whether a virtual page is currently mapped to physical memory:
=> bit not set: a page fault is raised
=> bit set: the PTE can be used to derive the physical address of the page
What is a page fault?
If a process issues an instruction that accesses a virtual address which isn't currently mapped, the MMU traps into the OS so it can bring in the data
A virtual address is divided into:
- virtual page number (vpn): index into the page table, whose entries hold the base address of each page in physical memory
- page offset: concatenated with that base address to form the physical address
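A minimal sketch of the split, assuming 4 KiB pages (12 offset bits); the constants and helper names are illustrative, real MMUs do this in hardware:
```c
#include <stdint.h>

#define PAGE_SHIFT 12                      /* assumed page size: 4 KiB */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

static inline uint64_t vpn_of(uint64_t vaddr)    { return vaddr >> PAGE_SHIFT; }
static inline uint64_t offset_of(uint64_t vaddr) { return vaddr & (PAGE_SIZE - 1); }

/* "concatenating" the offset to the frame's base address */
static inline uint64_t phys_addr(uint64_t pfn, uint64_t offset) {
    return (pfn << PAGE_SHIFT) | offset;
}
```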
The OS's involvement in paging
- page allocation/bringing data into memory
- page replacement
- context switching
Page table entry content
-valid bit: aka present bit
-page frame number: if the page is present, the number of the physical frame in which the page is currently located
-write bit: if the page may be written to
-caching: if the page should be cached at all and with which policy
-accessed bit: set by the MMU if page was touched since the bit was last cleared by the OS
-dirty bit: set by the MMU if this page was modified since the bit was last cleared by the OS
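A C bit-field sketch of such a PTE; the field widths are made up for illustration, the real layout is fixed by the CPU architecture (e.g. x86-64):
```c
#include <stdint.h>

struct pte {
    uint64_t present  : 1;   /* valid bit: translation may be used */
    uint64_t writable : 1;   /* write bit: stores to this page are allowed */
    uint64_t cache    : 2;   /* caching policy (e.g. uncached / write-back) */
    uint64_t accessed : 1;   /* set by the MMU on any access */
    uint64_t dirty    : 1;   /* set by the MMU on a write */
    uint64_t pfn      : 40;  /* page frame number of the mapped frame */
    uint64_t unused   : 18;  /* padding to 64 bits in this sketch */
};
```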
Disadvantages of linear page tables
- for every address space, need to keep complete page table in memory that can map all virtual page numbers
- most virtual addresses are not used by the process, and these unused vpns don't have a valid mapping in the page table -> a linear table still reserves space for these invalid entries even though there is no need to store them
What kind of fragmentation occurs with paging?
=> paging eliminates external fragmentation due to its fixed-size blocks
- internal fragmentation becomes a problem: memory can only be allocated at page-frame granularity, so the unused rest of the last allocated page can't be used by other allocations and is lost (e.g. a 9 KiB allocation with 4 KiB pages occupies three frames and wastes 3 KiB of the third one)
Page size trade-offs (consider fragmentation, table size and I/O)
Fragmentation:
- larger page => more memory wasted due to internal fragmentation for every allocation
- smaller page => on average only half of the last page is wasted per allocation
Table size:
- larger page => fewer PTEs and fewer bits needed for the pfn
- smaller page => more and slightly larger PTEs
I/O:
- larger page => more data needs to be loaded from disk to make a page valid
- smaller page => need to trap into the OS more often when loading a large program
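A small worked calculation (assumed numbers: 32-bit virtual addresses, 4-byte PTEs, linear page table) illustrating how the page size drives the page table size:
```c
#include <stdio.h>

int main(void) {
    unsigned long long vaddr_space   = 1ULL << 32;                  /* 4 GiB of virtual addresses */
    unsigned long long pte_bytes     = 4;
    unsigned long long page_sizes[2] = { 4096, 2 * 1024 * 1024 };   /* 4 KiB vs. 2 MiB pages */

    for (int i = 0; i < 2; i++) {
        unsigned long long ptes = vaddr_space / page_sizes[i];
        printf("page size %llu B -> %llu PTEs -> %llu B of page table per process\n",
               page_sizes[i], ptes, ptes * pte_bytes);
    }
    return 0;   /* 4 KiB pages: 4 MiB of PTEs; 2 MiB pages: 8 KiB of PTEs */
}
```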
Linear inverted page table + and -
+ less overhead for page table meta data
- increases time needed to search the table when a page reference occurs
TLB name & idea
Translation lookaside buffer
Idea: add a cache that stores recent memory translations
TLB maps <vpn> to <pfn>
TLB hit
on every load/store check if translation result is already cached in TLB => TLB hit if available
TLB miss
On every load/store if result isn’t already cached in TLB, walk page tables and insert result into TLB
Need to evict an existing entry from the TLB on a miss if the TLB is full
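A toy, direct-mapped software TLB that mirrors the hit/miss logic above; the names and the identity-mapping page-table stub are made up for the sketch:
```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64

struct tlb_entry { uint64_t vpn; uint64_t pfn; bool valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Stub page-table walk for this sketch: identity-map every page. */
static uint64_t walk_page_tables(uint64_t vpn) { return vpn; }

uint64_t translate_vpn(uint64_t vpn) {
    unsigned slot = vpn % TLB_ENTRIES;                /* direct-mapped for simplicity */
    if (tlb[slot].valid && tlb[slot].vpn == vpn)      /* TLB hit: reuse cached translation */
        return tlb[slot].pfn;
    uint64_t pfn = walk_page_tables(vpn);             /* TLB miss: walk the page tables */
    tlb[slot] = (struct tlb_entry){ vpn, pfn, true }; /* evict whatever occupied the slot */
    return pfn;
}
```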
Software vs hardware managed TLBs
software:
- OS receives TLB miss exception
- OS decides which entry to drop from TLB
- OS walks page tables to fill new TLB entry
- TLB entry format specified in instruction set architecture
hardware:
-Drop a TLB entry based on a policy encoded in hardware without involving the OS
- Walk page table in hardware to resolve address mapping
TLB reach (aka TLB coverage)
TLB reach = (TLB size) x (page size) (see the worked example below)
Ways to increase TLB reach:
- increase the page size
- provide multiple page sizes
- increase the TLB size
Increase the page size:
+ fewer TLB entries needed to cover the same amount of memory
- increases internal fragmentation
Provide multiple page sizes:
+ allows apps that map larger memory areas to increase TLB coverage with minimal increase in fragmentation
Increase TLB size
- expensive
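Worked example (assumed numbers): a 64-entry TLB with 4 KiB pages reaches 64 x 4 KiB = 256 KiB of memory; the same TLB with 2 MiB pages reaches 64 x 2 MiB = 128 MiB.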
Explain the basic idea of paging
Paging allows each application to have its own contiguous (virtual) address space, while the physical space occupied by an application doesn't need to be contiguous. With paging, virtual memory is broken into fixed-size blocks, called pages, and physical memory is broken into fixed-size blocks of the same size, called frames. Each virtual page can be mapped to an arbitrary physical frame by the page table.
How is a virtual address translated to a physical address using a single-level page table?
Each virtual address is divided into two parts: a virtual page number (vpn) and a page offset. The vpn is used as an index into a page table. Each element of the page table is called a page table entry (PTE) and contains the frame number (pfn) of the frame to which the corresponding virtual page maps. To get the physical address, the page offset is concatenated to the frame number. This address is then passed to the memory controller.
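A minimal sketch of that lookup, assuming 32-bit addresses, 4 KiB pages and a hypothetical page_fault_handler; real hardware performs this walk itself:
```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12                       /* assumed page size: 4 KiB */
#define NUM_PAGES  (1u << 20)               /* 32-bit addresses with 4 KiB pages */

struct pte { uint32_t pfn; bool present; };
static struct pte page_table[NUM_PAGES];

/* Stub fault handler for this sketch; a real OS would map the page and retry. */
static void page_fault_handler(uint32_t vpn) { (void)vpn; }

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;             /* index into the page table */
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);
    struct pte *e   = &page_table[vpn];
    if (!e->present)
        page_fault_handler(vpn);                       /* valid bit clear -> page fault */
    return (e->pfn << PAGE_SHIFT) | offset;            /* concatenate pfn and offset */
}
```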
Multi-level page table; explain the idea, advantages & disadvantages
Instead of using a large, linear array as a page table, the page table consists of multiple levels. A virtual address is split into one index per level and an offset. The first index is used to obtain the address of the second-level table from the first-level table, the second index is used to obtain the address of the third-level table from the second-level table and so on. The last table in the hierarchy contains the actual PFN.
+space requirements for page tables can be reduced
-# of memory accesses required for translation increases with each additional level
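A two-level sketch with made-up field widths (10 + 10 index bits and a 12-bit offset, as on 32-bit x86 without PAE); each extra level adds one more memory access:
```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define LEVEL_BITS 10
#define LEVEL_MASK ((1u << LEVEL_BITS) - 1)

/* Level-1 table: each slot points to a level-2 table, or is NULL if the range is unmapped. */
static uint32_t *level1[1u << LEVEL_BITS];

uint32_t translate2(uint32_t vaddr) {
    uint32_t idx1   = (vaddr >> (PAGE_SHIFT + LEVEL_BITS)) & LEVEL_MASK;
    uint32_t idx2   = (vaddr >> PAGE_SHIFT) & LEVEL_MASK;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    uint32_t *level2 = level1[idx1];           /* 1st memory access: level-1 table */
    if (!level2)
        return 0;                              /* whole range unmapped: would raise a page fault */
    uint32_t pfn = level2[idx2];               /* 2nd memory access: level-2 table holds the PFN */
    return (pfn << PAGE_SHIFT) | offset;
}
```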
Inverted page table; explain the idea, advantages & disadvantages
The page table contains one entry per physical frame instead of one per virtual page. Each entry contains an identifier for the address space and the number of the virtual page that is mapped to the corresponding frame.
+ the size of the page table only depends on the amount of actually available physical memory and is expected to be smaller than a linear page table
- a lookup requires searching the table for the matching (address space, vpn) pair, which is slow without additional structures such as hashing
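A toy linear inverted table (field names and sizes are made up): the PFN is simply the index of the matching entry, which is why a plain search is slow:
```c
#include <stdint.h>

#define NUM_FRAMES 4096

struct ipt_entry { uint16_t asid; uint32_t vpn; };   /* address-space id + virtual page number */
static struct ipt_entry ipt[NUM_FRAMES];             /* exactly one entry per physical frame */

long ipt_lookup(uint16_t asid, uint32_t vpn) {
    for (long pfn = 0; pfn < NUM_FRAMES; pfn++)      /* scan every frame's entry */
        if (ipt[pfn].asid == asid && ipt[pfn].vpn == vpn)
            return pfn;
    return -1;                                       /* page not resident -> page fault */
}
```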
Hashed inverted page table; explain the idea, advantages & disadvantages
Instead of using the virtual page number as an index, the number is hashed first, and the resulting hash value is used as an index
+ reduce lookup costs
- collisions
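The same toy table with a hash anchor and collision chain in front of it; the hash function and sizes are invented for the sketch:
```c
#include <stdint.h>

#define NUM_FRAMES 4096

struct hipt_entry { uint16_t asid; uint32_t vpn; int32_t next; };  /* next: index of the next chained entry, -1 = end */
static struct hipt_entry hipt[NUM_FRAMES];
static int32_t hash_anchor[NUM_FRAMES];   /* hash value -> first candidate PFN; initialize all slots to -1 */

static uint32_t hash_vpn(uint16_t asid, uint32_t vpn) {
    return (vpn ^ (asid * 2654435761u)) % NUM_FRAMES;
}

long hipt_lookup(uint16_t asid, uint32_t vpn) {
    /* Only the collision chain behind the hashed slot is searched, not the whole table. */
    for (int32_t pfn = hash_anchor[hash_vpn(asid, vpn)]; pfn != -1; pfn = hipt[pfn].next)
        if (hipt[pfn].asid == asid && hipt[pfn].vpn == vpn)
            return pfn;
    return -1;                            /* not resident -> page fault */
}
```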
When are TLB miss/page fault handlers invoked?
- TLB/page table doesn’t contain a valid mapping for a requested page
- Invalid accesses to mapped pages (e.g. a write to a read-only page)
How can shared memory between two processes A and B be realized at the page table level?
The PTEs for the two pages in processes A and B are configured to map to the same physical frame, thus allowing both processes to work on the same memory.
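A sketch with two hypothetical per-process linear page tables; pointing both PTEs at the same frame is all the sharing needs:
```c
#include <stdbool.h>
#include <stdint.h>

struct pte { uint32_t pfn; bool present; };
static struct pte page_table_A[1u << 20];   /* linear page table of process A */
static struct pte page_table_B[1u << 20];   /* linear page table of process B */

void share_frame(uint32_t vpn_a, uint32_t vpn_b, uint32_t shared_pfn) {
    /* Both PTEs reference the same physical frame, so writes by A are visible to B. */
    page_table_A[vpn_a] = (struct pte){ .pfn = shared_pfn, .present = true };
    page_table_B[vpn_b] = (struct pte){ .pfn = shared_pfn, .present = true };
}
```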
Explain page replacement steps
- Save/clear the victim page
  - drop the page if it was fetched from disk and is clean
  - write back modifications if it came from disk and is dirty
  - write it to the page file / swap partition otherwise
- Unmap the page from the old address space
  - unset the valid bit in the PTE
  - flush the TLB
- Prepare the new page
  - map the page frame into the new address space
  - set the valid bit in the PTE
  - flush the TLB
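A sketch that mirrors the steps above; write_back_to_file, write_to_swap and flush_tlb are hypothetical stand-ins, not real kernel APIs:
```c
#include <stdbool.h>
#include <stdint.h>

struct pte { uint32_t pfn; bool present; bool dirty; bool file_backed; };

/* Stubs standing in for the I/O and MMU layers. */
static void write_back_to_file(uint32_t pfn) { (void)pfn; }
static void write_to_swap(uint32_t pfn)      { (void)pfn; }
static void flush_tlb(void)                  { }

void replace_page(struct pte *victim, struct pte *newcomer) {
    /* 1. Save/clear the victim page. */
    if (victim->file_backed) {
        if (victim->dirty)
            write_back_to_file(victim->pfn);   /* write back modifications */
        /* else: clean page fetched from disk -> simply drop it */
    } else {
        write_to_swap(victim->pfn);            /* otherwise: page file / swap partition */
    }

    /* 2. Unmap the page from the old address space. */
    victim->present = false;
    flush_tlb();

    /* 3. Prepare the new page: map the freed frame into the new address space. */
    newcomer->pfn     = victim->pfn;
    newcomer->present = true;
    newcomer->dirty   = false;
    flush_tlb();
}
```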
Frame Allocation Types
Local: only frames of the faulting process are considered for replacement
- isolates processes
- separately determine how many frames each process gets
Global: all frames are considered for replacement
- doesn’t consider page ownership
- one process can get another process’s frame
- doesn't protect a process from another process that hogs all memory
Fixed:
=> Equal: all processes get the same number of frames
=> Proportional: allocate according to the size of the process (see the worked example below)
Priority: proportional allocation scheme using priorities rather than size
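A common formulation of proportional allocation (with assumed numbers): each process i of size s_i gets a_i = s_i / S x m of the m available frames, where S is the total size of all processes. E.g. with m = 62 frames, s1 = 10 and s2 = 127 pages (S = 137), process 1 gets 10/137 x 62 ≈ 4 frames and process 2 gets 127/137 x 62 ≈ 57 frames.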