16. Paging Flashcards
What two things do we need to do in order to keep the TLB up-to-date?
Store information about each virtual page (page state) and locate that information quickly
What is a page table entry (PTE)?
A single entry storing information about a single virtual page used by a single process
What are the 4 parts of a page table entry (PTE) in a 32-bit machine word?
The location (physical page number or location on disk) is 20 bits.
The permissions (read/write/execute) are 3 bits.
The valid bit (“Is the page currently in memory?”) is 1 bit.
The “referenced” bit (“Has this page been read or written recently?”) is 1 bit.
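As a rough sketch only (the field names, ordering, and packing here are assumptions, not a real hardware format), the four parts could be packed into a 32-bit word with a C bitfield:

```c
/* Hypothetical PTE layout: 20 + 3 + 1 + 1 = 25 bits used, 7 spare.
 * Bitfield packing is compiler-dependent; this is just an illustration. */
struct pte {
    unsigned int location   : 20; /* physical page number, or location on disk */
    unsigned int permissions:  3; /* read/write/execute */
    unsigned int valid      :  1; /* is the page currently in memory? */
    unsigned int referenced :  1; /* has the page been read/written recently? */
    unsigned int unused     :  7; /* remaining bits of the 32-bit word */
};
```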
What are the requirements for how we locate page information?
Speed (translation is a hot path and should be quick) and compactness (the data structures should not take up much physical memory, since we want to use that memory for useful things)
What happens if a process tries to load or store data at a virtual address whose translation is not in the MMU?
The MMU has to ask the kernel for the page table entry for that virtual address.
What is a page table?
The data structure used to quickly map a virtual page number to a page table entry.
Why does each process have a separate page table?
Virtual addresses are private to each process and therefore need to be translated differently depending on the process
Describe the “flat” page table implementation.
Use one array to hold all page table entries for each process. Virtual page number is the index into this array.
This is an O(1) lookup for page table entries because the VPN is used directly as an index into the array.
The problem is that this is not a compact data structure. It requires 4 MB per process and has to be contiguous, even though most of the page table will be unused.
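A minimal sketch of the flat lookup in C, assuming 4K pages, a 20-bit VPN, and PTEs stored as plain 32-bit words (all names here are illustrative):

```c
#include <stdint.h>

#define PAGE_SHIFT 12            /* 4K pages */
#define NUM_VPAGES (1u << 20)    /* 2^20 virtual pages in a 32-bit address space */

/* One flat array per process: 2^20 entries * 4 bytes = 4 MB,
 * and it has to be contiguous even if most entries are unused. */
static uint32_t page_table[NUM_VPAGES];

uint32_t flat_lookup(uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;  /* virtual page number */
    return page_table[vpn];              /* O(1): the VPN is the array index */
}
```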
Describe the linked-list page table implementation.
The page table entries for a process are stored in a linked list that is searched through on each translation.
This is an O(n) lookup for n valid virtual pages; an O(1) lookup would be better.
The compactness of this data structure is optimal, with 4 bytes * n for n valid virtual pages, meaning all entries are used and the data does not have to be contiguous.
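A minimal sketch of the linked-list lookup in C (names are illustrative; note that each node in this sketch also carries a VPN and a next pointer, so a real list uses a bit more than 4 bytes per valid page):

```c
#include <stddef.h>
#include <stdint.h>

/* One node per valid virtual page in the process. */
struct pte_node {
    uint32_t vpn;             /* which virtual page this entry describes */
    uint32_t pte;             /* the page table entry itself */
    struct pte_node *next;
};

/* Walk the per-process list until the VPN matches: O(n) in the number
 * of valid virtual pages. Returns NULL if the page is not mapped. */
struct pte_node *list_lookup(struct pte_node *head, uint32_t vpn) {
    for (struct pte_node *cur = head; cur != NULL; cur = cur->next) {
        if (cur->vpn == vpn)
            return cur;
    }
    return NULL;
}
```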
Describe the multi-level page table implementation.
Build a tree-like data structure mapping virtual page number to page table entry. Break VPN into multiple parts, each of which is used as an index at a separate level of the tree.
Example: with 4K pages, a VPN is 20 bits. Use the top 10 bits as an index into the top-level page table and the bottom 10 bits as an index into a second-level page table. Each table then has 2^10 entries * 4 bytes = 4K, so every table fits in exactly one page (which is what we want).
This is O(c), where c is the constant number of lookups per translation, determined by the number of levels.
The compactness depends on the address space, but it is better than the flat page table (4 MB) and worse than the linked list (4 bytes * n valid virtual pages)
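A minimal sketch of the two-level lookup in C, assuming 4K pages, a 20-bit VPN split into two 10-bit indexes, and 32-bit PTEs (names are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12                  /* 4K pages: 20-bit VPN */
#define LEVEL_BITS 10                  /* split the VPN into two 10-bit indexes */
#define LEVEL_SIZE (1u << LEVEL_BITS)  /* 1024 entries * 4 bytes = 4K per table */

/* Top-level table: each slot points to a second-level table of PTEs,
 * or is NULL if that whole 4 MB region of the address space is unused. */
typedef uint32_t *top_table_t[LEVEL_SIZE];

/* Two lookups per translation: O(c) with c = number of levels. */
uint32_t *two_level_lookup(top_table_t top, uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    uint32_t top_index = vpn >> LEVEL_BITS;          /* top 10 bits of the VPN */
    uint32_t second_index = vpn & (LEVEL_SIZE - 1);  /* bottom 10 bits */

    uint32_t *second_level = top[top_index];
    if (second_level == NULL)
        return NULL;          /* no second-level table allocated: page unmapped */
    return &second_level[second_index];
}
```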
What happens if we run out of physical memory when storing pages?
Two options:
- Fail and either don’t load the process (exec()), don’t create a new process (fork()), refuse to allocate more heap (sbrk()), or kill the process (if it is trying to allocate more stack).
- Create more space by clearing part of memory, preserving the contents of that memory for later use (swapping)
What are the requirements for having the kernel remove memory from a process without the process knowing?
The last time the process used a virtual address, it behaved like memory;
The next time the process uses that same virtual address, it still behaves like memory;
And in between the last and next time the process uses that virtual address, whatever data was stored at that address must be preserved (stored) somewhere (disk)