23. Complete Virtual Memory Systems Flashcards
How was the VAX/VMS memory system constructed?
32-bit virtual address space with 512-byte pages, giving a 9-bit offset and a 23-bit VPN, the top 2 bits of which select a segment (so it's a hybrid of segmentation and paging). The lower half of the address space was known as "process space" and is unique to each process. The upper half was known as system space (S), and only half of it was used; protected OS code and data resided there.
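A minimal C sketch of how such an address splits apart (the example address is made up for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Split a 32-bit VAX-style virtual address: 512-byte pages give a 9-bit
 * offset; the remaining 23 bits are the VPN, whose top 2 bits select the
 * segment (00 = P0, 01 = P1, 10 = S). */
int main(void) {
    uint32_t va = 0x7FFFFE00u;               /* hypothetical P1 address */
    unsigned offset  = va & 0x1FFu;          /* low 9 bits              */
    unsigned vpn     = va >> 9;              /* remaining 23 bits       */
    unsigned segment = vpn >> 21;            /* top 2 bits of the VPN   */
    printf("segment=%u vpn=0x%06x offset=0x%03x\n",
           segment, vpn & 0x1FFFFFu, offset);
    return 0;
}
```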
How was process space constructed in VMS?
The first half (P0) held the user program and the heap, which grew downward. The second half (P1) held the stack, which grew upward.
How did VMS reduce the pressure that large page tables placed on memory?
1) Segmenting the user address space into P0 and P1, so no page-table space was needed for the unused region between heap and stack. 2) Placing the page tables for these segments in kernel virtual memory, so they could themselves be swapped to disk when memory got tight.
In the real VMS address space, what is a neat aspect of page 0?
The code segment does not start at page 0; instead, this page is marked inaccessible in order to provide support for detecting null-pointer accesses.
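A tiny user-space illustration of why leaving page 0 unmapped is useful; running it crashes with a segmentation fault instead of silently reading whatever happened to live at address 0:

```c
#include <stdio.h>

/* Because the page at address 0 is left unmapped, dereferencing a null
 * pointer faults: the hardware traps and the OS terminates the process. */
int main(void) {
    int *p = NULL;
    printf("about to dereference a null pointer...\n");
    printf("%d\n", *p);   /* faults here */
    return 0;
}
```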
What was the benefit of mapping the VMS OS into each process address space? What was implemented for proper protection?
It made the kernel's life easier: when handed a pointer from a user program (e.g., on a write()), the OS can copy data from it into its own structures directly, because kernel and user mappings coexist in one address space. For protection, each page-table entry had protection bits specifying which privilege level the CPU must be in to access the page.
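A hedged sketch of a VAX/VMS page-table entry as a C bitfield, following the fields the chapter describes (valid bit, 4-bit protection field, modify bit, 5 OS-reserved bits, 21-bit PFN); the exact bit positions are illustrative, not taken from the hardware manual:

```c
#include <stdint.h>
#include <stdio.h>

struct vax_pte {
    uint32_t pfn     : 21;  /* physical frame number                  */
    uint32_t os_use  : 5;   /* reserved for OS use                    */
    uint32_t modify  : 1;   /* set by hardware on a write (dirty bit) */
    uint32_t protect : 4;   /* required CPU privilege level           */
    uint32_t valid   : 1;   /* is the translation valid?              */
};

int main(void) {
    /* 21+5+1+4+1 = 32 bits, so the PTE packs into one word. */
    printf("PTE size: %zu bytes\n", sizeof(struct vax_pte));
    return 0;
}
```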
Since the VMS hardware didn't have a reference bit, how was it determined which page to evict?
The replacement policy was called segmented FIFO. Each process had a maximum number of pages it could keep in memory, known as its resident set size (RSS). Each of those pages is kept on a per-process FIFO list; when a process exceeds its RSS, the first-in page is evicted.
VMS also introduced second-chance lists, where evicted pages are placed before their frames are actually reused.
Explain second-chance lists in VMS.
Two global lists were used: a clean-page free list and a dirty-page list. When a process exceeds its RSS, the evicted page is removed from its per-process FIFO; if clean, it is placed at the tail of the clean-page free list, if dirty, at the tail of the dirty-page list. When another process needs a free page, it takes the first page off the clean list. However, if the original process faults on an evicted page before its frame is reclaimed, it simply grabs the page back from the clean or dirty list, avoiding a disk access.
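A toy, self-contained C simulation of segmented FIFO with second-chance lists; all sizes, names, and output are invented for illustration:

```c
#include <stdbool.h>
#include <stdio.h>

#define NPAGES  8         /* pages owned by one process     */
#define RSS_MAX 3         /* resident-set-size limit        */

enum where { RESIDENT, CLEAN_LIST, DIRTY_LIST, ON_DISK };
struct page { int id; bool dirty; enum where loc; };

static struct page pages[NPAGES];
static int fifo[RSS_MAX], fifo_len = 0;   /* per-process FIFO of page ids */

/* Bring page 'id' into the resident set, evicting FIFO-first if needed. */
static void touch(int id, bool write) {
    struct page *pg = &pages[id];
    if (pg->loc == CLEAN_LIST || pg->loc == DIRTY_LIST)
        printf("page %d reclaimed from second-chance list (no disk I/O)\n", id);
    else if (pg->loc == ON_DISK)
        printf("page %d read from disk\n", id);

    if (pg->loc != RESIDENT) {
        if (fifo_len == RSS_MAX) {                 /* over RSS: evict oldest */
            struct page *victim = &pages[fifo[0]];
            victim->loc = victim->dirty ? DIRTY_LIST : CLEAN_LIST;
            printf("page %d moved to %s list\n", victim->id,
                   victim->dirty ? "dirty" : "clean");
            for (int i = 1; i < fifo_len; i++) fifo[i - 1] = fifo[i];
            fifo_len--;
        }
        fifo[fifo_len++] = id;
        pg->loc = RESIDENT;
    }
    if (write) pg->dirty = true;
}

int main(void) {
    for (int i = 0; i < NPAGES; i++) pages[i] = (struct page){ i, false, ON_DISK };
    touch(0, false); touch(1, true); touch(2, false);
    touch(3, false);          /* evicts page 0 -> clean list            */
    touch(4, false);          /* evicts page 1 -> dirty list            */
    touch(1, false);          /* reclaimed cheaply from the dirty list  */
    return 0;
}
```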
What trick does VMS use related to the number 0?
Demand zeroing. Instead of immediately finding a physical page, zeroing it, and adding it to the page table, the OS just puts an entry into the page table and marks the page inaccessible. Only when the page is actually accessed does a trap to the OS occur; at that point the page is found, zeroed, and mapped in. If the page is never touched, all of that work is saved.
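The same idea is visible from user space on Linux: anonymous mmap() regions are demand-zeroed, so the sketch below pays for pages only as they are touched (this is not VMS code, just an illustration of the technique):

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 64 * 1024 * 1024;   /* 64 MB, mostly never touched */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* First touch of a page: page fault -> kernel zeroes a frame, maps it. */
    p[0] = 42;
    printf("p[0]=%d, p[4096]=%d (still zero, never written)\n", p[0], p[4096]);

    munmap(p, len);
    return 0;
}
```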
What is trick with VMS related to cows?
Copy-on-Write (COW). When the OS needs to copy a page from one address space to another, instead of copying it, it just maps the page into the target address space and marks it read-only in both spaces. If either process then tries to write to that page, it traps into the OS, and only then does the actual copy (into a new, writable page) occur.
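A small illustration using fork(), which relies on copy-on-write on modern systems: the child's write triggers the real copy, so the parent's version of the page is untouched (this only shows the semantics COW enables, it doesn't prove how the kernel implements it):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    static char buf[4096] = "original";

    pid_t pid = fork();
    if (pid == 0) {                        /* child: writing triggers the copy */
        snprintf(buf, sizeof(buf), "changed by child");
        printf("child  sees: %s\n", buf);
        exit(0);
    }
    wait(NULL);
    printf("parent sees: %s\n", buf);      /* parent's page was never modified */
    return 0;
}
```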
Initially, how was the Linux address space constructed?
A 32-bit address space, split 3:1: the lower 3 GB is used for user code and data, and the upper 1 GB for the kernel. Page 0 is also marked inaccessible.
What types of kernel virtual addresses are there in Linux?
Kernel logical addresses - the normal kernel virtual address space, obtained with kmalloc(). Most kernel data structures live here, and this memory cannot be swapped to disk. The most interesting aspect is the direct mapping between kernel logical addresses and the first portion of physical memory. This direct mapping makes translation simple and makes such memory suitable for operations that require physically contiguous chunks (such as DMA).
Kernel virtual addresses - no direct mapping to physical memory, so the backing pages need not be physically contiguous; obtained with vmalloc(). These addresses also enabled the 32-bit kernel to address more than roughly 1 GB of memory.
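A minimal kernel-module sketch contrasting the two allocators, assuming a standard Linux module build environment (illustrative only, not taken from any real driver):

```c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>      /* kmalloc/kfree */
#include <linux/vmalloc.h>   /* vmalloc/vfree */

static void *small_buf;   /* logical address: physically contiguous, DMA-friendly */
static void *big_buf;     /* virtual address: large, purely software buffer       */

static int __init alloc_demo_init(void) {
    small_buf = kmalloc(4096, GFP_KERNEL);
    big_buf   = vmalloc(4 * 1024 * 1024);
    if (!small_buf || !big_buf) {
        kfree(small_buf);
        vfree(big_buf);
        return -ENOMEM;
    }
    pr_info("kmalloc at %p, vmalloc at %p\n", small_buf, big_buf);
    return 0;
}

static void __exit alloc_demo_exit(void) {
    kfree(small_buf);
    vfree(big_buf);
}

module_init(alloc_demo_init);
module_exit(alloc_demo_exit);
MODULE_LICENSE("GPL");
```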
Is Linux 32 bit or 64 bit?
64-bit, but only the bottom 48 bits of the virtual address are used for translation.
What does the Linux memory system use for paging?
It uses a four-level multi-level page table.
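A short C sketch of how a 48-bit x86-64 virtual address is split for the four-level walk with 4 KB pages: 9 index bits per level (Linux calls the levels PGD/PUD/PMD/PTE) plus a 12-bit page offset; the example address is arbitrary:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t va = 0x00007f1234567abcULL;     /* example user address */
    unsigned offset = va & 0xFFF;            /* bits 11..0  */
    unsigned pte    = (va >> 12) & 0x1FF;    /* bits 20..12 */
    unsigned pmd    = (va >> 21) & 0x1FF;    /* bits 29..21 */
    unsigned pud    = (va >> 30) & 0x1FF;    /* bits 38..30 */
    unsigned pgd    = (va >> 39) & 0x1FF;    /* bits 47..39 */
    printf("pgd=%u pud=%u pmd=%u pte=%u offset=0x%x\n",
           pgd, pud, pmd, pte, offset);
    return 0;
}
```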
What is the page size in Linux?
4 KB. However, Linux also allows the use of "huge pages" of 2 MB and even 1 GB. They are more efficient because each TLB entry covers far more memory (so fewer TLB misses) and the TLB-miss path is shorter (fewer levels to walk).
Do huge pages need to be enabled explicitly?
At first, applications that wanted huge pages had to request them explicitly (via flags to calls such as mmap() or shmget()). Nowadays, with transparent huge pages, the kernel uses them automatically when it notices the opportunity.
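A hedged user-space sketch of both routes on Linux: an explicit MAP_HUGETLB mapping (which fails unless the administrator has reserved huge pages) and an madvise(MADV_HUGEPAGE) hint for transparent huge pages; error handling is trimmed for brevity:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 2 * 1024 * 1024;    /* one 2 MB huge page */

    /* 1) Explicit request: needs hugetlbfs pages reserved ahead of time. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED)
        perror("explicit MAP_HUGETLB");

    /* 2) Transparent huge pages: a normal mapping plus a hint; the kernel
     *    may back it with 2 MB pages on its own. */
    void *q = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (q != MAP_FAILED && madvise(q, len, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");

    return 0;
}
```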