Exam 2 Practice Questions Flashcards
Why is physical memory typically smaller than virtual memory?
Physical memory is limited to the size of the RAM used in a system. Virtual memory is stored on disk and is typically larger.
Today, typically a virtual address is 32 or 64 bits. 32 bits allows each process to have 2^32 bytes or 4GB of virtual memory.
What is the problem that virtual memory solves?
We often have programs that are too large to fit into physical memory. Or – we might want to fit multiple programs into memory at once.
So, the basic idea behind virtual memory (Fotheringham, 1961) is that each program has its own address space, which is broken up into chunks called pages. Each page is a contiguous range of addresses. These pages are mapped onto physical memory, but not all pages have to be in physical memory at the same time to run the program. When the program references a part of its address space that is in physical memory, the hardware performs the necessary mapping on the fly.
Virtual memory works in a multiprogramming system, with bits and pieces of many programs in memory at once. While a program is waiting for pieces of itself to be read in, the CPU can be given to another process.
Describe the paging architecture using this image.
On the left, we see the hardware on our motherboard where we have the CPU and MMU (or memory management unit).
An assembly-language instruction such as MOV R, M causes the CPU to move a value from its register R into memory location M.
M is initially a virtual address. The role of the MMU is to translate this virtual address into a physical address; only then can the value at M be accessed in the physical memory frames shown in the middle.
The frames are the same size as virtual memory pages. They are analogous to spaces in a parking lot: a page is a car, and a frame is a space to park it in.
Last, we look at the disk controller and the swap file. The swap partition is like spillover space. Memory pages that cannot fit into physical memory spill over to the swap file, which is accessed through the disk controller.
If you design a new processor architecture with 16-bit addresses, what is the total size of the virtual address space? (Think back to LC-4!!)
65536 Bytes
2^16 bytes because we can address 2^16 different byte addresses with 16 bits
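This arithmetic can be checked directly (a minimal sketch in Python):

```python
# Sketch: size of a 16-bit virtual address space.
ADDRESS_BITS = 16
space_bytes = 2 ** ADDRESS_BITS   # each distinct address names one byte
print(space_bytes)                # 65536 bytes = 64 KB
```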
In a 32 bit computer with a page size of 4KB, what is the number of offset bits in the virtual address?
12
the page size is 4 KB (2^12 bytes)
In a 32-bit computer with a page size of 4KB, how many pages does the virtual memory have?
1048576
The computer is 32-bit, which means the size of the virtual address space is 2^32 bytes. The page size is 4 KB, which is 2^12 bytes. So the number of pages is 2^32 / 2^12 = 2^20 = 1048576. In other words, 20 bits of the memory address are used to represent the page number.
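The same split can be computed in a few lines (the constants are just the values from the question):

```python
# Sketch: deriving offset bits and page count for a 32-bit machine with 4 KB pages.
ADDRESS_BITS = 32
PAGE_SIZE = 4 * 1024                       # 4 KB = 2^12 bytes

offset_bits = PAGE_SIZE.bit_length() - 1   # 12 bits address a byte within a page
page_bits = ADDRESS_BITS - offset_bits     # remaining 20 bits select the page
num_pages = 2 ** page_bits

print(offset_bits, page_bits, num_pages)   # 12 20 1048576
```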
Briefly describe how paging and offset work.
During paging, the virtual page number is extracted from the virtual address and used to index into the page table. If the page is currently in physical memory, the page table entry contains the number of its physical frame. That frame is then retrieved, and the offset points to the specific word within the frame that corresponds to the virtual address.
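A minimal sketch of this lookup, assuming 4 KB pages and a made-up page table (the dict entries are hypothetical, not from the course material):

```python
# Single-level translation sketch: split the virtual address into
# (page number, offset), map page -> frame, copy the offset through.
PAGE_SIZE = 4096                      # 4 KB pages -> 12 offset bits
OFFSET_BITS = 12

page_table = {0: 5, 1: 3, 7: 2}       # virtual page -> physical frame (hypothetical)

def translate(vaddr):
    page = vaddr >> OFFSET_BITS       # high bits: virtual page number
    offset = vaddr & (PAGE_SIZE - 1)  # low bits: offset within the page
    if page not in page_table:
        raise LookupError("page fault: page %d not in memory" % page)
    frame = page_table[page]
    return (frame << OFFSET_BITS) | offset  # frame number, then copied offset

print(hex(translate(0x1ABC)))         # page 1 -> frame 3, offset 0xABC -> 0x3abc
```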
Given the page table and virtual address, what is the physical address after the table translation?
10100101000010
There are 16 rows in the page table, so the first 4 bits of the virtual address are shaded as the index bits. Binary 1001 is equal to 9 in decimal, and the entry at index 9 gives frame 101, so the first three bits of the physical address are 101. The offset bits are copied directly from the virtual address. Therefore the physical address is 10100101000010.
Given the page table of process A and the swap file, please enter the page numbers mapped to each block in the swap file correspondingly.
free, A(1), free, A(2)
Based on the bitmap, the first and third blocks are empty, so they should be filled with “free”. In the page table, page 1 is mapped to block B1 in the swap file and page 2 is mapped to block B3. Therefore the answer is free, A(1), free, A(2).
Multi-Level Paging
- Describe the problem that multi-level paging solves.
- Describe how multi-level paging works.
- In a 32-bit computer with a 4KB page size, we need a page table with 2^20 entries. This is too big! And the problem is even worse on 64-bit computers.
- Insight: make page tables hierarchical, load only needed portions to memory.
Break virtual page number into 10-bit chunks:
–first 10 bits index into a top-level page-entry table T1 (1024 entries)
–each entry in T1 points to another (second-level) page table with its own 1K entries, covering 4 MB of memory since each of those entries maps a 4 KB page
–next 10-bits of virtual address index into the second level page table selected by the first 10 bits
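The two-level walk described above can be sketched like this; the top_level and second_level tables are hypothetical examples, and a missing entry would trigger a page fault in a real system:

```python
# Two-level page-table walk on 32-bit addresses (10 + 10 + 12 bit split).
# Only second-level tables that are actually in use need to exist.
OFFSET_BITS = 12

second_level = {3: 42}                    # index within T2 -> frame (hypothetical)
top_level = {1: second_level}             # index within T1 -> second-level table

def walk(vaddr):
    top = (vaddr >> 22) & 0x3FF           # first 10 bits index T1
    mid = (vaddr >> 12) & 0x3FF           # next 10 bits index the chosen T2
    offset = vaddr & 0xFFF                # last 12 bits: byte within the page
    t2 = top_level[top]                   # would page-fault if missing
    frame = t2[mid]
    return (frame << OFFSET_BITS) | offset

vaddr = (1 << 22) | (3 << 12) | 0x123     # top=1, mid=3, offset=0x123
print(hex(walk(vaddr)))                   # frame 42 -> 0x2a123
```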
In a 32-bit operating system with an 8 KB page size, there are 8 bits in the top-level table and 10 bits in the second-level table. If a process uses 128 MB of virtual memory, how many entries will it use in the top-level table?
16
There are 10 bits in the second-level table and the page size is 8 KB. So, each second-level table can index 8 MB memory. The process needs 128 MB memory, so it needs to be able to index 16 second-level tables. Therefore, there will be 16 entries in the top-level page table.
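The arithmetic behind this answer, checked in Python:

```python
# 8 KB pages, 10-bit second-level index: how much memory does one
# second-level table cover, and how many such tables does 128 MB need?
PAGE_SIZE = 8 * 1024                           # 8 KB
SECOND_LEVEL_ENTRIES = 2 ** 10                 # 1024 entries per second-level table

per_table = SECOND_LEVEL_ENTRIES * PAGE_SIZE   # memory covered by one table
process_mem = 128 * 1024 * 1024                # 128 MB

print(per_table // (1024 * 1024))              # 8 (MB per second-level table)
print(process_mem // per_table)                # 16 top-level entries needed
```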
What is a downside to multi-level paging?
While this approach is memory efficient, one issue is that instead of one memory access we now need multiple memory accesses per translation. This is one potential downside of multi-level paging. In fact, the top-level and middle-level directories may themselves sometimes be stored on disk, although if they are used frequently they are likely to remain resident in physical memory.
- Describe what problem Translation Lookaside Buffers (TLB) solve.
- Describe the TLB solution.
- Problem: page tables are stored in main memory, and access to main memory is about 10 times slower than the clock cycle of the CPU.
- TLB in MMU:
- TLB stores small cache of page table entries to avoid the usual page-table lookup
- TLB is associative memory (content addressable memory) that contains pairs of the form (page-no, page-frame)
- special hardware compares incoming page number in parallel with all entries in TLB to retrieve page-frame
- if no match found in TLB, we do standard lookup via main memory
- like hardware cache for MMU
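A rough sketch of the TLB fast path; a Python dict stands in for the parallel associative hardware, and the table entries are made-up values (real TLBs also evict entries, which is not modeled here):

```python
# TLB sketch: check the small cache first, fall back to the full
# page table in (slow) main memory, and cache the translation.
tlb = {4: 9, 7: 1}                 # (page-no -> page-frame), small cache
page_table = {4: 9, 7: 1, 5: 6}    # full table, lives in slow main memory

def lookup(page):
    if page in tlb:                # fast path: hardware compares all entries at once
        return tlb[page], "TLB hit"
    frame = page_table[page]       # slow path: standard lookup via main memory
    tlb[page] = frame              # cache the translation for next time
    return frame, "TLB miss"

print(lookup(4))                   # (9, 'TLB hit')
print(lookup(5))                   # (6, 'TLB miss')
print(lookup(5))                   # (6, 'TLB hit') after caching
```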
What is one solution to the problem discussed about multi-level paging?
Inverted Page Table:
- the inverted page table is bounded by physical memory size (one entry per frame, rather than one per virtual page)
- popular on 64-bit machines
What are the steps involved in paging?
- input to MMU: virtual address = (page p, offset o)
- check if there is a frame f with (p,f) in TLB
- if so, physical address is (f, o)
- if not, look up the page table in main memory
- - requires slow accesses due to multi-level paging
- if page is present, compute physical address
- if not, issue page fault interrupt to OS kernel
- - even slower
- update TLB / page-table entries (e.g. Modified bit)
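The steps above can be sketched end to end; tlb, page_table, and load_from_disk are hypothetical stand-ins for MMU state and the OS fault handler:

```python
# Paging steps: TLB check, then page-table lookup, then page fault.
OFFSET_BITS = 12
tlb = {}
page_table = {2: 7}                      # pages currently resident in memory

def load_from_disk(p):                   # page-fault handler stand-in;
    return 1                             # pretend the page lands in frame 1

def access(p, o):
    if p in tlb:                         # 1. check TLB for (p, f)
        return (tlb[p] << OFFSET_BITS) | o
    if p in page_table:                  # 2. slow lookup in main memory
        tlb[p] = page_table[p]           #    refill the TLB
        return (page_table[p] << OFFSET_BITS) | o
    frame = load_from_disk(p)            # 3. page fault: OS brings page in (even slower)
    page_table[p] = frame
    tlb[p] = frame
    return (frame << OFFSET_BITS) | o

print(hex(access(2, 0x34)))              # resident: frame 7 -> 0x7034
print(hex(access(9, 0x10)))              # fault, then frame 1 -> 0x1010
```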
Why is swapping a page an expensive process?
Swapping is an expensive process and may even block the current process. If a page in memory is “dirty” (i.e., it contains recent writes) and is selected for eviction, its changes must first be written to disk before the new page can be brought into memory.
Why is the FIFO replacement scheme not a theoretically optimal algorithm?
According to the FIFO scheme, on a page fault the page that has been in memory the longest is replaced. This is not an optimal algorithm; in fact FIFO suffers from Bélády’s anomaly, the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. With FIFO, the number of page faults may increase or decrease as the number of frames grows, whereas with the optimal algorithm, adding frames never increases the number of faults.
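Bélády’s anomaly can be demonstrated by simulating FIFO on the classic reference string from Bélády’s example: with this input, adding a fourth frame increases the fault count.

```python
# FIFO page replacement: count faults for a given reference string
# and frame count. Demonstrates Belady's anomaly.
from collections import deque

def fifo_faults(refs, nframes):
    frames, faults = deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()       # evict the longest-resident page
            frames.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))            # 9 faults with 3 frames
print(fifo_faults(refs, 4))            # 10 faults with 4 frames
```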
Describe how the Second Chance Algorithm looks for a candidate to evict.
The second chance algorithm examines pages in FIFO order, looking for the first page with its reference bit set to 0; that page is the candidate for eviction. If it finds a page with the reference bit set to 1, it clears the bit to 0 and moves the page to the back of the queue, as if it had just arrived.
What is an improvement on the Second Chance Algorithm?
Clock Algorithm
- keep a circular list with current pointer
- if the current page has R=0, replace it; else set R to 0 and advance the current pointer
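A sketch of one eviction step of the clock algorithm, assuming pages is a circular list of (page, R-bit) entries and hand is the current pointer:

```python
# Clock algorithm: sweep the circular list, clearing R bits until a
# page with R=0 is found; that page is the victim.
def clock_evict(pages, hand):
    """Return (victim index, new hand). pages: list of [page, R] entries."""
    while True:
        if pages[hand][1] == 0:            # R=0: this page is the victim
            victim = hand
            hand = (hand + 1) % len(pages)
            return victim, hand
        pages[hand][1] = 0                 # R=1: clear it (second chance)
        hand = (hand + 1) % len(pages)     # advance without reordering the list

pages = [["A", 1], ["B", 0], ["C", 1]]
victim, hand = clock_evict(pages, 0)
print(pages[victim][0])                    # B is evicted; A's R bit was cleared
```

Unlike second chance, no pages are moved to the back of a queue; only the pointer advances, which is why this variant is cheaper.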
In LRU (least recently used), what is the page that was most recently accessed? Least recently accessed? (stack algorithm)
Most: the page with the highest value of the 8-bit counter
Least: the page with the lowest value of the 8-bit counter
(the least is evicted)
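This question refers to the aging approximation of LRU: on each clock tick, every page's 8-bit counter shifts right and the page's reference bit is shifted in at the high end. A small sketch (the pages and reference pattern are invented for illustration):

```python
# Aging (LRU approximation): recently referenced pages accumulate high
# counter values; the page with the lowest counter is the eviction victim.
counters = {"A": 0, "B": 0, "C": 0}

def tick(referenced):
    for page in counters:
        r = 1 if page in referenced else 0
        counters[page] = (counters[page] >> 1) | (r << 7)  # 8-bit counter

tick({"A", "B"})        # tick 1: A and B referenced
tick({"A"})             # tick 2: only A referenced
tick({"A", "C"})        # tick 3: A and C referenced

most = max(counters, key=counters.get)
least = min(counters, key=counters.get)
print(most, least)      # A (highest counter) is most recent; B (lowest) is evicted
```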
Briefly describe I/O Architecture.
In user space, you have your application, and the API exposed to it is the virtual file system API.
Underneath that is the actual file system, which translates all reads and writes on files into requests for the device driver; the device driver performs another level of conversion into actual bytes to be sent to and received from the external device.
The device driver interacts with the device controller through the device registers and data buffers. For example, the driver can read the controller's status value to determine whether the external device, such as a hard disk, is ready to accept new input. Any data written to or read from the disk is first buffered in the controller's data buffers and can then be read by the device driver itself.