Paging Flashcards

1
Q

A page table stores mapping between…

A

virtual page numbers and page frame numbers

2
Q

What is the purpose of the valid/present bit?

A

it indicates if a virtual page is currently mapped to physical memory or not
=> bit is not set: page fault is raised
=> bit is set: PTE can be used for deriving the physical address of the page

3
Q

What is a page fault?

A

If a process issues an instruction that accesses a virtual address which isn’t currently mapped, the MMU raises an exception and the OS handles it by bringing the data into memory

4
Q

Virtual address is divided into:

A

-virtual page number: index into the page table, which contains the base address of each page in physical memory
-page offset: concatenated with the base address, it yields the physical address

5
Q

The OS’s involvement in paging

A
  • page allocation/bringing data into memory
  • page replacement
  • context switching
6
Q

Page table entry content

A

-valid bit: aka present bit
-page frame number: if the page is present, at which physical address the page is currently located
-write bit: whether the page may be written to
-caching: whether the page should be cached at all and with which policy
-accessed bit: set by the MMU if page was touched since the bit was last cleared by the OS
-dirty bit: set by the MMU if this page was modified since the bit was last cleared by the OS
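A hypothetical bit layout makes these fields concrete. The field positions below (valid in bit 0, write in bit 1, accessed/dirty in bits 5–6, PFN from bit 12) are illustrative assumptions, not any particular architecture’s format:

```python
# Hypothetical 32-bit PTE layout (field positions are illustrative):
# bit 0 = valid/present, bit 1 = write, bit 5 = accessed,
# bit 6 = dirty, bits 12..31 = page frame number.
PRESENT, WRITE, ACCESSED, DIRTY = 1 << 0, 1 << 1, 1 << 5, 1 << 6

def make_pte(pfn, present=True, writable=False):
    """Pack a page frame number and flag bits into one PTE word."""
    pte = pfn << 12
    if present:
        pte |= PRESENT
    if writable:
        pte |= WRITE
    return pte

def pfn_of(pte):
    """Extract the page frame number if the valid bit is set."""
    if not (pte & PRESENT):
        raise RuntimeError("page fault: valid bit not set")
    return pte >> 12

pte = make_pte(0x1A3, writable=True)
assert pfn_of(pte) == 0x1A3
assert pte & WRITE
```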

7
Q

Disadvantages of linear page tables

A
  • for every address space, a complete page table that can map all virtual page numbers must be kept in memory
    -most virtual addresses are not used by the process, and unused VPNs have no valid mapping in the page table -> no need to store these invalid entries
8
Q

What kind of fragmentation occurs with paging?

A

=> paging eliminates external fragmentation thanks to its fixed-size blocks
- internal fragmentation becomes a problem: memory can only be allocated at page-frame granularity, so the unused remainder of the last allocated page can’t be used by other allocations and is lost

9
Q

Page size trade-offs (consider fragmentation, table size and I/O)

A

Fragmentation:
- larger page => more memory wasted due to internal fragmentation per allocation
- smaller page => on average only half a page wasted per allocation
Table size:
- larger page => fewer PTEs, and fewer bits needed for the pfn
- smaller page => more and larger PTEs
I/O:
- larger page => more data must be loaded from disk to make a page valid
- smaller page => need to trap to the OS more often when loading a large program
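The table-size and fragmentation trade-offs can be put into rough numbers; the 32-bit address space and page sizes below are illustrative assumptions:

```python
# Rough arithmetic behind the page-size trade-offs, assuming a
# 32-bit virtual address space (sizes are illustrative).
va_bits = 32

def table_entries(page_size):
    # a linear page table needs one PTE per virtual page
    return 2 ** va_bits // page_size

def expected_waste(page_size):
    # internal fragmentation: on average half the last page is unused
    return page_size // 2

# 4 KiB pages vs 2 MiB pages:
assert table_entries(4096) == 1 << 20            # ~1M PTEs
assert table_entries(2 * 1024 * 1024) == 2048    # far fewer PTEs
assert expected_waste(2 * 1024 * 1024) > expected_waste(4096)
```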

10
Q

Linear inverted page table + and -

A

+ less overhead for page-table metadata
- increases the time needed to search the table when a page reference occurs

11
Q

TLB name & idea

A

Translation lookaside buffer
Idea: add a cache that stores recent memory translations
The TLB maps VPNs to PFNs

12
Q

TLB hit

A

On every load/store, check whether the translation is already cached in the TLB; if it is, that’s a TLB hit

13
Q

TLB miss

A

On every load/store, if the translation isn’t already cached in the TLB, walk the page tables and insert the result into the TLB
If the TLB is full, an existing entry must be evicted on a miss
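The hit/miss flow from the last two cards can be sketched as a tiny software TLB; the FIFO eviction policy and the capacity are illustrative choices, not what real hardware does:

```python
from collections import OrderedDict

# Minimal software TLB sketch: an OrderedDict used as a small fully
# associative cache with FIFO eviction; page_table is a plain dict
# standing in for a real page-table walk.
class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # vpn -> pfn
        self.hits = self.misses = 0

    def translate(self, vpn, page_table):
        if vpn in self.entries:        # TLB hit
            self.hits += 1
            return self.entries[vpn]
        self.misses += 1               # TLB miss: walk the page table
        pfn = page_table[vpn]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[vpn] = pfn
        return pfn

pt = {0: 7, 1: 3, 2: 9}
tlb = TLB(capacity=2)
tlb.translate(0, pt); tlb.translate(0, pt); tlb.translate(1, pt)
assert (tlb.hits, tlb.misses) == (1, 2)
```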

14
Q

Software vs hardware managed TLBs

A

software:
- OS receives TLB miss exception
- OS decides which entry to drop from TLB
- OS walks page tables to fill new TLB entry
- TLB entry format specified in instruction set architecture
hardware:
-Drop a TLB entry based on a policy encoded in hardware without involving the OS
- Walk page table in hardware to resolve address mapping

15
Q

TLB reach (aka TLB coverage): definition, and the trade-offs of increasing the page size, providing multiple page sizes, and increasing the TLB size

A

TLB reach = (TLB size) x (Page size)
Increase the page size:
+ fewer TLB entries needed to cover the same amount of memory
- increases internal fragmentation
Provide multiple page sizes:
+ allows apps that map larger memory areas to increase TLB coverage with minimal increase in fragmentation
Increase TLB size
- expensive
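The reach formula in numbers; the entry count and page sizes are just example values:

```python
# TLB reach = number of TLB entries x page size
def tlb_reach(entries, page_size):
    return entries * page_size

# 64 entries of 4 KiB pages cover only 256 KiB ...
assert tlb_reach(64, 4 * 1024) == 256 * 1024
# ... while the same 64 entries of 2 MiB pages cover 128 MiB
assert tlb_reach(64, 2 * 1024 * 1024) == 128 * 1024 * 1024
```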

16
Q

Explain the basic idea of paging

A

Paging allows each application to have its own contiguous (virtual) address space, while the physical memory occupied by an application doesn’t need to be contiguous. With paging, virtual memory is broken into fixed-size blocks, called pages, and physical memory is broken into fixed-size blocks of the same size, called frames. Each virtual page can be mapped to an arbitrary physical frame by the page table.

17
Q

How is a virtual address translated to a physical address using a single-level page table?

A

Each virtual address is divided into two parts: A page number (vpn) and a page offset. vpn is used as an index in a page table. Each element in the page table is called a page table entry (pte) and contains the frame number (pfn) of the frame to which the corresponding virtual page maps. To get the physical address, the page offset is concatenated to the frame number. This address is passed to the memory controller.
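A minimal sketch of this translation, assuming 4 KiB pages (12 offset bits) and a plain list as the page table; both are illustrative choices:

```python
# Single-level translation sketch: 4 KiB pages mean a 12-bit offset;
# the page table is a list indexed by the virtual page number.
PAGE_SHIFT = 12
OFFSET_MASK = (1 << PAGE_SHIFT) - 1

def translate(vaddr, page_table):
    vpn = vaddr >> PAGE_SHIFT            # upper bits: virtual page number
    offset = vaddr & OFFSET_MASK         # lower bits: page offset
    pfn = page_table[vpn]                # PTE lookup
    if pfn is None:
        raise RuntimeError("page fault")
    return (pfn << PAGE_SHIFT) | offset  # concatenate pfn and offset

page_table = [None, 5, None, 2]          # vpn 1 -> pfn 5, vpn 3 -> pfn 2
assert translate(0x1ABC, page_table) == 0x5ABC
```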

18
Q

Multi-level page table; explain the idea, advantages & disadvantages

A

Instead of using a large, linear array as a page table, the page table consists of multiple levels. A virtual address is split into one index per level and an offset. The first index is used to obtain the address of the second-level table from the first-level table, the second index is used to obtain the address of the third-level table from the second-level table and so on. The last table in the hierarchy contains the actual PFN.
+space requirements for page tables can be reduced
-# of memory accesses required for translation increases with each additional level
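A sketch of a two-level walk, assuming a 10+10+12-bit split of a 32-bit address (the split is an illustrative choice); note that only the second-level tables that are actually used need to exist:

```python
# Two-level page-table walk sketch: the 20-bit VPN is split into two
# 10-bit indices, followed by a 12-bit page offset.
def translate2(vaddr, top_level):
    idx1 = (vaddr >> 22) & 0x3FF       # first-level index
    idx2 = (vaddr >> 12) & 0x3FF       # second-level index
    offset = vaddr & 0xFFF
    second = top_level.get(idx1)       # only allocated tables exist
    if second is None or idx2 not in second:
        raise RuntimeError("page fault")
    return (second[idx2] << 12) | offset

# Only one second-level table exists here; the rest of the virtual
# address space costs no page-table memory at all.
top = {1: {2: 0x42}}                   # idx1=1, idx2=2 -> pfn 0x42
vaddr = (1 << 22) | (2 << 12) | 0x123
assert translate2(vaddr, top) == (0x42 << 12) | 0x123
```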

19
Q

Inverted page table; explain the idea, advantages & disadvantages

A

The page table contains one entry per physical frame instead of one per virtual page. Each entry contains an identifier for the address space and the virtual page number to which the corresponding frame is mapped.
+ the size of the page table depends only on the amount of physical memory actually present, and is expected to be smaller than a linear page table
- looking up a translation requires searching the table for a matching entry

19
Q

Hashed inverted page table; explain the idea, advantages & disadvantages

A

Instead of using the virtual page number as an index, the number is hashed first, and the resulting hash value is used as an index
+ reduce lookup costs
- hash collisions must be handled (e.g., by chaining entries)
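A toy sketch of the hashed lookup with chaining; the bucket count and entry layout are illustrative assumptions:

```python
# Hashed inverted page table sketch: entries are found via a hash of
# (address-space id, vpn); collisions are resolved by chaining.
NBUCKETS = 8
buckets = [[] for _ in range(NBUCKETS)]   # each item: (asid, vpn, pfn)

def ipt_insert(asid, vpn, pfn):
    buckets[hash((asid, vpn)) % NBUCKETS].append((asid, vpn, pfn))

def ipt_lookup(asid, vpn):
    # walk the (hopefully short) collision chain in one bucket
    for a, v, pfn in buckets[hash((asid, vpn)) % NBUCKETS]:
        if (a, v) == (asid, vpn):
            return pfn
    raise RuntimeError("page fault")

ipt_insert(asid=1, vpn=0x10, pfn=3)
assert ipt_lookup(1, 0x10) == 3
```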

20
Q

When are TLB miss/page fault handlers invoked?

A
  • TLB/page table doesn’t contain a valid mapping for a requested page
  • Invalid accesses to mapped pages
21
Q

How can shared memory between two processes A and B be realized at the page table level?

A

The PTEs for the two pages in processes A and B are configured to map to the same physical frame, thus allowing both processes to work on the same memory.

22
Q

Explain page replacement steps

A
  1. Save/clear the victim page
    - drop the page if it was fetched from disk and is unmodified
    - write back modifications if it came from disk and is dirty
    - write it to the pagefile/swap partition otherwise
  2. Unmap the page from the old AS
    - unset the valid bit in the PTE
    - flush the TLB
  3. Prepare the new page
  4. Map the page frame into the new address space
    - set the valid bit in the PTE
    - flush the TLB
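The four steps above, sketched on toy data structures (a PTE is modeled as a dict and the TLB as a set of cached VPNs; this is a sketch, not real kernel code):

```python
# Page replacement sketch: 'disk' collects frames written back,
# standing in for actual swap I/O.
def replace_page(victim_pte, victim_vpn, new_pte, new_vpn, frame, tlb, disk):
    # 1. save/clear the victim page (write back only if dirty)
    if victim_pte.get('dirty'):
        disk.append(frame)
    # 2. unmap the page from the old address space
    victim_pte['valid'] = False
    tlb.discard(victim_vpn)            # flush the stale translation
    # 3./4. prepare the new page and map it into the new address space
    new_pte.update(valid=True, pfn=frame)
    tlb.discard(new_vpn)

disk = []
old = {'valid': True, 'pfn': 7, 'dirty': True}
new = {'valid': False}
replace_page(old, 0x1, new, 0x2, 7, {0x1}, disk)
assert not old['valid'] and new == {'valid': True, 'pfn': 7}
assert disk == [7]                     # dirty victim was written back
```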
23
Q

Frame Allocation Types

A

Local: only frames of the faulting process are considered for replacement
- isolates processes
- separately determine how many frames each process gets
Global: all frames are considered for replacement
- doesn’t consider page ownership
- one process can get another process’s frame
- doesn’t protect a process from another process that hogs all memory

Fixed:
=> Equal: all processes get the same number of frames
=> Proportional: allocate according to the size of the process

Priority: proportional allocation scheme using priorities rather than size

24
Q

Pareto principle for working sets of processes

A

10% of memory gets 90% of the references
Goal: keep those 10% in memory, the rest on disk

25
Q

Demand paging; explanation, advantages, and disadvantages

A

Demand-paging: transfer only pages that raise page faults
+ only transfer what is needed
+ less memory needed per process
- many initial page faults when a task starts
- more I/O operations => more I/O overhead

26
Q

Pre-paging; explanation, advantages, and disadvantages

A

Pre-paging: speculatively transfer pages to RAM; at every page fault speculate what else should be loaded
+ improves disk I/O throughput by reading chunks
- Wastes I/O bandwidth if page is never used
- can destroy the working set of other processes in case of page stealing

27
Q

Page buffering

A

Idea: keep a pool of free page frames (pre-cleaning); on a page fault, use a page from the free pool, run a daemon that cleans (write back changes), reclaims (unmap), and scrubs (zero out) pages for the free pool in the background
+ such a free pool smooths out I/O and speeds up paging significantly
=> Remaining problem: which pages to select as victims?

28
Q

Belady’s Anomaly

A

When using FIFO page replacement, for every number N of page frames one can construct a reference string that performs worse with N+1 frames than with N
=> With FIFO it is possible to get more page faults with more page frames
=> More physical memory doesn’t always imply fewer faults
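The anomaly can be demonstrated with the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 under FIFO replacement:

```python
from collections import deque

# Count page faults under FIFO replacement for a given frame count.
def fifo_faults(refs, nframes):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()       # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_faults(refs, 3) == 9
assert fifo_faults(refs, 4) == 10      # more frames, yet more faults
```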

29
Q

Least Recently Used Page Replacement

A

Goal: approximate optimal page replacement
Idea: the past often predicts the future well
Assumption: the page that was used furthest in the past will also be used furthest in the future
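A compact LRU simulation using an ordered map (an implementation sketch; real hardware only approximates LRU rather than tracking it exactly):

```python
from collections import OrderedDict

# Count page faults under exact LRU replacement: the least recently
# used page always sits at the front of the OrderedDict.
def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)   # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# LRU is a stack algorithm, so more frames never cause more faults
assert lru_faults(refs, 4) <= lru_faults(refs, 3)
```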

30
Q

What difficulty do you see when implementing the clock algorithm for systems that allow shared memory?

A

The clock algorithm uses the reference bit of a page to decide whether the page should be replaced. However, the reference bit exists per virtual page (in each PTE), not per frame, and the MMU only sets the bit in the PTE that it actually used for translation.
To determine whether the data of a frame is still in use, it isn’t sufficient to look at a single PTE; the OS has to check all PTEs that map the same frame. Without an extra data structure, the OS needs to scan all page tables.

31
Q

What is thrashing? (the graph for this is important!!)

A

The system is busy swapping pages in and out. Each time a page is brought in, another page, whose contents will soon be referenced, is thrown out.
=> Low CPU utilization
=> OS thinks that it needs higher degree of multiprogramming

CPU utilization
|
| (the curve rises, peaks, then drops sharply;
|  the region after the drop is thrashing)
—————> degree of multiprogramming

32
Q

Reasons for thrashing

A
  • memory too small to hold the working set (the hot ~10%) of even a single process
  • access pattern has no temporal locality
  • each process fits individually, but too many for the system
  • page replacement policy doesn’t work well