Virtual Memory Flashcards

1
Q

Virtual Memory

A
  • Virtual memory separates logical memory from physical memory, so a
    process can run without being entirely in RAM.
  • On-Demand Loading:
  • Only the necessary parts of a program are loaded into physical memory.
  • The rest remains on disk until needed, conserving RAM.
  • Support for More Concurrent Programs:
  • By efficiently managing memory, virtual memory allows more programs to
    run simultaneously.
  • Inactive parts of programs can be swapped out to disk, freeing up memory
    for active processes.
  • Two possible implementations:
  • Demand segmentation
  • Demand paging
2
Q

Demand Paging

A
  • Regular swapping (old approach): load the entire process into
    memory at load time, and swap out the entire process
  • Alternative (new): demand paging loads a page only when it is
    needed
  • Less I/O required (unused parts of program never loaded)
  • Less memory required
  • Faster response
  • If a page is not in memory and the program tries to access it
  • A page fault occurs
  • The OS loads the required page from disk into RAM.
  • A lazy swapper never swaps a page into memory until it is needed
3
Q

Old Method: Regular Swapping

A
  • Full Process Loading:
  • In traditional swapping, the entire process is loaded into memory at the
    start.
  • High Overhead:
  • This approach required significant memory and I/O operations, even if
    large parts of the program were never used.
  • Inefficiency:
  • Especially problematic for large applications with rarely accessed code
    paths (e.g., error handling routines).
4
Q

New Method: Demand Paging

A

Concept:
* On-Demand Loading:
* Instead of loading the entire process, only the needed pages are loaded
into memory.
* Lazy Swapping:
* The OS only swaps pages into memory when the process accesses them.
* Page-by-Page Management:
* The program is divided into fixed-size pages, which are loaded individually
as required.

5
Q

Pages

A
  • A page is a fixed-size block of data in a program’s logical (virtual) address
    space
  • Pages allow the logical address space of a process to be divided into
    manageable chunks.
6
Q

Frames

A
  • A frame is a fixed-size block of physical memory (RAM).
  • Frames provide physical storage for pages when they are loaded into memory.
7
Q

Sizes

A
  • Page (and frame) sizes vary depending on the OS and hardware architecture; typically 4 KB
  • Frame size is identical to Page size
8
Q

Page table

A

* Page table stores mapping from logical address to location in
physical memory
* If the page is not in physical memory, page table needs to record
this:
* Done with a valid/invalid bit
* In case of lazy swapping, invalid bit is initially set on all entries
* During address translation, invalid bit leads to a page fault
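The valid/invalid check during translation can be sketched in Python; the dict-based page-table entries, the frame numbers, and the PageFault exception below are all invented for illustration:

```python
# Toy sketch of address translation against a page table with a
# valid/invalid bit; not how real MMU hardware is implemented.

PAGE_SIZE = 4096  # typical 4 KB pages

class PageFault(Exception):
    """Raised when translation hits an entry with the valid bit clear."""

def translate(page_table, logical_addr):
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    entry = page_table[page]
    if not entry["valid"]:          # lazy swapping: bit starts clear
        raise PageFault(page)       # trap to the OS
    return entry["frame"] * PAGE_SIZE + offset

# With lazy swapping, every entry starts invalid:
table = [{"valid": False, "frame": None} for _ in range(8)]
# Suppose the OS later loads page 2 into frame 5:
table[2] = {"valid": True, "frame": 5}

print(translate(table, 2 * PAGE_SIZE + 100))  # 5*4096 + 100 = 20580
```

Accessing any address on a page whose entry is still invalid raises the fault instead of returning a physical address.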

9
Q

Page Fault

A
  • Check an internal table to decide if it is an invalid reference or a
    page not in memory
  • Invalid reference → generate error
  • Segmentation fault
  • Access violation error
  • If not in memory
  • Get empty frame
  • Swap in: retrieve page from disk and place it into the empty frame
  • Set valid bit in page table
  • Restart the operation that caused the page fault
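A toy Python sketch of servicing the fault, assuming a dict-based page table, a dict standing in for the disk, and a list of free frame numbers (all invented names):

```python
# Sketch only: a real OS does this in the kernel's trap handler.

memory = {}                      # frame number -> page contents

def handle_page_fault(page, page_table, free_frames, disk):
    if page not in disk:                      # invalid reference
        raise MemoryError("segmentation fault / access violation")
    frame = free_frames.pop()                 # get an empty frame
    memory[frame] = disk[page]                # swap the page in from disk
    page_table[page] = {"valid": True, "frame": frame}  # set valid bit
    return frame          # the faulting instruction is then restarted

disk = {0: "code", 1: "data"}
table = {0: {"valid": False}, 1: {"valid": False}}
handle_page_fault(0, table, [3, 7], disk)
print(table[0])   # {'valid': True, 'frame': 7}
```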
10
Q

Demand Paging Challenges

A

Pure demand paging starts with no pages in memory
* Slow to get started; better to load at least the first page needed

Single instruction can access multiple pages, causing multiple page
faults
* Fortunately, exceedingly rare – Locality of reference
* Temporal Locality: If a page is accessed, it is likely to be accessed again soon. (Loops)
* Spatial Locality: Pages near a recently accessed page are likely to be used soon. (Arrays)

Hardware support required
* Modified page table – valid/invalid
* Swap space (secondary memory)
* Instruction restart - crucial requirement is the ability to restart any instruction
after a page fault.

11
Q

Optimisation: Pre-paging

A
  • Pre-paging is an attempt to prevent the high level of page faults a
    process encounters when it starts.
  • Load the first page or a small set of essential pages initially to avoid
    immediate page faults.
  • Reduces large number of page faults at startup
  • If pre-paged pages are not used – I/O and memory wasted
  • Heuristic Loading: The OS may predict which pages will be
    needed next and load them proactively
12
Q

Optimisation: Copy-on-Write

A
  • Copy-on-Write is an optimisation strategy to efficiently manage
    memory by delaying the copying of data until absolutely necessary.
  • If processes use the same pages:
  • When a parent creates a child process (fork() system call)
  • Can share these pages until one changes the data
  • Reduces swapping
  • Pages must be marked as shared
  • When process wants to modify the page – it takes a copy
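The share-then-copy behaviour can be modelled in a few lines of Python; the Page/Process classes and the refcount field below are invented for illustration, not how a kernel represents pages:

```python
# Toy model of copy-on-write: pages are shared on "fork" and copied
# only when one side writes to a shared page.

class Page:
    def __init__(self, data):
        self.data = data
        self.refs = 1          # how many processes share this page

class Process:
    def __init__(self, pages):
        self.pages = pages

    def fork(self):
        for page in self.pages:        # share, don't copy
            page.refs += 1
        return Process(list(self.pages))

    def write(self, i, data):
        page = self.pages[i]
        if page.refs > 1:              # shared: take a private copy now
            page.refs -= 1
            page = Page(page.data)
            self.pages[i] = page
        page.data = data

parent = Process([Page("a"), Page("b")])
child = parent.fork()
child.write(0, "c")                    # only page 0 is copied
print(parent.pages[0].data, child.pages[0].data)  # a c
print(parent.pages[1] is child.pages[1])          # True -- still shared
```

Only the modified page is duplicated; the untouched page stays shared, which is exactly the swapping/copying work the optimisation avoids.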
13
Q

No Free Frame

A
  • If all frames in physical memory are in use, must free one to load a
    page from secondary storage
  • Some pages in memory not currently active
  • Which one gets paged out?
  • Want to minimise number of page faults
  • Don’t page out one that will be needed by the next operation
  • If a page hasn’t been modified, no need to save it to disk (can use the
    copy already on disk)
14
Q

Basic Page Replacement

A
  1. Find the location of the desired page on disk
  2. Find a free frame:
    * If there is a free frame, use it
    * If there is no free frame, select a victim frame
    * Write victim frame to disk if “dirty” i.e. modified
  3. Bring the desired page into the (newly) free frame; update the
    page and frame tables
  4. Continue the process by restarting the instruction that caused
    the trap
    Note: if no frames are free, a single page fault can require two page
    transfers (one out, one in), increasing the Effective Access Time
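One way to sketch these steps in Python, using FIFO as the (arbitrary) victim policy and a dirty bit to decide whether the victim must be written back; frames are modelled as a dict and the disk-write list is invented for illustration:

```python
from collections import deque

def replace(page, frames, max_frames, fifo, disk_writes):
    """Load `page`, evicting a victim if no frame is free: pick a
    free frame or a victim, write the victim back only if dirty,
    then bring the desired page in (restart is implicit)."""
    if len(frames) < max_frames:          # 2a. a free frame exists
        frames[page] = {"dirty": False}
    else:
        victim = fifo.popleft()           # 2b. select a victim frame
        if frames[victim]["dirty"]:       # write back only if modified
            disk_writes.append(victim)
        del frames[victim]
        frames[page] = {"dirty": False}   # 3. bring in desired page
    fifo.append(page)

frames, fifo, writes = {}, deque(), []
for p, modifies in [(1, True), (2, False), (3, False), (4, False)]:
    replace(p, frames, 3, fifo, writes)
    frames[p]["dirty"] = modifies
print(writes)   # [1] -- only the modified victim was written to disk
```

Only page 1 was dirtied, so evicting it costs a disk write; clean victims are simply dropped, which is the saving the previous card describes.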
15
Q

Page and Frame Replacement

A
  • Page replacement algorithm
  • Used to decide which page to replace when there is no free frame
  • Want the lowest page-fault rate
  • Frame allocation algorithm determines
  • How many frames to give each process
  • Which frames to replace
  • In general, more frames → fewer page faults
16
Q

First-in-first-out (FIFO) page replacement

A
  • Replace oldest page in memory
  • Don’t need to record times, just maintain a FIFO queue
  • Easy to understand and implement
  • Does not always perform well
  • The oldest page may contain a heavily referenced variable that will soon
    be needed again
  • In some circumstances, page faults may increase with more page
    frames (Belady’s anomaly)
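A minimal FIFO simulation in Python, counting faults on a sample reference string (the string and frame count are made-up inputs):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:      # evict oldest resident page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # 9 faults
```

Note that the queue records arrival order only; no timestamps are needed.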
17
Q

Conditions for Belady’s Anomaly

A
  • Algorithm-Specific: Belady’s Anomaly primarily occurs with
    algorithms like FIFO
  • Specific Access Patterns: The anomaly depends on the sequence
    of memory accesses (page reference string) and how the algorithm
    evicts pages.
  • Explanation: Evicting the “oldest” page without considering how
    frequently or recently it was used can lead to:
  • Inefficient replacements
  • Increased page faults
  • Increasing memory does not always improve performance
  • Limitations of algorithms like FIFO & Emphasizes the importance of
    smarter page replacement strategies
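The anomaly can be reproduced with the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, where FIFO with four frames produces more faults than with three:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:      # evict oldest resident page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults with 3 frames
print(fifo_faults(refs, 4))   # 10 faults with 4 frames: more memory, more faults
```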
18
Q

Optimal Page Replacement

A
  • Belady’s anomaly led to the search for an optimal page
    replacement algorithm
  • One with the lowest possible page fault rate
  • That should never suffer from Belady’s anomaly
  • Such an algorithm does exist!
  • Evicts the page that will not be used for the longest time in the future
  • Unfortunately, ideal but impractical, very difficult to know which page this
    is… future knowledge
  • Need to find something suboptimal, but is there something better
    than FIFO?
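A Python sketch of the optimal (farthest-future-use) policy. It consumes the full future reference string, which is exactly why it is impractical in a real OS and only useful as a benchmark:

```python
def optimal_faults(refs, num_frames):
    """Evict the resident page whose next use lies farthest in the
    future (or that is never used again)."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            future = refs[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            frames.discard(victim)
        frames.add(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # 6 faults
```

On the same sample string, FIFO with three frames faults 9 times, so the optimal policy sets the lower bound other algorithms are measured against.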
19
Q

Least-recently-used (LRU)

A
  • Instead of replacing oldest page (in terms of residence)
  • Evict the page that has not been used for the longest time.
  • Recent past treated as an approximation of the near future
  • Implementation not so simple
  • Option A - Counter. CPU maintains a counter which is incremented for
    every memory reference. When a page is referenced, counter copied to
    field on page. Remove page with smallest value in this field.
  • Option B: Stack. Stack contains page numbers. When a page is referenced,
    it gets moved to the top of the stack. Remove pages from the bottom of
    the stack
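Option B maps naturally onto Python's OrderedDict; a sketch of the idea, not how a kernel would implement it:

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """LRU via the 'stack' option: a referenced page moves to the end
    (top of stack); the least recently used page sits at the front."""
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)          # referenced -> top of stack
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.popitem(last=False)    # evict from bottom of stack
            stack[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # 8 faults
```

On this sample string, LRU's 8 faults fall between FIFO's 9 and the optimal 6, illustrating why the recent past is a useful approximation of the near future.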
20
Q

Hardware Support

A
  • Exact LRU is not practical without hardware support because of the
    high computational and memory overhead of tracking every reference
  • Use of reference bit (set when a page is referenced) allows
    determination of whether a page has been used
  • although not when it was used (periodic reset)
  • If no hardware support available, modern operating systems
    typically use approximations like:
  • Clock Algorithm
  • Aging
  • Which achieve similar performance with much less complexity.
21
Q

Counting-based Page Replacement

A
  • Least frequently used (LFU)
  • Not good if a process has different phases of operation
  • Most frequently used (MFU)
  • On the principle that the most frequently used page might no longer be
    needed
  • Neither algorithm commonly used
  • Implementation expensive
  • Do not approximate optimal solution well
22
Q

Fixed Allocation

A
  • Distribute total available frames evenly amongst number of
    processes
  • Distribute total available frames proportionately according to size
    of processes
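Both schemes are simple arithmetic; a sketch with made-up frame counts and process sizes:

```python
def equal_allocation(total_frames, num_processes):
    """Split frames evenly amongst processes (remainder ignored here)."""
    return [total_frames // num_processes] * num_processes

def proportional_allocation(total_frames, sizes):
    """Give each process frames in proportion to its size."""
    total = sum(sizes)
    return [size * total_frames // total for size in sizes]

print(equal_allocation(93, 3))                 # [31, 31, 31]
print(proportional_allocation(62, [10, 127]))  # [4, 57]
```

In the proportional example, a 10-page process gets 4 of the 62 frames and a 127-page process gets 57, instead of 31 each under equal allocation.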
23
Q

Global vs Local - Frame Allocation

A
  • Global replacement: replacement frame can come from set of all
    frames
  • One process can “steal” from another
  • Execution time of an individual process can vary greatly
  • Greater throughput
  • Local replacement: replacement frame can only come from frames
    of the same process
  • More consistent per-process performance
  • More likely to under-utilise memory
  • Modern OS often use a hybrid approach
  • Combine aspects of both policies to balance flexibility and fairness
24
Q

Thrashing

A
  • If a process does not have “enough” pages, the page-fault rate is
    very high
  • Page fault to get page
  • Replace existing page
  • Quickly need replaced frame back
  • This leads to low CPU utilisation – more focused on disk I/O
  • The OS may respond by increasing the degree of multiprogramming, making things worse!
  • Thrashing:
  • A process busy swapping pages in and out
  • More time swapping pages between memory and disk than executing
    actual instructions
25
Q

Working Set Model

A

The Working Set Model is a memory management concept used
to optimise the performance of virtual memory systems.

  • Minimizes Page Faults:
  • Ensures that frequently used pages are kept in memory.
  • Prevents Thrashing:
  • Allocates memory more intelligently, avoiding memory contention.
  • Adaptability:
  • Adjusts memory allocation dynamically based on a process’s changing
    needs.
  • Challenges:
  • Defining the window – a fixed time interval
  • Overhead – tracking and managing the working set
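The working set at time t is just the set of pages referenced in the last Δ references; a sketch with an invented reference string and window size:

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references
    ending at time t (inclusive)."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 5, 2, 1, 6, 6, 6, 2]
print(sorted(working_set(refs, 9, 4)))   # [2, 6]
```

With Δ = 4 at t = 9, only pages 2 and 6 are in the working set, so the process would need just two frames at that point; a larger Δ captures more of the history and demands more frames.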
26
Q

Kernel Memory

A
  • Treated differently from user memory
  • Often not subjected to paging
  • Need to be conservative in allocation; minimise internal
    fragmentation
  • Often a free-memory pool from which it is allocated
  • Memory allocated in varying sized blocks
  • Some kernel memory needs to be contiguous (e.g. for device I/O)
27
Q

Kernel Memory Case 1: Buddy System

A
  • Allocates memory from a fixed-size segment consisting of physically contiguous pages

Memory is allocated using power-of-2 allocator
* Satisfies requests in units sized as power of 2
* Request rounded up to next highest power of 2
* When smaller allocation needed than is available, current chunk split into two buddies of next-lower power of 2
* Continue until appropriately sized chunk available

Advantage: can quickly coalesce unused chunks into a larger
chunk
* Disadvantage: fragmentation
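The rounding and splitting steps can be sketched as follows (sizes in bytes; the request and chunk sizes are made-up numbers):

```python
def buddy_round(request):
    """Round a request up to the next power of two (power-of-2 allocator)."""
    size = 1
    while size < request:
        size *= 2
    return size

def split_path(chunk, need):
    """Chunk sizes produced while repeatedly splitting `chunk` into
    two buddies until a chunk of size `need` is reached."""
    sizes = [chunk]
    while chunk > need:
        chunk //= 2                    # split into two buddies
        sizes.append(chunk)
    return sizes

print(buddy_round(21 * 1024))             # 21 KB request -> 32768 (32 KB)
print(split_path(256 * 1024, 32 * 1024))  # 256 KB -> 128 -> 64 -> 32 KB
```

A 21 KB request is rounded up to 32 KB; serving it from a free 256 KB chunk splits that chunk three times, leaving 128 KB, 64 KB, and 32 KB buddies that can later be coalesced.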

28
Q

Kernel Memory Case 2: Slab Allocation

A

Technique used to efficiently allocate and manage
* small, frequently requested, fixed-size memory blocks.

Slab is one or more physically contiguous pages

Cache consists of one or more slabs

Single cache for each unique kernel data structure
* Each cache filled with objects (instantiations)

When cache is created, filled with objects marked as free

When structures stored, objects marked as used

If slab is full, next object allocated from empty slab
* If no empty slabs, new slab allocated

Benefits: no fragmentation, fast memory request satisfaction
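A toy model of caches and slabs; all class names are invented, and a real allocator works on physical pages rather than Python lists:

```python
class Slab:
    """One or more contiguous pages, pre-filled with free objects."""
    def __init__(self, objs_per_slab, factory):
        # slab is created already filled with objects marked free
        self.free = [factory() for _ in range(objs_per_slab)]
        self.used = []

class Cache:
    """One cache per unique kernel data structure."""
    def __init__(self, objs_per_slab, factory):
        self.objs_per_slab = objs_per_slab
        self.factory = factory
        self.slabs = [Slab(objs_per_slab, factory)]

    def alloc(self):
        for slab in self.slabs:
            if slab.free:                       # take a free object
                obj = slab.free.pop()
                slab.used.append(obj)
                return obj
        # no free objects in any slab: allocate a new slab
        slab = Slab(self.objs_per_slab, self.factory)
        self.slabs.append(slab)
        obj = slab.free.pop()
        slab.used.append(obj)
        return obj

cache = Cache(2, dict)        # e.g. a cache of PCB-like objects
for _ in range(3):
    cache.alloc()
print(len(cache.slabs))       # third allocation forced a second slab -> 2
```

Because every object in a cache has the same size and is pre-constructed, allocation is just moving an object from the free list to the used list, which is where the no-fragmentation and speed benefits come from.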