Virtual Memory Flashcards

1
Q

Virtual Memory

A

A memory management technique in which the computer uses secondary storage to compensate for a shortage of physical memory

2
Q

Methods of handling Virtual Memory

A
  • Paging
  • Segmentation
3
Q

Paging

A

1. Divide physical memory into fixed-size blocks called frames; frame size is a power of 2, typically between 512 bytes and 16 MB
2. Divide logical memory into blocks of the same size, called pages
3. To run a program of size N pages, find N free frames and load the program
4. Set up a page table to translate logical to physical addresses
5. Internal fragmentation still occurs

4
Q

Address generated by CPU is divided into?

A
  • Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory
  • Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit
    | page number (p) | page offset (d) |
        m − n bits          n bits

For a given logical address space of size 2^m and page size 2^n
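
A minimal sketch of this split, assuming illustrative values m = 16 and n = 10 (page size 2^10 = 1,024 bytes); the address value and helper name are hypothetical:

    m, n = 16, 10
    page_size = 1 << n            # 2^n = 1,024 bytes

    def split_address(va):
        p = va >> n               # top m - n bits: page number
        d = va & (page_size - 1)  # low n bits: page offset
        return p, d

    print(split_address(10940))   # -> (10, 700), since 10940 = 10*1024 + 700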

5
Q

Suppose that virtual address i = 3257 and page size c = 100. Find:
- Page number
- Line # (offset)

A
  • Page number: p = ⌊i/c⌋ = ⌊3257/100⌋ = 32
  • Line # (offset): d = i mod c = 3257 mod 100 = 57

Map page 32 to its frame number, then add the offset 57 to the frame's base address to get the physical address
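
The arithmetic can be checked with Python's divmod, which returns the floor quotient and remainder in one call:

    i, c = 3257, 100
    page, offset = divmod(i, c)   # page number = i // c, offset = i % c
    print(page, offset)           # -> 32 57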

6
Q

Estimate Internal Fragmentation if:
- Page size = 2,048 bytes
- Process size = 72,766 bytes

A
  • 72,766 / 2,048 = 35 full pages + 1,086 bytes, so 36 frames are allocated
  • Internal fragmentation = 2,048 − 1,086 = 962 bytes
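
The same estimate as a short Python sketch, using the numbers from this card:

    page_size = 2_048
    process_size = 72_766

    full_pages, remainder = divmod(process_size, page_size)  # 35 pages, 1,086 bytes left
    frames_needed = full_pages + (1 if remainder else 0)     # 36 frames allocated
    internal_frag = page_size - remainder if remainder else 0
    print(frames_needed, internal_frag)                      # -> 36 962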
7
Q

What is the worst-case internal fragmentation?

A

1 frame − 1 byte (the process's last frame holds only a single byte)

8
Q

On average, internal fragmentation = XX of frame size?

A

1/2 (half a frame per process, on average)

9
Q

If (only) 32 bits are used with 4KB pages, a page table may have?

A

2^20 entries (2^32 addresses / 2^12 bytes per page = 2^20 pages)

10
Q

Page Fault

A

Occurs when a process references a virtual address whose page is not currently in RAM

11
Q

Page fault handling process

A
  • Page fault trap: The CPU detects a page fault while trying to access a memory page that isn't currently in RAM. This triggers a page fault exception, which transfers control to the operating system.
  • Page table lookup: The operating system looks up the page table to determine the location of the required page in secondary storage, such as a disk.
  • Fetch page from secondary storage: The OS reads the required page from secondary storage into a free page frame in RAM.
  • Update page table: Once the page is in RAM, the OS updates the page table entry with the physical address of the frame where the page now resides.
  • Resume process execution: Finally, the OS restarts the interrupted instruction that caused the page fault. With the required page now in RAM, the process proceeds as expected.
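
A minimal sketch of these steps, assuming a toy model in which page_table maps page numbers to frame numbers (None = on disk) and ram/disk are plain dictionaries; all names are illustrative, not a real OS API:

    def handle_page_fault(page, page_table, free_frames, ram, disk):
        frame = free_frames.pop()    # assume a free frame exists (no replacement yet)
        ram[frame] = disk[page]      # fetch the page from secondary storage
        page_table[page] = frame     # update the page table with the new location

    def access(page, page_table, free_frames, ram, disk):
        if page_table.get(page) is None:   # page fault trap
            handle_page_fault(page, page_table, free_frames, ram, disk)
        return ram[page_table[page]]       # resume: the access now succeeds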
12
Q

Demand Paging

A

A scheme in which pages are moved from secondary storage to RAM only when they are demanded (referenced)

13
Q

Page Replacement

A

Prevents over-allocation of memory by extending the page-fault service routine to evict a resident page when no free frame is available

14
Q

How does page replacement reduce the overhead of page transfers?

A

Use a modify (dirty) bit: only pages that have been modified are written back to disk; unmodified victims can simply be discarded

15
Q

Basic Page Replacement steps

A
  1. Find the location of the desired page on disk.
  2. Find a free frame:
    - If there is a free frame, use it
    - If there is no free frame, use a page replacement algorithm to select a victim frame
    - Write victim frame to disk if dirty
  3. Bring the desired page into the (newly) free frame; update the page and frame tables
  4. Continue the process by restarting the instruction that caused the trap
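
A sketch of these steps in the same toy model as the page-fault sketch above; select_victim stands in for any replacement policy (FIFO, LRU, OPT) and dirty marks modified pages — all names are illustrative:

    def bring_in(page, page_table, free_frames, ram, disk, dirty, select_victim):
        if free_frames:                    # step 2: a free frame exists
            frame = free_frames.pop()
        else:                              # no free frame: pick a victim
            victim = select_victim()
            frame = page_table.pop(victim)
            if dirty.pop(victim, False):   # write back only if modified
                disk[victim] = ram[frame]
        ram[frame] = disk[page]            # step 3: load page, update tables
        page_table[page] = frame
        # step 4: the faulting instruction is then restarted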
16
Q

Frame-allocation algorithm determines:

A
  • How many frames to give each process
  • Which frames to replace
17
Q

Page-replacement algorithm

A
  • Want lowest page-fault rate on both first access and re-access
18
Q

Static paging

A

Allocate a fixed number of frames to the process
  • Use these frames as long as any are free
  • If no frame is free, select a victim page and swap it out
  • Page reference stream: ω = r1, r4, r8, r6, r4, …
  • Let S_t(m) be the set of pages loaded in the m frames of memory at virtual time t
  • S_t(m) = S_(t−1)(m) ∪ X_t − Y_t, where X_t is the set of pages fetched at time t and Y_t the set of pages replaced at time t

19
Q

Belady’s Optimal Algorithm (OPT)

A
  • Replace the page that will not be used for the longest period of time
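
A toy simulation of OPT, evicting the resident page whose next use lies farthest in the future (pages never referenced again are evicted first); the function name and reference string are illustrative:

    def opt_faults(refs, n_frames):
        frames, faults = [], 0
        for i, page in enumerate(refs):
            if page in frames:
                continue
            faults += 1
            if len(frames) < n_frames:
                frames.append(page)
            else:
                # distance to the next use; pages never used again sort last
                def next_use(q):
                    rest = refs[i + 1:]
                    return rest.index(q) if q in rest else float("inf")
                frames.remove(max(frames, key=next_use))
                frames.append(page)
        return faults

    print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 7 faults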
20
Q

Least Recently Used (LRU) Algorithm

A

  • Use past knowledge rather than future
  • Replace the page that has not been used for the longest time
  • Associate the time of last use with each page
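
A toy LRU simulation: an OrderedDict keeps resident pages in recency order, so the least recently used page is always at the front (the reference string is illustrative):

    from collections import OrderedDict

    def lru_faults(refs, n_frames):
        frames, faults = OrderedDict(), 0
        for page in refs:
            if page in frames:
                frames.move_to_end(page)       # record the use
            else:
                faults += 1
                if len(frames) == n_frames:
                    frames.popitem(last=False) # evict the least recently used page
                frames[page] = True
        return faults

    print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 10 faults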

21
Q

Stack Algorithm

A
  • An algorithm is a stack algorithm iff the set of pages kept in memory with n frames is always a subset of the set that would be kept with n + 1 frames
  • LRU and OPT are stack algorithms and therefore do not suffer from Belady's anomaly
  • A stack can be used to record the most recent page references
22
Q

First-In-First-Out (FIFO) Algorithm

A

The operating system keeps all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal
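
A toy FIFO simulation; note that the same reference string with more frames can produce more faults (Belady's anomaly), which FIFO is susceptible to:

    from collections import deque

    def fifo_faults(refs, n_frames):
        queue, resident, faults = deque(), set(), 0
        for page in refs:
            if page in resident:
                continue
            faults += 1
            if len(queue) == n_frames:
                resident.discard(queue.popleft())  # evict the oldest page
            queue.append(page)
            resident.add(page)
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # -> 9 10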

23
Q

Working-Set:

A
  • working-set window = a fixed number of page references
    Example: 10,000 instructions
24
Q

WSS_i (working set size of process P_i) = total number of distinct pages referenced in the most recent working-set window (varies in time)

A
  • If the window is too small, it will not encompass an entire locality
  • If the window is too large, it will encompass several localities
  • If the window = ∞, it will encompass the entire program
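
A toy computation of a working set: the set of distinct pages referenced in the last `window` references ending at time t (the reference string is illustrative):

    def working_set(refs, t, window):
        return set(refs[max(0, t - window + 1): t + 1])

    refs = [1, 2, 1, 3, 2, 2, 4, 5, 4, 4]
    ws = working_set(refs, t=9, window=5)
    print(ws, len(ws))   # -> {2, 4, 5} 3, i.e. WSS = 3 at t = 9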
25
Q

Effective Access Time (EAT)

A

EAT = (1 − p) × memory access time
      + p × (page fault overhead
             + swap page out
             + swap page in)

where p is the page fault rate, 0 ≤ p ≤ 1
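
Plugging assumed, textbook-style numbers into the formula (200 ns memory access, 8 ms total fault service time, p = 0.001):

    memory_access = 200          # ns
    fault_service = 8_000_000    # ns (fault overhead + swap out + swap in)
    p = 0.001                    # page fault rate

    eat = (1 - p) * memory_access + p * fault_service
    print(eat)  # -> 8199.8 ns: a 1-in-1000 fault rate slows memory access ~40x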

26
Q

Thrashing

A

A phenomenon that occurs in computer systems when the system spends an excessive amount of time on page swapping rather than executing useful work.

27
Q

Thrashing leads to:

A
  • Low CPU utilization
  • Operating system thinking that it needs to increase the degree of multiprogramming
  • Another process added to the system
28
Q

Locality model

A

A locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from one locality to another. Thus, a program is generally composed of several different localities which may overlap.

29
Q

Page Table

A

A Page Table is a data structure used by the operating system to keep track of the mapping between virtual addresses used by a process and the corresponding physical addresses in the system’s memory

30
Q

Page-table base register (PTBR)

A

points to the page table

31
Q

Page-table length register (PTLR)

A

Indicates the size of the page table

32
Q

TLBs

A

Translation look-aside buffers (also called associative memory)

33
Q

What do TLBs do?

A

A translation lookaside buffer (TLB) is a type of memory cache that stores recent translations of virtual memory to physical addresses to enable faster retrieval.

34
Q

What happens on a TLB miss

A

The translation is loaded into the TLB for faster access next time
  • Replacement policies must be considered
  • Some entries can be wired down for permanent fast access

35
Q

Demand Paging Optimizations

A

Demand paging is a memory management scheme in which pages (units of memory) are not loaded into RAM until they are needed. This conserves RAM, making memory use more efficient and letting systems run larger applications by bringing pages in as required. Various optimizations are applied to improve the efficiency and performance of demand paging, especially around disk I/O, which can significantly affect system speed.

Swap Space I/O Optimizations

Faster than File System I/O:
Even if swap space and regular files are on the same storage device, accessing swap space can be faster than accessing files through the file system. This efficiency gain occurs because:
- Swap space is allocated in larger blocks: This reduces the overhead associated with managing many small blocks, as is common in file systems. Larger, contiguous blocks of disk space can be managed and searched more quickly with less fragmentation.
- Less management overhead: File systems typically have more complex metadata and structures (like directories and file attributes) which consume additional processing time for read/write operations.

Whole Process Image to Swap Space:
In some older systems, such as BSD Unix, the entire process image is copied to swap space when a process is loaded. This method simplifies the paging process:
- Initial setup: At process start, instead of loading all content into RAM, the whole image is stored in swap.
- On-demand paging: Pages are moved into RAM from the swap as needed, and moved back to swap when not needed or when memory needs to be freed up.

Demand Paging from Program Binary

Page in from Binary, Discard on Free:
This approach, used in systems like Solaris and modern BSD, involves loading pages directly from the executable file on disk into memory when they are demanded:
- Efficient memory usage: When a frame in memory needs to be freed, the page is simply discarded if it has not been modified (rather than being written back to swap).
- Handling non-file pages: Any pages that do not have a corresponding file on disk, such as those used for stack and heap (anonymous memory), still need swap space for storage when they are swapped out.
- Modified pages: Pages that have been changed in memory but are associated with a file are eventually written back to the file system to maintain consistency.

Mobile System Considerations

Limited or No Swapping:
Many mobile operating systems do not support swapping due to hardware constraints (like non-expandable memory and flash storage lifespan concerns) and performance reasons:
- Demand paging from the file system: Instead of using swap space, mobile systems often load pages directly from the file system as needed.
- Reclaiming read-only pages: To manage memory, systems reclaim read-only pages (such as those containing executable code) that can be reloaded later if necessary without the need to write them to a slower secondary storage.

Summary

These optimizations are crucial for minimizing the latency and overhead associated with managing memory in computer systems. By tailoring the approach to paging and memory management to the characteristics of the hardware and expected workloads, systems can achieve better performance and more efficient memory usage. This involves a delicate balance between reducing I/O operations, maximizing the speed of necessary operations, and effectively utilizing available memory resources.

36
Q

Prepaging

A

Prepaging is a technique used in memory management to optimize the performance of demand paging systems by preemptively loading pages into memory that are likely to be needed soon.

37
Q

Page size selection must take into consideration

A
  • Fragmentation
  • Page table size
  • Resolution
  • I/O overhead
  • Number of page faults
  • Locality
  • TLB size and effectiveness
38
Q

Typical Page Sizes

A

Always a power of 2, usually in the range 2^12 (4,096 bytes) to 2^22 (4,194,304 bytes)

39
Q

TLB Reach

A

Refers to the total amount of memory that can be directly accessed via the Translation Lookaside Buffer (TLB)

40
Q

Calculation of TLB Reach

A

TLB Reach = (TLB Size) × (Page Size)
  • TLB Size: the number of entries in the TLB; each entry corresponds to a page
  • Page Size: the size of each memory page
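
A quick example with assumed values (a 64-entry TLB and 4 KB pages):

    tlb_entries = 64
    page_size = 4 * 1024            # 4 KB

    tlb_reach = tlb_entries * page_size
    print(tlb_reach // 1024, "KB")  # -> 256 KB addressable without a TLB miss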

41
Q

Implications of TLB Reach

A

  • Optimal working set size: The working set of a process is the set of pages the process is currently using. Ideally, the working set should fit within the TLB reach; if it exceeds the TLB reach, the processor experiences frequent TLB misses, leading to increased access times and slower performance.

  • Increasing the page size: A larger page size extends the TLB reach, allowing more memory to be covered by the same number of TLB entries. However, larger pages can cause higher internal fragmentation, since not every application uses an entire large page efficiently.

  • Multiple page sizes: Many modern processors support multiple page sizes, often through a mechanism known as huge pages or large pages. This flexibility lets applications that can efficiently use larger pages benefit from increased TLB reach, without increasing fragmentation for applications that perform better with smaller pages.
