Operating Systems: Virtual Memory Flashcards

1
Q

Why is full process swapping considered inefficient in memory management?

A

Full process swapping moves the entire process in and out of memory, including unused parts, which leads to wasted memory and CPU resources.

2
Q

What types of program sections are rarely used and can remain unloaded initially?

A

Sections such as error-handling routines, rarely used options and features, and over-allocated data structures (e.g., arrays and tables reserved at their maximum size).

3
Q

What is a more efficient alternative to full process swapping?

A

Partial loading, where only the necessary segments or pages are loaded into memory, while the rest remain on disk.

4
Q

How can virtual memory allow a program to exceed physical memory limits?

A

By storing inactive pages on disk and only loading active ones into RAM, the program can appear larger than the available physical memory.

5
Q

What is virtual memory in modern operating systems?

A

A system that separates logical memory (what a program sees) from physical memory (actual RAM), allowing for larger address spaces and dynamic translation of addresses.

6
Q

List three key benefits of virtual memory.

A

Allows logical address space to exceed physical RAM

Enables on-demand loading, conserving memory

Supports more concurrent programs by swapping out inactive segments

7
Q

What are the two main implementations of virtual memory systems?

A

Demand segmentation and demand paging.

8
Q

What is demand paging in contrast to regular swapping?

A

Demand paging only loads a page into memory when it is accessed, reducing I/O and memory usage and improving performance.

9
Q

What occurs when a program accesses a page that is not in memory?

A

A page fault occurs, prompting the OS to load the required page from disk.

10
Q

What is the function of a lazy swapper in demand paging?

A

It defers loading a page into memory until it is actually needed by the program.

11
Q

Describe the inefficiency of regular (old) memory swapping.

A

It loads the entire process, including rarely used code, requiring unnecessary memory and I/O overhead.

12
Q

How does the new memory management approach using demand paging differ?

A

It uses on-demand loading and lazy swapping, loading only the pages needed at the time of access.

13
Q

What is a page in the context of virtual memory?

A

A fixed-size block of data in a program’s logical address space, used to divide it into manageable chunks.

14
Q

What is a frame in physical memory?

A

A fixed-size block of RAM that holds a page when it is loaded from disk.

15
Q

What is the typical size of pages and frames?

A

Typically 4 KB, although it can vary depending on the OS and hardware.

16
Q

What does the page table do in a virtual memory system?

A

It maps logical addresses to physical memory locations and indicates whether a page is currently in memory.

17
Q

How does the page table use a valid/invalid bit?

A

The bit indicates if the page is currently in memory (valid) or not (invalid), guiding page fault handling.

18
Q

What is the result of accessing a page with an invalid bit during address translation?

A

A page fault is triggered, and the page must be loaded from disk.

19
Q

What steps are taken when a page fault occurs?

A

Check whether the page reference is valid; an invalid reference terminates the process

If valid, locate an empty frame

Load the required page from disk

Update the page table

Restart the instruction that caused the fault
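The steps above can be sketched as a toy Python simulation of a demand-paged access (all names — `access`, `page_table`, `free_frames`, `backing_store` — are illustrative, not from any real OS):

```python
def access(page, page_table, free_frames, backing_store):
    """Toy demand-paging access: return the frame holding `page`.

    page_table maps page -> {"valid": bool, "frame": int or None};
    pages absent from the table are invalid references.
    """
    if page not in page_table:
        raise MemoryError("segmentation fault: invalid page reference")
    entry = page_table[page]
    if entry["valid"]:
        return entry["frame"]              # page already in memory: no fault
    # --- page fault ---
    frame = free_frames.pop()              # 1. locate a free frame
    _ = backing_store[page]                # 2. load the page from "disk"
    entry["frame"] = frame                 # 3. update the page table
    entry["valid"] = True
    return frame                           # 4. restart the access

# usage: page 0 is resident, page 1 is still on disk
table = {0: {"valid": True, "frame": 7}, 1: {"valid": False, "frame": None}}
print(access(0, table, [3], {1: b"data"}))  # 7 (hit)
print(access(1, table, [3], {1: b"data"}))  # 3 (fault, page loaded)
```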

20
Q

What errors occur if the page reference is invalid?

A

Segmentation fault or access violation error.

21
Q

What are the performance implications of pure demand paging at process start?

A

It is slow to get started, so loading initial pages proactively can improve performance.

22
Q

What hardware support is required for demand paging?

A

Modified page table with valid/invalid bits

Swap space (secondary storage)

Ability to restart any instruction after a page fault

23
Q

What is temporal locality in memory access?

A

The principle that recently accessed pages are likely to be accessed again soon.

24
Q

What is spatial locality in memory access?

A

The principle that pages near a recently accessed page are likely to be used soon.

25
What is pre-paging and its potential drawback?
Loading likely-needed pages at startup to reduce page faults; if pages are unused, I/O and memory are wasted.
26
What is copy-on-write?
A memory optimization where shared pages between processes are only copied if one process attempts to modify the data.
27
When is copy-on-write typically used?
During process creation (e.g., fork()), where parent and child initially share pages.
28
What happens when there are no free frames available in physical memory?
A victim page must be selected and possibly written to disk to free a frame for the incoming page.
29
What is the process of basic page replacement?
Locate the page on disk
Find or free a frame
Swap in the desired page
Update tables
Restart the operation
30
What is FIFO (First-In-First-Out) page replacement?
The oldest page in memory is replaced first; simple to implement but prone to Belady’s anomaly.
31
What is Belady’s anomaly?
A counterintuitive situation where increasing the number of page frames results in more page faults.
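FIFO replacement and Belady's anomaly can both be demonstrated with a short Python simulation, using the classic reference string known to exhibit the anomaly:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()              # oldest resident page on the left
    faults = 0
    for page in refs:
        if page in frames:
            continue              # hit: no fault
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()      # evict the oldest page
        frames.append(page)
    return faults

# the classic reference string that exhibits Belady's anomaly
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, MORE faults
```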
32
What is the Optimal Page Replacement algorithm?
Replaces the page that will not be used for the longest time in the future; ideal but impractical.
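Although impractical online, the optimal algorithm is easy to simulate offline when the full reference string is known; a Python sketch:

```python
def optimal_faults(refs, num_frames):
    """Count faults for the (clairvoyant) optimal replacement algorithm."""
    frames = set()
    faults = 0
    for t, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            # evict the resident page whose next use is farthest in the future
            def next_use(p):
                try:
                    return refs.index(p, t + 1)
                except ValueError:
                    return float("inf")   # never used again
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))  # 7: the lower bound for this string
```

Such simulations are typically used as a yardstick to judge how close FIFO, LRU, and their approximations come to the minimum possible fault count.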
33
What is LRU (Least Recently Used) replacement?
Replaces the page that has not been used for the longest time, using recent past to predict future use.
34
How can LRU be implemented?
Counter: track timestamps of accesses
Stack: move accessed pages to the top of a stack
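The stack variant maps naturally onto Python's `OrderedDict`, where insertion order plays the role of the recency stack (a sketch for counting faults, not a kernel implementation):

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults under exact LRU replacement."""
    frames = OrderedDict()                  # least recently used entry first
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # stack variant: move to the top
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10
print(lru_faults(refs, 4))  # 8: LRU never shows Belady's anomaly
```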
35
What approximations of LRU are used with limited hardware support?
Clock algorithm and aging techniques.
36
What are LFU and MFU algorithms and their issues?
LFU: replaces least frequently used pages
MFU: replaces most frequently used pages
Both are costly to implement and suboptimal in performance
37
What factors influence how many frames a process receives?
Minimum based on instruction needs and maximum based on total available frames.
38
What is fixed frame allocation?
Equal distribution among processes
Proportional allocation based on process size
39
How does global vs. local replacement differ?
Global: any frame can be replaced, even from other processes
Local: replacement occurs within the process’s own frame set
40
What is priority-based frame allocation?
Allocates more frames to higher-priority processes; may steal frames from lower-priority ones.
41
What is thrashing in a virtual memory system?
Excessive page swapping due to insufficient allocated frames, leading to high I/O and low CPU utilization.
42
What does the Working Set Model aim to do?
Track the set of pages a process actively uses over a time window to prevent thrashing.
43
What defines a process’s working set?
Pages accessed in the most recent Δ (delta) memory references.
44
How does the Working Set Model help system performance?
Minimizes page faults
Prevents thrashing
Adapts to process behaviour over time
45
Why is kernel memory managed differently from user memory?
It is often non-pageable, requires low fragmentation, and may need physically contiguous blocks (e.g., for device I/O).
46
What is the buddy system in kernel memory allocation?
Allocates memory in power-of-2 block sizes and splits larger blocks into buddies until a suitable size is reached.
47
What are the pros and cons of the buddy system?
Pro: fast coalescing of free blocks
Con: potential for internal fragmentation
48
What is slab allocation used for?
Efficient allocation of small, fixed-size kernel data structures, using preallocated caches of slabs.
49
What happens when a slab cache is full?
Allocation occurs from the next empty slab or a new slab is created.
50
What are the two main goals of a page replacement algorithm?
Decide which page to replace when no free frames are available
Minimize the number of page faults to improve performance
51
What is the role of the frame allocation algorithm in memory management?
Determines how many frames to give each process
Selects which frames to replace during a page fault
52
How does increasing the number of frames generally affect page faults?
Increasing the number of frames typically reduces the number of page faults.
53
How does FIFO (First-In-First-Out) page replacement work?
It replaces the oldest page in memory, tracked using a simple FIFO queue.
54
Why might FIFO perform poorly despite its simplicity?
It may replace frequently used pages just because they were loaded earlier, which can lead to more page faults.
55
What is Belady’s anomaly?
A phenomenon where adding more page frames can increase the number of page faults in some algorithms, especially FIFO.
56
Under what conditions does Belady’s anomaly typically occur?
With algorithms like FIFO
When access patterns cause the algorithm to evict useful pages
57
Why does FIFO sometimes exhibit Belady’s anomaly?
Because it evicts the oldest page without considering whether it's still frequently accessed.
58
What is the theoretical goal of an optimal page replacement algorithm?
To replace the page that will not be used for the longest time in the future.
59
Why is the optimal page replacement algorithm impractical in real systems?
It requires future knowledge of memory accesses, which is not available during execution.
60
What is the main idea behind Least Recently Used (LRU) page replacement?
It replaces the page that has not been used for the longest time, based on recent access history.
61
What are two possible implementations of LRU?
Counter-based: track timestamps of each page reference
Stack-based: maintain a stack of page numbers, most recently used on top
62
Why is exact LRU difficult to implement in practice?
It has high overhead in tracking access times and updating data structures on every memory reference.
63
What hardware feature supports LRU approximations?
Reference bits, which are set when a page is accessed but don't track exact order of use.
64
What are two common LRU approximation algorithms used in real systems?
Clock algorithm
Aging algorithm
65
What is the Clock page replacement algorithm?
It approximates LRU by maintaining a circular list of pages with reference bits, replacing pages with a cleared bit.
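The circular scan with reference bits can be sketched in a few lines of Python (an illustrative simulation, not a kernel implementation):

```python
def clock_faults(refs, num_frames):
    """Count faults under the Clock (second-chance) algorithm."""
    frames = [None] * num_frames   # circular buffer of resident pages
    ref_bit = [0] * num_frames
    hand = 0
    faults = 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # give the page a second chance
            continue
        faults += 1
        while ref_bit[hand]:                  # skip pages with the bit set,
            ref_bit[hand] = 0                 # clearing it as the hand passes
            hand = (hand + 1) % num_frames
        frames[hand] = page                   # replace the victim
        ref_bit[hand] = 1
        hand = (hand + 1) % num_frames
    return faults

print(clock_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9 faults
```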
66
What is the Aging algorithm in page replacement?
It uses reference bits with shifting counters to estimate how recently a page was used.
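One aging tick, assuming 8-bit counters (a minimal sketch; the counter width and data layout are illustrative):

```python
def aging_tick(counters, ref_bits, width=8):
    """One aging interval: shift each counter right and OR the page's
    reference bit into the most significant bit, then clear the bit."""
    for page in counters:
        counters[page] = (counters[page] >> 1) | (ref_bits[page] << (width - 1))
        ref_bits[page] = 0
    return counters

# page A was referenced this interval, page B was not:
# A's counter gains a high bit, so A now looks more recently used
counters = {"A": 0b00000100, "B": 0b00000100}
bits = {"A": 1, "B": 0}
aging_tick(counters, bits)
print(counters)  # {'A': 130, 'B': 2}
```

The page with the smallest counter value is the replacement candidate, since high-order bits encode the most recent intervals.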
67
What is the Least Frequently Used (LFU) algorithm?
Replaces the page that has been accessed the least number of times.
68
Why is LFU often ineffective in practice?
It may retain old pages that were used frequently in the past but are no longer needed.
69
What is the Most Frequently Used (MFU) algorithm based on?
The idea that frequently used pages might have already served their purpose and may not be needed again soon.
70
Why are LFU and MFU rarely used in modern systems?
They are expensive to implement and do not approximate the optimal algorithm well.
71
What is the problem solved by frame allocation algorithms?
Determining how to distribute a fixed number of frames among all processes in the system.
72
What is the minimum number of frames a process must be allocated?
Enough to support any single instruction, especially those with multiple memory references.
73
What is the maximum number of frames a process can be allocated?
The total number of available frames in the system.
74
What is fixed frame allocation?
Each process receives a predetermined number of frames, either equally or proportionally to its size.
75
How does proportional allocation work in frame distribution?
Processes receive a share of total frames proportional to their memory size, using the formula:
$$ a_i = \frac{s_i}{\sum_j s_j} \times m $$
where \( a_i \) is the number of frames allocated to process \( i \), \( s_i \) is the size of process \( i \), and \( m \) is the total number of frames available.
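The formula can be applied in a few lines of Python; since \( a_i \) must be an integer, this sketch rounds down and hands leftover frames to the largest remainders (one reasonable tie-breaking choice among several):

```python
def proportional_allocation(sizes, m):
    """Allocate m frames in proportion to process sizes: a_i = s_i/S * m,
    rounded down, with leftover frames going to the largest remainders."""
    total = sum(sizes)
    alloc = [s * m // total for s in sizes]
    leftover = m - sum(alloc)
    by_remainder = sorted(range(len(sizes)),
                          key=lambda i: sizes[i] * m % total, reverse=True)
    for i in by_remainder[:leftover]:
        alloc[i] += 1
    return alloc

# two processes of 10 and 127 pages sharing 62 frames
print(proportional_allocation([10, 127], 62))  # [5, 57]
```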
76
What is global page replacement?
A process can replace a page from any frame in the system, even if it belongs to another process.
77
What are the pros and cons of global replacement?
Pro: higher overall throughput
Con: unpredictable per-process performance
78
What is local page replacement?
A process can only replace pages from its own allocated frames.
79
What are the pros and cons of local replacement?
Pro: more predictable process performance
Con: may under-utilize available memory
80
What strategy do most modern OS use for frame allocation?
A hybrid of global and local replacement to balance performance and fairness.
81
What is priority allocation in memory management?
Distributes frames based on process priority rather than size or equal share.
82
In priority allocation, how are page faults handled for high-priority processes?
They may replace a frame from a lower-priority process to continue execution efficiently.
83
What risks must be managed with priority allocation?
Starvation of low-priority processes
Ensuring overall system fairness
84
What is thrashing in virtual memory systems?
Thrashing occurs when a process does not have enough pages, leading to excessive page faults and frequent swapping between memory and disk, which significantly reduces CPU utilization.
85
What behavior characterizes a process undergoing thrashing?
The process spends more time swapping pages in and out than executing actual instructions.
86
What common system response can unintentionally worsen thrashing?
Increasing the degree of multiprogramming can worsen thrashing by further reducing the number of available frames per process.
87
What is the Working Set Model in memory management?
It is a strategy that keeps track of the set of pages a process is actively using during a specific window of time to reduce page faults and prevent thrashing.
88
What is the main assumption behind the Working Set Model?
It assumes the principle of locality, where recently used pages are likely to be used again soon.
89
How is the working set size (WSS) of a process defined?
The WSS is the total number of unique pages referenced in the most recent Δ (Delta) memory references (i.e., working set window).
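This definition translates directly into Python: the working set at time t is the set of distinct pages in the last Δ references, and the WSS is its size (a toy sketch over a recorded reference string):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references
    ending at time t (inclusive); its size is the WSS at time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 4, 3, 4, 3]
print(working_set(refs, 7, 4))       # {3, 4}: only recently touched pages
print(len(working_set(refs, 7, 4)))  # WSS = 2
```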
90
What is the consequence if the total WSS of all processes exceeds available memory?
Thrashing can occur, as the system does not have enough frames to hold the combined working sets of all processes.
91
What are the benefits of the Working Set Model?
Minimizes page faults by retaining active pages
Prevents thrashing through intelligent allocation
Dynamically adapts to a process’s memory demands
92
What are the challenges of implementing the Working Set Model?
Choosing an appropriate window size Δ
Overhead of tracking and maintaining working sets for all processes
93
How is kernel memory treated differently from user memory?
Kernel memory is often not pageable and must be allocated conservatively to minimize internal fragmentation and meet constraints like contiguity for device I/O.
94
Why must some kernel memory allocations be physically contiguous?
Contiguous memory is often required for hardware interactions, such as device I/O operations.
95
What is the buddy system in kernel memory allocation?
A power-of-2 allocator that splits memory into pairs of buddies recursively until a block of the correct size is found.
96
What is the advantage of the buddy system?
It enables quick coalescing of adjacent free blocks back into larger blocks when memory is freed.
97
What is the disadvantage of the buddy system?
It can lead to internal fragmentation, as memory is rounded up to the nearest power of two.
98
What is slab allocation in kernel memory management?
A method of allocating memory for small, fixed-size objects by preallocating caches of slabs, each containing multiple instances of the object type.
99
What is a slab in slab allocation?
One or more contiguous pages of physical memory that store kernel objects of the same type.
100
How does slab allocation avoid fragmentation?
By allocating memory in fixed-sized objects from caches, it avoids repeated allocations and deallocations of varied sizes.
101
What happens when all objects in a slab cache are used?
Allocation continues from the next empty slab, or a new slab is allocated if none are available.
102
What are the key benefits of slab allocation?
Reduces fragmentation
Speeds up memory requests
Simplifies management of frequently used kernel objects
103
Why is exact LRU (Least Recently Used) page replacement not practical without hardware support?
Because it requires tracking every memory reference in real time, which incurs high computational and memory overhead.
103
What hardware feature supports approximate LRU implementation?
Reference bits, which are set when a page is accessed and periodically reset to track recent usage indirectly.
104
What are two common LRU approximation algorithms used when hardware support is limited?
Clock algorithm
Aging algorithm
105
What is the Clock algorithm?
A circular queue is used to track pages; when a page is considered for replacement, its reference bit is checked and reset. If it’s set, the page is skipped until one with a cleared bit is found.
106
Why is kernel memory treated differently from user memory?
Kernel memory often cannot be paged and must be conservatively allocated to minimize internal fragmentation. It also may require physically contiguous memory for device I/O operations.
107
How is kernel memory typically allocated?
From a free memory pool, often in variable-sized blocks, depending on the allocation request.
108
Why do some kernel operations require physically contiguous memory?
For operations like device I/O, where hardware may not support virtual memory addressing.
109
What is the buddy system in kernel memory allocation?
A power-of-two memory allocation system where memory is divided into blocks (buddies). Blocks are recursively split or coalesced to satisfy memory requests.
110
How does the buddy system allocator work?
Requests are rounded up to the next power of two
If no block of the requested size exists, a larger block is split into two buddies
Free buddies of the same size can be coalesced back together
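Two of these mechanics fit in a few lines of Python: rounding a request up to a power of two, and locating a block's buddy, which in a buddy allocator is the block whose address differs only in the bit corresponding to the block size:

```python
def buddy_block_size(request):
    """Round a request up to the next power-of-2 block size."""
    size = 1
    while size < request:
        size <<= 1
    return size

def buddy_of(addr, size):
    """Address of a block's buddy: flip the bit equal to the block size."""
    return addr ^ size

print(buddy_block_size(33))               # 64: a 33 KB request occupies a
                                          # 64 KB block (internal fragmentation)
print(buddy_of(0, 64), buddy_of(64, 64))  # 64 0: the halves are each other's buddy
```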
111
What is the main advantage of the buddy system?
Fast coalescing of free memory blocks back into larger ones.
112
What is the main disadvantage of the buddy system?
It can cause internal fragmentation because memory is rounded up to the nearest power of two.
113
What is slab allocation in kernel memory management?
A technique for efficiently allocating small, fixed-size memory blocks using preallocated caches called slabs.
114
What is a slab in slab allocation?
One or more contiguous physical pages containing multiple objects of the same data type used by the kernel.
115
How is memory allocated in slab allocation?
From a cache of preinitialized objects; free objects are marked and reused. If no free object is available, a new slab is allocated.
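This allocation path can be modeled with a toy Python class (an illustrative sketch only; real slab allocators also handle per-slab bookkeeping, constructors, and alignment):

```python
class SlabCache:
    """Toy slab cache: objects of one size come from preallocated slabs
    and return to the free list on release."""

    def __init__(self, objs_per_slab):
        self.objs_per_slab = objs_per_slab
        self.free = []     # (slab_number, slot) pairs available for reuse
        self.slabs = 0

    def alloc(self):
        if not self.free:  # every existing slab is full: grow by one slab
            self.slabs += 1
            self.free = [(self.slabs, i) for i in range(self.objs_per_slab)]
        return self.free.pop()

    def release(self, obj):
        self.free.append(obj)   # returned to the cache, not to the OS

cache = SlabCache(objs_per_slab=2)
a, b = cache.alloc(), cache.alloc()  # fills slab 1
c = cache.alloc()                    # slab 1 full, so slab 2 is created
print(cache.slabs)                   # 2
cache.release(a)
print(cache.alloc() == a)            # True: the freed slot is reused
```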
116
What are the advantages of slab allocation?
Minimizes fragmentation
Speeds up memory allocation
Eliminates the need for repeated object initialization