Operating Systems: Memory Management Flashcards

1
Q

What is the primary purpose of memory management in operating systems?

A

Memory management ensures that programs are loaded from disk into memory as processes, manages the limited memory resource to avoid bottlenecks, and isolates processes so they do not interfere with one another.

2
Q

What are the four phases of the CPU instruction cycle?

A

The CPU instruction cycle comprises:

  1. Fetch – retrieving the instruction from memory,
  2. Decode – interpreting what the instruction requires,
  3. Execute – performing the operation, and
  4. Store – writing back any results to memory.
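The four phases can be sketched as a toy interpreter loop. This is a minimal sketch: the two-field instruction format and the tiny opcode set are invented purely to illustrate the fetch–decode–execute–store sequence, not any real ISA.

```python
# Toy fetch-decode-execute-store loop over a flat list "memory".
# Instructions are (opcode, operand_addr) tuples; the mini-ISA here
# is invented for illustration only.
def run(program, memory):
    acc = 0                          # accumulator register
    pc = 0                           # program counter
    while pc < len(program):
        instr = program[pc]          # 1. Fetch the instruction
        op, addr = instr             # 2. Decode opcode and operand
        if op == "LOAD":             # 3. Execute the operation
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":          # 4. Store: write result back
            memory[addr] = acc
        pc += 1
    return memory

mem = [5, 7, 0]
run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)
# mem[2] now holds 5 + 7 = 12
```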
3
Q

How does the CPU interact with memory during program execution?

A

The CPU uses the Program Counter (PC) to fetch instructions from memory, decodes them, and may fetch additional data from memory. After executing the instruction, the results may be written back to memory. Main memory and registers are the only storage directly accessible by the CPU, and delays in memory access (due to slower main memory cycles) can cause CPU stalls.

4
Q

Compare registers, main memory, and cache in terms of access speeds and roles in the memory hierarchy.

A

Registers are the fastest, accessible within a single CPU cycle, main memory is slower and accessed over a memory bus, and cache memory is an intermediary fast memory that stores frequently used data/instructions to reduce access time. The typical hierarchy is: Registers > Cache > Main Memory > Secondary Storage.

5
Q

What distinguishes L1, L2, and L3 caches in modern CPUs?

A

L1 cache is the fastest and smallest, located inside the CPU core; L2 cache is larger but slower, still on the CPU chip; and L3 cache is shared among cores, offering a larger capacity at a slower speed than L1 and L2.

6
Q

What role do base and limit registers play in memory management?

A

Base and limit registers define a process’s logical address space. The CPU checks every memory access in user mode to ensure the address is within the range specified by these registers, thereby enforcing memory isolation.

7
Q

What benefits do base and limit registers provide?

A

They ensure memory isolation by preventing a process from accessing memory outside its allocated region, enhance security by avoiding memory corruption by faulty or malicious programs, and allow dynamic relocation of processes by simply updating the base register without modifying the program code.

8
Q

How does hardware use base and limit registers for address protection?

A

Every memory access is compared against the base and limit values; if an access falls outside the allowed range, the hardware triggers a trap (an addressing error) to the operating system, thereby protecting other processes and the OS itself.
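The range check can be sketched in a few lines of Python; the specific base and limit values below are arbitrary example numbers, and the exception stands in for the hardware trap.

```python
# Sketch of the hardware check: every user-mode access is compared
# against base and limit; an out-of-range access traps to the OS.
class AddressingError(Exception):
    """Stands in for the trap raised to the operating system."""

def check_access(address, base, limit):
    # The legal range is [base, base + limit); anything else traps.
    if base <= address < base + limit:
        return address
    raise AddressingError(
        f"access to {address} outside [{base}, {base + limit})")

check_access(300040, base=300040, limit=120900)    # first legal address: OK
# check_access(420940, base=300040, limit=120900)  # would trap
```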

9
Q

What is address binding and what are the differences between logical and physical addresses?

A

Address binding is the process of translating logical addresses (virtual addresses generated by a program) into physical addresses (actual locations in RAM). The Memory Management Unit (MMU) performs this translation at execution time, ensuring programs operate using an abstract address space.

10
Q

At which stages can binding of instructions and data to memory occur, and what does each stage imply?

A

Binding can occur at:

Compile time: The compiler translates source code into machine code with absolute addresses (suitable if the load address is fixed).

Load time: Relocatable code is generated and the loader calculates absolute addresses when the program is loaded, allowing flexibility in placement.

Execution time: Dynamic translation of logical to physical addresses occurs via the MMU, enabling processes to move in memory during execution.

11
Q

What are the characteristics and limitations of compile time binding?

A

Compile time binding generates machine code with hardcoded absolute addresses, which works well if the load location is fixed (as with some legacy systems). However, if the load address changes, the program must be recompiled, limiting flexibility.

12
Q

How does load time binding improve flexibility over compile time binding?

A

In load time binding, the compiler produces relocatable code with addresses relative to a base. The loader then calculates the absolute addresses during program load, allowing the program to be loaded at different memory locations without recompilation.

13
Q

What is execution time binding and why is it essential for modern operating systems?

A

Execution time binding performs address translation dynamically during program execution, using hardware (the MMU). This method allows processes to be relocated in memory while running, which is crucial for supporting advanced features such as paging and segmentation in modern operating systems.

14
Q

Differentiate between logical and physical address spaces.

A

A logical address (or virtual address) is the address generated by the CPU during program execution and seen by the program, whereas a physical address is the actual location in main memory accessed by the hardware. Mapping techniques such as paging or segmentation are used to translate between the two.

15
Q

Contrast static address binding with dynamic address binding.

A

Static binding is completed before execution (at compile or load time) and works well for simple, predictable memory usage. Dynamic binding occurs during execution, allowing for dynamic memory allocation, process relocation, and support for virtual memory, making it more adaptable for modern, multitasking environments.

16
Q

What is the role of the Memory Management Unit (MMU) in address translation?

A

The MMU is specialized hardware that maps virtual (logical) addresses to physical addresses. The simplest scheme uses a relocation (offset) register whose value is added to every logical address, thereby hiding the physical memory details from the user program.
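A minimal sketch of this simplest scheme, with example values chosen for illustration:

```python
# Simplest MMU scheme: the value in a relocation (offset) register is
# added to every logical address to form the physical address.
def mmu_translate(logical_addr, relocation_register):
    return logical_addr + relocation_register

# Logical address 346 with a relocation register of 14000 maps to 14346.
assert mmu_translate(346, 14000) == 14346
```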

17
Q

What are dynamic loading and dynamic linking, and how do they enhance memory management?

A

Dynamic loading is the technique where modules or routines are loaded into memory only when needed, reducing memory usage and speeding up program start-up. Dynamic linking defers the linking of external libraries until runtime, which minimizes duplication in memory and allows multiple processes to share a single copy of common libraries.

18
Q

What is dynamic linking, and how does it differ from static linking?

A

Dynamic linking postpones the incorporation of external libraries until runtime. In contrast, static linking embeds these libraries into the binary at compile time, which can lead to duplication when many programs use the same libraries.

19
Q

What are the key benefits of dynamic linking?

A

Dynamic linking reduces the overall memory footprint by allowing multiple processes to share a single copy of common libraries, avoiding duplication and improving system efficiency.

20
Q

What role does a stub play in dynamic linking?

A

At compile time, a stub is inserted into the program to mark the need for a specific external library. At runtime, this stub is replaced by a link to the appropriate library already resident in memory.

21
Q

What responsibilities does the operating system have in the dynamic linking process?

A

The OS must ensure that the correct version of the required library is loaded into memory at runtime and that the stub in the program is updated accordingly, especially when several versions of a system library exist.

22
Q

What is swapping in the context of memory management?

A

Swapping is the process of temporarily moving an entire process from main memory to a backing store (disk) and then back into memory, thereby allowing the system to run more processes than can physically fit.

23
Q

What is the purpose of standard swapping in an operating system?

A

Standard swapping allows the total memory used by processes to exceed physical memory limits by temporarily moving processes to disk, often based on process priority.

24
Q

What are the performance drawbacks of standard swapping?

A

Swapping increases context-switch overhead because reading from and writing to secondary storage is slow. Additionally, the swap time grows in proportion to the process’s memory usage.

25
Q

How can the swapping process be visually represented?

A

A schematic view typically shows the two phases of swapping, swapping a process out from memory to disk and swapping it back in, highlighting how process priority and memory size affect the overall swapping operation.

26
Q

How is the total context-switch time calculated in standard swapping?

A

For example, a 100 MB process at a transfer rate of 50 MB/sec takes 2,000 ms to swap out and 2,000 ms to swap in, totalling 4,000 ms (4 seconds) for a complete swap cycle.
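The arithmetic above can be reproduced in a couple of lines:

```python
# Total time for a full swap cycle (swap out + swap back in),
# given the process size and the disk transfer rate.
def swap_cycle_ms(process_mb, rate_mb_per_s):
    one_way_ms = process_mb / rate_mb_per_s * 1000   # one direction
    return 2 * one_way_ms                            # out + back in

# 100 MB at 50 MB/sec: 2,000 ms out + 2,000 ms in = 4,000 ms.
assert swap_cycle_ms(100, 50) == 4000.0
```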
27
Q

What strategy can reduce the swapping time of a process?

A

Reducing the amount of memory actually swapped, by using system calls such as request_memory() and release_memory(), cuts down swap times since only the memory actively in use is moved.
28
Q

How does the address binding method affect swapping?

A

Whether a process must be returned to the same physical address upon swap-in depends on the address binding method; dynamic binding allows relocation without requiring the original physical address.

29
Q

What constraints does pending I/O impose on standard swapping?

A

A process with pending I/O cannot be swapped out because the I/O might target the wrong process. Alternatively, I/O can be double-buffered via kernel space, which adds extra overhead.

30
Q

Why is standard swapping largely avoided in modern operating systems?

A

Due to the high overhead of moving entire processes and the potential I/O complications, modern systems employ modified strategies, swapping only when free memory is critically low.

31
Q

Why is traditional swapping not used on mobile devices?

A

Mobile devices use flash memory, which has slower throughput, smaller capacity, and limited write endurance; frequent swapping would reduce performance and increase the risk of memory failure.

32
Q

How do iOS and Android handle memory pressure instead of swapping?

A

iOS asks applications to voluntarily relinquish memory (with failure to do so leading to termination), while Android terminates applications but saves their state for a rapid restart.

33
Q

What is contiguous memory allocation?

A

It is a memory management technique where each process is assigned a single continuous block of memory in main memory.

34
Q

How is main memory typically partitioned in many systems?

A

Main memory is often divided into two parts: low memory for the resident operating system (including the interrupt vector) and high memory for user processes.

35
Q

How do relocation registers ensure memory protection in contiguous allocation?

A

Each process is given a base address and a limit, allowing it to access only the memory within its continuous block and preventing it from corrupting other processes or the operating system.

36
Q

What function do relocation and limit registers serve in memory management?

A

They enforce memory isolation by translating a process’s logical addresses to physical addresses and triggering a trap if a process attempts to access memory outside its allocated range.

37
Q

What is a key requirement of contiguous allocation for a process?

A

The entire memory space required by the process must fit into one single, continuous block of main memory.

38
Q

How does multi-partition allocation differ from contiguous allocation?

A

Instead of assigning one large continuous block per process, multi-partition allocation divides memory into multiple partitions (either fixed or variable sized), each capable of holding one process.

39
Q

What distinguishes fixed-size partitions from variable-sized partitions?

A

Fixed-size partitions are predetermined equal chunks of memory (which may lead to internal fragmentation), whereas variable-sized partitions are created dynamically to better match process size but require more complex management.

40
Q

How does the operating system manage free memory in multi-partition allocation?

A

The operating system keeps track of both allocated partitions and free partitions (holes), merging adjacent free spaces when a process terminates to optimise memory usage.

41
Q

What is external fragmentation, and how can it be reduced?

A

External fragmentation occurs when free memory is split into many small holes, making it difficult to allocate large contiguous blocks. It can be mitigated through compaction, which rearranges memory to consolidate free space, although this requires dynamic relocation and may incur I/O overhead.

42
Q

What is internal fragmentation?

A

Internal fragmentation occurs when a process is allocated more memory than it actually requires, leaving unused space within its allocated block.

43
Q

What is the first-fit strategy in dynamic storage allocation?

A

First-fit allocates the first available hole that is large enough for the process, making it fast and relatively efficient in utilising memory.

44
Q

How do best-fit and worst-fit strategies differ in dynamic storage allocation?

A

Best-fit searches for the smallest hole that can accommodate the process, minimising leftover space, while worst-fit chooses the largest available hole, often leaving a larger fragment. Generally, first-fit and best-fit offer better performance and storage utilisation than worst-fit.
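The three strategies can be sketched over a free list of (start, size) holes; the hole layout below is invented for illustration.

```python
# Hole-selection sketches; "holes" is a list of (start, size) pairs.
def first_fit(holes, size):
    for start, hole_size in holes:       # first adequate hole wins
        if hole_size >= size:
            return start
    return None

def best_fit(holes, size):
    fits = [(hole_size, start) for start, hole_size in holes
            if hole_size >= size]
    return min(fits)[1] if fits else None   # smallest adequate hole

def worst_fit(holes, size):
    fits = [(hole_size, start) for start, hole_size in holes
            if hole_size >= size]
    return max(fits)[1] if fits else None   # largest hole

holes = [(0, 100), (200, 500), (800, 300)]
# For a 250-byte request: first-fit and worst-fit both pick the 500-byte
# hole at 200, while best-fit picks the tighter 300-byte hole at 800.
```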
45
Q

What is segmentation in operating systems memory management?

A

Segmentation divides a programme into logical segments (such as the main programme, functions/methods, the stack, variables, and the symbol table) that reflect the programme’s structure. These segments can be placed in non-contiguous memory locations.

46
Q

What are some common segments identified in a programme during segmentation?

A

Common segments include the main programme code, functions/methods, the stack, variables, and the symbol table.

47
Q

How does the logical view of segmentation organise memory?

A

It divides a process’s logical address space into separate segments, each with its own base address and limit as specified in a segment table, allowing flexible non-contiguous placement in physical memory.

48
Q

What key information does a segment table provide in a segmented system?

A

A segment table maps each logical segment to its physical address by listing the base address and the limit (size) for each segment, thereby ensuring proper access control and memory protection.

49
Q

What is the role of segmentation hardware in memory management?

A

It uses a segment table containing base and limit registers to translate logical segment addresses into physical addresses and to generate an addressing-error trap if an access exceeds the allowed range.
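A minimal sketch of that translation, with an invented segment table (the bases and limits are example values only):

```python
# Segmentation hardware sketch: a logical address is (segment, offset);
# the segment table stores (base, limit) per segment, and an offset at
# or past the limit traps to the OS as an addressing error.
class AddressingError(Exception):
    pass

def translate(segment_table, seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:
        raise AddressingError(f"offset {offset} exceeds limit {limit}")
    return base + offset

# Example (invented) table: segment 0 at base 1400 with limit 1000, etc.
table = {0: (1400, 1000), 1: (6300, 400)}
assert translate(table, 0, 53) == 1453
```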
50
Q

What components comprise segmentation hardware?

A

It consists of a segment table, base registers, limit registers, and a trap mechanism that signals addressing errors when a process accesses memory outside its allocated segment.

51
Q

What is paging in the context of memory management?

A

Paging divides physical memory into fixed-size blocks called frames and logical memory into blocks of the same size called pages, allowing processes to be allocated non-contiguous memory and thereby eliminating external fragmentation.

52
Q

How does the basic paging method address external fragmentation?

A

By allocating memory in fixed-size frames and dividing logical memory into equally sized pages, it allows processes to occupy any available frames, avoiding the need for contiguous memory allocation.

53
Q

What is the purpose of a page table in paging?

A

The page table maps each logical page to a specific physical frame, enabling the system to translate logical addresses into physical addresses during memory access.

54
Q

What is a downside of paging despite eliminating external fragmentation?

A

Paging can lead to internal fragmentation, where wasted space exists within the allocated frames because the process does not completely fill the last page.

55
Q

What information does a page table entry typically hold?

A

It contains the frame number corresponding to a logical page and may include additional control bits such as valid/invalid, protection, and dirty bits.

56
Q

How does the page table facilitate address translation?

A

It provides a mapping from each logical page number to a physical frame number, which, when combined with the page offset, forms the complete physical address.

57
Q

How is a logical address divided in a paging system?

A

A logical address is split into a page number (p), which indexes the page table, and a page offset (d), which specifies the exact location within the frame.

58
Q

In a system with a logical address space of 2^m and a page size of 2^n, what do m and n represent?

A

Here, m is the total number of bits in the logical address and n is the number of bits used for the offset within each page.
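The split into page number and offset is just bit manipulation; a sketch (the 16-bit address and 4 KB page size below are example parameters):

```python
# Split an m-bit logical address into page number p (high m - n bits)
# and offset d (low n bits).
def split_address(logical, m, n):
    assert logical < 2 ** m, "address must fit in m bits"
    page = logical >> n               # top m - n bits
    offset = logical & (2 ** n - 1)   # low n bits
    return page, offset

# Example: 16-bit address space with 2^12-byte (4 KB) pages.
assert split_address(0x3ABC, m=16, n=12) == (0x3, 0xABC)
```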
59
Q

What is the function of paging hardware in a computer system?

A

Paging hardware uses the page table to translate logical addresses into physical addresses, handling the mapping process directly and efficiently.
60
Q

How do the CPU and paging hardware interact during address translation?

A

The CPU issues a logical address, which the paging hardware splits into a page number and an offset; the page number indexes the page table to obtain the frame number, and the frame number combined with the offset gives the physical address used to access memory.
61
Q

What does the paging example with a 32-byte memory and 4-byte pages illustrate?

A

It demonstrates the concrete mapping of logical pages to physical frames using a page table in a small memory system, where 32 bytes of memory are divided into 8 frames.

62
Q

How does the paging example with a 32-byte memory and 4-byte pages show the relationship between page size and memory division?

A

With 4-byte pages, the system divides the entire 32-byte memory into 8 equal frames, each capable of holding one page.

63
Q

How is internal fragmentation calculated in a paging system?

A

It is determined by subtracting the bytes actually used in the last page from the page size; for example, if the page size is 2,048 bytes and only 1,086 bytes are used, the fragmentation is 2,048 − 1,086 = 962 bytes.
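The calculation generalises to any process size:

```python
import math

# Internal fragmentation: the allocated frames minus the bytes the
# process actually uses, i.e. the wasted tail of the last page.
def internal_fragmentation(process_bytes, page_size):
    pages = math.ceil(process_bytes / page_size)
    return pages * page_size - process_bytes

# A final page holding only 1,086 of its 2,048 bytes wastes 962 bytes.
assert internal_fragmentation(1086, 2048) == 962
```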
64
Q

What is the average internal fragmentation in a paging system?

A

On average, internal fragmentation is half a frame size, since the unused portion of the last page tends to be about 50% of the page size.

65
Q

What trade-off is involved in selecting smaller page sizes?

A

While smaller pages reduce internal fragmentation, they increase the size of the page table and the overhead associated with managing a larger number of pages.

66
Q

What factors determine the Effective Access Time (EAT) in a paging system?

A

EAT depends on the main memory access time (MAT), the time required for a page table lookup, and the Translation Lookaside Buffer (TLB) access time, which varies with TLB hit or miss.

67
Q

Why is a Translation Lookaside Buffer (TLB) important in paging systems?

A

A TLB caches frequently used page table entries, significantly speeding up the address translation process by reducing the need for repeated page table lookups.

68
Q

What is the formula for calculating Effective Access Time (EAT) when using a TLB?

A

EAT = h × (TLB access time + MAT) + (1 − h) × (TLB access time + 2 × MAT), where h is the TLB hit ratio.

69
Q

Given a Memory Access Time (MAT) of 100 ns, TLB access time of 20 ns, and a TLB hit ratio of 0.8, what is the EAT?

A

EAT = 0.8 × (20 + 100) + 0.2 × (20 + 200) = 0.8 × 120 + 0.2 × 220 = 96 + 44 = 140 ns.
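The same formula as a small function, reproducing the worked example:

```python
# EAT per the formula above: a TLB hit costs one memory access after
# the TLB lookup; a miss costs two (page-table fetch + data access).
def eat(mat_ns, tlb_ns, hit_ratio):
    hit_cost = tlb_ns + mat_ns
    miss_cost = tlb_ns + 2 * mat_ns
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

# With MAT = 100 ns, TLB = 20 ns, h = 0.8 this comes to ~140 ns.
```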
70
Q

How does a TLB miss affect memory access time?

A

A TLB miss requires an extra main memory access to fetch the page table entry, effectively doubling the main memory access time in the calculation for that instance.

71
Q

What advantage does sharing read-only code via shared pages provide?

A

It allows multiple processes to use a single copy of reentrant, read-only code, reducing overall memory usage and supporting inter-process communication.

72
Q

How are private pages different from shared pages?

A

Private pages are dedicated to a single process and can be located arbitrarily in the logical address space, whereas shared pages can be mapped into the address spaces of multiple processes.

73
Q

What does the shared page example demonstrate in a paging system?

A

It shows how different processes can map the same physical memory page into their logical address spaces through their individual page tables, thereby sharing code or data.

74
Q

How is memory sharing achieved among processes in the shared page example?

A

Each process’s page table contains an entry pointing to the same physical frame for the shared content, allowing the code or data to be accessed by all those processes without duplication.