Operating Systems: Memory Management Flashcards
What is the primary purpose of memory management in operating systems?
Memory management ensures that programs are loaded from disk into memory as processes, manages the limited memory resource to avoid bottlenecks, and isolates processes so they do not interfere with one another.
What are the four phases of the CPU instruction cycle?
The CPU instruction cycle comprises:
- Fetch – retrieving the instruction from memory,
- Decode – interpreting what the instruction requires,
- Execute – performing the operation, and
- Store – writing back any results to memory.
How does the CPU interact with memory during program execution?
The CPU uses the Program Counter (PC) to fetch instructions from memory, decodes them, and may fetch additional data from memory. After executing the instruction, the results may be written back to memory. Main memory and registers are the only storage directly accessible by the CPU, and delays in memory access (due to slower main memory cycles) can cause CPU stalls.
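As a rough illustration of this loop, here is a minimal C sketch of a made-up two-instruction machine; the opcodes, memory layout, and instruction format are invented for illustration and do not correspond to any real ISA:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical machine: each instruction is an opcode word followed
 * by an operand-address word. Opcodes are invented for illustration. */
enum { OP_HALT = 0, OP_ADD = 1, OP_STORE = 2 };

int main(void) {
    uint32_t mem[16] = {
        OP_ADD,   10,     /* acc += mem[10]            */
        OP_ADD,   11,     /* acc += mem[11]            */
        OP_STORE, 12,     /* mem[12] = acc             */
        OP_HALT,  0,
        0, 0,             /* padding: mem[8..9]        */
        5, 7, 0           /* data: mem[10]=5, mem[11]=7, mem[12]=result */
    };
    uint32_t pc = 0, acc = 0;

    for (;;) {
        uint32_t op   = mem[pc];       /* Fetch: read instruction at PC  */
        uint32_t addr = mem[pc + 1];
        pc += 2;
        switch (op) {                  /* Decode: pick the operation     */
        case OP_ADD:   acc += mem[addr]; break;   /* Execute            */
        case OP_STORE: mem[addr] = acc;  break;   /* Store (write-back) */
        case OP_HALT:  printf("result = %u\n", (unsigned)mem[12]); return 0;
        }
    }
}
```

A stall in a real CPU corresponds to the `mem[...]` reads here taking many cycles while the processor waits.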
Compare registers, main memory, and cache in terms of access speeds and roles in the memory hierarchy.
Registers are the fastest, accessible within a single CPU cycle; main memory is slower and is accessed over a memory bus; and cache is an intermediary fast memory that holds frequently used data and instructions to reduce access time. The typical hierarchy, fastest to slowest, is: Registers > Cache > Main Memory > Secondary Storage.
What distinguishes L1, L2, and L3 caches in modern CPUs?
L1 cache is the fastest and smallest, located inside the CPU core; L2 cache is larger but slower, still on the CPU chip; and L3 cache is shared among cores, offering a larger capacity at a slower speed than L1 and L2.
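The practical effect of the cache hierarchy can be seen with a sketch like the one below, which walks the same array twice: once sequentially (cache-friendly) and once with a large stride that defeats spatial locality. The array size and stride are arbitrary choices, and absolute timings depend entirely on the machine:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* 16M ints (~64 MB), larger than a typical L3 cache */

int main(void) {
    int *a = calloc(N, sizeof *a);
    if (!a) return 1;
    long sum = 0;
    clock_t t0, t1;

    t0 = clock();
    for (int i = 0; i < N; i++)
        sum += a[i];                       /* sequential: cache-friendly  */
    t1 = clock();
    printf("sequential: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int s = 0; s < 4096; s++)         /* 16 KB stride: poor locality */
        for (int i = s; i < N; i += 4096)
            sum += a[i];
    t1 = clock();
    printf("strided:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    printf("checksum: %ld\n", sum);        /* keeps the loops live        */
    free(a);
    return 0;
}
```

Both loops perform the same number of additions; any timing gap comes from the memory system, not the arithmetic.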
What role do base and limit registers play in memory management?
Base and limit registers define a process’s logical address space. The CPU checks every memory access in user mode to ensure the address is within the range specified by these registers, thereby enforcing memory isolation.
What benefits do base and limit registers provide?
They enforce memory isolation by preventing a process from accessing memory outside its allocated region, improve security by stopping faulty or malicious programs from corrupting memory they do not own, and allow dynamic relocation of a process simply by updating the base register, without modifying the program code.
How does hardware use base and limit registers for address protection?
Every memory access is compared against the base and limit values; if an access falls outside the allowed range, the hardware triggers a trap (an addressing error) to the operating system, thereby protecting other processes and the OS itself.
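A toy model of that hardware check, using illustrative register values (the numbers follow the common textbook example of base 300040 and limit 120900):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Toy model of the hardware check: every user-mode address is compared
 * against the base and limit registers, and out-of-range accesses trap
 * to the operating system. The register values below are illustrative. */
typedef struct {
    uint32_t base;    /* start of the process's region */
    uint32_t limit;   /* size of the region, in bytes  */
} mmu_regs;

static bool check_access(const mmu_regs *r, uint32_t addr) {
    if (addr >= r->base && addr < r->base + r->limit)
        return true;                          /* access proceeds     */
    fprintf(stderr, "trap: addressing error at %u\n", (unsigned)addr);
    return false;                             /* OS handles the trap */
}

int main(void) {
    mmu_regs r = { .base = 300040, .limit = 120900 };
    check_access(&r, 300040);    /* ok: first legal address                  */
    check_access(&r, 420940);    /* trap: base + limit is the first illegal address */
    return 0;
}
```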
What is address binding and what are the differences between logical and physical addresses?
Address binding is the process of mapping logical addresses (the virtual addresses a program generates) to physical addresses (actual locations in RAM). When binding is deferred to execution time, the Memory Management Unit (MMU) performs this translation on the fly, letting programs operate in an abstract address space.
At which stages can binding of instructions and data to memory occur, and what does each stage imply?
Binding can occur at:
- Compile time – the compiler translates source code into machine code with absolute addresses (suitable if the load address is fixed).
- Load time – relocatable code is generated and the loader calculates absolute addresses when the program is loaded, allowing flexibility in placement.
- Execution time – logical addresses are translated to physical addresses dynamically by the MMU, enabling processes to move in memory during execution.
What are the characteristics and limitations of compile time binding?
Compile time binding generates machine code with hardcoded absolute addresses, which works well if the load location is fixed (as with some legacy systems). However, if the load address changes, the program must be recompiled, limiting flexibility.
How does load time binding improve flexibility over compile time binding?
In load time binding, the compiler produces relocatable code with addresses relative to a base. The loader then calculates the absolute addresses during program load, allowing the program to be loaded at different memory locations without recompilation.
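A sketch of what the loader does with relocatable code, with the image format and relocation table invented for illustration: addresses in the image are stored relative to 0, and the loader adds the actual load base to each address-holding word exactly once, before execution begins.

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch of load-time binding: the program image stores addresses
 * relative to 0, plus a relocation table listing which words hold
 * addresses. The loader adds the real load base to each such word
 * exactly once, before execution begins. All structures are invented. */
int main(void) {
    uint32_t image[6]      = { 0, 4, 0, 0, 99, 0 };  /* image[1] refers to image[4]   */
    size_t   reloc_table[] = { 1 };                  /* words that hold addresses     */
    uint32_t load_base     = 5000;                   /* chosen when the program loads */

    for (size_t i = 0; i < sizeof reloc_table / sizeof reloc_table[0]; i++)
        image[reloc_table[i]] += load_base;          /* patch once, at load time */

    printf("data word is now referenced at absolute address %u\n",
           (unsigned)image[1]);                      /* prints 5004 */
    return 0;
}
```

Contrast this one-time patching with execution time binding, where the translation happens on every memory access.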
What is execution time binding and why is it essential for modern operating systems?
Execution time binding performs address translation dynamically during program execution, using hardware (the MMU). This method allows processes to be relocated in memory while running, which is crucial for supporting advanced features such as paging and segmentation in modern operating systems.
Differentiate between logical and physical address spaces.
A logical address (or virtual address) is the address generated by the CPU during program execution and seen by the program, whereas a physical address is the actual location in main memory accessed by the hardware. Mapping techniques such as paging or segmentation are used to translate between the two.
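A small C program makes the distinction concrete: the pointer values it prints are logical (virtual) addresses, and on systems with address space layout randomization they typically change between runs even though the program does not.

```c
#include <stdio.h>

/* Every address this program can print is a logical (virtual) address.
 * With address space layout randomization enabled, the values usually
 * differ from run to run even though the program is unchanged. */
int global = 0;

int main(void) {
    int local = 0;
    printf("code:   %p\n", (void *)main);     /* cast is a common POSIX idiom */
    printf("global: %p\n", (void *)&global);
    printf("stack:  %p\n", (void *)&local);
    return 0;
}
```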
Contrast static address binding with dynamic address binding.
Static binding is completed before execution (at compile or load time) and works well for simple, predictable memory usage. Dynamic binding occurs during execution, allowing for dynamic memory allocation, process relocation, and support for virtual memory, making it more adaptable for modern, multitasking environments.
What is the role of the Memory Management Unit (MMU) in address translation?
The MMU is specialized hardware that maps virtual (logical) addresses to physical addresses. The simplest scheme uses a relocation (offset) register whose value is added to every logical address a process generates, hiding the physical memory details from the user program.
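A sketch of that simplest scheme, using an illustrative relocation value of 14000 (so logical address 346 maps to physical address 14346):

```c
#include <stdio.h>
#include <stdint.h>

/* Simplest MMU scheme: a single relocation register added to every
 * logical address the CPU generates. The value 14000 is illustrative. */
static const uint32_t relocation_register = 14000;

static uint32_t translate(uint32_t logical) {
    return logical + relocation_register;   /* performed on every access */
}

int main(void) {
    printf("logical 346 -> physical %u\n", (unsigned)translate(346));
    return 0;
}
```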
What are dynamic loading and dynamic linking, and how do they enhance memory management?
Dynamic loading is the technique where modules or routines are loaded into memory only when needed, reducing memory usage and speeding up program start-up. Dynamic linking defers the linking of external libraries until runtime, which minimizes duplication in memory and allows multiple processes to share a single copy of common libraries.
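On POSIX systems, dynamic loading is exposed through the `dlopen`/`dlsym` API; a minimal example follows (the library name `libm.so.6` is what many Linux systems use and may differ elsewhere):

```c
#include <stdio.h>
#include <dlfcn.h>    /* POSIX dynamic-loading interface */

int main(void) {
    /* The math library is mapped into the address space only now,
     * not at program start. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Resolve the symbol at runtime; casting the void * from dlsym()
     * to a function-pointer type is the usual POSIX idiom. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine) printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```

On older glibc systems this is linked with something like `cc demo.c -ldl`.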
What is dynamic linking, and how does it differ from static linking?
Dynamic linking postpones the incorporation of external libraries until runtime. In contrast, static linking embeds these libraries into the binary at compile time, which can lead to duplication when many programs use the same libraries.
What are the key benefits of dynamic linking?
Dynamic linking reduces the overall memory footprint by allowing multiple processes to share a single copy of common libraries, avoiding duplication and improving system efficiency.
What role does a stub play in dynamic linking?
At compile time, a stub is inserted into the program to mark the need for a specific external library. At runtime, this stub is replaced by a link to the appropriate library already resident in memory.
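The stub mechanism can be modeled with a self-replacing function pointer; real dynamic linkers do this through the PLT/GOT, and the names below are invented for illustration:

```c
#include <stdio.h>

/* Toy model of a stub: calls initially go through a resolver that looks
 * up the real routine, patches the pointer, and forwards the call. Real
 * dynamic linkers do this via the PLT/GOT; the names here are invented. */
static double real_routine(double x) { return x * 2.0; }

static double resolver(double x);                  /* forward declaration */
static double (*routine_ptr)(double) = resolver;   /* the unresolved stub */

static double resolver(double x) {
    printf("stub: resolving symbol on first call\n");
    routine_ptr = real_routine;    /* replace the stub with the real target */
    return real_routine(x);        /* forward this first call               */
}

int main(void) {
    printf("%f\n", routine_ptr(1.0));   /* goes through the resolver       */
    printf("%f\n", routine_ptr(2.0));   /* now calls real_routine directly */
    return 0;
}
```

The first call pays the resolution cost; every later call goes straight to the real routine.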
What responsibilities does the operating system have in the dynamic linking process?
The OS must ensure that the correct version of the required library is loaded into memory at runtime and that the stub in the program is updated accordingly, especially when several versions of a system library exist.
What is swapping in the context of memory management?
Swapping is the process of temporarily moving an entire process from main memory to a backing store (disk) and then back into memory, thereby allowing the system to run more processes than can physically fit.
What is the purpose of standard swapping in an operating system?
Standard swapping allows the total memory used by processes to exceed physical memory limits by temporarily moving processes to disk, often based on process priority.
What are the performance drawbacks of standard swapping?
Swapping increases context-switch overhead because reading from and writing to secondary storage is slow. Additionally, the swap time grows in proportion to the process’s memory usage.
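For a rough sense of scale (figures illustrative, in the style of the classic textbook example): swapping out a process with 100 MB resident over a backing store that sustains 50 MB/s takes 100 / 50 = 2 seconds, and swapping the replacement process in takes another 2 seconds, so the context switch carries roughly 4 seconds of transfer time before either process can run.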