Memory Management Flashcards
What is the CPU Instruction Cycle?
- Fetch: Getting the instruction from memory.
- Decode: Understanding what the instruction does.
- Execute: Performing the instruction’s operation.
- Store: Writing back results to memory if needed.
What are registers?
Small storage locations inside the CPU that can be accessed within a single CPU cycle (extremely fast).
How is the main memory accessed?
Accessed over a memory bus; typically slower than registers.
What is cache memory?
Fast memory between the CPU and main memory to reduce access
times. It stores frequently used data and instructions to avoid slow
main memory access.
How is the memory hierarchy organized?
* Helps balance speed, size, and cost, typically organized as:
* Registers > Cache > Main Memory (RAM) > Secondary Storage (Disk)
L1 Cache
Fastest and smallest, located inside the CPU core.
L2 Cache
Larger but slower, still on the CPU chip.
L3 Cache
Shared among cores, significantly larger but slower than L1 and L2.
Memory Isolation
Ensures that a process cannot access memory outside its
allocated space, protecting the OS and other processes.
Security Enhancement
Prevents malicious or faulty programs from causing memory
corruption.
Dynamic Relocation
Allows the OS to move processes in memory by updating only
the base register, without altering the program code.
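A minimal Python sketch of the idea behind these three cards, assuming a simple base/limit MMU (the register values below are made up for illustration): the limit check provides memory isolation, and adding the base relocates the process without touching its code.

```python
class SimpleMMU:
    """Toy model of an MMU with one base register and one limit register."""

    def __init__(self, base, limit):
        self.base = base      # start of the process's partition in physical memory
        self.limit = limit    # size of the partition

    def translate(self, logical_addr):
        # Memory isolation: any address outside [0, limit) traps to the OS.
        if not 0 <= logical_addr < self.limit:
            raise MemoryError("addressing error: trap to operating system")
        # Dynamic relocation: physical address = base + logical address.
        return self.base + logical_addr


mmu = SimpleMMU(base=300040, limit=120900)   # illustrative values only
print(mmu.translate(50))                     # -> 300090
mmu.base = 500000                            # OS moved the process; program code is unchanged
print(mmu.translate(50))                     # -> 500050
```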
What is address binding?
Address binding transforms logical addresses (generated
by programs) into physical addresses (used by hardware).
What is a logical address?
A logical address (also known as a virtual address) is an
abstract address that a process uses to access memory.
What is a physical address?
A physical address refers to the actual location in the
physical memory (RAM).
At what three stages can address binding of instructions and data to memory addresses happen?
- Compile time
- Load time
- Execution time
Load Time Binding
- Used when the load address is not known at compile time.
- The compiler generates relocatable code, meaning addresses are relative to a base address.
- The loader calculates the absolute addresses when the program is loaded into memory.
- Flexibility: the program can be loaded at different memory locations without recompiling.
Execution Time Binding
- The most dynamic form of address binding.
- Binding occurs when instructions are executed.
- Requires special hardware support, typically through the MMU.
- Allows processes to move freely between memory locations during execution, as logical addresses are dynamically translated to physical addresses.
- Essential for modern operating systems that use paging or segmentation.
How is a logical address (virtual address) created?
- Generated by the CPU during program execution
- The address seen by the program
Physical Address
- The actual address in the memory unit
- The address used by the memory hardware to access memory
cells
Static Binding
- Address binding is completed before execution.
- Occurs at compile time or load time.
- Suitable for simple systems with predictable memory usage.
Dynamic Binding
- Binding occurs during execution.
- Allows for dynamic memory allocation, swapping, and relocation.
- Enables virtual memory systems, allowing processes to use more
memory than physically available
What is dynamic loading?
Dynamic loading is a technique where a program loads a module or
routine into memory only when it is needed.
- Modules can include libraries, functions, or data segments that are not immediately required when the program starts.
- Memory efficiency
- Faster start-up (see the sketch below)
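As a rough analogy in Python (not part of the cards), `importlib.import_module` defers loading a module until the routine that needs it actually runs; `matplotlib` is only an assumed example of a large optional dependency.

```python
import importlib

def plot_results(data):
    # The plotting library is loaded only when this routine is called,
    # not at program start-up: start-up stays fast and unused code is
    # never brought into memory.
    plt = importlib.import_module("matplotlib.pyplot")  # assumes matplotlib is installed
    plt.plot(data)
    plt.show()
```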
Dynamic Linking
- When a program is written, it typically contains references to external libraries.
- A program that has these libraries incorporated into its binary at compile time is said to be statically linked.
- Dynamic linking postpones the linking until runtime.
- Reduces the memory footprint of running programs.
- Allows multiple processes to share the same code, particularly with shared libraries (see the sketch below).
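A small sketch of dynamic linking from the user-program side, assuming a Unix-like system where the C maths library is available as a shared library; ctypes resolves and binds the symbol at runtime, and the library's code is shared by every process that loads it.

```python
import ctypes
import ctypes.util

# Locate and load the shared maths library at runtime (e.g. libm.so.6 on Linux).
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# Declare the signature of sqrt() and call code that lives in the shared
# library, not in this program's own binary.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(2.0))   # ~1.4142
```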
Standard Swapping
- A process may be swapped out of memory to a backing store, then brought back when needed
- Allows total memory used by processes to exceed physical memory size
- Often combined with priority-based scheduling
  - Low priority process more likely to be swapped out
- Greatly increases the cost of context switching
  - Reading/writing secondary storage is relatively slow
  - Time to swap is proportional to memory usage of process
Contiguous Allocation
- A process is allocated a single contiguous block of memory in main memory.
- The entire memory space required by the process must fit into one continuous segment.
Multi-Partition Allocation
- Main memory is divided into multiple fixed or variable-sized partitions, and each partition can hold a single process.
- Fixed vs. variable partitions:
  - Fixed-size partitions: equal-sized chunks (partitions) are created in advance.
  - Variable-sized partitions: partitions are created dynamically based on process size.
- Unused memory is referred to as holes.
Fragmentation
Two types of fragmentation:
* External fragmentation – memory is available, but in many small holes rather than one contiguous block
  * Can be reduced by compaction
    * Shuffle memory contents to place all free memory together in one large block
    * Compaction is possible only if relocation is dynamic, and is done at execution time
    * I/O problem: a process cannot be moved while pending I/O is using its memory buffers
* Internal fragmentation – occurs if processes are allocated slightly more than requested (e.g. so that a small hole is left)
Dynamic Storage-Allocation Problem
- Different approaches to satisfying a request of size n from a list of free holes:
- First fit:
  - Allocate the first hole that is big enough
- Best fit:
  - Allocate the smallest hole that is big enough; must search the entire list, unless ordered by size
  - Produces the smallest leftover hole
- Worst fit:
  - Allocate the largest hole; must also search the entire list
  - Produces the largest leftover hole
- First-fit and best-fit are better than worst-fit in terms of speed and storage utilisation (see the sketch below)
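A minimal sketch of the three placement strategies, representing each free hole as a (start, size) pair; the hole list and the request size are made-up values for illustration.

```python
def first_fit(holes, n):
    # Take the first hole that is big enough.
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    # Take the smallest hole that is big enough (smallest leftover hole).
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, n):
    # Take the largest hole (largest leftover hole).
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (800, 250)]   # free holes as (start, size)
print(first_fit(holes, 220))   # (200, 500)
print(best_fit(holes, 220))    # (800, 250) – smallest adequate hole
print(worst_fit(holes, 220))   # (200, 500) – largest hole
```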
Segmentation
- Memory-management scheme that supports user view of memory
- Segmentation involves recognising that a program is a collection of segments, such as:
  - Main program
  - Function
  - Method
  - Stack
  - Variables
  - Symbol table
- These do not need to be contiguous in memory (a translation sketch follows)
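Segmentation is typically implemented with a per-process segment table holding a base and a limit for each segment; the sketch below (with made-up table entries) shows how a (segment, offset) pair would be translated.

```python
# Example segment table: one (base, limit) entry per segment.
segment_table = [
    {"base": 1400, "limit": 1000},   # segment 0, e.g. main program
    {"base": 6300, "limit": 400},    # segment 1, e.g. stack
]

def translate(segment, offset):
    entry = segment_table[segment]
    # Offsets past the segment's limit trap to the OS (protection).
    if not 0 <= offset < entry["limit"]:
        raise MemoryError("segment limit exceeded: trap to operating system")
    return entry["base"] + offset

print(translate(0, 52))    # -> 1452
print(translate(1, 399))   # -> 6699
```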
Paging – Basic Method
- Physical address space of a process can be noncontiguous; the process is allocated physical memory whenever the latter is available
- One way of dealing with external fragmentation
- Physical memory is divided into fixed-size blocks called frames
- Logical memory is divided into blocks of the same size, called pages
- A program of N pages needs to find N free frames
- Page table translates logical to physical addresses
- Still have internal fragmentation
Address Translation Scheme (Paging)
- Address generated by the CPU is divided into:
  - Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory
  - Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit (see the sketch below)
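A minimal sketch of splitting a logical address into page number p and offset d, assuming a 4 KiB page size and a small made-up page table mapping pages to frames.

```python
PAGE_SIZE = 4096                     # assumed page size (4 KiB)
page_table = {0: 5, 1: 2, 2: 7}      # made-up mapping: page number -> frame number

def translate(logical_addr):
    p = logical_addr // PAGE_SIZE    # page number: index into the page table
    d = logical_addr % PAGE_SIZE     # page offset: copied through unchanged
    frame = page_table[p]            # a missing entry would correspond to a page fault
    return frame * PAGE_SIZE + d

print(hex(translate(0x1234)))        # page 1, offset 0x234 -> frame 2 -> 0x2234
```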