Week 4 Flashcards
Physical Memory Layout (No Memory Abstraction)
Early Computers (pre-1980): Programs directly accessed physical memory without any abstraction.
Drawbacks:
Only one program could run at a time.
Memory conflicts and crashes could occur if two programs
attempted to access shared memory.
Bugs in user programs could overwrite OS memory.
Limited parallelism due to single-process execution.
Running Multiple Programs without Memory Abstraction
Swapping:
Save the entire memory contents to nonvolatile storage before
loading the next program.
IBM 360: Introduced memory protection by dividing memory into
blocks, each with a protection key to restrict access.
Memory Abstraction Techniques
Address Space (Base and Limit Registers):
Each program references addresses local to it, using a Base Register
(the program's start address in physical memory) and a Limit
Register (program length).
The CPU automatically adds the base register to any address and
checks against the limit to prevent out-of-bounds access.
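The add-and-check performed by the CPU can be sketched as follows (a minimal illustration; the function name and the example values are hypothetical):

```python
def translate(local_addr, base, limit):
    """Translate a program-local address to a physical address.

    Models the CPU's base/limit check: the base register is added to
    every address, and any address at or beyond the limit faults.
    """
    if local_addr < 0 or local_addr >= limit:
        raise MemoryError(f"address {local_addr} exceeds limit {limit}")
    return base + local_addr

# A program loaded at physical address 4000 with length 1000:
print(translate(100, base=4000, limit=1000))   # 4100
```

Address 1500 in the same program would raise a fault, since it exceeds the 1000-unit limit.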
Swapping in Multitasking:
When RAM is overloaded, swapping processes in and out of
memory helps balance usage.
Memory Compaction: Combines scattered free spaces into a single
block, though this can be CPU-intensive.
Memory Allocation for Processes
Fixed vs. Dynamic Allocation:
Fixed Size: Simple but can be inflexible.
Dynamic Growth: Allows for flexibility but requires managing adjacent free spaces or relocating processes.
Allocation Algorithms:
First Fit: Scans for the first adequate hole (fast but may waste memory).
Next Fit: Similar to First Fit but resumes scanning from the last allocated position.
Best Fit: Finds the smallest hole that fits the request (efficient use but time-consuming).
Quick Fit: Uses separate lists for common request sizes, enabling faster allocations but may fragment memory.
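First Fit and Best Fit can be sketched as searches over a list of free-hole sizes (a simplified model; real allocators track hole addresses too, and the sizes below are made up):

```python
def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest adequate hole, or None.

    Scans the whole list, which is why Best Fit is slower.
    """
    best = None
    for i, size in enumerate(holes):
        if size >= request and (best is None or size < holes[best]):
            best = i
    return best

holes = [200, 50, 120, 300]    # free-hole sizes in KB
print(first_fit(holes, 100))   # 0 (stops at the 200 KB hole)
print(best_fit(holes, 100))    # 2 (the 120 KB hole, smallest that fits)
```

Note the trade-off visible in the code: First Fit returns as soon as any hole fits, while Best Fit must examine every hole before deciding.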
Memory Management with Bitmaps and Linked Lists
Bitmap: Divides memory into units, each represented by a bit (0 = free, 1 = occupied). Space-efficient, but finding a run of k consecutive free units requires a slow bit-by-bit scan.
Linked List: Keeps a linked list of memory segments, marked as allocated or free, which is effective for updating memory allocation on process termination.
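The bitmap's weakness, searching for contiguous free space, can be seen in a short sketch (the bitmap contents here are just an example):

```python
def find_run(bitmap, k):
    """Find the start of k consecutive free (0) units, or None.

    The scan must walk the bitmap bit by bit, which is why bitmap
    allocation is compact but slow for contiguous-space searches.
    """
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return None

bitmap = [1, 1, 0, 1, 0, 0, 0, 1]   # 1 = occupied, 0 = free
print(find_run(bitmap, 3))          # 4 (units 4-6 are free)
```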
Paging and Translation Lookaside Buffer (TLB)
Paging:
Virtual Page: Logical blocks in virtual memory.
Page Frame: Units of physical memory mapped to pages.
Translation Lookaside Buffer (TLB):
Optimizes memory access by caching recently used virtual-to-
physical address mappings, bypassing the need to consult the page
table for frequently accessed pages.
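A TLB can be modeled as a small LRU cache sitting in front of the page table (a toy sketch; the class name, capacity, and page-table contents are invented, and the dict stands in for a real hardware page table):

```python
from collections import OrderedDict

class TLB:
    """Tiny TLB model: a small LRU cache of virtual-to-physical
    page-number mappings backed by a page table."""

    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, vpage):
        if vpage in self.cache:
            self.hits += 1
            self.cache.move_to_end(vpage)       # refresh LRU position
        else:
            self.misses += 1                     # consult the page table
            self.cache[vpage] = self.page_table[vpage]
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
        return self.cache[vpage]

tlb = TLB(capacity=2, page_table={0: 7, 1: 3, 2: 9})
for vp in [0, 1, 0, 2, 0]:
    tlb.lookup(vp)
print(tlb.hits, tlb.misses)   # 2 3
```

The repeated references to page 0 hit in the TLB; only the first access to each page (and the re-access after eviction) falls through to the page table.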
Page Fault
A page fault occurs when a program tries to access a page not currently mapped in physical memory.
Process:
The Memory Management Unit (MMU) detects an access to an
unmapped page and triggers a page fault, which traps to the
OS.
The OS selects a victim page frame (writing its contents back to
disk first if the page was modified).
The OS loads the required page into the chosen page frame,
updates the memory map, and restarts the instruction that
caused the fault.
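The fault path above can be sketched as a tiny demand-paging simulation (a simplified model assuming free frames are always available, so no victim selection is needed; all names and values are hypothetical):

```python
def access(page, page_table, free_frames, on_disk):
    """Sketch of the page-fault path: if the page is unmapped,
    the 'OS' picks a frame, loads the page, and updates the map."""
    if page in page_table:            # mapped: normal access
        return page_table[page]
    # Page fault: choose a frame (here, simply take a free one),
    # 'read' the page in from disk, and update the memory map.
    frame = free_frames.pop()
    on_disk.discard(page)
    page_table[page] = frame
    return frame                      # the instruction restarts with this frame

page_table, free_frames, on_disk = {}, [0, 1, 2], {5, 6}
print(access(5, page_table, free_frames, on_disk))   # fault: loaded into frame 2
print(access(5, page_table, free_frames, on_disk))   # now mapped: frame 2 again
```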
Page Replacement: Optimal Page Replacement
Challenge: At the time of a page fault, the OS doesn’t know when each page will be referenced next.
Optimal Solution (Theoretical): Replace the page that won’t be used for the longest time. Although impractical to implement perfectly, it serves as a model for creating other algorithms.
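Although unimplementable online, the optimal algorithm is easy to simulate on a known reference string, which is how it serves as a benchmark (the reference string below is just an example):

```python
def optimal_faults(refs, num_frames):
    """Count page faults under optimal replacement: on a fault with
    full frames, evict the page whose next use is farthest away
    (or that is never used again)."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                       # hit: no fault
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = refs[i + 1:]
        victim = max(frames, key=lambda p: future.index(p)
                     if p in future else len(future) + 1)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3]
print(optimal_faults(refs, 3))   # 7
```

Any practical algorithm (FIFO, LRU, etc.) run on the same trace will fault at least as often, which is why the optimal count is used as a lower bound for comparison.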
Working Set Model
Demand Paging: Pages load into memory only when accessed.
Locality of Reference: Programs tend to access certain pages in clusters, leading to repeated references within a localized set of pages.
Working Set:
Represents the set of pages actively used by a process.
Observation: If a process’s working set is loaded in memory, it will
experience fewer page faults.
Prepaging: The OS may load a process’s working set into memory before running it, potentially minimizing initial page faults.
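One common way to approximate the working set is to take the pages referenced in the last k memory references (a sliding window over the reference string; the window size and trace below are made-up examples):

```python
def working_set(refs, t, window):
    """Pages referenced in the last `window` references ending at time t.

    With locality of reference, this set stays small and stable,
    so keeping it resident keeps the page-fault rate low.
    """
    start = max(0, t - window + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 2, 2, 4]
print(working_set(refs, t=5, window=4))   # {1, 2, 3}
```

Prepaging amounts to loading this set before the process runs, instead of faulting each page in one at a time.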