Midterm 1 Flashcards
1 GB =
2^30
1 MB =
2^20
1 KB =
2^10
giga-
10^9
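The binary prefixes above differ from the SI prefixes; a quick sketch of the gap at the giga scale:

```python
# Binary (memory) vs. decimal (SI) size prefixes from the cards above.
KB, MB, GB = 2**10, 2**20, 2**30        # kilobyte/megabyte/gigabyte, binary sense
kilo, mega, giga = 10**3, 10**6, 10**9  # SI prefixes

print(GB)          # 1073741824
print(giga)        # 1000000000
print(GB - giga)   # 73741824 bytes of difference at the giga scale
```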
Static rewriting
rewrites each process's addresses (in the binary itself) so that two processes occupy distinct memory ranges
Static rewriting issues
- relocation at runtime is hard (process memory must be constantly rewritten, and pointers are hard to track and manage)
- security (no isolation; pointers can access any arbitrary location)
Dynamic relocation
- base:
virtual address + base = physical address
can just change the base to automatically shift processes to new locations
- bound:
mem access checked against bound (if out of bounds -> interrupt)
can processes access base and bound registers?
NO!!!!
base and bounds implementation
new CPU unit called memory management unit (MMU)
MMU has two registers: base and bounds
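The base/bounds cards above can be sketched as a toy MMU; real MMUs do this check in hardware, and the register values here are made up:

```python
# Toy MMU with base and bounds registers (a sketch, not a real implementation).
class MMU:
    def __init__(self, base, bounds):
        self.base = base      # start of the process's region in physical memory
        self.bounds = bounds  # size of the process's region

    def translate(self, vaddr):
        # Out-of-bounds access triggers an interrupt (modeled as an exception).
        if vaddr >= self.bounds:
            raise MemoryError("out of bounds -> interrupt, OS takes over")
        return vaddr + self.base  # physical address = virtual address + base

mmu = MMU(base=0x4000, bounds=0x1000)
print(hex(mmu.translate(0x10)))  # 0x4010
```

Relocating the process is just changing `base`; no process memory is rewritten.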
dynamic relocation issues
programs aren’t contiguous (chunks of free space between allocated memory)
Segmentation
using multiple base and bounds
what is a segment?
a base and bound pair that can be efficiently resized
segmentation implementation
uses MMU with a segment table containing variable # of segments
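A minimal sketch of a segment-table lookup, assuming the segment number is supplied separately (on real hardware it typically comes from the top bits of the virtual address); the table contents are made up:

```python
# Segmentation = multiple base/bounds pairs held in a segment table (sketch).
SEG_TABLE = {
    0: (0x10000, 0x2000),  # segment 0: (base, bounds), e.g. code
    1: (0x40000, 0x1000),  # segment 1: e.g. heap
}

def translate(seg, offset):
    base, bounds = SEG_TABLE[seg]
    if offset >= bounds:
        raise MemoryError("segment bounds violation -> interrupt")
    return base + offset

print(hex(translate(1, 0x20)))  # 0x40020
```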
segmentation issues
requires contiguous regions but also results in external fragmentation
external fragmentation
combined free space is large enough, but memory is fragmented -> no individual free memory segment is large enough to be useful
internal fragmentation
memory reserved for a process but unused by it; the OS can't use that space until the process terminates
external fragmentation solution! (the bad one)
defragmentation: move previously allocated regions to get contiguous free space
defragmentation issues
expensive! running processes might have to be stopped, and moving allocations takes too long
external fragmentation solution! (the good one)
paging!
Paging idea
fine-grained and regular (constant) division of memory (called pages)
frames
fixed-size blocks of physical memory
pages
fixed-size blocks of a process's address space
frame size = ?
page size
pros of paging
- don’t need a large contiguous region (process still sees its address space as contiguous)
- any page can be mapped to any frame
Page table
structure that maps a page to a frame using the virtual address
- uses the MMU
- only OS can change the page table
each process has its own page table
true
one-to-one paging
page table is just a linear array where each page has its own page table entry
the page table entry contains the final frame number
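A linear page table can be sketched as a plain array indexed by page number; the page size and frame numbers below are made-up examples:

```python
# Linear (one-to-one) page table: the page number indexes directly into an
# array of PTEs, and the PTE holds the final frame number.
PAGE_SIZE = 4096
page_table = [7, 3, None, 12]  # one PTE per page; None = unmapped

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)  # split VA into page # and offset
    frame = page_table[page]
    if frame is None:
        raise MemoryError("page fault")
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 3*4096 + 4 = 12292
```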
address space (AS) limit/size
how much memory MAX the OS will allocate to one process
equation relating AS limit, page size, # pages
AS limit/page size = # pages
how to find where page tables are stored in DRAM?
page table base register (ptbr)
- accessed only in S-mode
- holds a physical address
if linear PT:
- can find any PTE from ptbr
if multi-level:
- ptbr points to highest level PT
one-to-one (linear) paging issues?
a PT can require A LOT of contiguous space
= # PTEs * size of a PTE
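Plugging in example numbers (a 32-bit address space, 4 KB pages, and 4-byte PTEs are assumptions for illustration) shows why this gets large:

```python
# Size of a linear page table using the formulas above:
#   # pages = AS limit / page size,  PT size = # PTEs * size of a PTE
as_limit  = 2**32   # assumed 32-bit address space limit
page_size = 2**12   # assumed 4 KB pages
pte_size  = 4       # assumed 4-byte PTEs

num_ptes = as_limit // page_size   # one PTE per page
pt_size  = num_ptes * pte_size
print(num_ptes, pt_size)           # 1048576 PTEs -> 4194304 bytes (4 MB) per process
```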
multi-level page tables
solve the space issue of linear PTs:
- multiple small PT instead of one big PT
- only allocate each table when needed
- unused regions of the address space need no tables at all
size of a PTE =
architectural size (64-bit systems have 64-bit PTEs)
layout of a PTE (linear)
upper bits: physical frame #
lower bits: page permissions
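Decoding that layout is a couple of bit operations; the 12-bit permission field below is an assumption matching 4 KB pages, and the PTE value is made up:

```python
# Decoding a PTE: upper bits = physical frame #, lower bits = permissions.
FLAG_BITS = 12  # assumed width of the permission/status field

def decode_pte(pte):
    frame = pte >> FLAG_BITS               # physical frame number
    perms = pte & ((1 << FLAG_BITS) - 1)   # permission/status bits
    return frame, perms

frame, perms = decode_pte(0x5007)
print(frame, hex(perms))  # frame 5, flags 0x7
```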
where are page tables stored: CPU cache or DRAM?
DRAM: cache is faster but much more expensive (not much memory space)
multi-level paging issues?
memory access is expensive:
if you have N-levels of page tables, you need N + 1 DRAM accesses = N page tables + final frame access
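The N + 1 count can be sketched with a two-level walk (N = 2, so 3 DRAM accesses); the bit split and table contents are made-up examples:

```python
# Two-level page table walk: each level costs one DRAM access,
# plus one more for the data itself -> N + 1 = 3 accesses total.
PAGE_SIZE = 4096
BITS = 10  # assumed index bits per level (e.g. 32-bit VA = 10 + 10 + 12)

# level-1 table maps a top index to a level-2 table (absent = unallocated)
l1 = {0: {0: 7, 1: 3}, 3: {2: 12}}

def translate(vaddr):
    accesses = 0
    offset = vaddr & (PAGE_SIZE - 1)
    i2 = (vaddr >> 12) & ((1 << BITS) - 1)         # level-2 index
    i1 = (vaddr >> (12 + BITS)) & ((1 << BITS) - 1)  # level-1 index
    l2 = l1.get(i1); accesses += 1       # DRAM access #1: level-1 PTE
    if l2 is None:
        raise MemoryError("page fault")
    frame = l2.get(i2); accesses += 1    # DRAM access #2: level-2 PTE
    if frame is None:
        raise MemoryError("page fault")
    accesses += 1                        # DRAM access #3: the data itself
    return frame * PAGE_SIZE + offset, accesses

print(translate(4100))  # (12292, 3): indices (0, 1) -> frame 3, 3 accesses
```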
multi-level paging issue solutions!
- TLB - translation lookaside buffer
- cache
TLB
stores final translation (not the data itself)
- OS needs to clear old TLB entries if any changes to PT are made
TLB issues
extra work is sometimes needed:
1. check TLB to see if the translation is cached
2. if not, walk the page table, then store the translation in the TLB
** CPUs have optimization algorithms so TLB hit rates are very high and these costs are not significant
cache for paging why?
DRAM is slow, regardless of TLB hit or not
cache stores ___ while TLB stores ___
data, translation
memory access steps
1. check cache
2. cache miss -> check TLB
   2a. TLB hit -> physical frame -> done
   2b. TLB miss -> walk the PTs to find the physical address
3. insert translation into TLB
4. insert data into cache
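Those steps can be sketched end to end; this toy keys the cache by virtual address for simplicity (real caches are usually indexed by physical address), and all addresses and data are made up:

```python
# Sketch of the access path: cache -> TLB -> page-table walk -> fill both.
PAGE_SIZE = 4096
cache = {}            # address -> data (keyed by vaddr here, a simplification)
tlb = {}              # page -> frame (translations only, never data)
page_table = {0: 7}   # toy single-level page table
RAM = {7 * PAGE_SIZE + 8: "hello"}  # data sitting in frame 7

def load(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if vaddr in cache:               # 1. check cache
        return cache[vaddr]
    if page in tlb:                  # 2a. TLB hit
        frame = tlb[page]
    else:                            # 2b. TLB miss: walk the page table
        frame = page_table[page]
        tlb[page] = frame            # 3. insert translation into TLB
    data = RAM[frame * PAGE_SIZE + offset]
    cache[vaddr] = data              # 4. insert data into cache
    return data

print(load(8))  # "hello" (TLB and cache miss the first time)
print(load(8))  # "hello" (now served straight from the cache)
```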
library sharing
lazy on-demand memory allocation