week 7 memory management Flashcards
describe memory management
Management of a limited resource:
(Memory hunger of applications increases with capacity!)
⇒ Sophisticated algorithms needed, together with support from
HW and from compiler and loader.
what is logical address
Key point: in the program’s view, memory is a set of memory cells starting
at address 0x0 and finishing at some value (logical address)
what is physical address
Hardware: have set of memory cells starting at address 0x0 and
finishing at some value (physical address)
goal of memory management
Want to be able to store memory of several programs in main memory at the same time
how can we manage memory
Need suitable mapping from logical addresses to physical addresses
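The simplest such mapping can be sketched with a relocation (base) register plus a limit check; this is a minimal illustration, and the names are made up rather than taken from the lecture:

```python
def translate(logical_addr: int, base: int, limit: int) -> int:
    """Map a logical address to a physical one (base-register scheme).

    The limit check models the hardware memory-protection trap.
    """
    if not 0 <= logical_addr < limit:
        raise MemoryError("logical address outside process memory")
    return base + logical_addr
```

Paging and segmentation below refine this one-register idea into table-driven mappings.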
what happens at compile time
absolute references are generated (eg MS-DOS .com-files)
what happens at load time
relocation can be done by a special program (the loader)
what happens at execution time
needs HW support
what can we use to take address mapping one step further
dynamic linking
what is dynamic linking
use only one copy of each system library, linked in at run time
how does os help memory management
makes the same code accessible to more than one process (code sharing)
why does swapping happen
memory demand is too high
what happens in swapping
If memory demand is too high, memory of some processes is
transferred to disk
Usually combined with scheduling: low priority processes are
swapped out
Problems:
Big transfer time
What to do with pending I/O?
The first point is the reason why swapping is not the principal memory
management technique
what are problems with swapping
Big transfer time
What to do with pending I/O?
what is swapping usually combined with
scheduling
is swapping a primary memory management technique
no -> seen as a last resort
what is fragmentation caused by
swapping
what are the two types of fragmentation
internal and external
describe internal fragmentation
program only a little smaller than hole ⇒ leftover too small to qualify as a hole
describe external fragmentation
over time, many small holes appear in memory
what are the 4 strategies to choose holes in fragmentation
first fit
rotating first fit
best fit
buddy system
what is first fit
Start from beginning and use first available hole
what is rotating first fit
start after last assigned part of memory
what is best fit
find smallest usable space
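First fit and best fit can be sketched in a few lines over a list of hole sizes; this is an illustrative simplification (real allocators track addresses as well as sizes):

```python
def first_fit(holes, size):
    """Index of the first hole large enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole large enough, or None."""
    fitting = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(fitting)[1] if fitting else None
```

Rotating first fit is the same loop as `first_fit`, but started at the position after the last allocation instead of at index 0.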
what is buddy system
Have holes in sizes of power of 2. Smallest
possible hole used. Split hole in two if necessary. Recombine
two adjacent holes of same size
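The splitting half of the buddy system can be sketched as follows; the recombination of freed buddies is omitted for brevity, and the data layout (`free_lists` as a dict of order → block addresses) is an assumption for illustration:

```python
def buddy_alloc(free_lists, order):
    """Allocate a block of size 2**order.

    free_lists maps order k to a list of free block addresses of
    size 2**k.  A larger block is split repeatedly; each split puts
    the upper "buddy" half back on the free list one order below.
    """
    k = order
    while not free_lists.get(k):          # find smallest usable hole
        k += 1
        if k > max(free_lists):
            raise MemoryError("no sufficiently large hole")
    block = free_lists[k].pop()
    while k > order:                      # split down to requested size
        k -= 1
        free_lists.setdefault(k, []).append(block + 2 ** k)
    return block
```

On free, a block whose buddy (the adjacent block of the same size) is also free would be recombined into one block of the next order up.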
how can we avoid external fragmentation
paging
describe paging
Alternative approach: Assign memory of a fixed size (page)
⇒ avoids external fragmentation
Translation of logical address to physical address done via page
table
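The translation step can be sketched as splitting the logical address into page number and offset, then swapping the page number for a frame number; 4 KiB pages and a dict standing in for the page table are assumptions for illustration:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def page_translate(logical_addr, page_table):
    """Translate a logical address via a page table.

    A missing page raises KeyError here, playing the role of a
    page fault in real hardware.
    """
    page_num, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page_num]
    return frame * PAGE_SIZE + offset
```

Because every page is the same size, any free frame can hold any page, which is exactly why external fragmentation disappears.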
describe hardware use in paging
Hardware support mandatory for paging:
If page table small, use fast registers. Store large page tables in
main memory, but cache most recently used entries
what do we use if page table is small
fast registers
what do we use if page table is large
Store large page tables in
main memory, but cache most recently used entries
general principle in paging
Whenever large lookup tables are required, use cache (small but
fast storage) to store most recently used entries
describe memory protection in paging
Memory protection easily added to paging:
protection information stored in page table
describe idea of segmentation
Divide memory according to its usage by programs:
Data: mutable, different for each instance
Program Code: immutable, same for each instance
Symbol Table: immutable, same for each instance, only
necessary for debugging
describe hardware in segmentation
Requires again HW support
same principle as paging, but an overflow check is also needed
describe difference between paging and segmentation
Paging motivated by ease of allocation, segmentation by use of
memory
⇒ combination of both works well (eg 80386)
describe virtual memory
Idea: complete separation of logical and physical memory
⇒ Program can have extremely large amount of virtual memory
works because most programs use only small fraction of memory
intensively.
Efficient implementation tricky
Reason: Enormous difference between
- memory access speed (ca. 60 ns)
- disk access speed (ca. 6 ms for hard disks, 60 µs for SSDs)
what is virtual memory implemented as
demand paging
describe demand paging
Two strategic decisions to be made:
Which process to “swap out” (move whole memory to disk
and block process): swapper
which pages to move to disk when additional page is required:
pager
Minimisation of the page-fault rate (a faulting page has to be fetched
from disk) is crucial
If we want at most 10% slowdown due to page faults, require fault rate
p < 10⁻⁶ for HDDs and p < 10⁻⁴ for SSDs
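The two bounds follow directly from the speeds on the previous card: the expected fault cost p · t_fault must stay below 10% of one memory access. A quick check of the arithmetic:

```python
t_mem = 60e-9   # memory access: 60 ns
t_hdd = 6e-3    # page-fault service from HDD: 6 ms
t_ssd = 60e-6   # page-fault service from SSD: 60 µs

# At most 10% slowdown means p * t_fault <= 0.1 * t_mem,
# i.e. p <= 0.1 * t_mem / t_fault:
p_hdd = 0.1 * t_mem / t_hdd   # ≈ 1e-6
p_ssd = 0.1 * t_mem / t_ssd   # ≈ 1e-4
```

So even one fault per million accesses is already the limit when paging to a hard disk.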
what are the 3 page replacement algorithms
FIFO
Optimal algorithm
Least recently used
describe FIFO
easy to implement, but does not take locality into account
Further problem: Increase in number of frames can cause increase
in number of page faults (Belady’s anomaly)
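A minimal FIFO simulation reproduces Belady's anomaly on the classic reference string (this sketch counts faults only; real pagers also track dirty bits etc.):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement for a reference string."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()  # evict the oldest resident page
            frames.append(page)
    return faults

# Classic reference string exhibiting Belady's anomaly:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# fifo_faults(refs, 3) -> 9 faults, fifo_faults(refs, 4) -> 10 faults
```

Adding a fourth frame makes FIFO *worse* here (10 faults instead of 9), which is exactly the anomaly.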
describe optimal algorithm
select page which will be re-used at the latest time (or not at all)
⇒ not implementable, but good for comparisons
describe least recently used algorithm
use past as guide for future and replace page which has been
unused for the longest time
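LRU can be sketched with a list ordered from least to most recently used, standing in for the hardware stack (a software list would be far too slow in a real pager, which is why the HW support below is needed):

```python
def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement.

    `frames` is kept ordered from least to most recently used.
    """
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            frames.remove(page)      # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)        # evict least recently used page
        frames.append(page)
    return faults
```

On the Belady reference string, LRU never gets worse with more frames: it is a stack algorithm, so the anomaly cannot occur.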
problems with least recently used algorithm
Requires a lot of HW support
how to fix least recently used algorithm
- Stack in microcode
- Approximation using reference bit: HW sets bit to 1 when page is
referenced.
Now use FIFO algorithm, but skip pages with reference bit 1,
resetting it to 0
⇒ Second-chance algorithm
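The victim-selection step of second chance can be sketched as follows, with frames as a FIFO queue of (page, reference bit) pairs; this illustrative version mutates the queue in place:

```python
def second_chance_evict(frames):
    """Pick a victim from a FIFO queue of (page, referenced) pairs.

    A page with its reference bit set gets a second chance: the bit
    is cleared and the page moves to the back of the queue.  Since
    every requeued page has a cleared bit, the loop terminates.
    """
    while True:
        page, referenced = frames.pop(0)
        if referenced:
            frames.append((page, False))  # clear bit, requeue
        else:
            return page
```

This keeps FIFO's simplicity while sparing pages the hardware has marked as recently referenced.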
describe thrashing
if process lacks frames it uses constantly, page-fault rate very high.
⇒ CPU-throughput decreases dramatically.
⇒ Disastrous effect on performance
what is the 2 solutions to thrashing
working set model
page fault frequency
what is the working set model
Working-set model (based on locality):
Define working set as set of pages used in the most recent ∆ page
references
keep only working set in main memory
⇒ Achieves high CPU-utilisation and prevents thrashing
Difficulty: Determine the working set!
Approximation: use reference bits; copy them every 10,000
references and define working set as pages with reference bit set
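The definition itself is just a sliding window over the reference string; a minimal sketch (the reference-bit approximation above is what makes this cheap in practice):

```python
def working_set(refs, delta, t):
    """Working set at time t: the distinct pages among the most
    recent `delta` references up to and including time t (0-indexed).
    """
    return set(refs[max(0, t - delta + 1): t + 1])
```

A process should be given enough frames to hold its working set; otherwise it will thrash.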
what is the page fault frequency solution
Takes direct approach:
- give process additional frames if page-fault rate high
- remove frames from process if page-fault rate low
four segments in linux kernel design
Kernel Code
Kernel Data
User Code
User Data
kernel address and user address in 32 bit architecture
Have separate logical addresses for kernel and user memory
For 32-bit architectures (4 GB virtual memory):
kernel space addresses are the upper 1 GB of the address
space (≥ 0xC0000000)
user space addresses are 0x0 to 0xBFFFFFFF (lower 3 GB)
kernel memory always mapped but protected against access by user
processes
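The 32-bit split makes the kernel/user distinction a single comparison; a minimal sketch using the constant from the card above:

```python
KERNEL_BASE = 0xC0000000  # 32-bit Linux: upper 1 GB is kernel space

def is_kernel_address(addr: int) -> bool:
    """True iff addr falls in the kernel's upper 1 GB of the 4 GB space."""
    return addr >= KERNEL_BASE
```

The page-table protection bits, not this check alone, are what actually stop user processes from touching kernel memory.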
kernel address and user address in 64 bit architecture
For 64-bit architectures:
kernel space address are the upper half of address
space (≥ 0x8000 0000 0000 0000)
user space addresses are 0x0 to 0x7fff ffff ffff, starting from
above.
kernel memory always mapped but protected against access by user
processes
describe page caches in linux
Experience shows: have repeated cycles of allocation and freeing
same kind of objects (eg inodes, dentries)
can have pool of pages used as cache for these objects (so-called
slab cache)
cache maintained by application (eg file system)
kmalloc uses slab caches for commonly used sizes
what are caches made of
pool of pages
what are caches maintained by
the application (eg file system)
what is kmalloc used for
allocating kernel memory; it uses slab caches for commonly used sizes