Week 7: Memory Management Flashcards

1
Q

describe memory management

A

Management of a limited resource:
(Memory hunger of applications increases with capacity!)
⇒ Sophisticated algorithms needed, together with support from
HW and from compiler and loader.

2
Q

what is logical address

A

Key point: the program's view of memory is a set of memory cells starting
at address 0x0 and finishing at some value (the logical address)

3
Q

what is physical address

A

Hardware: a set of memory cells starting at address 0x0 and
finishing at some value (the physical address)

4
Q

goal of memory management

A

Want to be able to store memory of several programs in main memory at the same time

5
Q

how can we manage memory

A

Need a suitable mapping from logical addresses to physical addresses

6
Q

what happens at compile time

A

absolute references are generated (e.g. MS-DOS .com files)

7
Q

what happens at load time

A

address mapping can be done by a special program

8
Q

what happens at execution time

A

needs HW support

9
Q

what can we use to take address mapping one step further

A

dynamic linking

10
Q

what is dynamic linking

A

use only one copy of a system library, linked in at run time

11
Q

how does os help memory management

A

same code accessible to more than one process

12
Q

why does swapping happen

A

memory demand is too high

13
Q

what happens in swapping

A

If memory demand is too high, the memory of some processes is
transferred to disk
Usually combined with scheduling: low-priority processes are
swapped out
Problems:
Big transfer time
What to do with pending I/O?
The first point is the reason why swapping is not the principal memory
management technique

14
Q

what are problems with swapping

A

Big transfer time
What to do with pending I/O?

15
Q

what is swapping usually combined with

A

scheduling

16
Q

is swapping a primary memory management technique

A

no ⇒ it is seen as a last resort

17
Q

what is fragmentation caused by

A

swapping

18
Q

what are the two types of fragmentation

A

internal and external

19
Q

describe internal fragmentation

A

a program is only a little smaller than its hole ⇒ the leftover is too small to qualify as a hole

20
Q

describe external fragmentation

A

over time, many small holes appear in memory

21
Q

what are the 4 strategies to choose holes in fragmentation

A

first fit
rotating first fit
best fit
buddy system

22
Q

what is first fit

A

Start from beginning and use first available hole

23
Q

what is rotating first fit

A

start searching after the last assigned part of memory

24
Q

what is best fit

A

find the smallest usable hole

25
Q

what is buddy system

A

Have holes in sizes of power of 2. Smallest
possible hole used. Split hole in two if necessary. Recombine
two adjacent holes of same size
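
The splitting and recombining described above can be sketched as a toy allocator. `BuddyAllocator` is a made-up illustrative class, not from the course material; addresses and orders are simplified (block of order k has size 2^k).

```python
# Minimal buddy-system sketch (hypothetical, for illustration only):
# holes come in power-of-two sizes, allocation splits a larger hole
# if necessary, and freeing recombines two adjacent holes ("buddies")
# of the same size.

class BuddyAllocator:
    def __init__(self, total_order):
        # free_lists[k] holds start addresses of free blocks of size 2**k
        self.free_lists = {k: [] for k in range(total_order + 1)}
        self.free_lists[total_order].append(0)   # one big initial hole

    def alloc(self, order):
        # use the smallest free block that fits
        k = order
        while k in self.free_lists and not self.free_lists[k]:
            k += 1
        if k not in self.free_lists:
            return None                          # out of memory
        addr = self.free_lists[k].pop()
        while k > order:                         # split until minimal size
            k -= 1
            self.free_lists[k].append(addr + 2**k)  # upper half becomes a hole
        return addr

    def free(self, addr, order):
        # recombine with the buddy block while possible
        while order < max(self.free_lists):
            buddy = addr ^ (2**order)            # buddy differs in exactly one bit
            if buddy in self.free_lists[order]:
                self.free_lists[order].remove(buddy)
                addr = min(addr, buddy)
                order += 1
            else:
                break
        self.free_lists[order].append(addr)
```

Allocating a small block out of a 16-cell pool splits the pool twice; freeing it recombines everything back into one hole.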

26
Q

how can we avoid external fragmentation

A

use paging (fixed-size allocation)
27
Q

describe paging

A

Alternative approach: assign memory in units of a fixed size (pages)
⇒ avoids external fragmentation
Translation of logical address to physical address done via page
table
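
The translation step can be sketched in a few lines. The page size and the page-table contents below are made-up example values.

```python
# Sketch of logical -> physical translation with a single-level page
# table: split the logical address into page number and offset, look
# the page number up in the table, and reattach the offset.

PAGE_SIZE = 4096  # 4 KiB pages (example value)

def translate(logical_addr, page_table):
    page_number = logical_addr // PAGE_SIZE   # index into the page table
    offset = logical_addr % PAGE_SIZE         # unchanged by translation
    frame = page_table[page_number]           # physical frame number
    return frame * PAGE_SIZE + offset

# example: page 0 -> frame 5, page 1 -> frame 2
page_table = {0: 5, 1: 2}
print(hex(translate(0x1234, page_table)))  # page 1, offset 0x234 -> 0x2234
```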

28
Q

describe hardware use in paging

A

Hardware support mandatory for paging:
If the page table is small, use fast registers. Store large page tables in
main memory, but cache most recently used entries

29
Q

what do we use if page table is small

A

fast registers

30
Q

what do we use if page table is large

A

Store large page tables in
main memory, but cache most recently used entries

31
Q

general principle in paging

A

Whenever large lookup tables are required, use cache (small but
fast storage) to store most recently used entries

32
Q

describe memory protection in paging

A

Memory protection easily added to paging:
protection information stored in page table

33
Q

describe idea of segmentation

A

Divide memory according to its usage by programs:
Data: mutable, different for each instance
Program Code: immutable, same for each instance
Symbol Table: immutable, same for each instance, only
necessary for debugging

34
Q

describe hardware in segmentation

A

Again requires HW support
same principle as paging, but an overflow (segment-limit) check has to be done

35
Q

describe difference between paging and segmentation

A

Paging motivated by ease of allocation, segmentation by use of
memory
⇒ combination of both works well (e.g. the 80386)

36
Q

describe virtual memory

A

Idea: complete separation of logical and physical memory
⇒ Program can have an extremely large amount of virtual memory
works because most programs use only a small fraction of memory
intensively.
Efficient implementation tricky
Reason: Enormous difference between
- memory access speed (ca. 60 ns)
- disk access speed (ca. 6 ms for hard disks, 60 μs for SSDs)

37
Q

what is virtual memory implemented as

A

demand paging

38
Q

describe demand paging

A

Two strategic decisions to be made:
Which process to “swap out” (move its whole memory to disk
and block the process): swapper
which pages to move to disk when an additional page is required:
pager
Minimising the rate of page faults (page has to be fetched from
disk) is crucial
If we want at most 10% slowdown due to page faults, we require a fault
rate p < 10⁻⁶ for HDD and p < 10⁻⁴ for SSD
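
The fault-rate bounds follow from the effective access time; a quick check using the timing figures from the card (60 ns memory, 6 ms HDD, 60 μs SSD):

```python
# Effective access time (EAT) for demand paging:
#   EAT = (1 - p) * t_mem + p * t_fault  ~  t_mem + p * t_fault
# For at most a given slowdown we need p * t_fault <= slowdown * t_mem,
# i.e. p <= slowdown * t_mem / t_fault.

t_mem = 60e-9   # memory access: 60 ns
t_hdd = 6e-3    # page-fault service from hard disk: 6 ms
t_ssd = 60e-6   # page-fault service from SSD: 60 us

def max_fault_rate(slowdown, t_mem, t_fault):
    return slowdown * t_mem / t_fault

print(max_fault_rate(0.1, t_mem, t_hdd))  # ~ 1e-6 for HDD
print(max_fault_rate(0.1, t_mem, t_ssd))  # ~ 1e-4 for SSD
```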

39
Q

what are the 3 page replacement algorithms

A

FIFO
Optimal algorithm
Least recently used

40
Q

describe FIFO

A

easy to implement, but does not take locality into account
Further problem: Increase in number of frames can cause increase
in number of page faults (Belady’s anomaly)
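
A minimal FIFO sketch makes Belady's anomaly concrete; the reference string below is the standard textbook example of the anomaly.

```python
from collections import deque

# FIFO page replacement with a fault counter. On the reference string
# below, FIFO exhibits Belady's anomaly: going from 3 to 4 frames
# *increases* the number of page faults (9 -> 10).

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:        # evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, more faults
```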

41
Q

describe optimal algorithm

A

select page which will be re-used at the latest time (or not at all)
⇒ not implementable, but good for comparisons

42
Q

describe least recently used algorithm

A

use past as guide for future and replace page which has been
unused for the longest time
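
A small LRU sketch, using Python's OrderedDict purely as an illustration of the recency ordering (real implementations need HW support, as the next card notes):

```python
from collections import OrderedDict

# LRU sketch: an OrderedDict keeps pages in recency order, so the
# least recently used page is always at the front.

def lru_faults(refs, n_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)           # mark as most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)     # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10 faults
print(lru_faults(refs, 4))  # 8 faults
```

Unlike FIFO, LRU never suffers Belady's anomaly: on the same reference string, adding a frame only reduces the fault count.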

43
Q

problems with least recently used algorithm

A

Requires a lot of HW support

44
Q

how to fix least recently used algorithm

A

- Stack maintained in microcode
- Approximation using a reference bit: HW sets the bit to 1 when a page is
referenced.
Now use the FIFO algorithm, but skip pages with reference bit 1,
resetting it to 0
⇒ Second-chance algorithm
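
The victim-selection step described above can be sketched directly; the function and its arguments are illustrative, not from the course material.

```python
# Second-chance sketch: scan pages in FIFO order, but a page whose
# reference bit is set gets the bit cleared and is skipped once
# (moved to the back of the queue); evict the first page with bit 0.

def second_chance_evict(pages, ref_bits):
    # pages: list in FIFO order (oldest first); ref_bits: page -> 0/1
    while True:
        page = pages.pop(0)
        if ref_bits.get(page, 0):
            ref_bits[page] = 0     # second chance: clear the bit,
            pages.append(page)     # requeue the page
        else:
            return page            # victim found
```

For example, with queue [1, 2, 3] and reference bits {1: 1, 2: 0, 3: 1}, page 1 is given a second chance and page 2 is evicted.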

45
Q

describe thrashing

A

if a process lacks the frames it uses constantly, the page-fault rate becomes very high.
⇒ CPU-throughput decreases dramatically.
⇒ Disastrous effect on performance

46
Q

what are the 2 solutions to thrashing

A

working set model
page fault frequency

47
Q

what is the working set model

A

Working-set model (based on locality):
Define working set as set of pages used in the most recent ∆ page
references
keep only working set in main memory
⇒ Achieves high CPU-utilisation and prevents thrashing
Difficulty: Determine the working set!
Approximation: use reference bits; copy them every 10,000
references and define the working set as the pages with reference bit set
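
The definition of the working set translates directly into code; this is a sketch over an explicit reference string, with made-up example values.

```python
# Working-set sketch: the working set at time t is the set of pages
# referenced in the most recent delta references (a sliding window).

def working_set(refs, t, delta):
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 1, 2, 4, 4, 4, 4]
print(working_set(refs, 9, 4))  # only page 4 in the last 4 references
```

Keeping just these pages resident captures the process's current locality.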

48
Q

what is the page fault frequency solution

A

Takes a direct approach:
- give the process additional frames if its page-fault rate is high
- remove a frame from the process if its page-fault rate is low

49
Q

four segments in linux kernel design

A

Kernel Code
Kernel Data
User Code
User Data

50
Q

kernel address and user address in 32 bit architecture

A

Have separate logical addresses for kernel and user memory
For 32-bit architectures (4 GB virtual memory):
kernel space addresses are the upper 1 GB of the address
space (≥ 0xC0000000)
user space addresses are 0x0 to 0xBFFFFFFF (lower 3 GB)
kernel memory always mapped but protected against access by user
processes
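
The 32-bit split above amounts to a single comparison; a minimal sketch (the function name is illustrative):

```python
# Sketch of the 32-bit Linux split: addresses at or above 0xC0000000
# belong to kernel space, everything below is user space.

KERNEL_BASE = 0xC0000000

def is_kernel_address(addr):
    return addr >= KERNEL_BASE

print(is_kernel_address(0xBFFFFFFF))  # False: top of user space
print(is_kernel_address(0xC0000000))  # True: start of kernel space
```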

51
Q

kernel address and user address in 64 bit architecture

A

For 64-bit architectures:
kernel space addresses are the upper half of the address
space (≥ 0x8000 0000 0000 0000)
user space addresses are 0x0 to 0x7fff ffff ffff ffff (the lower half)
kernel memory always mapped but protected against access by user
processes

52
Q

describe page caches in linux

A

Experience shows: there are repeated cycles of allocating and freeing the
same kind of objects (e.g. inodes, dentries)
can have a pool of pages used as a cache for these objects (so-called
slab cache)
cache maintained by the application (e.g. the file system)
kmalloc uses slab caches for commonly used sizes

53
Q

what are caches made of

A

pool of pages

54
Q

what are caches maintained by

A

application (e.g. the file system)

55
Q

what is kmalloc used for

A

allocating kernel memory; it uses slab caches for commonly used sizes