Lecture 9 - Memory Management Flashcards

1
Q

Main memory and registers are the only storage the ___ can access directly

A

CPU

2
Q

Memory unit only sees a stream of:

  1. ___ or
  2. ___
A
  1. addresses + read requests, or
  2. addresses + data and write requests

3
Q

Register access is done in ___ CPU clock cycle (or less)

A

one

4
Q

Main memory can take ___ cycles, causing a ___

A

many

stall

5
Q

___ sits between main memory and CPU registers

A

cache

6
Q

How do we provide protection? Why?

Who makes sure the base and limit are respected?

How do we protect the base and limit registers?

A

We can provide this protection by using a pair of base and limit registers that define the logical address space of a process: legal addresses run from base up to base + limit.

The CPU must check every memory access generated in user mode to be sure it is between base and limit for that user.

The instructions to load the base and limit registers are privileged.
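
A minimal sketch of that check, with illustrative names and values (this is not code from the lecture):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration of the hardware check: a user-mode access to
   addr is legal only if base <= addr < base + limit; otherwise the MMU
   traps to the operating system. */
bool access_is_legal(uint32_t addr, uint32_t base, uint32_t limit)
{
    return addr >= base && addr < base + limit;
}

int main(void)
{
    printf("%d\n", access_is_legal(300040, 300040, 120900));  /* 1: legal */
    printf("%d\n", access_is_legal(420940, 300040, 120900));  /* 0: trap  */
    return 0;
}
```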

7
Q

Programs on disk, ready to be brought into memory to execute, form an ___

A

input queue

8
Q

Addresses are represented in different ways at different stages of a program’s life:
1. Source code addresses are usually ___
2. Compiled code addresses bind to ___ (e.g. “14 bytes from beginning of this module”)
3. ___ will bind relocatable addresses to ___ addresses (e.g. 74014)
4. Each ___ maps one address space to another

A
  1. symbolic
  2. relocatable addresses
  3. Linker or loader, absolute
  4. Binding
9
Q

Address binding of instructions and data to memory addresses

can happen at three different stages. What are they?

A
  1. Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location changes
  2. Load time: Must generate relocatable code if memory location is not known at compile time
  3. Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another
    -> Need hardware support for address maps (e.g., base and
    limit registers)
10
Q

Draw the diagram of multistep processing of a user program.

A

See slide 7

11
Q

The concept of a logical address space that is bound to a separate ___ is central to proper memory management

A

Physical address space

12
Q

What are the similarities and differences between a logical address and a physical address?

A
  1. Logical address – generated by the CPU; also referred to as virtual address
  2. Physical address – address seen by the memory unit

Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme

13
Q

Define:

  1. Logical address space
  2. Physical address space

A

Logical address space is the set of all logical addresses generated by a program

Physical address space is the set of all physical addresses corresponding to those logical addresses

14
Q

What is the role of the Memory-Management Unit (MMU)?

A

Hardware device that, at run time, maps virtual addresses to physical addresses

15
Q

Explain how the relocation register MMU works

A

The value in the relocation register is added to every address generated by a user process at the time it is sent to memory

e.g. logical address 346 with relocation register 14000 gives physical address 14000 + 346 = 14346
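
A minimal sketch of this translation (variable names are illustrative, not from the slides):

```c
#include <stdint.h>
#include <stdio.h>

/* Relocation-register MMU: the base value is added to every logical address
   generated by the user process at the time it is sent to memory. */
static uint32_t relocation_register = 14000;

uint32_t mmu_translate(uint32_t logical)
{
    return relocation_register + logical;
}

int main(void)
{
    printf("%u\n", mmu_translate(346));   /* prints 14346 */
    return 0;
}
```
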
16
Q
  1. Explain dynamic loading.
  2. What’s the advantage?
  3. Where are routines kept?
  4. When is it useful?
  5. Where is it implemented?
A
  1. The entire program does not need to be in memory to execute
    A routine is not loaded until it is called
  2. Better memory-space utilization; unused routine is never loaded
  3. All routines kept on disk in relocatable load format
  4. Useful when large amounts of code are needed to handle infrequently occurring cases
  5. No special support from the operating system is required
    Implemented through program design
    OS can help by providing libraries to implement dynamic loading
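
As an illustration of such OS-provided support (POSIX dlopen/dlsym, an example chosen here rather than anything named on the slides), a routine can be brought in from disk only when it is actually needed:

```c
#include <dlfcn.h>
#include <stdio.h>

/* Load a shared library and look up one routine at run time.
   The library name and symbol are examples; build with -ldl on Linux. */
int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```
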
17
Q

What’s the difference between static and dynamic linking?

A

Static linking – system libraries and program code combined by the loader into the binary program image

Dynamic linking – linking postponed until execution time

18
Q
  1. Explain how dynamic linking works.
  2. What is it particularly useful for?

A
  1. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
    The stub replaces itself with the address of the routine, and executes the routine
    The operating system checks whether the routine is already in the process’s address space
    If it is not in the address space, it is added to the address space
  2. Dynamic linking is particularly useful for libraries
    Also known as shared libraries
19
Q
  1. What problem does contiguous allocation solve?
  2. What are the two partitions in contiguous allocation?

A
  1. Main memory must support both OS and user processes
    Limited resource, must allocate efficiently
    Contiguous allocation is one early method
  2. Main memory is usually divided into two partitions:
    Resident operating system, usually held in low memory with interrupt vector
    User processes then held in high memory
    Each process contained in single contiguous section of memory
20
Q

In contiguous allocation,

  1. ___ are used to protect user processes from each other, and from changing operating-system code and data
  2. Base register contains value of smallest ___
  3. Limit register contains range of ___, where each ___ must be less than the limit register

Can then allow actions such as kernel code being ___ and kernel changing size

A
  1. Relocation registers
  2. Physical address
  3. Logical address
  4. Transient
21
Q

Define variable partitions. What do they solve?

A

In multiple-partition allocation, the degree of multiprogramming is limited by the number of partitions

We use variable-partition sizes for efficiency (sized to a given process’ needs)

22
Q

Who maintains information about allocated partitions and free partitions in Variable partitions?

A

Operating system maintains information about

a) allocated partitions
b) free partitions (hole)

23
Q

Dynamic Storage-Allocation Problem:

How to satisfy a request of size n from a list of free holes? (3 methods)

Compare the methods.

A
  1. First-fit: Allocate the first hole that is big enough
  2. Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
    Produces the smallest leftover hole
  3. Worst-fit: Allocate the largest hole; must also search entire list
    Produces the largest leftover hole

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
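
A minimal first-fit sketch over a free list (the data structure and names are illustrative assumptions, not from the lecture):

```c
#include <stddef.h>

/* Hypothetical free-list node: each hole has a start address and a size. */
struct hole {
    size_t start;
    size_t size;
    struct hole *next;
};

/* First-fit: walk the list and take the first hole large enough for the
   request; shrink that hole by the allocated amount. Returns the start
   address of the allocation, or (size_t)-1 if no hole is big enough. */
size_t first_fit_alloc(struct hole *free_list, size_t n)
{
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= n) {
            size_t addr = h->start;
            h->start += n;      /* leftover hole begins after the allocation */
            h->size  -= n;
            return addr;
        }
    }
    return (size_t)-1;          /* no hole can satisfy the request */
}
```

Best-fit would instead remember the smallest adequate hole seen during the scan, and worst-fit the largest.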

24
Q

Define:

  1. External fragmentation
  2. Internal fragmentation

A
  1. External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous
  2. Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used
25
Q

First fit analysis reveals that given N blocks allocated, ___ blocks lost to fragmentation

A

0.5 N

26
Q

How do we reduce external fragmentation?

A

Reduce external fragmentation by compaction

Shuffle memory contents to place all free memory together in one large block

27
Q

Compaction is possible only if relocation is ___, and is done at ___

A

Dynamic,

Execution time

28
Q

What is the I/O problem in Compaction? How is it solved?

A

If a job is moved in memory while it is doing I/O, the pending I/O still targets the old memory location and data is lost.

Solutions:
Latch the job in memory while it is involved in I/O
Do I/O only into OS buffers

29
Q

Define paging. Name two benefits.

A

Physical address space of a process can be noncontiguous; process is allocated physical memory whenever the latter is available.

  1. Avoids external fragmentation
  2. Avoids problem of varying sized memory chunks
30
Q

To run a program of size N pages, need to find ___ free frames and load program

A

N

31
Q

Name two problems with paging.

A
  1. Still have internal fragmentation
  2. Need some kind of bookkeeping, searching, and storage (time + space costs)

32
Q

In paging, the ___ translates logical to physical addresses

A

page table.

33
Q

In address translation scheme, define Page number and Page offset.

Draw a diagram of the CPU accessing physical memory through these.

A

Page number (p) – used as an index into a page table which contains base address of each page in physical memory

Page offset (d) – combined with base address to define the physical memory address that is sent to the memory unit

See slide 23
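
A minimal sketch of that translation, assuming 4 KB pages and a simple array-backed page table (names and sizes are assumptions for illustration):

```c
#include <stdint.h>

#define OFFSET_BITS 12                          /* assume 4 KB pages */
#define PAGE_SIZE   (1u << OFFSET_BITS)

/* Hypothetical per-process page table: page_table[p] holds the frame number. */
uint32_t translate(uint32_t logical, const uint32_t *page_table)
{
    uint32_t p = logical >> OFFSET_BITS;        /* page number: index into page table */
    uint32_t d = logical & (PAGE_SIZE - 1);     /* page offset within the page        */
    uint32_t frame = page_table[p];             /* base frame from the page table     */
    return frame * PAGE_SIZE + d;               /* physical address sent to memory    */
}
```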

34
Q

How many pages are in a physical address space of 256 bytes divided into frames of 32 bytes?

A

8 (256 / 32 = 8 frames)

35
Q

Calculate the internal fragmentation when:
Page size = 2,048 bytes
Process size = 72,766 bytes

A

72,766 / 2,048 = 35 pages + 1,086 bytes, so 36 frames are allocated

Internal fragmentation of 2,048 - 1,086 = 962 bytes in the last frame
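
The same arithmetic as a quick check (a throwaway sketch, not lecture code):

```c
#include <stdio.h>

/* The last, partially filled frame wastes page_size - (process_size % page_size) bytes. */
int main(void)
{
    unsigned page_size = 2048, process_size = 72766;
    unsigned frames = (process_size + page_size - 1) / page_size;   /* 36 frames */
    unsigned waste  = frames * page_size - process_size;            /* 962 bytes */
    printf("%u frames, %u bytes of internal fragmentation\n", frames, waste);
    return 0;
}
```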

36
Q

What’s the worst case fragmentation?

Average?

A

Worst case fragmentation = 1 frame – 1 byte

On average fragmentation = 1 / 2 frame size

37
Q

Why can’t we just reduce page size?

A

Each page table entry takes memory to track it, so smaller pages mean more entries and memory is lost there instead.

38
Q

Where is page table kept?

A

in main memory

39
Q

Page table registers
___ points to the page table
___ indicates size of the page table

A

Page-table base register (PTBR)

Page-table length register (PTLR)

40
Q

How many memory accesses are required when the page table is kept in memory?

How can this be solved?

A

In this scheme every data/instruction access requires two memory accesses
One for the page table and one for the data / instruction

The two memory access problem can be solved by the use of a special fast-lookup hardware cache called translation look-aside buffers (TLBs) (also called associative memory).

41
Q

What do Translation Look-Aside Buffers store? Why?

A

Some TLBs store address-space identifiers (ASIDs) in each TLB entry – uniquely identifies each process to provide address-space protection for that process

Otherwise need to flush at every context switch

42
Q

What’s the typical size of a TLB?

A

TLBs typically small (64 to 1,024 entries)

43
Q

What happens on a TLB miss?

A

On a TLB miss, value is loaded into the TLB for faster access next time
Replacement policies must be considered
Some entries can be wired down for permanent fast access

44
Q

What is associative memory?

A

An association table (the TLB) between page # and frame #

45
Q
  1. What is the hit ratio?
  2. Calculate the Effective Access Time if a memory access takes 10 nanoseconds and the hit ratio is 80%

A
  1. Hit ratio – percentage of times that a page number is found in the TLB
    An 80% hit ratio means that we find the desired page number in the TLB 80% of the time
  2. If we find the desired page in the TLB, then a mapped-memory access takes 10 ns
    Otherwise we need two memory accesses, so it takes 20 ns

Effective Access Time (EAT)
EAT = 0.80 x 10 + 0.20 x 20 = 12 nanoseconds,
implying a 20% slowdown in access time.
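
The general formula as a small sketch (the parameter names are mine; like the slides, it ignores the TLB lookup time itself):

```c
#include <stdio.h>

/* EAT = hit_ratio * t + (1 - hit_ratio) * 2t, where t is one memory access. */
double effective_access_time(double hit_ratio, double mem_access_ns)
{
    return hit_ratio * mem_access_ns + (1.0 - hit_ratio) * 2.0 * mem_access_ns;
}

int main(void)
{
    printf("%.1f ns\n", effective_access_time(0.80, 10.0));   /* prints 12.0 ns */
    return 0;
}
```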

46
Q

Memory protection implemented by associating ___ with each frame to indicate if read-only or read-write access is allowed

A

Protection bit.

47
Q

What is the purpose of the valid invalid bit attached to each entry in the page table?

A

Valid-invalid bit attached to each entry in the page table:
“valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page
“invalid” indicates that the page is not in the process’ logical address space

48
Q

Explain the difference between Shared code and Private code and data

A

Shared code
One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers, window systems)
Similar to multiple threads sharing the same process space
Also useful for interprocess communication if sharing of read-write pages is allowed

Private code and data
Each process keeps a separate copy of the code and data
The pages for the private code and data can appear anywhere in the logical address space

49
Q

Consider a 32-bit logical address space as on modern computers

Page size of 4 KB (2^12)

  1. How many entries would the page table have?
  2. If each entry is 4 bytes -> how much memory would each process need for the page table alone?
A
  1. The page table would have about 1 million entries (2^32 / 2^12 = 2^20)
  2. 4 MB per process for the page table alone (2^20 entries x 4 bytes)
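
The arithmetic, spelled out (illustrative sketch only):

```c
#include <stdio.h>

int main(void)
{
    unsigned long long entries = (1ULL << 32) / (1ULL << 12);  /* 2^20 = 1,048,576  */
    unsigned long long bytes   = entries * 4;                  /* 4,194,304 bytes   */
    printf("%llu entries, %llu MB\n", entries, bytes >> 20);   /* 1048576 entries, 4 MB */
    return 0;
}
```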

50
Q

What is the purpose of Hierarchical page tables?

A

Break up the logical address space into multiple page tables.
A simple technique is a two-level page table
We then page the page table
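
A minimal two-level lookup sketch; the 10 + 10 + 12 bit split of the 32-bit address is an assumption for illustration:

```c
#include <stdint.h>

/* Two-level page table: the outer table points to inner page tables,
   which hold frame numbers. Assumed split: 10-bit outer index,
   10-bit inner index, 12-bit offset (4 KB pages). */
uint32_t translate_two_level(uint32_t logical, uint32_t **outer_table)
{
    uint32_t p1 = (logical >> 22) & 0x3FF;   /* index into outer page table */
    uint32_t p2 = (logical >> 12) & 0x3FF;   /* index into inner page table */
    uint32_t d  = logical & 0xFFF;           /* offset within the page      */

    uint32_t *inner = outer_table[p1];       /* a page of the page table    */
    return (inner[p2] << 12) | d;            /* frame number + offset       */
}
```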

51
Q

What is a clustered page table?

What is it useful for?

A

Variation for 64-bit addresses is clustered page tables

Similar to hashed but each entry refers to several pages (such as 16) rather than 1

Especially useful for sparse address spaces (where memory references are non-contiguous and scattered)

52
Q

How do Inverted page tables work?

How do we implement shared memory?

A

Rather than one page table per process, there is one entry per physical frame, recording which process (pid) and which page currently occupy that frame, so only memory that is actually in use is tracked.

To translate, the table is searched for the (pid, page number) pair; the index at which it is found is the frame number in physical memory.

Shared memory is harder with inverted page tables, since each frame has only one entry: a shared physical page is reached through a single virtual-address mapping used by the sharing processes.
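
A minimal lookup sketch (linear search; the entry layout and table size are illustrative assumptions):

```c
#include <stdint.h>

#define NUM_FRAMES 1024   /* assumed size of physical memory in frames */

/* One entry per physical frame: which process and page currently occupy it. */
struct ipt_entry {
    uint32_t pid;
    uint32_t page;
};

/* Search the inverted page table for (pid, page); the matching index IS the
   frame number. Returns -1 if the page is not in physical memory. */
int ipt_lookup(const struct ipt_entry table[NUM_FRAMES],
               uint32_t pid, uint32_t page)
{
    for (int frame = 0; frame < NUM_FRAMES; frame++)
        if (table[frame].pid == pid && table[frame].page == page)
            return frame;
    return -1;   /* not resident: page fault */
}
```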