Module 4 - Virtual Memory Flashcards

1
Q

In which direction do the heap and stack grow?

A
  • Heap grows downward (toward higher addresses, in the usual address-space picture with address 0 at the top)
  • Stack grows upward (toward lower addresses)

2
Q

What is the address space?

A

It’s an abstraction of physical memory: the program’s view of memory in the system.

For example, the program can believe that its memory starts at address 0, when in reality it is loaded at some arbitrary physical address(es).

3
Q

What does the address space of a process contain?

A

It contains all of the memory state of the running program.

  • The code of the program (the instructions) (static)
  • Stack (used by the program) (grows)
  • Heap (used by the program) (grows)
  • Other things too, such as statically-initialized variables
4
Q

What does it mean when the OS is virtualizing memory?

A

It’s when the OS gives the program the illusion that it is loaded at a particular address (say 0) and has a potentially very large address space (say 64 bits), when the reality is different.

5
Q

What is transparency (in terms of virtual memory)?

A

The OS implements the virtual memory in a way that is invisible to the running program. Thus, the program isn’t aware of the fact that memory is virtualized; rather, the program behaves as if it has its own private physical memory.

6
Q

What are the goals of virtual memory?

A
  • Transparency (processes should be unaware of the virtualization)
  • Efficiency (make virtualization as efficient as possible, both in time and space; execution should be as close to real execution as possible)
  • Protection (processes should not be able to interfere with each other or the OS, i.e., affect each other’s memory)
7
Q

What is hardware-based address translation?

A
  • Transforming a virtual address into a physical address.
  • The relocation of the address happens at runtime and is therefore often referred to as dynamic relocation.

With address translation, the hardware transforms each memory access (e.g., an instruction fetch, load, or store), changing the virtual address provided by the instruction to a physical address where the desired information is actually located.

8
Q

How does base and bounds (dynamic relocation) work?

A
  • Two hardware registers within each CPU: one called the base register and one called the bounds register.
  • This base-and-bounds pair allows us to place the address space anywhere in physical memory while ensuring the process can only access its own address space.
  • When any memory reference is generated by the process, it’s translated by the processor in the following manner:
  • physical address = virtual address + base

Each memory reference generated by the process is a virtual address; the hardware in turn adds the contents of the base register to this address and the result is a physical address that can be issued to the memory system.

  • The bounds register is there to help with protection.
    Specifically, the processor will first check that the memory reference is within bounds to make sure it is legal.
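A minimal C sketch of that check-and-add step; the struct, the names, and the boolean return (standing in for the hardware raising an exception) are assumptions for illustration:

    /* Illustrative base-and-bounds translation (bounds holds the address-space size). */
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t base;    /* physical address where the address space was placed */
        uint32_t bounds;  /* size of the address space */
    } mmu_regs_t;

    /* Returns true and fills *paddr on success; false models an out-of-bounds exception. */
    bool bb_translate(const mmu_regs_t *mmu, uint32_t vaddr, uint32_t *paddr) {
        if (vaddr >= mmu->bounds)
            return false;               /* hardware would raise an exception here */
        *paddr = vaddr + mmu->base;     /* physical address = virtual address + base */
        return true;
    }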
9
Q

What is the memory management unit (MMU)?

A
  • It’s the part of the processor that helps with address translation.
  • For example the base and bounds registers are hardware structures kept there.
10
Q

Hardware requirements for address translation?

A
  • Privileged mode (kernel) - Needed to prevent user-mode processes from executing privileged operations
  • Base/bounds registers - Need pair of registers per CPU to support address translation and bounds checks.
  • Ability to translate virtual addresses and check if within bounds - Circuitry to do translations and check limits; in this case, quite simple.
  • Privileged instruction(s) to update base/bounds - OS must be able to set these values before letting a user program run.
  • Privileged instruction(s) to register exception handlers - OS must be able to tell hardware what code to run if exception occurs.
  • Ability to raise exceptions - When processes try to access privileged instructions or out-of-bounds memory.
11
Q

Pros and cons with base and bounds?

A

Pros:

  • Transparent to a process.
  • Simple to implement.
  • Easy to switch between processes (just save and restore base/bounds).
  • Offers protection

Cons:

  • How do we share data?
  • Hard to run a program when the entire address space doesn’t fit into memory.
  • Wasted memory (internal fragmentation), because a big chunk of unused “free” space sits between the stack and the heap.
12
Q

OS requirements for address translation?

A
  • Memory management:
    • Need to allocate memory for new processes;
    • Reclaim memory from terminated processes;
    • Generally manage memory via free list
  • Base/bounds management - Must set base/bounds properly upon context switch (save and restore)
  • Exception handling - Code to run when exceptions arise; likely action is to terminate offending process
13
Q

What is the process control block (PCB)?

A

It’s a per-process structure in memory where the OS saves and stores the values of the base and bounds registers (along with other process state).

14
Q

How do we solve the cons of base and bounds, i.e., support a large address space without wasting the “free” space between the stack and the heap?

A
  • Segmentation: Generalized Base/Bounds
15
Q

What is the idea of segmentation?

A

Instead of having just one base and bounds pair in the MMU, have a base and bounds pair per logical segment of the address space, that is, one each for the code, stack, and heap of a process. This lets the OS place each segment in a different part of physical memory and avoids filling physical memory with unused virtual address space.

16
Q

In a 14-bit virtual address, which bits tell the hardware which segment an address refers to? (explicit approach)

A

The top two bits are used to select the segment. The rest of the bits (12) are the offset.

For example, if the top two bits are 01, the hardware knows the address is in the heap and therefore uses the heap’s base and bounds.
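A small worked sketch of that split, assuming the 14-bit layout above (2 segment bits, 12 offset bits); the masks, shifts, and example value are illustrative, with 01 -> heap as in the card:

    /* Illustrative: split a 14-bit virtual address into segment and offset. */
    #include <stdint.h>
    #include <stdio.h>

    #define SEG_SHIFT   12
    #define SEG_MASK    0x3000u   /* top two bits of a 14-bit address */
    #define OFFSET_MASK 0x0FFFu   /* low 12 bits */

    int main(void) {
        uint32_t vaddr   = 0x1068;                          /* example virtual address (4200) */
        uint32_t segment = (vaddr & SEG_MASK) >> SEG_SHIFT; /* 01 -> heap in this mapping */
        uint32_t offset  = vaddr & OFFSET_MASK;             /* byte within the segment (104) */
        printf("segment=%u offset=%u\n", (unsigned)segment, (unsigned)offset);
        return 0;
    }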

17
Q

How does the hardware handle negative growth (the stack, which grows upwards, i.e., toward lower addresses)?

A
  • One extra hardware bit per segment indicates whether it grows in the positive (1) or negative (0) direction.
  • The correct negative offset is obtained by subtracting the maximum segment size from the offset:
    => negative offset = offset - maximum segment size (e.g., an offset of 3KB in a 4KB segment gives 3KB - 4KB = -1KB, which is then added to the segment’s base).
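A minimal C sketch of that calculation, assuming a 4KB maximum segment size as in the usual textbook example; the names and values are illustrative:

    #include <stdint.h>

    #define MAX_SEG_SIZE 4096   /* assumed maximum segment size (4KB) */

    /* Translate an offset within a negatively-growing (stack) segment. */
    uint32_t translate_stack(uint32_t base, uint32_t offset) {
        int32_t neg_offset = (int32_t)offset - MAX_SEG_SIZE;  /* e.g., 3072 - 4096 = -1024 */
        return (uint32_t)((int64_t)base + neg_offset);        /* e.g., base at 28KB -> 27KB */
    }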
18
Q

What extra hardware support is needed so that segments (e.g., code) can be shared between processes?

A
  • Protection bits are needed so that the hardware knows whether a process may read, write, or execute the shared code.
  • By setting a code segment to read-only it can be shared by multiple processes without harming isolation.
  • Processes don’t know that they are sharing code (transparency is kept)
19
Q

Coarse-grained vs. fine-grained segmentation?

A

Coarse-grained - Chops the address space into a few relatively large chunks. (Easier to implement.)

Fine-grained - Allows address spaces to consist of many smaller segments; requires more hardware support in the form of a segment table. (Allows the compiler and operating system to do a better job.)

20
Q

Which issues arise with segmentation?

A
  • Upon a context switch, the segment registers must be saved and restored correctly.
  • It isn’t flexible enough to support sparse address spaces. For example, if we have a large but sparsely-used heap all in one logical segment, the entire heap must still be in memory in order to access it.
  • External fragmentation, because the segments vary in size. Physical memory quickly becomes full of little holes of free space, making it hard to allocate new segments or grow existing ones.

Solutions:

  • Use a free-list management algorithm that tries to keep large extents of memory available for allocation.
  • Paging.
21
Q

What is paging?

A

It’s a space-management approach where memory is chopped up into fixed-size pieces, which avoids the external fragmentation that arises with variable-size chunks (as in segmentation).

22
Q

What is a page frame? (Paging)

A
  • The physical memory is viewed as an array of fixed-sized slots called page frames.
23
Q

What is a page? (Paging)

A
  • A process’s address space is divided into fixed-size units, each of which is called a page. (Instead of dividing it into logical segments: code, heap, stack.)
24
Q

What role does the page table have?

A
  • It’s a data structure that keeps track of where each virtual page of the address space is placed in physical memory.
  • Stores address translations for each of the virtual pages of the address space, thus letting us know where in physical memory each page resides.

Ex. (Virtual Page 0 → Physical Frame 3)

  • It is a per-process data structure (each process has its own page table).
  • It maps virtual page numbers (VPNs) to the corresponding physical frame numbers (PFNs).
  • A valid bit indicates whether a particular translation is valid, e.g., all unused space between the different parts of the address space is marked invalid.
  • Protection bits indicate whether the page can be read, written, or executed.
  • A present bit indicates whether the page is in physical memory or swapped out to disk.
  • A dirty bit indicates whether the page has been modified since it was brought into memory.
  • A reference bit is used to track whether a page has been accessed; this helps determine which pages are popular, which is critical during page replacement.
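As an illustration only, a page-table entry could be modeled as a C bitfield; the exact fields and widths below are assumptions, loosely modeled on a 32-bit x86-style PTE (where valid and present are folded into one bit):

    /* Illustrative PTE layout; fields and widths are assumptions. */
    #include <stdint.h>

    typedef struct {
        uint32_t present  : 1;   /* page in physical memory (vs. on disk / not valid)? */
        uint32_t writable : 1;   /* protection: may the page be written? */
        uint32_t user     : 1;   /* protection: accessible from user mode? */
        uint32_t accessed : 1;   /* reference bit: has the page been accessed? */
        uint32_t dirty    : 1;   /* modified since it was brought into memory? */
        uint32_t unused   : 7;   /* padding / other flags */
        uint32_t pfn      : 20;  /* physical frame number */
    } pte_t;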
25
Q

What role do the virtual page number (VPN) and the offset have? (Virtual address translation in paging)

A

The virtual address generated by a process is split into two components: VPN and offset.

  • The VPN tells us which page to select. We use it to index the page table.
  • The offset tells us which byte within the page.
26
Q

What role does the physical frame number (PFN) have?

A

With the VPN we can index the page table and find which physical frame the given page resides within.

  • The physical frame number (PFN) replaces the virtual page number (VPN) in the address (the offset is kept unchanged), so a load or store can be issued to physical memory.
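A minimal sketch of the linear page-table translation described in the last two cards, assuming 4KB pages; page_table_lookup_pfn is a hypothetical helper, and the TLB and protection checks are ignored:

    /* Illustrative linear page-table translation (no TLB, no protection checks). */
    #include <stdint.h>

    #define OFFSET_BITS 12       /* assuming 4KB pages */
    #define OFFSET_MASK 0xFFFu

    /* Hypothetical helper: returns the PFN stored in the PTE for this VPN. */
    extern uint32_t page_table_lookup_pfn(uint32_t vpn);

    uint32_t paging_translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr >> OFFSET_BITS;        /* VPN indexes the page table */
        uint32_t offset = vaddr & OFFSET_MASK;         /* byte within the page */
        uint32_t pfn    = page_table_lookup_pfn(vpn);  /* the PTE gives us the frame */
        return (pfn << OFFSET_BITS) | offset;          /* PFN replaces VPN, offset kept */
    }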
27
Q

What problems can paging cause?

A

Using paging as the core mechanism to support virtual memory can lead to high performance overheads.

By chopping the address space into small, fixed-sized units (i.e., pages), paging requires a large amount of mapping information. Because that mapping information is generally stored in physical memory, paging logically requires an extra memory lookup for
each virtual address generated by the program. Going to memory for translation information before every instruction fetch or explicit load or store is prohibitively slow.

28
Q

What role does the translation-lookaside buffer (TLB) have?

A
  • Speeds up address translation.
  • It’s part of the chip’s memory-management unit (MMU)
  • It’s a hardware cache of popular virtual-to-physical address translations.
  • Upon each virtual memory reference, the hardware first checks the TLB to see if the desired translation is there. (On a hit, it doesn’t need to go to the page table in memory.)
29
Q

Explain TLB hit and TLB miss.

A
  • First extract the virtual page number (VPN) from the virtual address and check if the TLB holds the translation for this VPN.

TLB hit:

  • If it holds the VPN, extract the page frame number (PFN) from the relevant TLB entry, form the desired physical address (PA), and access memory.

TLB miss:

  • If the TLB doesn’t hold the VPN, we need to access the page table in memory to find the translation. When found, the TLB is updated with the translation and the instruction is retried.
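The control flow, sketched in C; TLB_Lookup, PageTableWalk, and TLB_Insert are hypothetical helpers, and a 4KB page size is assumed:

    /* Illustrative TLB hit/miss control flow (hardware-managed case). */
    #include <stdint.h>
    #include <stdbool.h>

    #define OFFSET_BITS 12
    #define OFFSET_MASK 0xFFFu

    extern bool     TLB_Lookup(uint32_t vpn, uint32_t *pfn);  /* true on TLB hit */
    extern uint32_t PageTableWalk(uint32_t vpn);              /* slow path: read the page table */
    extern void     TLB_Insert(uint32_t vpn, uint32_t pfn);   /* cache the translation */

    uint32_t tlb_translate(uint32_t vaddr) {
        uint32_t vpn = vaddr >> OFFSET_BITS;
        uint32_t pfn;
        if (!TLB_Lookup(vpn, &pfn)) {       /* TLB miss */
            pfn = PageTableWalk(vpn);       /* consult the page table in memory */
            TLB_Insert(vpn, pfn);           /* update the TLB; the access then proceeds */
        }
        return (pfn << OFFSET_BITS) | (vaddr & OFFSET_MASK);
    }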
30
Q

Explain temporal locality (caching)?

A
  • Temporal locality: the idea is that an instruction or data item that has been recently accessed will likely be re-accessed soon in the future.

Ex: Think of loop variables or instructions in a loop; they are accessed repeatedly over time.

31
Q

Explain spatial locality (caching)?

A
  • Spatial locality, the idea is that if a program accesses memory at address x, it will likely soon access memory near x.

Ex: Imagine here streaming through an array of some kind, accessing one element and then the next.

32
Q

How is a TLB miss handled on RISC and CISC architectures?

A

RISC architectures (software-managed TLB, more flexible)

  • MIPS, SPARC, ARM
  • The hardware raises an exception (trap).
  • The operating system jumps to a trap handler.
  • The operating system looks up the translation in the page table and updates the TLB.
  • The instruction that caused the TLB miss is retried (and now gets a TLB hit).

CISC architectures (hardware-managed TLB)

  • x86
  • The hardware “knows” where to find the page table (via the CR3 register).
  • The hardware walks the page table itself and updates the TLB.
33
Q

Page table valid bit and TLB valid bit?

A
  • In a page table, when a page-table entry (PTE) is marked invalid, it means that the page has not been allocated by the process, and should not be accessed by a correctly-working program.
  • A TLB valid bit, in contrast, simply refers to whether a TLB entry has a valid translation within it.
  • The TLB also has protection bits, a dirty bit, and so forth.
34
Q

Problem and solution with TLB on context switch? (switching process)

A

The TLB contains virtual-to-physical translations that are only valid for the currently running process; these translations are not meaningful for other processes. As a result, when switching from one process to another, the hardware or OS (or both) must be careful to ensure that the about-to-be-run process does not accidentally use translations from some previously run process.

Solution:

  • Flush the TLB on context switches (empty it before running the next process) by setting all valid bits to 0, clearing the contents of the TLB. (Costly.)
  • Have an address space identifier (ASID) field in the TLB; it is similar to a process identifier (PID) and is used to differentiate otherwise identical translations between processes.
35
Q

Common replacement policies for TLB cache?

A
  • Evict the least-recently-used or LRU entry. LRU tries to take advantage of locality in the memory-reference stream, assuming it is likely that an entry that has not recently been used is a good candidate for eviction.
  • Random policy, evicts a TLB mapping at random. Such a policy is useful due to its simplicity and ability to avoid corner-case behaviors; for example, a “reasonable” policy such as LRU behaves quite unreasonably when a program loops over n + 1 pages with a TLB of size n; in this case, LRU misses upon every access, whereas random does much better.
36
Q

What is the problem with simple array-based (linear) page tables?

A
  • They are too big and take up too much memory on typical systems.
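A rough worked example; the parameters (32-bit addresses, 4KB pages, 4-byte PTEs) are common textbook assumptions, not from the card. With them, each process needs 4MB of page table, so 100 processes already cost about 400MB just for translations:

    /* Back-of-the-envelope page-table size, with assumed parameters. */
    #include <stdio.h>

    int main(void) {
        unsigned long pages = 1UL << (32 - 12);       /* 32-bit VA, 4KB pages: 2^20 pages */
        unsigned long bytes = pages * 4UL;            /* 4-byte PTEs: bytes of page table */
        printf("%lu MB per process\n", bytes >> 20);  /* prints 4 */
        return 0;
    }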
37
Q

How can we reduce the size of the page table?

A
  • Use bigger pages (but this leads to internal fragmentation, so it is not an optimal solution).
  • Hybrid approach: paging and segments. Reduces the memory overhead of page tables, since unallocated pages between the stack and heap don’t take up space in any page table. (Leads to external fragmentation because of the variable-sized segments.)
  • Multi-level page tables. Also reduce the space wasted on invalid regions of the page table. Many modern systems employ them.
38
Q

How does the paging and segment hybrid work?

A
  • Instead of having a single page table for the entire address space of a process, have one per logical segment (heap, stack, code).
  • This avoids having page tables with many unused spaces (invalid entries).
  • Have a base and bounds register pair for each segment in the MMU.
  • Use the base to hold the physical address of that segment’s page table.
  • The bounds register is used to indicate the end of the page table (i.e., how many valid pages it has).
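On a TLB miss in this scheme, the hardware can form the address of the relevant page-table entry roughly as below; the struct and names are assumptions, following the base[segment] + VPN * sizeof(PTE) idea:

    /* Illustrative: locating a PTE in the segmentation + paging hybrid. */
    #include <stdint.h>

    typedef struct {
        uint32_t base;    /* physical address of this segment's page table */
        uint32_t bounds;  /* number of valid pages in that page table */
    } seg_regs_t;

    /* Address of the PTE for virtual page 'vpn' in segment 'sn', assuming 4-byte PTEs. */
    uint32_t pte_address(const seg_regs_t segs[], uint32_t sn, uint32_t vpn) {
        return segs[sn].base + vpn * (uint32_t)sizeof(uint32_t);
    }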
39
Q

How does the multi-level page table work?

A
  • Chop the page table itself into page-sized units.
  • If an entire page of page-table entries (PTEs) is invalid, don’t allocate that page of the page table at all.
  • Track whether a page of the page table is valid (and, if so, where it is in memory) with a structure called the page directory.
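A sketch of a two-level lookup, assuming a 32-bit address split into 10 bits of page-directory index, 10 bits of page-table index, and a 12-bit offset; the helper functions are hypothetical:

    /* Illustrative two-level page-table walk (assumed 10/10/12 bit split). */
    #include <stdint.h>
    #include <stdbool.h>

    extern bool     PDE_Valid(uint32_t pd_index);           /* any valid PTE on that page? */
    extern uint32_t PDE_PageTablePage(uint32_t pd_index);   /* where that page of PTEs lives */
    extern uint32_t PTE_PFN(uint32_t pt_page, uint32_t pt_index);

    bool multilevel_translate(uint32_t vaddr, uint32_t *paddr) {
        uint32_t pd_index = (vaddr >> 22) & 0x3FFu;  /* top 10 bits: page-directory index */
        uint32_t pt_index = (vaddr >> 12) & 0x3FFu;  /* next 10 bits: page-table index */
        uint32_t offset   =  vaddr        & 0xFFFu;  /* low 12 bits: byte within the page */

        if (!PDE_Valid(pd_index))                    /* whole page of PTEs unallocated */
            return false;                            /* would raise a fault/exception */
        uint32_t pt_page = PDE_PageTablePage(pd_index);
        uint32_t pfn     = PTE_PFN(pt_page, pt_index);
        *paddr = (pfn << 12) | offset;
        return true;
    }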
40
Q

What is the page directory? (Multi-level page tables)

A
  • It’s a structure that keeps track of where in memory each page of the page table is (if it is valid at all), and of which pages of the page table contain no valid entries.
  • The valid bit in a page directory entry (PDE) tells us only that at least one page-table entry (PTE) on the page pointed to by the PDE is valid.
41
Q

Pros and cons of a multi-level page table?

A

Pros:

  • Reduces the memory space wasted by invalid regions in a page table.
  • It allocates page-table space in proportion to the amount of address space that is used.
  • Thus, compact and supports sparse address spaces.
  • Each portion of the page table fits neatly within a page, making it easier to manage memory; the OS can simply grab the next free page when it needs to allocate or grow a page table.
  • We can place the page-table pages wherever we like in physical memory.

Cons:
  • A TLB miss leads to two loads from memory (one for the page directory entry, one for the PTE) to get the right translation information.
  • Added complexity in the page-table lookup on a TLB miss.
42
Q

Inverted page tables?

A
  • Instead of having many page tables (one per process in the system), keep a single page table that has an entry for each physical page of the system.
  • The entry tells us which process is using the page and which virtual page of that process maps to this physical page.
  • Often built with a hash table to speed up lookups.
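A minimal sketch of what an inverted page table’s entries and a hashed lookup could look like; all names, sizes, and the toy hash function are assumptions:

    /* Illustrative inverted page table: one entry per physical frame. */
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool     used;  /* is this frame mapped at all? */
        uint32_t pid;   /* which process owns the mapping */
        uint32_t vpn;   /* which of its virtual pages maps here */
    } ipt_entry_t;

    #define NUM_FRAMES 4096
    static ipt_entry_t ipt[NUM_FRAMES];

    /* Hash lookup with linear probing: find the frame (PFN) for (pid, vpn), or -1. */
    int ipt_lookup(uint32_t pid, uint32_t vpn) {
        uint32_t start = (pid * 31u + vpn) % NUM_FRAMES;   /* toy hash, illustration only */
        for (uint32_t i = 0; i < NUM_FRAMES; i++) {
            uint32_t f = (start + i) % NUM_FRAMES;
            if (ipt[f].used && ipt[f].pid == pid && ipt[f].vpn == vpn)
                return (int)f;                             /* found: f is the PFN */
        }
        return -1;                                         /* not mapped (page fault) */
    }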