P3L2 Memory Management - Page Table Size Flashcards

1
Q

In a 32-bit architecture, how many different addresses into physical memory can you have?

A

Since an address is 32 bits wide, you can have up to 2^32 different addresses.

2
Q

In a 32-bit architecture, with page size 4K, calculate the number of VPNs.

i.e., how many entries would you need in the page table?

A
  1. 2^32 different addresses.
  2. Divide by the page size of 4K; 4K = 2^12 bytes.
  3. So 2^32 / 2^12 -> 2^20 entries (see the check below).
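
A quick sanity check of this arithmetic in Python (a minimal sketch; the constants come straight from the question):

```python
# Entries in a single-level page table: 32-bit virtual addresses, 4KB pages.
address_bits = 32      # 32-bit architecture
page_size = 4 * 1024   # 4KB = 2^12 bytes

entries = 2**address_bits // page_size  # 2^32 / 2^12 = 2^20

print(entries)           # 1048576
print(entries == 2**20)  # True
```
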
3
Q

In a 64-bit architecture, with page size 4K, where each page table entry is 64 bits (8 bytes), calculate the total size of the page table

A
  1. 2^64 different addresses.
  2. Divide by the page size of 4K; 4K = 2^12 bytes.
  3. So 2^64 / 2^12 -> 2^52 entries.
  4. Multiply by the size of each entry -> 8 * 2^52 bytes = 2^55 bytes = 2^5 PB.
  5. That's 32 PB! (See the sketch below.)
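
The same arithmetic written as a small helper, shown as a sketch (the function name and parameters are illustrative, not from the lecture):

```python
def flat_page_table_bytes(address_bits, page_bits, entry_bytes):
    """Single-level page table size: one entry per virtual page."""
    num_entries = 2 ** (address_bits - page_bits)  # e.g. 2^64 / 2^12 = 2^52
    return num_entries * entry_bytes

size = flat_page_table_bytes(address_bits=64, page_bits=12, entry_bytes=8)
print(size == 2**55)        # True
print(size // 2**50, "PB")  # 32 PB -- and that is per process!
```
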
4
Q

In a 32-bit architecture, with page size 4K, where each page table entry is 32 bits (4 bytes), calculate the total size of the page table

A
  1. 2^32 different addresses.
  2. Divide by the page size of 4K; 4K = 2^12 bytes.
  3. So 2^32 / 2^12 -> 2^20 entries.
  4. Multiply by the size of each entry -> 4 * 2^20 bytes = 2^22 bytes = 4 MB (see the check below).
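
A quick check of the 32-bit case (sketch; constants taken from the question):

```python
# 32-bit addresses, 4KB pages (2^12), 4-byte entries.
entries = 2**32 // 2**12         # 2^20 entries
table_bytes = entries * 4        # 4 * 2^20 = 2^22 bytes
print(table_bytes == 4 * 2**20)  # True: 4MB per process
```
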
5
Q

Review for quick calculations:

KB is 2^____

MB is 2^____

GB is 2^____

Terabyte (TB) is 2^____

Petabyte (PB) is 2^____

A

KB - 2^10

MB - 2^20

GB - 2^30

TB - 2^40

PB - 2^50

6
Q

Describe hierarchical page tables. How do they use less memory?

A

The outer level is referred to as the page table directory. Its entries are not pointers to individual page frames, but rather pointers to page tables themselves.

The inner level contains the proper page tables that actually point to page frames in physical memory. Their entries hold the page frame number and all the protection bits for the physical addresses represented by the corresponding virtual addresses.

The internal page tables exist only for those virtual memory regions that are actually valid. Any holes in the virtual address space simply have no internal page tables, which is how memory is saved.
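
A minimal sketch of this space-saving idea, using a Python dict as a stand-in for the page table directory (the bit widths, names, and example mappings are assumptions chosen for illustration):

```python
# Two-level page table sketch: inner page tables are allocated only for
# regions of the virtual address space that are actually used.
INNER_BITS = 10  # assumed: each inner page table holds 2^10 entries

page_table_directory = {}  # outer level: directory index -> inner page table

def map_page(vpn, pfn, prot):
    """Install a VPN -> PFN translation, creating the inner table on demand."""
    dir_index = vpn >> INNER_BITS
    inner_index = vpn & ((1 << INNER_BITS) - 1)
    inner = page_table_directory.setdefault(dir_index, {})  # lazy allocation
    inner[inner_index] = (pfn, prot)  # page frame number + protection bits

# Map two pages that lie far apart in the virtual address space.
map_page(vpn=0x00003, pfn=0x1A2, prot="rw")
map_page(vpn=0xFFFFF, pfn=0x2B3, prot="r")

# Only two inner tables exist; the large hole in between costs no memory.
print(len(page_table_directory))  # 2
```
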

7
Q

Hierarchical page table

If a process requests more memory to be allocated to it via malloc the OS will check and potentially create another ________________ for the process, adding a new entry in the _______________________.

A

If a process requests more memory to be allocated to it via malloc the OS will check and potentially create another page table for the process, adding a new entry in the page table directory.

8
Q

Virtual address with a two-level hierarchical page table

  1. The first portion indexes into the __________________ to get the page table
  2. The second portion indexes into the _____________ to get the PFN.
  3. The last part of the logical address is still the ___________, which is used to actually index into the physical page frame.
A

Virtual address with a two-level page table (see the bit-split sketch below)

  1. The first portion indexes into the page table directory to get the page table
  2. The second portion indexes into the page table to get the PFN.
  3. The last part of the logical address is still the offset, which is used to actually index into the physical page frame.
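
A sketch of how the three portions could be carved out of a 32-bit virtual address; the 10/10/12 bit split and the function name are assumptions chosen for illustration:

```python
# Splitting a virtual address into (directory index, page table index, offset).
OFFSET_BITS = 12  # 4KB pages
INNER_BITS = 10   # bits indexing the inner page table
DIR_BITS = 10     # bits indexing the page table directory
assert DIR_BITS + INNER_BITS + OFFSET_BITS == 32

def split_virtual_address(va):
    offset = va & ((1 << OFFSET_BITS) - 1)
    inner_index = (va >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    dir_index = va >> (OFFSET_BITS + INNER_BITS)
    return dir_index, inner_index, offset

print(split_virtual_address(0xDEADBEEF))
# (890, 731, 3823): the directory entry gives the page table, the page table
# entry gives the PFN, and the offset indexes into that physical page frame.
```
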
9
Q

What is the advantage of adding more levels to a hierarchical page table?

A

As we add more levels, the internal page tables/directories end up covering smaller regions of the virtual address space.

As a result, it is more likely that gaps in the virtual address space will match that granularity, allowing us to omit more inner tables and therefore reduce the overall size of the page table.

10
Q

What is the disadvantage of adding more levels to the hierarchical page table?

A

More memory accesses are required for translation, since we have to access more page table components before actually reaching physical memory. Therefore, the translation latency increases.

11
Q

Multi-level page table quiz

A process on a 12-bit architecture has an address space where only the first 2KB and the last 1KB are used.

  1. Given 6 bits for the page number, how many entries are there in the single level page table?
  2. Given 2 bits for the directory table and 4 bits for the inner page table, and considering the sparse usage:
    • How many inner page tables are needed?
    • How many inner page table entries are needed?
A
  1. 2^6 -> 64 entries
  2. Given 2 bits for the directory table and 4 bits for the inner page table, and considering the sparse usage:
    • 3 inner page tables
    • 3 * 2^4 -> 3 * 16 = 48 total inner page table entries (see the sketch below)
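
A small sketch that reproduces this answer by walking the used address ranges page by page (the names and structure are illustrative):

```python
# Which directory entries does the sparse address space actually touch?
OFFSET_BITS = 6  # 64-byte pages (6-bit offset)
INNER_BITS = 4   # 2^4 entries per inner page table
DIR_BITS = 2     # 2^2 = 4 directory entries in total
assert DIR_BITS + INNER_BITS + OFFSET_BITS == 12

used_ranges = [(0, 2 * 1024), (3 * 1024, 4 * 1024)]  # first 2KB, last 1KB

dirs_needed = set()
for start, end in used_ranges:
    for addr in range(start, end, 1 << OFFSET_BITS):         # walk page by page
        dirs_needed.add(addr >> (OFFSET_BITS + INNER_BITS))  # directory index

print(len(dirs_needed))                  # 3 inner page tables
print(len(dirs_needed) * 2**INNER_BITS)  # 48 inner page table entries
```
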
12
Q

In a four-level page table, we will need to perform _____ memory accesses:

  • Need to perform ______ accesses to navigate through the page table entries, and ____ access to reach the physical memory.
A

In a four-level page table, we will need to perform 5 memory accesses.

  • Need to perform 4 accesses to navigate through the page table entries, and 1 access to reach the physical memory.
13
Q

In most architectures, the MMU integrates a hardware _______ to store address translations.

This cache is called the_________________________________

If we have a TLB miss, we still need to perform the multi-level page ____________.

Even a small number of addresses cached in TLB can result in a ______ TLB hit rate because we usually have ___________ and ________ locality in memory references.

A

In most architectures, the MMU integrates a hardware cache to store address translations and bypass multi-level page table navigation.

This cache is called the translation lookaside buffer (TLB).

If we have a TLB miss, we still need to perform the multi-level page table navigation/lookups.

Even a small number of addresses cached in the TLB can result in a high TLB hit rate, because we usually have high temporal and spatial locality in memory references.

14
Q

True or False?

  • On modern x86 platforms, a 64-entry data TLB and 128-entry instruction TLB per core is enough to reduce multi-level page lookup latency.
  • A second-level TLB shared across cores can be even smaller.
A
  • TRUE: On modern x86 platforms, there is a 64-entry data TLB and a 128-entry instruction TLB per core.
  • FALSE: the second-level TLB shared across cores is actually larger, with 512 entries.
15
Q

True or False?

The type of address translation that is possible on a particular platform is determined by the operating system’s segmentation algorithm.

A

FALSE: The type of address translation that is possible on a particular platform is determined by the hardware.

  • Intel x86_32 platforms support segmentation and paging.
  • Linux supports up to 8K segments per process and another 8K global segments.
  • Intel x86_64 platforms support segmentation for backward compatibility, but the default mode is to use just paging.
16
Q

Inverted page tables are managed on a _____________ basis.

(As opposed to regular page tables which are managed on a __________ basis)

Each entry in the inverted page table points to ___________________

A

Inverted page tables are managed on a system-wide basis.

(As opposed to regular page tables, which are managed on a per-process basis.)

Each entry in the inverted page table points to a frame in main memory.

17
Q

Each process has its own page table, so the total amount of virtual memory “available” in the system is proportional to the amount of _________________ times _______________ currently in the system.

This is the basis of the reason for ____________________.

A

Each process has its own page table, so the total amount of virtual memory “available” in the system is proportional to the amount of physical memory times the number of processes currently in the system.

This is the motivation for inverted page tables.

18
Q

The memory address used with an inverted page table contains the __________ of the process attempting the memory access, as well as the __________ and the ________.

A _________ search of the inverted page table is performed. The index found is the _________________________________

That index combined with the offset serves to reference the exact physical address.

A

The memory address used with an inverted page table contains the process id (PID) of the process attempting the memory access, as well as the virtual page number and the offset.

A linear search of the inverted page table is performed. The index at which the match is found is the frame number in physical memory.

That index combined with the offset serves to reference the exact physical address.

19
Q

The size of the memory page, or frame, is determined by ___________________________________

A

The size of the memory page, or frame, is determined by the number of bits in the offset.

For example, if we have a 10-bit offset, our page size is 2^10 bytes, or 1KB. A 12-bit offset means we have a page size of 4KB.
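
A quick check of these two examples (sketch):

```python
# Page size is 2^(number of offset bits).
for offset_bits in (10, 12):
    page_size = 1 << offset_bits  # 2^offset_bits bytes
    print(f"{offset_bits}-bit offset -> {page_size // 1024}KB pages")
# 10-bit offset -> 1KB pages
# 12-bit offset -> 4KB pages
```
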

20
Q
  1. What’s one benefit of larger page sizes?
  2. What’s the downside of larger page sizes?
A
  1. Larger pages mean fewer page table entries, smaller page tables, and more TLB hits.
  2. The downside of larger pages: if a large memory page is not densely populated, there will be larger unused gaps within the page itself, which leads to wasted memory, also known as internal fragmentation.
21
Q

For a single-level page table, 12-bit architecture, what is the number of entries in the page table if ….

  1. page size is 32 bytes?
  2. page size is 512 bytes?
A

12-bit architecture …

  1. Page size 32 bytes: 2^12 / 2^5 -> 2^7 = 128 entries
  2. Page size 512 bytes: 2^12 / 2^9 -> 2^3 = 8 entries
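
The same single-level calculation for both page sizes in this card (sketch; constants from the question):

```python
ADDRESS_BITS = 12  # 12-bit architecture
for page_size in (32, 512):
    entries = 2**ADDRESS_BITS // page_size
    print(f"{page_size}-byte pages -> {entries} entries")
# 32-byte pages -> 128 entries
# 512-byte pages -> 8 entries
```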