23. Complete Virtual Memory Systems Flashcards

1
Q

How was the VAX/VMS memory system constructed?

A

32-bit address space with 512-byte pages, giving a 9-bit offset and a 23-bit VPN; the upper 2 bits of the VPN select the segment (so it’s a hybrid of segmentation and paging). The lower half of the address space was known as “process space” and is unique to each process. The upper half was known as system space (S), and only half of it was used; protected OS code and data resided there.
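As a rough sketch (my own illustration in C, not VMS code), the address split looks like this:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t va = 0x40000200u;       /* an example address in P1 space */
        uint32_t offset  = va & 0x1FFu;  /* low 9 bits: byte within the 512-byte page */
        uint32_t vpn     = va >> 9;      /* remaining 23 bits: virtual page number */
        uint32_t segment = va >> 30;     /* top 2 bits of the VPN: 0 = P0, 1 = P1, 2 = S */
        printf("segment=%u vpn=0x%x offset=0x%x\n", segment, vpn, offset);
        return 0;
    }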

2
Q

How was process space constructed in VMS?

A

The first half (P0) held the user program and the heap, which grew downward. The second half (P1) held the stack, which grew upward.

3
Q

How did VMS reduce the memory pressure caused by large page tables?

A

1) By segmenting the user address space into P0 and P1, so no page-table space is needed for the unused gap between heap and stack. 2) By placing the page tables for these segments in kernel virtual memory, which allowed the OS to swap them out to disk under memory pressure.

4
Q

In a real VMS address space, what is the neat aspect of page 0?

A

The code segment does not start at page 0; instead, page 0 is marked inaccessible so that null-pointer dereferences fault and can be detected.

5
Q

What was the benefit of mapping the VMS OS into each process’s address space? What was implemented for proper protection?

A

It made life easier for the kernel: with the OS mapped into every address space, it can directly access user data (e.g., when copying data between a user buffer and kernel structures). For protection, the page-table entries had protection bits specifying which privilege level the CPU must be at to access a page.

6
Q

Since the VAX hardware didn’t provide a reference bit, how did VMS decide which page to evict?

A

The replacement policy was called “segmented FIFO”. Each process had a maximum number of pages it could keep in memory, called the resident set size (RSS). A process’s pages are kept on a FIFO list; when the process exceeds its RSS, the first-in page is evicted.
To improve on plain FIFO, VMS also introduced second-chance lists where evicted pages are placed before being reused.

7
Q

Explain second-chance lists in VMS.

A

Two global lists were used: a clean-page free list and a dirty-page list. When a process exceeds its RSS, a page is removed from its per-process FIFO: if clean, it is placed at the end of the clean-page list; if dirty, at the end of the dirty-page list. If another process needs a free page, it takes one from the clean-page free list. However, if the original process faults on one of its pages before it has been reused, it reclaims the page from the clean or dirty list, avoiding a disk access.
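As a simplified sketch in C (hypothetical structures and names, not VMS source), the bookkeeping looks roughly like this:

    #include <stddef.h>

    typedef struct page {
        int dirty;
        struct page *next;
    } page_t;

    typedef struct { page_t *head, *tail; int count, rss_limit; } fifo_t;

    fifo_t clean_list, dirty_list;   /* global second-chance lists */

    static void push(fifo_t *l, page_t *p) {
        p->next = NULL;
        if (l->tail) l->tail->next = p; else l->head = p;
        l->tail = p;
        l->count++;
    }

    static page_t *pop(fifo_t *l) {
        page_t *p = l->head;
        if (p) { l->head = p->next; if (!l->head) l->tail = NULL; l->count--; }
        return p;
    }

    /* Called after a process brings in a new page and may now be over its RSS. */
    void enforce_rss(fifo_t *proc_fifo) {
        while (proc_fifo->count > proc_fifo->rss_limit) {
            page_t *victim = pop(proc_fifo);   /* first-in page of this process */
            push(victim->dirty ? &dirty_list : &clean_list, victim);
        }
    }

Reclaiming a page that is still sitting on the clean or dirty list is just a matter of unlinking it again, which is far cheaper than a disk read.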

8
Q

What is the VMS trick related to zeroes?

A

Demand zeroing. Instead of finding a physical page, zeroing it, and adding it to the page table right away, the OS just puts an entry into the page table and marks it inaccessible. Only when the process actually touches the page does a trap to the OS occur; only then is a physical page found, zeroed, and mapped in.
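The same idea is easy to observe on Linux today with anonymous mmap; this small demo (my example, using the real mmap() and mincore() calls) shows that a backing frame only appears once the page is first touched:

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096;
        unsigned char vec;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;
        mincore(p, len, &vec);
        printf("resident before touch: %d\n", vec & 1);  /* typically 0 */
        p[0] = 42;               /* first touch: fault, zero-filled frame installed */
        mincore(p, len, &vec);
        printf("resident after touch:  %d\n", vec & 1);  /* typically 1 */
        munmap(p, len);
        return 0;
    }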

9
Q

What is the VMS trick related to cows?

A

Copy-on-write (CoW). When the OS needs to copy a page from one address space to another, instead of copying it, it simply maps it into the target address space and marks it read-only in both. If either process then tries to write to the page, it traps into the OS, and only then is a private copy actually made.
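On Linux, fork() relies on exactly this trick; this little demo (my example) shows the resulting semantics, with the child’s write triggering a private copy so the parent’s view stays unchanged:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char *buf = malloc(4096);
        strcpy(buf, "original");
        pid_t pid = fork();                /* parent and child now share pages CoW */
        if (pid == 0) {
            strcpy(buf, "child wrote");    /* write fault: child gets its own copy */
            printf("child : %s\n", buf);
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("parent: %s\n", buf);       /* still "original" in the parent */
        free(buf);
        return 0;
    }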

10
Q

Initially, how was the Linux address space constructed?

A

A 32-bit address space: the lower 3/4 (3 GB) is used for user code, data, heap, and stack, and the upper 1/4 (1 GB) for the kernel. Page 0 is also marked inaccessible.

11
Q

What types of kernel addresses are there in Linux?

A

Kernel logical addresses - the kernel’s “normal” address space, obtained with kmalloc(). Most kernel data structures live here, and this memory cannot be swapped to disk. The most interesting aspect is the direct mapping between kernel logical addresses and the first portion of physical memory. This direct mapping makes translation trivial and means logically contiguous memory is also physically contiguous, so it is suitable for operations that require contiguous physical chunks (such as DMA).

Kernel virtual addresses - obtained with vmalloc(); they have no direct mapping, so the backing memory is usually not physically contiguous (which also makes large allocations easier to satisfy). These addresses also enable a 32-bit kernel to address more than roughly 1 GB of memory.
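A minimal kernel-module sketch (assuming a standard Linux module build; my example, not from the text) contrasting the two allocators:

    #include <linux/module.h>
    #include <linux/slab.h>      /* kmalloc()/kfree() */
    #include <linux/vmalloc.h>   /* vmalloc()/vfree() */

    static void *log_buf, *virt_buf;

    static int __init demo_init(void)
    {
        log_buf  = kmalloc(4096, GFP_KERNEL);  /* logical address: physically contiguous, DMA-friendly */
        virt_buf = vmalloc(1 << 20);           /* virtual address: large, only virtually contiguous    */
        if (!log_buf || !virt_buf) {
            kfree(log_buf);
            vfree(virt_buf);
            return -ENOMEM;
        }
        pr_info("kmalloc=%p vmalloc=%p\n", log_buf, virt_buf);
        return 0;
    }

    static void __exit demo_exit(void)
    {
        kfree(log_buf);
        vfree(virt_buf);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");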

12
Q

Is Linux 32-bit or 64-bit?

A

64-bit (on x86-64), currently using only the bottom 48 bits of the virtual address.

13
Q

What does the Linux memory system use for paging?

A

It uses a multi-level page table with four levels (on x86-64).
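As a rough model (my own sketch, not kernel code; read_entry() is a hypothetical placeholder for reading one entry from physical memory), an x86-64-style walk through the four levels looks like this, with each level indexed by 9 bits of the 48-bit virtual address:

    #include <stdint.h>

    #define LEVEL_IDX(va, lvl) (((va) >> (12 + 9 * (lvl))) & 0x1FFu)  /* 9-bit index per level */
    #define ADDR_MASK 0x000FFFFFFFFFF000ull                           /* bits 12-51: next-level address */

    /* Hypothetical placeholder: fetch one 8-byte entry of a table in physical memory. */
    uint64_t read_entry(uint64_t table_pa, unsigned idx);

    uint64_t translate(uint64_t pgd_pa, uint64_t va) {
        uint64_t table = pgd_pa;
        for (int lvl = 3; lvl >= 0; lvl--) {      /* PGD -> PUD -> PMD -> PTE */
            uint64_t entry = read_entry(table, LEVEL_IDX(va, lvl));
            if (!(entry & 1))                     /* present bit clear */
                return (uint64_t)-1;              /* would raise a page fault */
            table = entry & ADDR_MASK;            /* next level's (or final frame's) physical address */
        }
        return table | (va & 0xFFFu);             /* frame address + 12-bit page offset */
    }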

14
Q

What is the page size in Linux?

A

4 KB. However, Linux also allows the use of “huge pages” of 2 MB and even 1 GB. Huge pages are more efficient: each TLB entry covers much more memory, so there are fewer TLB misses, and the TLB-miss path is shorter because fewer page-table levels must be walked.
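For example, an application can explicitly ask for a 2 MB mapping with the real MAP_HUGETLB flag (my example; the call fails unless huge pages have been reserved, e.g. via /proc/sys/vm/nr_hugepages):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 2 * 1024 * 1024;     /* one 2 MB huge page */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        printf("huge page mapped at %p\n", p);
        munmap(p, len);
        return 0;
    }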

15
Q

Do huge pages have to be requested explicitly?

A

At first, applications that wanted huge pages had to ask for them explicitly through system calls (e.g., mmap or shmget flags). With transparent huge page support, the OS now does it automatically when it notices the opportunity, without application changes.
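A sketch (my example, using the real MADV_HUGEPAGE flag) of the older explicit-hint path; with transparent huge pages set to “always”, even this hint is unnecessary:

    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 8 * 1024 * 1024;
        /* a 2 MB-aligned region is easiest for the kernel to back with huge pages */
        void *p = aligned_alloc(2 * 1024 * 1024, len);
        if (!p) return 1;
        madvise(p, len, MADV_HUGEPAGE);   /* hint: please use huge pages here */
        /* ... touch the memory; the kernel may now map it with 2 MB pages ... */
        free(p);
        return 0;
    }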

16
Q

What is the Linux page cache?

A

It’s an in-memory cache of recently used pages, kept in a hash table for fast lookup.

17
Q

From what sources does Linux keep pages in the page cache?

A

1) memory-mapped files
2) file data and metadata from devices
3) heap and stack pages (anonymous memory)

18
Q

What does the page cache do besides its main purpose (caching)?

A

It periodically flushes dirty pages to their backing files or to swap space. This is done by background threads (called “pdflush”).

19
Q

Which replacement policy does Linux use?

A

A modified form of the 2Q replacement policy. Linux maintains two lists and divides memory between them.
When a page is accessed for the first time, it is placed on the “inactive list”; when it is re-referenced, it is promoted to the “active list”. Replacement candidates are taken from the inactive list first, and pages from the bottom of the active list are periodically moved to the inactive list. Each list is managed with an approximation of LRU.
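A toy sketch in C (my own, not Linux source) of the two-list idea:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct page {
        bool active;
        struct page *prev, *next;
    } page_t;

    typedef struct { page_t *head, *tail; } list_t;

    static list_t active_list, inactive_list;

    static void list_add_head(list_t *l, page_t *p) {
        p->prev = NULL;
        p->next = l->head;
        if (l->head) l->head->prev = p; else l->tail = p;
        l->head = p;
    }

    static void list_remove(list_t *l, page_t *p) {
        if (p->prev) p->prev->next = p->next; else l->head = p->next;
        if (p->next) p->next->prev = p->prev; else l->tail = p->prev;
    }

    /* First access: place on the inactive list. Re-reference: promote to active. */
    void on_access(page_t *p, bool already_tracked) {
        if (!already_tracked) {
            p->active = false;
            list_add_head(&inactive_list, p);
        } else if (!p->active) {
            list_remove(&inactive_list, p);
            p->active = true;
            list_add_head(&active_list, p);
        }
    }

    /* Replacement takes victims from the tail of the inactive list first. */
    page_t *pick_victim(void) {
        return inactive_list.tail;
    }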

20
Q

What is the first and simplest defense against buffer overflow attacks?

A

Preventing execution of any code found within certain regions of the address space using the NX (No-eXecute) bit. This stops code injected by an attacker onto the target’s stack from being executed.
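The mechanism is visible from user space too; this small example (mine) uses the real mmap()/mprotect() calls to show the usual writable-but-not-executable arrangement and the deliberate step needed to make bytes runnable:

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        /* Writable but not executable: jumping into this page would fault (NX). */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;
        printf("data page at %p: writable, not executable\n", p);

        /* A JIT or loader must flip the permission on purpose before running code. */
        if (mprotect(p, 4096, PROT_READ | PROT_EXEC) == 0)
            printf("now executable (and no longer writable)\n");
        munmap(p, 4096);
        return 0;
    }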

21
Q

What is the defense against return-oriented programming and what is ROP itself?

A

ROP is an attack in which the attacker overwrites the stack with a chain of return addresses pointing at short, already-existing code sequences (“gadgets”), each ending in a return; chained together, these gadgets perform arbitrary work without injecting any new code, which defeats the NX defense.
The defense is address space layout randomization (ASLR). Instead of placing code, stack, and heap at fixed locations within the virtual address space, the OS randomizes their placement, so the attacker cannot know where useful gadgets or data reside.
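A quick demo (my example; the code address only moves if the binary is built as a position-independent executable, which is the default on most modern distros):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int on_stack = 0;
        void *on_heap = malloc(16);
        printf("code : %p\n", (void *)main);       /* randomized image base */
        printf("stack: %p\n", (void *)&on_stack);  /* randomized stack base */
        printf("heap : %p\n", on_heap);            /* randomized heap base  */
        free(on_heap);
        return 0;   /* run it twice: all three addresses should differ */
    }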