Memory Management - 2 Flashcards

1
Q

What is FIFO method in Swapping?

A

We remove the page that was swapped in earliest, i.e., the oldest page in memory (first in, first out).

2
Q

What is NRU method in Swapping?

A

We remove a page that has not been recently used, as indicated by the Referenced and Modified bits.
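A minimal sketch of how the choice can be made, assuming the standard textbook scheme of four page classes keyed on the Referenced and Modified bits (the page tuples here are made up for illustration):

```python
import random

def nru_victim(pages):
    """pages: list of (name, referenced, modified) tuples.
    Each page falls into class 2*R + M; evict a random page from the
    lowest-numbered non-empty class."""
    classes = {0: [], 1: [], 2: [], 3: []}
    for name, r, m in pages:
        classes[2 * r + m].append(name)
    for c in range(4):
        if classes[c]:
            return random.choice(classes[c])

pages = [("a", 1, 1), ("b", 0, 1), ("c", 1, 0), ("d", 0, 1)]
print(nru_victim(pages))  # "b" or "d": class 1 (not referenced, but modified)
```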

3
Q

What is 2nd chance FIFO method?

A

We operate according to FIFO, but if the chosen page's referenced bit is set, we clear the bit, move the page to the back of the queue (as if it had just arrived), and keep searching.
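The search loop can be sketched as follows (a minimal illustration; representing the FIFO queue as a `deque` of `[page, R-bit]` pairs is this sketch's own assumption):

```python
from collections import deque

def second_chance_victim(queue):
    """queue: deque of [page, referenced_bit] pairs, oldest page at the left.
    A page whose R bit is set gets a second chance: the bit is cleared and
    the page moves to the back of the queue as if it had just arrived."""
    while True:
        page, r = queue.popleft()
        if r:
            queue.append([page, 0])   # clear R bit, requeue, keep searching
        else:
            return page               # oldest page with R = 0 is evicted

q = deque([["a", 1], ["b", 0], ["c", 1]])
print(second_chance_victim(q))  # "b": "a" was referenced, so it got a second chance
```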

4
Q

What is the LRU swapping method?

A
  • Replace the page that has been used least recently.
  • This can be implemented using an n×n bit matrix, where n is the number of pages.
  • Each time a page k is referenced: row k is filled with 1's, then column k is filled with 0's. This makes row k the row with the greatest binary value (it is all 1's except position (k,k), while every other row j has a 0 at position (j,k)). In addition, this update does not change the relative order of the previously set rows. The row with the smallest binary value belongs to the least recently used page.
5
Q

What is the NFU Swapping method?

A

NFU - Not Frequently Used.

An LRU approximation (the aging variant):

  • We add a counter field to each page frame, which is updated on every clock tick. That is:
  • Divide the counter by 2 (shift right) - this maintains the ordering.
  • Add the reference bit from the left (as the most significant bit).
  • Clear the reference bit if it was set.

Why is it considered an approximation?

Two pages may end up with the same counter value, because within a single clock tick there may be several accesses to pages. Therefore, it is only an approximation of LRU.
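The tick update described above can be sketched as follows (the 8-bit counter width is an assumption of this sketch):

```python
BITS = 8  # counter width in bits (an assumption of this sketch)

def tick(counters, referenced):
    """One clock tick: halve each counter (shift right), add the R bit as
    the most significant bit, and clear the R bit."""
    for page in counters:
        counters[page] >>= 1
        if referenced[page]:
            counters[page] |= 1 << (BITS - 1)
            referenced[page] = 0

counters = {"a": 0, "b": 0}
referenced = {"a": 1, "b": 0}
tick(counters, referenced)               # "a" was referenced during this tick
referenced["b"] = 1
tick(counters, referenced)               # "b" was referenced during this tick
print(min(counters, key=counters.get))   # "a": referenced longer ago, evicted first
```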

6
Q

What is so strange about Belady’s anomaly?

A

Generally, as the number of available frames increases, the number of page faults decreases.

But in the case of FIFO (first in, first out), it can happen that more page frames, for the same reference string, result in more page faults.
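This can be demonstrated with a short simulation; the reference string below is the standard textbook example of the anomaly:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with the given frame count."""
    memory = deque()
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()     # evict the page that entered first
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 - more frames, yet more faults
```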

7
Q

What is special about stack Algorithms?

A

Assume a process p, a memory which can hold m pages, and a total of n virtual pages.

M(m,r): the set of virtual pages in memory after the r’th reference of p, given a memory of size m.

Stack algorithms satisfy the following inclusion property:

For any p, m, and reference string r, M(m, r) is a subset of M(m+1, r).

They are special because stack algorithms don't suffer from Belady's anomaly.
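Since LRU is a stack algorithm, the inclusion property can be checked empirically (a small sketch; the reference string is arbitrary):

```python
def lru_states(refs, frames):
    """Yield the set of pages in memory after each reference under LRU."""
    memory = []                      # least recently used page at the front
    for page in refs:
        if page in memory:
            memory.remove(page)
        elif len(memory) == frames:
            memory.pop(0)            # evict the least recently used page
        memory.append(page)
        yield set(memory)

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for small, large in zip(lru_states(refs, 3), lru_states(refs, 4)):
    assert small <= large            # M(m, r) is a subset of M(m+1, r)
print("inclusion property holds")
```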

8
Q

Page Replacement in Multi-Programming:

Local or global algorithms?

is fair-share allocation good?

allocate according to process size?

A
  • Local is better. When a page fault is raised in process p, a local algorithm looks for a page to swap out that belongs to p.
  • A global algorithm is free to select any page; this results in more page faults.

Fair share isn't good because it doesn't take into account the size of each process or how often its pages are used.

Allocation according to size is better, but there must be a minimum number of frames for a running process; not enforcing one results in a very high page-fault rate.

9
Q

What is Thrashing?

A

The process is busy swapping pages in and out, leaving little time for useful work.

10
Q
  • The degree of multiprogramming describes the maximum number of processes that a single-processor system can accommodate efficiently.
  • Do you remember the thrashing diagram? You should!
A
  • Increasing the multiprogramming level initially increases CPU utilization.
  • Beyond some point, thrashing sets in and the multiprogramming level must be decreased.
11
Q

What is the locality model?

A

Locality: a set of pages that are actively used together.

Locality model: states that as a process executes, it shifts from one locality to another.

For instance, when we enter a function - we need a set of pages relevant to its execution.

12
Q

What are the techniques to handle locality?

A

Working set model:

  • Based on the locality model.
  • Basic principle: if we allocate enough frames to a process to accommodate its current locality, it will only fault when it moves to a new locality.
  • But if the allocated frames are fewer than the current locality, the process is bound to thrash.
  • The working set of a process is the set of pages used by the A most recent memory references.
  • Note: A is predetermined.

WS(t, A) denotes the working set at time t.

13
Q
  • How is the Effective Access Time computed given the input:
  • page fault rate: p
  • memory access time: m
  • page-fault service time: l
  • Exercise: m = 100 nanosecs, l = 25 millisecs.
  • To get an effective access time with a 10% degradation over the access time m, what must p be?
A

Effective Access Time: (1-p)*m + p*l

Answer:

  • First, recall that nano = 10^-9 and milli = 10^-3.
  • Convert l from millisecs to nanosecs, i.e., multiply 25 by a factor of 10^6: l = 25,000,000 ns.
  • A degradation of 10% in access time means 100 + 10 = 110 nanosecs.
  • We have (1-p)*100 + p*25,000,000 = 100 - 100p + 25,000,000p = 110.
  • 24,999,900p = 10 -> p = 10/24,999,900 ≈ 4*10^-7.
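The hand calculation above can be reproduced directly:

```python
m = 100                  # memory access time: 100 ns
l = 25 * 10**6           # page-fault service time: 25 ms expressed in ns
target = 110             # 10% degradation over m: 110 ns

# Solve (1 - p) * m + p * l = target for p
p = (target - m) / (l - m)
print(p)                 # ~4e-07, matching the result above
```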
14
Q
  • The working-set model has a fixed parameter A.
  • What if A is too small? Too big? What if A = infinity?

Dt = sum of all |WS(t)| = the total size of all working sets at time t.

What if Dt > m? What can we do about it?

A

Too small: will not encompass an entire locality.

Too big: will encompass several localities.

Infinity: will encompass the entire program.

Dt > m: thrashing. To solve such a case, we can suspend one of the processes.

15
Q
  • How is the working set implemented?
  • What's the problem with counting the last A references?
  • What better alternative exists?
A
  • Counting the last A references is expensive.
  • Instead, we can use clock ticks and the fact that the reference bits are updated by the hardware.

A virtual time field is added to the process struct.

  • On each tick, update the process' virtual time and the virtual time of the referenced page-table entries. Then clear the R bit.
  • Page p's age is the difference between the process' virtual time and the time of p's last reference.
16
Q

What's the idea behind the Working Set Clock, global version?

A

The same idea as the WSClock algorithm, but applied to the pages held by all processes in memory.

On each clock tick, the algorithm uses:

a parameter Tau;

the pages' reference bits, the time of last reference [ref(frame)], and the current "virtual times" of the processes [Tp].

Replace a page when:

R = 0 and Tp - ref(frame) > Tau

If no page satisfies R = 0, we choose a dirty one.

If no page is older than Tau, we choose the oldest one.

17
Q

Can you review all Page Replacement Algorithms?

A
18
Q

What is the valid/present bit for?

A

1 - the page is in memory

0 - the page is not in memory

During address translation, if valid = 0, a page fault is raised.

19
Q

What happens in a page fault and how is it handled?

A

The MMU sends a hardware interrupt.

The program counter is saved on the stack, along with the registers.

The kernel is called and discovers which virtual page caused the fault.

The kernel checks the valid bit and verifies protection. If the access is illegal - send a signal to the process. Otherwise:

Check for a free page frame. If none is available, apply a page replacement algorithm.

If the selected page frame is dirty - write it to disk (context switch). Mark the frame as busy.

Read the page in from disk.

Update the page table.

Re-execute the faulting instruction: reload the registers and continue execution.

20
Q

What is included in PF = the time to process a page fault?

A

page-fault interrupt service time

+

page read-in time (maybe page write-out time too?)

+

restart process time.

21
Q

Provide an example of a conflict between virtual memory (page replacement) and I/O.

A
  • Process A issues a read from an I/O device into a buffer.
  • Process A is blocked.
  • Process B causes a page fault.
  • The page containing the buffer that A is waiting for is chosen to be swapped out.
22
Q
  • How can we solve the conflict between I/O and virtual memory?
  • Note: there are several CPU instructions in a cycle, thus thinking that any swapping algorithm will fix this issue is wrong(!)
A

We need to be able to lock (pin) the page in memory until the I/O completes.

23
Q

Provide an example of a conflict between page sharing and page replacement. How can it be solved?

A
  • When multiple processes execute the same code, we would like the text pages to be shared.
  • Paging out the pages of process A may cause many page faults for process B, which shares them.

Solution: maintain special data structures for shared pages.

24
Q

What is the copy-on-write mechanism?

A

With fork: the child process initially points to the same physical memory as the parent. When the child or the parent writes to a page, that page is copied to a new physical location for the writing process, so each process keeps its own view of memory.
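On a Unix system the effect is easy to observe (copy-on-write itself is transparent to the program; this sketch only shows that, after fork, the child's write does not change the parent's copy):

```python
import os

data = bytearray(b"parent")      # after fork, physically shared until written

pid = os.fork()
if pid == 0:                     # child: the write below triggers a private copy
    data[0:6] = b"child!"
    os._exit(0)

os.waitpid(pid, 0)
print(data.decode())             # "parent": unaffected by the child's write
```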

25
Q

Cleaning Policy

What is pre-paging? Is it better than demand paging? How is it implemented?

A

Pre-paging is loading memory pages before they are needed, so as to avoid page faults.

  • It can do better than demand paging because pages are prepared ahead of use.
  • It is often implemented by a paging daemon: a process executing in the background that periodically inspects the state of memory, cleaning pages and possibly evicting them.
  • Too few frames free? The daemon selects pages to evict using a replacement algorithm.
  • It can use WSClock.
26
Q

How are non-RAM-resident pages stored on disk?

What may be the problem with allocating a fixed chunk on disk to a process upon creation?

Are there other alternatives?

A

Option 1 (static): allocate a fixed chunk of the swap area to the process upon creation; keep the offset in the PTE.

Problem: memory requirements may change dramatically.

Option 2: reserve separate swap areas for text, data, and stack, and allow each to extend over multiple chunks.

Option 3 (dynamic): allocate swap space to a page when it is swapped out; de-allocate when it is brought back in.

Need to keep the swap address for each page. Why? In dynamic allocation, we must track the address for efficiency: without it, every search might be very long, because the page might be placed anywhere in the swap file.

27
Q

Virtual Memory - Pros & Cons

A

Pros:

Enables programs to use only the memory they need. That is, RAM is occupied only by memory that is actually in use.

Programs can use much larger memory than is provided by RAM.

External fragmentation is eliminated.

Cons:

Address translation takes time (there is special hardware for it - the MMU).

Increased OS complexity.

A page fault is an expensive operation.

Thrashing problem.