Quiz 1 Flashcards

1
Q

Is this more like a policy or a mechanism?:

Saving the registers in a context switch

A

Mechanism

2
Q

Is this more like a policy or a mechanism?:

Evicting a page to disk because it hasn’t been used in a while

A

Policy

3
Q

Is this more like a policy or a mechanism?:

Using a timer interrupt to jump back to the OS

A

Mechanism

4
Q

Is this more like a policy or a mechanism?:

Switching to another process when disk I/O occurs

A

Policy

5
Q

Is this more like a policy or a mechanism?:

Walking a page table to find a Page Frame

A

Mechanism

6
Q

Is this more like a policy or a mechanism?:

Address Translation

A

Mechanism

7
Q

Is this more like a policy or a mechanism?:

Splitting and Coalescing blocks on a free list

A

Mechanism

8
Q

Is this more like a policy or a mechanism?:

Picking a free block with a best fit strategy

A

Policy

9
Q

Why is it a good idea to separate policy from mechanisms?

A

It allows one to easily change policies without having to rethink the mechanism. This is a form of modularity, which is desired in any software system.

10
Q

Name and describe five examples of where the OS needed support from the hardware to implement some desired functionality

A

● Hardware timer generating an interrupt to jump back to the OS
● The TLB to speed up page table lookups
● Base and Bounds registers to implement segmentation
● CPUs having a privileged mode to support Limited Direct Execution
● Having a modified bit which is set by hardware when a page is written

11
Q

An operating system runs in privileged mode, a hardware state where it has full access to machine resources. Why is such a mode needed, and why can’t normal user processes enter privileged mode?

A

One of the responsibilities of an OS is to provide isolation between processes, so they cannot interfere with one another. Another is to arbitrate resources across all the processes in a system. If a process were run in privileged mode, it would be able to read and write all memory, access any disk sector, and in general wreak havoc in the system. Due to bugs or bad intentions, we cannot assume a user process will behave nicely, so we must restrict certain actions to the kernel.

12
Q

Since we don’t allow processes free reign in the system, how does a process request services from the Operating System? In other words, what is the mechanism for processes to have the OS do something on their behalf?

A

When a process wants services from the OS, it issues a syscall.

13
Q

When a Linux process calls fork(), the system creates (in the child) a nearly exact copy of the calling (parent) process. Why is the copy “nearly exact” and not an identical copy of the parent process?

A

At the time of the fork, the processes are identical, except that the return value from fork() is 0 in the child and the PID of the child in the parent.

14
Q

The shortest-job-first (SJF) and shortest-time-to-completion-first (STCF) policies are both unrealistic policies to implement in a general purpose operating system. Why?

A

They are unrealistic because they assume you know in advance how long the processes are going to run (in essence, being able to predict the future).

15
Q

Assume a multi-level feedback queue scheduling policy. In class we discussed a provision that periodically moved all jobs back to the topmost priority queue. What problem does this rule solve?

A

It solves two problems. First, processes are guaranteed not to starve. By being boosted to the top queue, they will be able to make progress (all processes in a high-priority queue are run in a round-robin fashion). Second, if a CPU-bound job has become interactive, the scheduler treats it properly once it has received the priority boost.

16
Q

For the following question, circle all answers that apply. A translation lookaside buffer (TLB) is generally used to:

(a) Translate virtual page numbers into physical page numbers
(b) Translate physical page numbers into virtual page numbers
(c) Make segmentation have the benefits of a pure paging approach
(d) Translate the addresses generated by loads (memory reads)
(e) Translate the addresses generated by stores (memory writes)
(f) Translate the addresses generated by instruction fetches
(g) Remove the need for a full-sized page table
(h) Make translations happen quickly
(i) Locate the page frame where the page table entry is stored
(j) Decide which page frames to evict

A

(a) Translate virtual page numbers into physical page numbers
(d) Translate the addresses generated by loads (memory reads)
(e) Translate the addresses generated by stores (memory writes)
(f) Translate the addresses generated by instruction fetches
(h) Make translations happen quickly

17
Q

Let’s say you decide to implement mm_malloc and mm_free for your lab assignment as follows: one half of available memory is divided into fixed-sized units of 4KB, and the other half is managed by a best-fit free list. If an allocation request is less than or equal to 4KB and there is space in the fixed-sized half, a 4KB unit is allocated from the fixed-sized half; otherwise, the best-fit algorithm is used over the other half of memory, and a block of the requested size is returned (if space is available).

(a) Assuming 32KB of total memory is available, what series of allocation requests will most quickly lead to all of memory getting allocated, all while requesting the least total amount of memory?
(b) What type(s) of fragmentation occurs with this new allocator and how can they occur?

A

(a) First, make four 1-byte calls to malloc (each consumes an entire 4 KB fixed-sized unit, exhausting that half), followed by a single 16 KB call to malloc, which consumes the best-fit half.
(b) Both internal and external fragmentation occur. Internal fragmentation occurs when a request of less than 4 KB is satisfied by one of the 4 KB fixed-sized units. External fragmentation occurs in the other half of memory after repeated calls to malloc and free.
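As a sketch of the best-fit policy over the free-list half (the `best_fit` helper and its array-of-sizes interface are ours, not the lab’s actual code):

```c
#include <assert.h>
#include <stddef.h>

/* Best fit: choose the smallest free block that still satisfies the request. */
int best_fit(const size_t *free_sizes, int n, size_t req) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (free_sizes[i] >= req &&
            (best == -1 || free_sizes[i] < free_sizes[best]))
            best = i;
    }
    return best;  /* index of the chosen free block, or -1 if nothing fits */
}
```

A real allocator would then split the chosen block and keep the remainder on the free list.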

18
Q
  1. Forking and process states
    We have a system with three (3) processes (A, B, and C) and a single CPU. Processes can be in one of three states: Running, Ready, or Blocked. If the process does not exist (yet), or has exited, just leave the entry blank.
    Below is a timeline of process behavior. Fill in the states of each process in the diagram below.

States of A, B, C?:

  1. Process A is loaded into memory and starts executing in main().
  2. Process A calls fork() and creates Process B (but A, the parent, keeps running).
  3. Process A issues a request to the disk; B starts executing at the return from fork().
  4. B calls fork(), creating Process C; B keeps running.
  5. B’s timeslice expires; C runs
  6. A’s I/O completes (but there are no other changes).
  7. C waits for user input. A runs.
A
  1. A: Running
    B: [blank]
    C: [blank]
  2. A: Running
    B: Ready
    C: [blank]
  3. A: Blocked
    B: Running
    C: [blank]
  4. A: Blocked
    B: Running
    C: Ready
  5. A: Blocked
    B: Ready
    C: Running
  6. A: Ready
    B: Ready
    C: Running
  7. A: Running
    B: Ready
    C: Blocked
19
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

(a) How many bits are in the offset portion of the virtual address?

A

10 bits. This comes from the page size, which is 2^10 = 1024 bytes.

20
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

(b) How many bits are in the VPN portion of the virtual address?

A

22 bits. This comes from subtracting the offset bits from the virtual address space (32-10).

21
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a linear page table:

(c) How many entries are there in the table?

A

You have 2^22 page table entries, one for each virtual page.

22
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a linear page table:

(d) What is the total size of the table? It may be helpful to note that 2^20 = 1 MB

A

2^22 * 4 = 2^24 = 16 MB.
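The arithmetic in parts (a)–(d) can be checked with a few helper functions (the function names are ours, purely for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Number of offset bits = log2 of the page size. */
unsigned offset_bits(uint64_t page_size) {
    unsigned bits = 0;
    while ((1ULL << bits) < page_size)
        bits++;
    return bits;
}

/* VPN bits = virtual address bits minus offset bits. */
unsigned vpn_bits(unsigned va_bits, uint64_t page_size) {
    return va_bits - offset_bits(page_size);
}

/* Linear page table size = one PTE per virtual page. */
uint64_t linear_pt_bytes(unsigned va_bits, uint64_t page_size, uint64_t pte_size) {
    return (1ULL << vpn_bits(va_bits, page_size)) * pte_size;
}
```

The same helpers reproduce the later 22-bit / 512-byte example as well.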

23
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a linear page table:

(e) In a live system, how much memory would be occupied by page tables? (what factors affect this?)

A

There is a page table for every single process in the system, so the total memory would be 16 MB times the number of processes in the system.

Linear page tables are too big. Hence, people came upon the idea of a multi-level page table, which uses a page directory to point to page-sized chunks of the page table. Assume we wish to build such a table, with two levels (as discussed in class). To access such a table, the VPN is split into two components: the VPN_PageDirIndex (the upper part of the VPN, which indexes into the page directory) and the VPN_PageTableIndex (the lower part of the VPN, which indexes into the page of the page table where the PTEs are located).

24
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a linear page table:

(f) How many PTEs fit onto a single page in this system?

A

Each page is 1024 bytes. Each page table entry is 4 bytes, so you can fit 1024 / 4 = 256 entries in a single page.

25
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table:

(g) How many bits are needed in the VPN_PageTableIndex?

A

Since there are 256 PTEs in a page, you need 8 bits to index the entries in a single page.

26
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table:

(h) How many bits are needed in the VPN_PageDirIndex?

A

There are 22 bits in the VPN. You need 8 bits for the page table index, so you need 22 - 8 = 14 bits.

28
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table:

(i) Assuming each page directory entry is 4 bytes, how much memory is needed for the page directory?

A

You need 2^14 * 4 = 2^16 bytes = 64 KB
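The 14/8 split of the 22-bit VPN comes down to a shift and a mask (a sketch; the macro and function names are ours):

```c
#include <assert.h>
#include <stdint.h>

#define PT_INDEX_BITS 8   /* 256 PTEs per 1 KB page (1024 / 4) */

/* Upper 14 bits of the VPN index the page directory. */
unsigned pd_index(uint32_t vpn) {
    return vpn >> PT_INDEX_BITS;
}

/* Lower 8 bits of the VPN select a PTE within one page of the table. */
unsigned pt_index(uint32_t vpn) {
    return vpn & ((1u << PT_INDEX_BITS) - 1);
}
```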

29
Q

Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size.

Finally, for the last part of this question, for the given memory allocations, write down both (a) how much memory our multi-level page table consumes and (b) how much memory a linear page table consumes:

(j) Code is located at address 0 and there are 100 4-byte instructions. The heap starts at page 1 and uses 3 total pages. The stack starts at the other end of the address space, grows backward, and uses 3 total pages.
1. Multi-level page table size?
2. Linear page table size?

A
  1. Each chunk of the page table covers 256 consecutive pages. The code and heap fall in the first chunk and the stack in the last, so you need one page of PTEs for each, and each page is 1024 bytes. You also need the page directory, which was calculated in part (i) as 64 KB. In total: 2 KB (pages of PTEs) + 64 KB (directory) = 66 KB.
  2. 16 MB (same as you calculated in part d).


30
Q

List two (2) mechanisms and two (2) policies that we have talked about in class.

A

(i) Mechanism 1: Context Switch
(ii) Mechanism 2: Splitting and Coalescing (and many others)
(iii) Policy 1: Multi-level feedback Queue scheduler
(iv) Policy 2: Best fit memory allocator (and many others)

31
Q

What is the difference between a mechanism and a policy?

A

Mechanisms provide the answer to a “how” question about a system; policies provide the answer to a “which” question.

32
Q

Our model of user processes is that we assume they can’t be trusted, and yet we let these user processes run directly on the CPU in what we call Limited Direct Execution (LDE). List two ways that the OS puts restrictions on the running of a user process, so that a process does not have unrestricted use of the CPU.

A
  1. The OS will interrupt a process with a timer interrupt which can cause the process to be switched out with another process.
  2. The CPU has a user mode and a kernel mode. Privileged instructions can only be executed in kernel mode, which is entered via a trap into the kernel.
33
Q

The VAX/VMS system for historic reasons had a very small page size of 512 bytes. This could lead to very large page tables, so they designed a hybrid system in an attempt to reduce the size of the page table. How does the VAX/VMS hybrid system work (i.e., what is it a hybrid of)? How does this hybrid approach reduce the size of the page table?

A

It is a hybrid of paging and segmentation. Each segment has a base and bounds register. Page table entries above the bounds register are not needed, keeping the size of the page table smaller.

34
Q

We talked about the Multi-Level Feedback Queue (MLFQ) scheduling algorithm in class. What is the feedback that the algorithm uses and how is it used?

(i) What is the feedback?
(ii) How is the feedback used?

A

i. How much of the time slice the process uses.

ii. If a process uses the entire time slice, it is moved to a lower priority queue.
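The demotion rule can be sketched as a tiny function (the names are ours; a real MLFQ tracks more state per job):

```c
#include <assert.h>

/* MLFQ feedback sketch: queue 0 is highest priority. A job that used its
   entire time slice is demoted one level; one that yielded early stays put. */
int next_queue(int queue, int used_full_slice, int num_queues) {
    if (used_full_slice && queue < num_queues - 1)
        return queue + 1;  /* demote toward lower priority */
    return queue;
}
```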

35
Q

Processes can be in one of several states. Briefly describe what it means for a process to be in each of the following states. If a state is not one we’ve talked about in class (and belongs to a student and not a process), put “You’re being silly” for the answer.

A

(i) Running: The process is currently executing on the processor.
(ii) Blocked: The process is waiting on I/O operation to complete before it can continue.
(iii) Ready: The process is able to run but is not currently running.
(iv) Tired of studying: Not a real process state.
(v) Waiting for fall break: Not a real process state.

36
Q

We studied Proportional Share scheduling as an alternative to the MLFQ. List one advantage and one disadvantage of proportional share scheduling. If you are thinking about writing non-deterministic as one of your answers, remember you can use stride scheduling to implement a deterministic proportional share scheduler.

A

(i) Advantage: Simple to implement and requires only minimal state (e.g., ticket counts).
(ii) Disadvantage: Difficult to know how to allocate tickets.

37
Q

To create a new process, Linux goes through a two step procedure, a fork() followed by an exec(). Why are they separated into two system calls?

A

After the call to fork(), the code can alter the environment of the about-to-be-run program, such as by replacing STDIN with a redirected input file.

38
Q

Copying a process (and hence all of its memory) with a fork(), only to replace it with another program in the exec() immediately seems horribly horribly wasteful. What Simpsonesque technique is used to make all of this efficient and how does it work?

A

It uses Copy On Write (COW) (Don’t have a COW man!). Memory of the new process is mapped to the old process. Only when one of the pages (from either process) is written is it copied.

39
Q

True or False?:
The Shortest Job First (SJF) scheduling algorithm is optimal with respect to fairness (e.g., Jain’s Fairness Index).

A

False. Long-running jobs can be starved by shorter jobs; SJF is optimal with respect to turnaround time, not fairness.

40
Q

True or False?:
The First Come, First Served (FCFS) scheduling algorithm is subject to the convoy effect, where short running processes can queue up behind long running processes.

A

True

41
Q

True or False?:

If response time is an important metric for your system, you should use Round-Robin (RR) scheduling.

A

True

42
Q

True or False?:
In a Multi-Level Feedback Queue (MLFQ) scheduler, I/O-bound processes are moved to lower priority queues because they spend most of their time waiting for I/O anyway.

A

False. I/O-bound jobs are kept in higher priority queues to increase responsiveness.

43
Q

True or False?:
The Worst-Fit memory allocator suffers from internal fragmentation.

A

False. The chunk of memory is split before it is returned, so worst fit doesn’t suffer from internal fragmentation. It does, however, suffer from external fragmentation.

44
Q

True or False?:
Most of the time spent during a context switch is swapping out the pages of the old process and swapping in the pages of the new process.

A

False. Memory is not swapped out during a context switch; it would take far too long to do so.

45
Q

True or False?:

Base and bounds registers are easy to support in hardware and allow for fast address translations.

A

True
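To see why it’s fast: translation is one compare and one add per access (a sketch; the function name and return convention are ours):

```c
#include <assert.h>
#include <stdint.h>

/* Returns 1 and writes the physical address on success; 0 on a bounds fault. */
int translate_bb(uint32_t vaddr, uint32_t base, uint32_t bounds, uint32_t *paddr) {
    if (vaddr >= bounds)
        return 0;            /* out of bounds: hardware would raise a fault */
    *paddr = base + vaddr;   /* one add does the translation */
    return 1;
}
```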

46
Q

True or False?:
One of the problems of using segmentation for virtual memory is that it is expensive to search for the segment a virtual address belongs to.

A

False. The segment is found directly from the segment bits (the uppermost bits) of the virtual address.
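Extracting the segment is just a shift (a sketch with made-up sizes: a 14-bit address and 2 segment bits):

```c
#include <assert.h>
#include <stdint.h>

/* Segment number = uppermost seg_bits of the virtual address. */
unsigned segment_of(uint32_t vaddr, unsigned va_bits, unsigned seg_bits) {
    return vaddr >> (va_bits - seg_bits);
}
```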

47
Q
True or False?: 
Fast coalescing (e.g., coalescing done in constant time) of free memory blocks requires allocated blocks to have both a header and a footer.
A

True

48
Q

True or False?:
Malloc keeps track of allocated regions on the Explicit Allocated List (EAL).

A

False. There is no such thing as the EAL; once a region is allocated, malloc doesn’t keep track of it.

49
Q

True or False?:

Page tables are stored in memory, one page table per process.

A

True

50
Q

True or False?:
When a page fault occurs, the OS usually terminates the process.

A

False. A page fault occurs when a page is on disk rather than in memory. The fault causes the OS to copy the page from disk into main memory and restart the instruction; the memory access then succeeds since the page is now in memory.

51
Q
You are given the following code:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    printf("a");
    fork();
    printf("b");
    return 0;
}

Assuming fork() succeeds, what are the possible outputs of this program? Circle either possible or not possible for each output.

(i) ab
(ii) abb
(iii) bab
(iv) bba
(v) a

A

(i) ab Not possible
(ii) abb Possible
(iii) bab Not possible
(iv) bba Not possible
(v) a Not possible

52
Q
You are given the following code:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    printf("a");
    fork();
    printf("b");
    return 0;
}

What is the return value of fork() for the child process?

A

0
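A sketch showing both return values at once (the helper name and the exit code 42 are ours): the child sees 0 from fork(), while the parent sees the child’s PID and can recover the child’s exit status via waitpid().

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Forks, has the child exit with code 42, and returns that code as
   recovered by the parent via waitpid(). */
int child_exit_code(void) {
    pid_t rc = fork();
    if (rc == 0)              /* child: fork() returned 0 */
        _exit(42);
    int status = 0;
    waitpid(rc, &status, 0);  /* parent: rc is the child's PID */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```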

53
Q

Assume the following: a 22-bit virtual address space, with a 512-byte page size.

(a) How many bits are in the offset portion of the virtual address?
(b) How many bits are in the VPN portion of the virtual address?

A

(a) How many bits are in the offset portion of the virtual address?
2^9 = 512, so 9 bits

(b) How many bits are in the VPN portion of the virtual address?
22 - 9 = 13 bits

54
Q

Assume the following: a 22-bit virtual address space, with a 512-byte page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a linear page table:

(c) How many entries are there in the table?
(d) What is the total size of the table? It might be helpful to know that 2^10 is 1KB and 2^20 = 1 MB.
(e) In a live system, what factors affect how much total memory would be occupied by the page tables?

A

(c) 2^13 = 8192 entries, one for each virtual page
(d) 2^13 * 2^2 = 2^15 = 2^10 * 2^5 = 32 KB
(e) Each process needs its own page table, so the number of processes in the system affects the total memory of the page tables.

Linear page tables are too big. Hence, people came upon the idea of a multi-level page table, which uses a page directory to point to page-sized chunks of the page table. Assume we wish to build such a table, with two levels (as discussed in class). To access such a table, the VPN is split into two components: the VPN_PageDirIndex (the upper part of the VPN, which indexes into the page directory) and the VPN_PageTableIndex (the lower part of the VPN, which indexes into the page of the page table where the PTEs are located).

55
Q

In a live system, what factors affect how much total memory would be occupied by the page tables?

A

Each process needs its own page table, so the number of processes in the system affects the total memory of the page tables.

Linear page tables are too big. Hence, people came upon the idea of a multi-level page table, which uses a page directory to point to page-sized chunks of the page table. Assume we wish to build such a table, with two levels (as discussed in class). To access such a table, the VPN is split into two components: the VPN_PageDirIndex (the upper part of the VPN, which indexes into the page directory) and the VPN_PageTableIndex (the lower part of the VPN, which indexes into the page of the page table where the PTEs are located).

56
Q

Assume the following: a 22-bit virtual address space, with a 512-byte page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table:

(f) How many PTEs fit onto a single page in this system?
(g) How many bits are needed in the VPN_PageTableIndex?
(h) How many bits are needed in the VPN_PageDirIndex?

A

(f) Each PTE is 4 bytes and each page is 512 bytes, so 512 / 4 = 2^9 / 2^2 = 2^7 = 128 PTEs fit on a single page.

(g) 2^7 = 128 so 7 bits
(h) 13 - 7 = 6 bits

57
Q

Assume the following: a 22-bit virtual address space, with a 512-byte page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table:

(i) Assuming each page directory entry is 4 bytes, how much memory is needed for the page directory?
(j) With a linear page table, you need a single register to locate the page table, assuming that the hardware does the lookup upon a TLB miss, how many registers do you need to locate a two-level page table? Why?

A

i. 2^6 * 2^2 = 2^8 = 256 bytes

j. Still only one register, which locates the page directory. The page directory entries then hold the locations of the pages of PTEs.

58
Q

Assume the following: a 22-bit virtual address space, with a 512-byte page size.

Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size.

Finally, for the last part of this question, for the given memory allocations, write down both (a) how much memory our multi-level page table consumes and (b) how much memory a linear page table consumes:

Code is located at address 0 and there are 200 4-byte instructions. The heap starts at page 2 and uses 3 total pages. The stack starts at the other end of the address space, grows backward, and uses 2 total pages.

(k) Multi-level page table size?
(l) Linear page table size?

A

(k) You need 1 page for the page directory and 2 pages of PTEs = 512 * 3 = 1536 bytes
(l) Same as before, 32 KB

59
Q

Consider the C program below. (For space reasons, we are not checking error return codes, so assume that all functions return normally):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    if (fork() == 0) {
        if (fork() == 0) {
            printf("3");
        } else {
            pid_t pid;
            int status;
            if ((pid = wait(&status)) > 0) {
                printf("4");
            }
        }
    } else {
        if (fork() == 0) {
            printf("1");
            exit(0);
        }
        printf("2");
    }
    printf("0");
    return 0;
}
Out of the 5 outputs listed below, circle only the valid outputs of this program. Assume that all processes run to normal completion: 
A. 2030401
B. 1234000
C. 2300140
D. 2034012
E. 3200410
A

A. 2030401
C. 2300140
E. 3200410

B and D are not valid because ‘3’ and ‘0’ must be printed before ‘4’ is printed due to the wait.

60
Q
The following 3 jobs (with their running times listed) arrive at time 0 in the following order: 
Job 0 ( length = 4 )
Job 1 ( length = 1 )
Job 2 ( length = 6 )

Compute the response time and the turnaround time for the following scheduling algorithms. Assume the RR order is the same as the order in which the jobs are listed and a time quantum of 1.

SJF (Shortest Job First):
Job 0:
Response Time:
Turnaround Time:

Job 1:
Response Time:
Turnaround Time:

Job 2:
Response Time:
Turnaround Time:

A

SJF (Shortest Job First):
Job 0:
Response Time: 1
Turnaround Time: 5

Job 1:
Response Time: 0
Turnaround Time: 1

Job 2:
Response Time: 5
Turnaround Time: 11

61
Q
The following 3 jobs (with their running times listed) arrive at time 0 in the following order: 
Job 0 ( length = 4 )
Job 1 ( length = 1 )
Job 2 ( length = 6 )

Compute the response time and the turnaround time for the following scheduling algorithms. Assume the RR order is the same as the order in which the jobs are listed and a time quantum of 1.

FIFO (First In First Out):
Job 0:
Response Time:
Turnaround Time:

Job 1:
Response Time:
Turnaround Time:

Job 2:
Response Time:
Turnaround Time:

A

FIFO (First In First Out):
Job 0:
Response Time: 0
Turnaround Time: 4

Job 1:
Response Time: 4
Turnaround Time: 5

Job 2:
Response Time: 5
Turnaround Time: 11
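For run-to-completion policies like FIFO and SJF, the numbers above follow from a single running clock; a sketch (the function name is ours; all jobs arrive at time 0):

```c
#include <assert.h>

/* Run jobs to completion in the given order; all arrive at time 0.
   resp[i]/turn[i] are for the i-th job *in that order*. */
void schedule(const int *len, int n, int *resp, int *turn) {
    int t = 0;
    for (int i = 0; i < n; i++) {
        resp[i] = t;   /* first scheduled when all earlier jobs finish */
        t += len[i];
        turn[i] = t;   /* completion time = turnaround, since arrival is 0 */
    }
}
```

FIFO passes the jobs in arrival order; SJF passes them sorted by length.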

62
Q
The following 3 jobs (with their running times listed) arrive at time 0 in the following order: 
Job 0 ( length = 4 )
Job 1 ( length = 1 )
Job 2 ( length = 6 )

Compute the response time and the turnaround time for the following scheduling algorithms. Assume the RR order is the same as the order in which the jobs are listed and a time quantum of 1.

RR (Round Robin):
Job 0:
Response Time:
Turnaround Time:

Job 1:
Response Time:
Turnaround Time:

Job 2:
Response Time:
Turnaround Time:

A

RR (Round Robin):
Job 0:
Response Time: 0
Turnaround Time: 8

Job 1:
Response Time: 1
Turnaround Time: 2

Job 2:
Response Time: 2
Turnaround Time: 11
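Round robin with a quantum of 1 can be simulated tick by tick (a sketch with a made-up cap of 16 jobs; all jobs arrive at time 0):

```c
#include <assert.h>

/* Quantum-1 round robin over jobs that all arrive at time 0 (n <= 16). */
void rr(const int *len, int n, int *resp, int *turn) {
    int rem[16];
    for (int i = 0; i < n; i++) { rem[i] = len[i]; resp[i] = -1; }
    int t = 0, left = n;
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;       /* job already finished */
            if (resp[i] < 0) resp[i] = t;    /* first tick this job runs */
            rem[i]--; t++;
            if (rem[i] == 0) { turn[i] = t; left--; }
        }
    }
}
```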

63
Q

In a Limited Direct Execution environment, circle all the ways a program can stop running to let the OS run:

(i) The program volunteers to stop running
(ii) A program requests a service from the OS via a syscall
(iii) The CPU performs a context switch to the kernel
(iv) A hardware interrupt occurs
(v) The program executes an illegal instruction

A

(i) The program volunteers to stop running
(ii) A program requests a service from the OS via a syscall
(iv) A hardware interrupt occurs
(v) The program executes an illegal instruction

—-
(iii) is wrong because:
Context switches occur between processes and are performed by the OS; the CPU doesn’t perform context switches on its own.

64
Q

Which are valid process state transitions? Circle all that apply.

(i) READY to BLOCKED
(ii) RUNNING to BLOCKED
(iii) READY to RUNNING
(iv) BLOCKED to RUNNING
(v) BLOCKED to READY
(vi) RUNNING to READY

A

(ii) RUNNING to BLOCKED
(iii) READY to RUNNING
(v) BLOCKED to READY
(vi) RUNNING to READY

65
Q

Which of the following always happens during a context switch? Circle all that apply.

(i) The registers of the currently running process are saved
(ii) The memory of the soon-to-run-process is swapped in from disk
(iii) The OS moves the currently running process to the BLOCKED state
(iv) The OS executes a return-from-trap into the soon-to-run-process to start running the new process
(v) The TLB is invalidated
(vi) If paging is used, the page table base register is updated

A

(i) The registers of the currently running process are saved
(iv) The OS executes a return-from-trap into the soon-to-run-process to start running the new process
(vi) If paging is used, the page table base register is updated
—-
Why others are wrong:
(ii) Memory is not swapped out during a context switch. It would take way too long!
(iii) It may move to the BLOCKED state if the context switch was caused by an I/O request, but a process could also have used up its time slice, in which case the OS moves it to the READY state.
(v) ASIDs are used so different processes can share the TLB.

66
Q

(d) The Multi-level Feedback Queue (MLFQ) is a fancy scheduler that does lots of things. Circle all the correct things that you could say about this approach.

(i) MLFQ learns things about running jobs
(ii) MLFQ starves long running jobs
(iii) MLFQ uses different length time slices for jobs
(iv) MLFQ uses round robin
(v) MLFQ forgets what it has learned about running jobs sometimes

A

(i) MLFQ learns things about running jobs
Did the job use its full time slice?
(iii) MLFQ uses different length time slices for jobs
Lower priority queues typically have longer time slices.
(iv) MLFQ uses round robin
For scheduling jobs at the same priority.
(v) MLFQ forgets what it has learned about running jobs sometimes
Priority boost resets knowledge of how a process behaves.
—-
Why (ii) is wrong:
Priority boost makes sure long running jobs make progress.

67
Q

(e) Which of the following are policies? Circle all that apply.

(i) Picking a process to run after a timer interrupt
(ii) Performing a context switch
(iii) Saving space with a multi-level page table
(iv) The clock algorithm selecting a page for eviction
(v) Writing a dirty page out to disk

A

(i) Picking a process to run after a timer interrupt

(iv) The clock algorithm selecting a page for eviction

68
Q

(f) Which of the following can lead to external fragmentation? Circle all that apply.

(i) Having a large page size (e.g., 2 meg)
(ii) Using segments for virtual memory
(iii) Using a best fit algorithm for memory allocation (i.e., malloc)
(iv) Using a base and bounds register for virtual memory with a fixed sized segment

A

(ii) Using segments for virtual memory
(iii) Using a best fit algorithm for memory allocation (i.e., malloc)
—-
Why others are wrong:
(i) Could lead to internal fragmentation.
(iv) Since all segments are the same size, there won’t be external fragmentation.

69
Q

(g) Which of the following schedulers are designed for fairness? Circle all that apply.
(i) SJF
(ii) Lottery Scheduling
(iii) The Linux CFS scheduler
(iv) MLFQ
(v) FIFO

A

(ii) Lottery Scheduling

(iii) The Linux CFS scheduler

70
Q

(h) Which of the following are you likely to see in a memory allocator (i.e., malloc)? Circle all that apply.

(i) Block coalescing
(ii) LRU replacement
(iii) Segmented free lists
(iv) Buddy allocators
(v) Clock algorithm

A

(i) Block coalescing
(iii) Segmented free lists
(iv) Buddy allocators

71
Q

We wish to make virtual memory translation as fast as possible. Which of the following techniques would speed up memory translation? Circle all that apply.

(i) When using page tables, double the size of a page
(ii) Use segmentation instead of a page table
(iii) Move from a single-level to multi-level page table
(iv) For page tables, move from a fully associative to a direct mapped TLB
(v) When using segmentation, double the number of segments used

A

(i) When using page tables, double the size of a page
Smaller page table, less TLB pressure as each page covers a larger section of memory.
(ii) Use segmentation instead of a page table
Segmentation is faster, as you don’t need to look up an entry in the page table.

Why the rest are wrong:
(iii) Move from a single-level to multi-level page table
This would reduce the size of the page table, but takes longer to “walk” the levels in the page table.
(iv) For page tables, move from a fully associative to a direct mapped TLB
In a fully associative TLB, a page entry can go anywhere, in a direct mapped TLB there is a fixed position in the TLB for an entry, which can lead to conflict misses.
For this reason most TLBs have a high level of associativity.
(v) When using segmentation, double the number of segments used
This would help reduce fragmentation, but not make the translation go any faster.

72
Q

(a) A tribute to our ancestors. The VAX/VMS system for historic reasons had a very small page size of 512 bytes. This could lead to very large page tables. What did the creators of VMS do to reduce the pressure page tables placed on memory?

A

They used a hybrid scheme of segmentation and page tables. A bounds register would limit the number of page table entries needed for a segment. They also placed page tables in kernel virtual memory, making it easier to swap out page table entries if the system runs low on memory.

73
Q

(b) An ode to procrastination. We have seen several examples of where the OS delays doing work until it absolutely has to. Describe one of these examples, explaining what work is being delayed, when the work is finally done, and what the advantage of procrastination is.

A

Copy on Write delays copying memory from one address space to another by marking the pages “read only” in the page table. When a process actually does a write, the page is copied into the new address space.
Another example is “demand zeroing” of pages where a page of memory is not actually allocated and zeroed out until it has been explicitly accessed by the process.
Demand paging, only loading in a page of memory on a page fault.
Delaying page writes to write out multiple pages in a group.

74
Q

(c) An ode to randomness. We saw several examples of where a reasonable course of action for the OS is to do something random. Describe one of these examples and explain why a random action can be preferred over another method.

A

Lottery scheduling decides randomly which process to run (proportionally based on the number of tickets a process has). The advantages are that it is simple to implement, decisions are made quickly, and there is little state to track (the number of tickets a process has and the total number of tickets).
Another example is random page selection for eviction. Again, it is easy to implement, decisions are made quickly, and it avoids bad corner-case behavior (such as LRU looping through a working set slightly larger than available memory).

75
Q

(d) They say an elephant never forgets (why is that?) but perhaps we should say malloc() never forgets. When we call free() to free a previously allocated piece of memory, we just pass in the pointer of the memory to be freed. How does the memory allocator know how much memory to free? Also, describe why a double free (calling free() with the same pointer twice) is a bad idea. What could go wrong and why?

A

There is a block of information (including the size of the block) kept in a header right above the pointer returned by malloc(). free() uses the size from the header block to determine how much space to free. A double free is bad because once the pointer is freed, the block may be reallocated; the second free() would then release memory that another allocation may now be using, corrupting the allocator’s state.
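A minimal sketch (in Python, purely for illustration) of how malloc-style allocators track block sizes: a hidden header just before the returned pointer records the size, so free() needs only the pointer. All names here (TinyAllocator, HEADER) are invented for this sketch, not a real allocator API.

```python
class TinyAllocator:
    HEADER = 8  # bytes reserved just before the user pointer for the size header

    def __init__(self, size):
        self.heap = bytearray(size)
        self.free_list = [(0, size)]   # (offset, length) of free blocks
        self.allocated = {}            # user offset -> header offset

    def malloc(self, n):
        need = n + self.HEADER
        for i, (off, length) in enumerate(self.free_list):
            if length >= need:
                # Split the free block: carve off what we need.
                self.free_list[i] = (off + need, length - need)
                # Stash the block size in the header.
                self.heap[off:off + self.HEADER] = need.to_bytes(8, "little")
                user = off + self.HEADER
                self.allocated[user] = off
                return user
        return None  # out of memory

    def free(self, user):
        if user not in self.allocated:
            raise RuntimeError("double free or invalid pointer")
        off = self.allocated.pop(user)
        # Read the size back from the header -- this is how free()
        # "remembers" how much memory to release.
        size = int.from_bytes(self.heap[off:off + self.HEADER], "little")
        self.free_list.append((off, size))

a = TinyAllocator(128)
p = a.malloc(16)
a.free(p)      # fine
# a.free(p)    # would raise: the allocator no longer owns this pointer
```

A real C allocator does the same trick with pointer arithmetic on the header that sits just below the address malloc() handed out.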

76
Q

(e) A little help from our hardware friends. Many architectures have a use bit (sometimes called the reference bit). Whenever a page is referenced (i.e., read or written), the use bit is set by hardware to 1. Describe an example of how the OS can make use of the use bit. How is it used and why is it useful?

A

This can be used to approximate LRU with the clock algorithm: the OS sweeps a “clock hand” through the frames, clearing each use bit it finds set; a page whose use bit is already 0 has not been referenced recently and is a good candidate for eviction.
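A small sketch of the clock algorithm (an illustrative model, not any particular OS’s code): each frame has a use bit that the hardware would set on access; the OS sweeps a clock hand, clearing set bits and evicting the first frame whose bit is already 0.

```python
def clock_evict(use_bits, hand):
    """Return (victim_frame, new_hand), clearing use bits as we sweep."""
    n = len(use_bits)
    while True:
        if use_bits[hand] == 0:
            # Not referenced since the last sweep: evict this frame.
            return hand, (hand + 1) % n
        use_bits[hand] = 0          # clear the bit: give the page a second chance
        hand = (hand + 1) % n

# Frames 0 and 1 were recently used; frame 2 was not.
bits = [1, 1, 0]
victim, hand = clock_evict(bits, 0)
print(victim)   # frame 2 is evicted; frames 0 and 1 had their bits cleared
```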

77
Q

(f) Everyone deserves a second chance. We have seen examples of the OS giving a page a second chance before being evicted. Describe one of these examples and explain how the page is given a second chance and why this is a good thing.

A

Before VMS evicts a page, it puts it on a global clean-page or dirty-page list, making it available for eviction. If a page is accessed again before it is evicted, it is removed from the list. This approximates LRU, avoiding costly swaps to disk for active pages.

78
Q

The following 4 jobs (with their running times listed) arrive at time 0 in the following order:
Job 0 ( length = 10 )
Job 1 ( length = 5 )
Job 2 ( length = 3 )
Job 3 ( length = 4 )
Compute the response time and the turnaround time for the following scheduling algorithms. Assume the RR order is the same as the order in which the jobs are listed and a time quantum of 2.

SJF (Shortest Job First):
Job 0:
Response Time:
Turnaround Time:

Job 1:
Response Time:
Turnaround Time:

Job 2:
Response Time:
Turnaround Time:

Job 3:
Response Time:
Turnaround Time:

A

SJF (Shortest Job First):
Job 0:
Response Time: 12
Turnaround Time: 22

Job 1:
Response Time: 7
Turnaround Time: 12

Job 2:
Response Time: 0
Turnaround Time: 3

Job 3:
Response Time: 3
Turnaround Time: 7
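As a sanity check, here is a short Python sketch (illustrative, not exam material) that reproduces the SJF numbers above: jobs are sorted by length (ties kept in arrival order), response time is when a job first runs, and turnaround is when it finishes, with all jobs arriving at t=0.

```python
def sjf_times(lengths):
    # Run jobs shortest-first; sorted() is stable, so ties keep arrival order.
    order = sorted(range(len(lengths)), key=lambda j: lengths[j])
    t, resp, turn = 0, {}, {}
    for j in order:
        resp[j] = t            # first (and only) time the job starts
        t += lengths[j]
        turn[j] = t            # run-to-completion: finish time
    return resp, turn

resp, turn = sjf_times([10, 5, 3, 4])
print(resp)  # {2: 0, 3: 3, 1: 7, 0: 12}
print(turn)  # {2: 3, 3: 7, 1: 12, 0: 22}
```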

79
Q

The following 4 jobs (with their running times listed) arrive at time 0 in the following order:
Job 0 ( length = 10 )
Job 1 ( length = 5 )
Job 2 ( length = 3 )
Job 3 ( length = 4 )
Compute the response time and the turnaround time for the following scheduling algorithms. Assume the RR order is the same as the order in which the jobs are listed and a time quantum of 2.

FIFO (First In First Out):
Job 0:
Response Time:
Turnaround Time:

Job 1:
Response Time:
Turnaround Time:

Job 2:
Response Time:
Turnaround Time:

Job 3:
Response Time:
Turnaround Time:

A

FIFO (First In First Out):
Job 0:
Response Time: 0
Turnaround Time: 10

Job 1:
Response Time: 10
Turnaround Time: 15

Job 2:
Response Time: 15
Turnaround Time: 18

Job 3:
Response Time: 18
Turnaround Time: 22

80
Q

The following 4 jobs (with their running times listed) arrive at time 0 in the following order:
Job 0 ( length = 10 )
Job 1 ( length = 5 )
Job 2 ( length = 3 )
Job 3 ( length = 4 )
Compute the response time and the turnaround time for the following scheduling algorithms. Assume the RR order is the same as the order in which the jobs are listed and a time quantum of 2.

RR (Round Robin):
Job 0:
Response Time:
Turnaround Time:

Job 1:
Response Time:
Turnaround Time:

Job 2:
Response Time:
Turnaround Time:

Job 3:
Response Time:
Turnaround Time:

A

RR (Round Robin):
Job 0:
Response Time: 0
Turnaround Time: 22

Job 1:
Response Time: 2
Turnaround Time: 18

Job 2:
Response Time: 4
Turnaround Time: 13

Job 3:
Response Time: 6
Turnaround Time: 15
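The round-robin numbers above can be reproduced with a short Python sketch (again illustrative only): the ready queue starts in job order, and a job preempted at the end of its 2-unit quantum goes to the back of the queue.

```python
from collections import deque

def rr_times(lengths, quantum=2):
    remaining = list(lengths)
    queue = deque(range(len(lengths)))   # ready queue in job order
    t, resp, turn = 0, {}, {}
    while queue:
        j = queue.popleft()
        resp.setdefault(j, t)            # record the first time this job runs
        run = min(quantum, remaining[j])
        t += run
        remaining[j] -= run
        if remaining[j] > 0:
            queue.append(j)              # preempted: back of the line
        else:
            turn[j] = t                  # finished: record turnaround
    return resp, turn

resp, turn = rr_times([10, 5, 3, 4])
print(resp)  # {0: 0, 1: 2, 2: 4, 3: 6}
print(turn)  # {2: 13, 3: 15, 1: 18, 0: 22}
```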

81
Q
Which one of the above algorithms is the most fair?
Circle one: 
SJF 
FIFO 
RR
A

RR

82
Q

The MLFQ (multi-level feedback queue) scheduler has a number of rules. Circle the rules that are actually part of the final MLFQ policy:

  1. If Priority(A) > Priority(B), A runs (B does not)
  2. If Priority(A) > Priority(B), A is run to completion, followed by B
  3. If Priority(A) < Priority(B), A runs (B does not)
  4. If Priority(A) < Priority(B), A is run to completion, followed by B
  5. If Priority(A) = Priority(B), A and B are run in a round-robin fashion
  6. If Priority(A) = Priority(B), they are run to completion in FIFO order
  7. When a job enters the system, it is placed at the highest priority queue (the topmost queue)
  8. When a job enters the system, it is placed at the lowest priority queue (the bottommost queue)
  9. When a job enters the system, it is placed in the queue with the fewest number of jobs
  10. Once a job uses up its time slice at a given level, it moves to the end of the round-robin queue
  11. Once a job uses up its time slice at a given level, its priority is reduced
  12. Once a job uses up its time slice, it is given a priority boost
  13. After some time period, move all the jobs in the system to the highest-priority queue
  14. After some time period, move each job in the system one level higher
  15. After some time period, move each job in the system one level lower
A
  1. If Priority(A) > Priority(B), A runs (B does not)
  5. If Priority(A) = Priority(B), A and B are run in a round-robin fashion
  7. When a job enters the system, it is placed at the highest priority queue (the topmost queue)
  11. Once a job uses up its time slice at a given level, its priority is reduced
  13. After some time period, move all the jobs in the system to the highest-priority queue
83
Q
  1. Forking and process states
    We have a system with three (3) processes (A, B, and C) and a single CPU. Processes can be in one of three states: Running, Ready, or Blocked. If the process does not exist (yet), or has exited, just leave the entry blank.
    Below is a timeline of process behavior. Fill in the states of each process in the diagram below.

States of A, B, C?:

  1. Process A is loaded into memory and starts executing in main().
  2. Process A calls fork() and creates Process B (but A, the parent, keeps running).
  3. Process A issues a request to the disk; B starts executing at the return from fork().
  4. B calls fork(), creating Process C; B keeps running.
  5. B’s timeslice expires; C runs
  6. C calls exec(); there are no other changes.
  7. A’s I/O completes (but there are no other changes).
  8. C waits for user input. A runs.
A
  1. A: Running
    B: [blank]
    C: [blank]
  2. A: Running
    B: Ready
    C: [blank]
  3. A: Blocked
    B: Running
    C: [blank]
  4. A: Blocked
    B: Running
    C: Ready
  5. A: Blocked
    B: Ready
    C: Running
  6. A: Blocked
    B: Ready
    C: Running
  7. A: Ready
    B: Ready
    C: Running
  8. A: Running
    B: Ready
    C: Blocked
84
Q

Assume we have a system with the following parameters:
● The page size is very small 32 bytes (note: this is unrealistically small for a real system)
● The virtual address space is 1024 pages (so 32 KB virtual memory in total)
● Physical memory consists of 128 pages (so 4 KB physical memory in total)

  1. How many bits are there in the virtual address?
  2. How many bits are there in the physical address?
A
  1. 15 (1024 pages × 32 bytes = 32 KB = 2^15 bytes: 10 bits of VPN plus 5 bits of offset)
  2. 12 (128 pages × 32 bytes = 4 KB = 2^12 bytes: 7 bits of PFN plus 5 bits of offset)

The system assumes a multi-level page table. The upper five bits of a virtual address are used to index into a page directory; the page directory entry (PDE), if valid, points to a page of the page table. Each page table page holds 32 page-table entries (PTEs). Each PTE, if valid, holds the desired translation (physical frame number, or PFN) of the virtual page in question.
Page table entries (PTEs) are 1 byte long. The msb (most significant bit) of each PTE is a valid bit, and the lower 7 bits contain the PFN if the valid bit is 1. For example, a PTE of 0x81 would be a valid entry, with a PFN of 1.
Page directory entries (PDEs) are also 1 byte long, and like the PTE, the msb is the valid bit. If the valid bit is 1, the lower 7 bits contain the PFN of that portion of the page table.
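The 1-byte PTE/PDE format described above can be decoded with a couple of bit operations. A short Python sketch (the function name is invented for illustration):

```python
def decode_pte(byte):
    # msb (bit 7) is the valid bit; the low 7 bits are the PFN.
    valid = (byte >> 7) & 0x1
    pfn = byte & 0x7F
    return valid, pfn

print(decode_pte(0x81))  # (1, 1): valid entry, PFN 1
print(decode_pte(0x7F))  # (0, 127): invalid, since the valid bit is 0
```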
85
Q

The following question traces TLB behavior over time. Each question will give you a few assumptions; you should then produce a series of hits (“H”) and misses (“m”). The string “mHHm” would mean a “TLB miss” followed by two “TLB hits”, followed by a “TLB miss”. You can assume that instruction references do not affect the TLB and that the array (discussed below) is page aligned (i.e., it starts at the beginning of a page). If the pattern of hits and misses repeats, you can just write out the pattern and mention that it repeats.

a. Assume you have a 1-entry TLB. Assume you access contiguous 4-byte integers in a large array, starting at index 0 and going to some large number. Assume the page size is 32-bytes. What is the hit/miss pattern for this access?

A

32/4 (byte page size / bytes per int) -> 8 integers per page
[mHHHHHHH] repeated until done: one miss on the first integer of each page, then 7 hits.
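The pattern can be checked with a tiny simulation (a Python sketch with invented names, not part of the exam): a 1-entry TLB caches the last VPN, and with 4-byte ints and 32-byte pages each page contributes one miss followed by 7 hits.

```python
def one_entry_tlb(num_ints, int_size=4, page_size=32):
    cached_vpn, pattern = None, ""
    for i in range(num_ints):
        vpn = (i * int_size) // page_size   # which page this int lives on
        if vpn == cached_vpn:
            pattern += "H"
        else:
            pattern += "m"                  # miss: replace the single entry
            cached_vpn = vpn
    return pattern

print(one_entry_tlb(16))  # mHHHHHHHmHHHHHHH -- [mHHHHHHH] repeating
```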

86
Q

The following question traces TLB behavior over time. Each question will give you a few assumptions; you should then produce a series of hits (“H”) and misses (“m”). The string “mHHm” would mean a “TLB miss” followed by two “TLB hits”, followed by a “TLB miss”. You can assume that instruction references do not affect the TLB and that the array (discussed below) is page aligned (i.e., it starts at the beginning of a page). If the pattern of hits and misses repeats, you can just write out the pattern and mention that it repeats.

b. Assume you have a 2-entry (fully associative) TLB with LRU replacement of TLB entries. Assume you access contiguous 4-byte integers in a large array, again starting at 0. Assume, like above, the page size is 32-bytes. What is the hit/miss pattern for this access?

A

[mHHHHHHH] repeated until done (same as above)

87
Q

The following question traces TLB behavior over time. Each question will give you a few assumptions; you should then produce a series of hits (“H”) and misses (“m”). The string “mHHm” would mean a “TLB miss” followed by two “TLB hits”, followed by a “TLB miss”. You can assume that instruction references do not affect the TLB and that the array (discussed below) is page aligned (i.e., it starts at the beginning of a page). If the pattern of hits and misses repeats, you can just write out the pattern and mention that it repeats.

c. Assume you have a 1-entry TLB. Assume you access every other 4-byte integer in contiguous array, starting at index 0, then index 2, 4, 6, and so on (i.e., a stride-2 access pattern). Assume the page size is 32-bytes. What is the hit/miss pattern for this access?

A

[mHHH] repeated until done (a stride-2 scan touches only 4 of the 8 integers on each page)

88
Q

The following question traces TLB behavior over time. Each question will give you a few assumptions; you should then produce a series of hits (“H”) and misses (“m”). The string “mHHm” would mean a “TLB miss” followed by two “TLB hits”, followed by a “TLB miss”. You can assume that instruction references do not affect the TLB and that the array (discussed below) is page aligned (i.e., it starts at the beginning of a page). If the pattern of hits and misses repeats, you can just write out the pattern and mention that it repeats.

d. Assume you have a 8-entry (fully associative) TLB with FIFO replacement. Assume you repeatedly access all 4-byte integers in a small contiguous array of 24 integers in a loop. Again, assume the page size is 32-bytes. What is the hit/miss pattern for the first run of the loop?
e. For the above problem (d), what is the hit/miss pattern for the second run of the loop?

A

d. [mHHHHHHH] repeated 3 times
8 ints per page
3 pages total

e. [H] repeated 24 times, all hits
all pages are in the TLB
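Both passes can be checked with a small FIFO-TLB simulation (an illustrative Python sketch, names invented): 24 ints span 3 pages, the first pass misses once per page, and the second pass is all hits because 3 entries fit easily in an 8-entry TLB.

```python
from collections import deque

def fifo_tlb_pass(tlb, num_ints, capacity=8, int_size=4, page_size=32):
    pattern = ""
    for i in range(num_ints):
        vpn = (i * int_size) // page_size
        if vpn in tlb:
            pattern += "H"
        else:
            pattern += "m"
            if len(tlb) == capacity:
                tlb.popleft()        # FIFO: evict the oldest entry
            tlb.append(vpn)
    return pattern

tlb = deque()
print(fifo_tlb_pass(tlb, 24))  # first pass:  mHHHHHHH repeated 3 times
print(fifo_tlb_pass(tlb, 24))  # second pass: 24 hits
```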

89
Q

In a Limited Direct Execution environment, list four ways the OS can regain control of the system.

A

(i) The program volunteers to stop running with a call to yield()
(ii) A hardware interrupt occurs
(iii) A program issues a syscall
(iv) The program executes an illegal instruction

90
Q

Which of the following happens during a system call (syscall)? Circle all that apply.

(i) The registers of the currently running process are saved
(ii) The memory of the calling process is swapped in from disk
(iii) The OS moves the currently running process to the BLOCKED state
(iv) The TLB is invalidated

A

(i) The registers of the currently running process are saved

91
Q

Which of the following are mechanisms? Circle all that apply.

(i) A context switch
(ii) Using LRU for selecting a page to evict
(iii) Base and bounds registers for dynamic relocation
(iv) Splitting and Coalescing free blocks

A

(i) A context switch
(iii) Base and bounds registers for dynamic relocation
(iv) Splitting and Coalescing free blocks

92
Q

Which of the following can lead to more internal fragmentation? Circle all that apply.

(i) Having a larger page size (e.g., 2 meg)
(ii) Using paging versus using segments
(iii) Having a smaller page size (e.g., 512 bytes)
(iv) Using the worst fit algorithm and splitting blocks

A

(i) Having a larger page size (e.g., 2 meg)

93
Q

In Linux, creating a process is a two-step process, using fork() followed by exec(). What is the advantage of doing it in this way?

A

It lets the shell run code between the fork() and the exec(): for example, it can easily implement file redirection and pipes by changing what STDIN, STDOUT, and STDERR refer to before the new program starts.

94
Q

The VMS virtual memory implementation was a hybrid system, using both segments and pages. Briefly describe how that system worked and how it was used to reduce the size of the page tables.

A

The system had a segment for the kernel, and two user segments, one for the stack, and one for everything else (code, data, heap). Each segment had a base and bounds registers. The base contained the start of each page table, and the bounds had the number of page table entries in the segment. Unallocated memory past the end of the segment didn’t need page table entries.

95
Q
You are given the following code:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    printf("a");
    fork();
    printf("b");
    return 0;
}

Assuming fork() might fail (by returning an error code and not creating a new process), what are the possible outputs of the same program as above?

(i) ab
(ii) abb
(iii) bab
(iv) bba
(v) a

A

(i) ab Possible
(ii) abb Possible
(iii) bab Not possible
(iv) bba Not possible
(v) a Not possible