Quiz 1 Flashcards
Is this more like a policy or a mechanism?:
Saving the registers in a context switch
Mechanism
Is this more like a policy or a mechanism?:
Evicting a page to disk because it hasn’t been used in a while
Policy
Is this more like a policy or a mechanism?:
Using a timer interrupt to jump back to the OS
Mechanism
Is this more like a policy or a mechanism?:
Switching to another process when disk I/O occurs
Policy
Is this more like a policy or a mechanism?:
Walking a page table to find a Page Frame
Mechanism
Is this more like a policy or a mechanism?:
Address Translation
Mechanism
Is this more like a policy or a mechanism?:
Splitting and Coalescing blocks on a free list
Mechanism
Is this more like a policy or a mechanism?:
Picking a free block with a best fit strategy
Policy
Why is it a good idea to separate policy from mechanisms?
It allows one to easily change policies without having to rethink the mechanism. This is a form of modularity, which is desired in any software system.
Name and describe five examples of where the OS needed support from the hardware to implement some desired functionality
● Hardware timer generating an interrupt to jump back to the OS
● The TLB to speed up page table lookups
● Base and Bounds registers to implement segmentation
● CPUs having a privileged mode to support Limited Direct Execution
● Having a modified bit which is set by hardware when a page is written
An operating system runs in privileged mode, a hardware state where it has full access to machine resources. Why is such a mode needed, and why can’t normal user processes enter privileged mode?
One of the responsibilities of an OS is to provide isolation between processes, so they cannot interfere with one another. Another is to arbitrate resources across all the processes in a system. If a process were run in privileged mode, it would be able to read and write all memory, access any disk sector, and in general be able to wreak havoc in the system. Due to bugs or bad intentions, we cannot assume a user process will behave nicely, so we must restrict certain actions to the kernel.
Since we don’t allow processes free rein in the system, how does a process request services from the Operating System? In other words, what is the mechanism for processes to have the OS do something on their behalf?
When a process wants services from the OS, it issues a syscall.
When a Linux process calls fork(), the system creates (in the child) a nearly exact copy of the calling (parent) process. Why is the copy “nearly exact” and not an identical copy of the parent process?
At the time of the fork, the processes are identical, except the return value from fork() is 0 for the child, and the PID of the child for the parent process.
The shortest-job-first (SJF) and shortest-time-to-completion-first (STCF) policies are both unrealistic policies to implement in a general purpose operating system. Why?
They are unrealistic because they assume you know in advance how long the processes are going to run (in essence, being able to predict the future).
Assume a multi-level feedback queue scheduling policy. In class we discussed a provision that periodically moved all jobs back to the topmost priority queue. What problem does this rule solve?
It solves two problems. First, processes are guaranteed not to starve: by being boosted to the top queue, they will be able to make progress (all processes in the high-priority queue are run in a round-robin fashion). Second, if a CPU-bound job has become interactive, the scheduler treats it properly once it has received the priority boost.
For the following question, circle all answers that apply. A translation lookaside buffer (TLB) is generally used to:
(a) Translate virtual page numbers into physical page numbers
(b) Translate physical page numbers into virtual page numbers
(c) Make segmentation have the benefits of a pure paging approach
(d) Translate the addresses generated by loads (memory reads)
(e) Translate the addresses generated by stores (memory writes)
(f) Translate the addresses generated by instruction fetches
(g) Remove the need for a full-sized page table
(h) Make translations happen quickly
(i) Locate the page frame where the page table entry is stored
(j) Decide which page frames to evict
(a) Translate virtual page numbers into physical page numbers
(d) Translate the addresses generated by loads (memory reads)
(e) Translate the addresses generated by stores (memory writes)
(f) Translate the addresses generated by instruction fetches
(h) Make translations happen quickly
Let’s say you decide to implement mm_malloc and mm_free for your lab assignment as follows: one half of available memory is divided into fixed-sized units of 4 KB, and the other half is managed by a best-fit free list. If an allocation request is less than or equal to 4 KB and there is space in the fixed-sized half, a 4 KB unit is allocated from the fixed-sized half; otherwise, a block of the requested size is allocated from the best-fit half (if space is available).
(a) Assuming 32KB of total memory is available, what series of allocation requests will most quickly lead to all of memory getting allocated, all while requesting the least total amount of memory?
(b) What type(s) of fragmentation occurs with this new allocator and how can they occur?
(a) First, make four 1-byte calls to malloc (each consuming a full 4 KB unit from the fixed-sized half), followed by one 16 KB call to malloc (consuming the entire best-fit half).
(b) Both internal and external fragmentation occur. Internal fragmentation occurs when a request of less than 4 KB is allocated a full 4 KB fixed-sized unit. External fragmentation occurs in the best-fit half after repeated calls to malloc and free leave scattered small holes.
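The allocator’s decision rule can be sketched in C (names and constants invented for illustration; with 32 KB total, the fixed half holds four 4 KB units):

```c
#include <assert.h>
#include <stddef.h>

#define UNIT_SIZE 4096
#define NUM_UNITS 4          /* 16 KB fixed-sized half / 4 KB units */

static int units_free = NUM_UNITS;

/* Returns 1 if the request would be served from the fixed-sized half,
 * 0 if it falls through to the best-fit half. */
int use_fixed_half(size_t request) {
    if (request <= UNIT_SIZE && units_free > 0) {
        units_free--;        /* consume a whole 4 KB unit, even for 1 byte */
        return 1;            /* internal fragmentation: up to 4095 bytes wasted */
    }
    return 0;                /* best-fit half handles it (if space remains) */
}
```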
- Forking and process states
We have a system with three (3) processes (A, B, and C) and a single CPU. Processes can be in one of three states: Running, Ready, or Blocked. If the process does not exist (yet), or has exited, just leave the entry blank.
Below is a timeline of process behavior. Fill in the states of each process in the diagram below.
States of A, B, C?:
- Process A is loaded into memory and starts executing in main().
- Process A calls fork() and creates Process B (but A, the parent, keeps running).
- Process A issues a request to the disk; B starts executing at the return from fork().
- B calls fork(), creating Process C; B keeps running.
- B’s timeslice expires; C runs.
- A’s I/O completes (but there are no other changes).
- C waits for user input. A runs.
1. A: Running  B: [blank]   C: [blank]
2. A: Running  B: Ready     C: [blank]
3. A: Blocked  B: Running   C: [blank]
4. A: Blocked  B: Running   C: Ready
5. A: Blocked  B: Ready     C: Running
6. A: Ready    B: Ready     C: Running
7. A: Running  B: Ready     C: Blocked
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
(a) How many bits are in the offset portion of the virtual address?
10 bits. This comes from the page size, which is 2^10 = 1024 bytes.
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
(b) How many bits are in the VPN portion of the virtual address?
22 bits. This comes from subtracting the offset bits from the virtual address space (32-10).
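Parts (a) and (b) can be checked with a small C sketch of the address split (10 offset bits, 22 VPN bits):

```c
#include <assert.h>
#include <stdint.h>

#define OFFSET_BITS 10
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)   /* 0x3FF */

/* Upper 22 bits of a 32-bit virtual address: the VPN. */
uint32_t vpn(uint32_t vaddr)    { return vaddr >> OFFSET_BITS; }

/* Lower 10 bits: the offset within the 1 KB page. */
uint32_t offset(uint32_t vaddr) { return vaddr & OFFSET_MASK; }
```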
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a linear page table:
(c) How many entries are there in the table?
You have 2^22 page table entries, one for each virtual page.
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a linear page table:
(d) What is the total size of the table? It may be helpful to note that 2^20 bytes = 1 MB.
2^22 * 4 = 2^24 = 16 MB.
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a linear page table:
(e) In a live system, how much memory would be occupied by page tables? (what factors affect this?)
There is a page table for every single process in the system, so the total memory would be 16 MB times the number of processes in the system.
Linear page tables are too big. Hence, people came upon the idea of a multi-level page table, which uses a page directory to point to page-sized chunks of the page table. Assume we wish to build such a table, with two levels (as discussed in class). To access such a table, the VPN is split into two components: the VPN_PageDirIndex, the upper part of the VPN, which indexes into the page directory; and the VPN_PageTableIndex, the lower part of the VPN, which indexes into the page of the page table where the PTEs are located.
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table (a page directory pointing to page-sized chunks of the page table):
(f) How many PTEs fit onto a single page in this system?
Each page is 1024 bytes. Each page table entry is 4 bytes, so you can fit 1024 / 4 = 256 entries in a single page.
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table (a page directory pointing to page-sized chunks of the page table):
(g) How many bits are needed in the VPN_PageTableIndex?
Since there are 256 PTEs in a page, you need 8 bits to index the entries in a single page.
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table (a page directory pointing to page-sized chunks of the page table):
(h) How many bits are needed in the VPN_PageDirIndex?
There are 22 bits in the VPN. You need 8 bits for the page table index, so you need 22 - 8 = 14 bits.
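The split from parts (g) and (h) can be sketched in C (14-bit directory index, 8-bit page-table index):

```c
#include <assert.h>
#include <stdint.h>

#define PT_INDEX_BITS 8      /* 256 PTEs per 1 KB page */

/* Upper 14 bits of the 22-bit VPN index the page directory. */
uint32_t pd_index(uint32_t vpn) { return vpn >> PT_INDEX_BITS; }

/* Lower 8 bits index within one page of the page table. */
uint32_t pt_index(uint32_t vpn) { return vpn & ((1u << PT_INDEX_BITS) - 1); }
```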
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table (a page directory pointing to page-sized chunks of the page table):
(i) Assuming each page directory entry is 4 bytes, how much memory is needed for the page directory?
You need 2^14 * 4 = 2^16 bytes = 64 KB
Paging and page tables:
Assume the following: a 32-bit virtual address space, with a 1 KB (2^10 = 1024 byte) page size.
Now, let’s focus on the page table. Assume each page table entry is 4 bytes in size. Assuming a two-level page table (a page directory pointing to page-sized chunks of the page table):
Finally, for the last part of this question, for the given memory allocations, write down both (a) how much memory our multi-level page table consumes and (b) how much memory a linear page table consumes:
(j) Code is located at address 0 and there are 100 4 byte instructions. The heap starts at page 1 and uses 3 total pages, The stack starts at the other end of the address space, grows backward, and uses 3 total pages.
1. Multi-level page table size?
2. Linear page table size?
- Each page-directory entry covers a chunk of 256 consecutive pages, so the code and heap (pages 0 through 3) need one page of PTEs, and the stack (the last 3 pages) needs another. Each page of PTEs is 1024 bytes. You also need space for the page directory, which was calculated in part (i) as 64 KB. So in total you need 2 KB (two pages of PTEs) + 64 KB (directory) = 66 KB.
- 16 MB (same as you calculated in part d).
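The arithmetic for part (j) can be checked with a short C sketch (all constants come from the problem statement: 1 KB pages, 4-byte PTEs and PDEs, 22-bit VPN, 14-bit directory index):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 1024ull
#define PTE_SIZE  4ull
#define VPN_BITS  22
#define PD_BITS   14

/* Linear table: one PTE per virtual page. */
uint64_t linear_pt_bytes(void) {
    return (1ull << VPN_BITS) * PTE_SIZE;              /* 2^24 bytes = 16 MB */
}

/* Two-level table: full directory plus only the PTE pages actually needed. */
uint64_t multilevel_pt_bytes(uint64_t pt_pages_used) {
    uint64_t directory = (1ull << PD_BITS) * PTE_SIZE; /* 64 KB */
    return directory + pt_pages_used * PAGE_SIZE;
}
```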
List two (2) mechanisms and two (2) policies that we have talked about in class.
(i) Mechanism 1: Context Switch
(ii) Mechanism 2: Splitting and Coalescing (and many others)
(iii) Policy 1: Multi-level feedback Queue scheduler
(iv) Policy 2: Best fit memory allocator (and many others)
What is the difference between a mechanism and a policy?
Mechanisms provide the answer to a “how” question about a system; policies provide the answer to a “which” question.
Our model of user processes is that we assume they can’t be trusted, and yet we let these user processes run directly on the CPU in what we call Limited Direct Execution (LDE). List two ways that the OS puts restrictions on the running of a user process, so that a process does not have unrestricted use of the CPU.
- The OS uses a timer interrupt to regain control periodically, which can cause the running process to be switched out for another process.
- The CPU has a user mode and a kernel mode. Privileged instructions can only be executed in kernel mode, which a user process can enter only by trapping into the kernel (e.g., via a system call).
The VAX/VMS system for historic reasons had a very small page size of 512 bytes. This could lead to very large page tables, so they designed a hybrid system in an attempt to reduce the size of the page table. How does the VAX/VMS hybrid system work (i.e., what is it a hybrid of)? How does this hybrid approach reduce the size of the page table?
It is a hybrid of paging and segmentation. Each segment has base and bounds registers; page table entries beyond a segment’s bound are not needed, keeping the page tables smaller.
We talked about the Multi-Level Feedback Queue (MLFQ) scheduling algorithm in class. What is the feedback that the algorithm uses and how is it used?
(i) What is the feedback?
(ii) How is the feedback used?
i. How much of the time slice the process uses.
ii. If a process uses the entire time slice, it is moved to a lower priority queue.
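A sketch of this demotion rule (queue count invented for illustration; queue 0 is the highest priority):

```c
#include <assert.h>

#define NUM_QUEUES 4         /* hypothetical: queues 0 (highest) .. 3 (lowest) */

/* Feedback rule: a job that burns its whole time slice looks CPU-bound
 * and is demoted one queue; a job that yields early keeps its priority. */
int next_priority(int current, int used_full_slice) {
    if (used_full_slice && current < NUM_QUEUES - 1)
        return current + 1;  /* demote */
    return current;          /* interactive jobs stay where they are */
}
```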
Processes can be in one of several states. Briefly describe what it means for a process to be in each of the following states. If a state is not one we’ve talked about in class (and belongs to a student and not a process), put “You’re being silly” for the answer.
(i) Running: The process is currently executing on the processor.
(ii) Blocked: The process is waiting on I/O operation to complete before it can continue.
(iii) Ready: The process is able to run but is not currently running.
(iv) Tired of studying: You’re being silly.
(v) Waiting for fall break: You’re being silly.
We studied Proportional Share scheduling as an alternative to the MLFQ. List one advantage and one disadvantage of proportional share scheduling. If you are thinking about writing non-deterministic as one of your answers, remember you can use stride scheduling to implement a deterministic proportional share scheduler.
(i) Advantage: Simple to implement; requires only minimal state (e.g., ticket counts).
(ii) Disadvantage: Difficult to know how to allocate tickets.
To create a new process, Linux goes through a two step procedure, a fork() followed by an exec(). Why are they separated into two system calls?
After fork() returns in the child, but before exec() runs, the code can alter the environment of the about-to-be-run program, such as by replacing stdin with a redirected input file.
Copying a process (and hence all of its memory) with a fork(), only to replace it with another program in the exec() immediately seems horribly horribly wasteful. What Simpsonesque technique is used to make all of this efficient and how does it work?
It uses Copy On Write (COW) (Don’t have a COW man!). Memory of the new process is mapped to the old process. Only when one of the pages (from either process) is written is it copied.