Midterm Flashcards

1
Q

Draw Von Neumann architecture of a computer.

A

Make drawing.

A von Neumann architecture consists of five basic components:

A central processing unit (CPU)
A control unit
A main memory system
Input devices 
Output devices

The key feature of von Neumann machines is the stored-program concept: instructions are stored in memory along with data and are carried out sequentially.

2
Q

What are the two main characteristics of an OS?

A

Efficiency and convenience.

The two main characteristics of an OS are that it strives to make the computer efficient and convenient. This means that the OS executes user programs and makes solving user problems easier; it makes the computer system convenient to the user, and it uses the hardware in an efficient manner.

An OS can either focus on one or both of these characteristics. E.g., a mobile device is very convenient, but not very efficient (compared to mainframes, which are very efficient, but perhaps not very convenient).

3
Q

For networked distributed computing, the networks are categorized into four groups in terms of the distance between members. What are these four types of networks?

A

WAN - Wide Area Network
LAN - Local Area Network
MAN - Metropolitan Area Network
PAN - Personal Area Network

4
Q

Computing environments, in terms of the role of connected members of a network, are divided into two types. What are these two types of networks?

A

Peer-to-peer

Client-server

5
Q

What is the difference between emulation and virtualization?

A

Short answer: Emulation is used when the source CPU (physically present) is different from the target CPU type (the CPU that the program was compiled for). E.g., when Apple desktops switched from IBM CPUs to Intel CPUs, old software used Rosetta to run on an emulated IBM CPU. Virtualization allows a guest OS to run as an application on a host OS.

Virtualization is a technique that allows operating systems to run as applications within other operating systems, but on the same CPU. This method works rather efficiently because the applications were compiled for the same instruction set as the target system uses. But, if you have an application or operating system that needs to run on a different CPU, then it will be necessary to translate all of the source CPU’s instructions so that they are turned into equivalent instructions for the target CPU. Such an environment is no longer virtualized but rather is fully emulated.

6
Q

What is the difference between “full virtualization” and “para-virtualization”?

A

Short answer: In full virtualization, the guest is an original OS and wants to manage the memory and perform protection, etc. In para-virtualization, guest OS is designed to run as a guest in a virtual environment and is aware of other operating systems and knows its limitations.

Para-virtualization is a technique in which the guest operating system is modified to work in cooperation with the VMM to optimize performance. In contrast, in full virtualization, the guest OS is unaware that it is in fact being run on a VMM.

7
Q

What are the three types of cloud computing?

A

Cloud computing delivers computing, storage, and apps as a service across a network. Cloud computing is a logical extension of virtualization because it uses virtualization as the base for its functionality. There are three types of cloud computing:
Public cloud: available via the Internet to anyone willing to pay.
Private cloud: run by a company for the company's own use.
Hybrid cloud: includes both public and private components.

8
Q

What are the three advantages of a multiprocessor system over a single processor?

A

Short answer:
1. Increased throughput; 2. Lower cost than using a collection of single processors; 3. Reliability is higher, and the system is more fault tolerant.

Multiprocessor systems are growing in use and importance. They are also known as parallel systems or tightly-coupled systems. Advantages over a single processor system include:
INCREASED THROUGHPUT. By increasing the number of processors, we expect to get more work done in less time. The speed-up ratio with N processors is not N, however, because with additional processors comes the overhead of getting them to work together correctly.
ECONOMY OF SCALE (multiprocessor systems can cost less than equivalent multiple single-processor systems)
INCREASED RELIABILITY. If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. This increased reliability is crucial in many applications. The ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Some systems go beyond graceful degradation and are called fault tolerant, because they can suffer a failure of any single component and still continue operation.

9
Q

What is the difference between “symmetric” and “asymmetric” multiprocessors?

A

Short answer: Asymmetric multiprocessing is a CPU scheduling method, in which all scheduling decisions, I/O processing, and other system activities are handled by a single processor - the master processor. The other processors execute only user code. This is a simple method because only one core accesses the system data structures, reducing the need for data sharing. The downfall of this approach is that the master processor becomes a bottleneck where overall system performance may be reduced. Symmetric multiprocessing is the standard approach for supporting multiprocessors where each processor is self-scheduling.

Asymmetric multiprocessing is when each processor is assigned a specific task. A “boss” processor controls the system; the other processors either look to the boss for instruction or have predefined tasks. The boss processor schedules and allocates work to the worker processors.

Symmetric multiprocessing is when each processor performs all tasks within the operating system. In this system, all processors are peers. Each processor has its own set of registers, as well as a private, or local, cache. However all processors share physical memory. I/O must be carefully controlled to ensure the right data reaches the right processor.

10
Q

What are the five activities of process management?

A

The five activities of process management include:

  1. Creating and deleting both user and system processes
  2. Suspending and resuming processes
  3. Providing mechanisms for process synchronization
  4. Providing mechanisms for process communication
  5. Providing mechanisms for deadlock handling

A process is a program in execution. A process needs certain resources, including CPU time, memory, files and I/O devices, to accomplish its task. These resources are either given to the process when it is created or allocated to it while it’s running.

11
Q

What is the difference between program and process?

A

A program is a passive entity, such as the contents of a file stored on disk (or simply a collection of instructions). A program in execution is a process. A process is active.

12
Q

What is a memory unit exposed to?

A
  1. A stream of addresses + read requests
  2. A stream of addresses + data and write requests

13
Q

How long does one memory access take?

A

It takes many CPU cycles. AMAT = cache-hit-time + miss rate * miss penalty.

Static RAM (SRAM): 0.5-2.5 ns $2000-5000 per GB
Dynamic RAM (DRAM): 50-70 ns; $20-75 per GB
Magnetic disk: 5-20 ms; $0.2-2 per GB

Our ideal memory would have the access time of SRAM but the capacity and cost/GB of the magnetic disk.
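A quick worked sketch of the AMAT formula above in Python (not from the original card; the example numbers are made up for illustration):

# AMAT = cache hit time + miss rate * miss penalty (all times in the same unit)
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# e.g., a 1 ns cache hit, 5% miss rate, 60 ns DRAM miss penalty:
print(amat(1.0, 0.05, 60.0))  # 4.0 ns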

14
Q

How long does one register access take?

A

It takes one clock cycle (or less).

Registers that are built into the CPU are generally accessible within one cycle of the CPU clock. Most CPUs can decode instructions and perform simple operations on register contents at the rate of one or more operations per clock tick.

15
Q

What does memory management mean?

A

Memory management is the part of the system that determines what is in memory and when. It optimizes CPU utilization and the computer's overall response to users.

16
Q

What does memory management do?

A

The operating system is responsible for the following activities in connection with memory management:

  1. Keeping track of which parts of memory are currently being used and who is using them.
  2. Deciding which processes (or parts of processes) and data to move into and out of memory
  3. Allocating and deallocating memory space as needed
17
Q

What is memory hierarchy?

A

Short answer: Creating a pyramid with slow, cheap and large memory at the bottom and placing fast, expensive and small memories at the top.

A memory hierarchy consists of a pyramid of different kinds of memories. At the top, closest to the CPU, is the cache. This is a relatively small but very fast memory. Under the cache (there can be multiple levels of cache) is the main memory (DRAM). Cache and DRAM are both volatile storage, which means that they do not keep their data when they lose power. Beneath the DRAM is where the secondary, non-volatile type of storage begins. The first layer is often the solid-state disk, then the magnetic disk, then the optical disk.

18
Q

What is locality of reference?

A

Locality of reference is referring to the fact that when we access a memory address, we often want to access the same address soon again, or an address close to the current address.

19
Q

What is the effect of “low” locality of reference?

A

Low locality causes high miss rate, which forces the memory management to refer to slower parts of the hierarchy.

20
Q

Suppose reading one word from the main memory takes 50 clks and reading one block of words from memory takes 100 clks. Also, assume that reading one word from the cache takes 2 clks. What should be the maximum cache "miss rate" for this memory system to be worth having the cache rather than directly accessing the main memory?

A

AMAT = hit time + miss rate * miss penalty
50 = 2 + x * 100
48 / 100 = x
x = 48%

A 48% miss rate is the maximum miss rate for this memory system to be worth having the cache rather than directly accessing the main memory.
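A minimal Python check of this break-even computation (the values are the ones given in the question):

hit_time, miss_penalty, direct_access = 2, 100, 50   # clks
# the cache is worth having while: hit_time + x * miss_penalty <= direct_access
max_miss_rate = (direct_access - hit_time) / miss_penalty
print(max_miss_rate)  # 0.48, i.e., 48%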

21
Q

What are the primary and secondary memory layers?

A

The primary volatile storage includes the registers, cache and main memory. The secondary non-volatile storage includes storage that keeps the data even if the power is turned off, e.g., solid state disk, magnetic disk, optical disks, etc.

22
Q

What are the four categories of memories in terms of “access type?”

A

RAM - Random Access Memory
SAM - Sequential Access Memory
DASD - Direct Access Storage Device
CAM - Content-Addressable Memory

23
Q

What is the “inclusion principle” in the context of the memory hierarchy?

A

The inclusion principle in the context of the memory hierarchy is that everything in level i is included in level i + 1.

24
Q

What are the four policies or questions in the management of a memory hierarchy?

A
  1. Where can a block be placed? (block placement)
  2. How is a block found if it is in the upper level? (block identification)
  3. Which block should be replaced on a miss? (replacement strategy)
  4. What happens on a write? (write strategy)

Thus, the four policies are: block placement, block identification, replacement strategy and write/update strategy.

25
Q

What is “mapping” in the context of memory management?

A

Mapping in the context of memory management is the conversion of virtual addresses to physical addresses.

26
Q

What is “address binding?”

A

Short answer: As a general definition, binding means mapping from one address space to another. Traditionally, binding is categorized into three groups: compile-time-binding, load-time-binding and execution-time-binding.

Address binding is the process a program goes through before being executed. In the source program, addresses are usually symbolic. The compiler then binds the symbolic addresses to relocatable addresses. The linkage editor or loader then binds these relocatable addresses into absolute addresses.

27
Q

Explain why load-time binding is not relocatable and why execution-time binding is relocatable?

A

His answer:
Load-time binding: The compiler translates symbolic addresses to relative (relocatable) addresses. If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. The loader translates these to absolute addresses. This is done once, and swapping cannot be performed, which makes the method static.
Execution-time binding: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. The absolute addresses are generated by the hardware. Most general-purpose OSs use this method.

Static binding means locations are determined before execution. Dynamic binding means locations are determined during execution.

Load-time binding is not relocatable because the final binding was created when the program was loaded for execution. Execution-time binding is relocatable because the final binding is delayed until runtime, because the program may move from one memory segment to another during execution time.

28
Q

What are logical and physical addresses?

A

Logical address – an address generated by the CPU as the program runs; also referred to as a virtual address. Physical address – the address seen by the memory unit, which depends on where the program resides in main memory.

29
Q

What is the job of the memory management unit (MMU)?

A

The memory management unit’s (MMU) job is to map logical addresses to physical addresses at runtime.

30
Q

What are the minimum hardware requirements of an MMU for dynamic relocation?

A

The minimum hardware requirement of an MMU for dynamic relocation is a relocation register.

31
Q

Which type of program benefits most from dynamic relocation? What is the 20-80 rule?

A

In some programs, a significant portion of the code is used for rare cases, and only a small piece of the code is frequently used. For such applications, dynamic relocation can be used efficiently. Such applications are known as 20-80, which means that 20% of the code is used 80% of the time.

32
Q

What is a stub?

A

Short answer: Stub is a small piece of code, which is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine, and executes the routine.

With dynamic linking, a stub is included in the image for each library-routine reference. The stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present. When the stub is executed, it checks to see whether the routine is already in memory. If not, the program loads the routine into memory. Either way, the stub replaces itself with the address of the routine and executes the routine. Thus, the next time that particular code segment is reached, the library routine is executed directly, incurring no cost for dynamic linking. Under this scheme, all processes that use a language library execute only one copy of the library code.

33
Q

What is contiguous allocation of memory?

A

Contiguous allocation means allocating a contiguous region of the main memory to a program.

In contiguous memory allocation, each process is contained in a single section of memory that is contiguous to the section containing the next one.

34
Q

Why is segmentation considered contiguous and paging is considered non-contiguous?

A

Segmentation allocates one piece of memory, called a segment, to one program. In paging, different parts of physical memory, called frames, can each hold a page of a program. These frames can be non-contiguous.

Segmentation is considered contiguous because the entire process is kept together in one sequence in memory, whereas with paging, the user process can be split up and need not be stored sequentially.

35
Q

What are the elements of the two-tuple that a segment is referred by?

A

segment-number, offset

36
Q

What are the minimum hardware requirements in a multiprogram system to ensure each process only accesses its own memory space? Draw a simple block diagram for this protection system.

A

You need a base register and a limit register to ensure that each program accesses only its own memory space.

The CPU must check every address generated in user mode to be sure it lies between the base and base + limit for that user. The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction.

37
Q

What strategies are possible for “placement policy” in a “dynamic allocation” memory system?

A

There are three strategies for placement policy in a dynamic allocation memory system:

First-fit: the process is allocated to the first hole that fits
Best-fit: the process is allocated to the smallest hole that can contain the process
Worst-fit: the process is allocated to the largest hole available

For best-fit and worst-fit, all holes must be searched, unless they are ordered by size.
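A minimal sketch of the three policies in Python (not from the card; the free list of (start_address, size) holes is a hypothetical example):

def place(holes, size, policy):
    fits = [h for h in holes if h[1] >= size]
    if not fits:
        return None                            # no hole is large enough
    if policy == "first":
        return fits[0]                         # first hole that fits, in address order
    if policy == "best":
        return min(fits, key=lambda h: h[1])   # smallest hole that fits
    return max(fits, key=lambda h: h[1])       # worst-fit: largest hole

holes = [(0, 100), (300, 20), (500, 60)]
print(place(holes, 50, "first"))   # (0, 100)
print(place(holes, 50, "best"))    # (500, 60)
print(place(holes, 50, "worst"))   # (0, 100)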

38
Q

What are the two types of memory fragmentation?

A

There is external fragmentation, which can occur with segmentation, when there are lots of little holes between processes. Internal fragmentation occurs with paging, where a process can occupy part of a page but not the entire page.

39
Q

Which type of fragmentation occurs in the contiguous allocation system of the type segmentation?

A

External fragmentation occurs in between segments.

40
Q

True or false? The 50-percent rule states that 50% of memory is lost to external fragmentation when the first-fit policy is used. Explain your answer.

A

False. Statistical analysis of first-fit reveals that, even with some optimization, given N allocated blocks, another 0.5N blocks will be lost to fragmentation. That is, one-third of memory may be unusable. This property is known as the 50-percent rule.

41
Q

What is a solution for memory fragmentation? What are the prerequisites of such a remedy and what are the problems?

A

Short answer: compaction is the solution. It requires that the code and data be relocatable.

One of the solutions to external fragmentation is compaction. The goal is to shuffle the memory contents so as to place all free memory together in one large block. Compaction is not always possible, however. If relocation is static and done at assembly or load time, compaction cannot be done. Compaction is only possible if relocation is dynamic and is done at execution time. If addresses are relocated dynamically, relocation requires only moving the program and data and then changing the base register to reflect the new base address. When compaction is possible, we must determine its cost. The simplest compaction algorithm is to move all processes toward one end of memory; all holes move in the other direction, producing one large hole of available memory - but this scheme can be expensive.

Another problem concerns I/O. You could refuse to move a program while it’s involved in I/O or you could have a system where I/O only goes into OS buffers.

42
Q

What is the difference between a “frame” and a “page”?

A

A frame and a page are of the same size. A frame refers to a physical block of memory, while a page refers to a block of the virtual address space.

Frames are what physical memory is divided into; pages are what virtual memory is divided into.

43
Q

What is the job of a page table?

A

It contains a translation (mapping) of virtual to physical addresses.

The job of a page table is to keep the frame numbers associated with each virtual address. The page table helps translate logical to physical addresses.

44
Q

Assume that the logical address is 24 bits and the physical memory is 1MB. Also, assume that each page is 4KB. Show how the logical and physical addresses are partitioned. What is the size of the page table? Assume the logical address is ABCDEF, the content of the page table at line DEF is AB, and at line ABC is F5. Where in the physical memory is this logical address mapped?

A

Logical address: 24 bits; p = 12 bits, d = 12 bits
Physical address: 20 bits; f = 8 bits, d = 12 bits
Page size: 2^12 bytes = 4KB
Size of page table: 2^12 = 4096 entries

The page number is p = ABC; line ABC of the page table holds F5, so the frame number is F5, and ABCDEF is mapped to F5DEF.
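The same translation, sketched in Python (the two page-table entries are the ones given in the question):

page_table = {0xABC: 0xF5, 0xDEF: 0xAB}   # line ABC holds F5, line DEF holds AB

logical = 0xABCDEF
p, d = logical >> 12, logical & 0xFFF     # p = 0xABC, d = 0xDEF
physical = (page_table[p] << 12) | d
print(hex(physical))                      # 0xf5def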

45
Q

Accessing the page table may have considerable delays. What is a solution to reduce the average delay of the page address mapping?

A

Short answer: Use of TLB is a solution to reducing the average delay of the page address mapping.

The page table is kept in main memory. A page-table base register (PTBR) points to the page table. A page-table length register (PTLR) indicates the size of the page table. In this scheme, every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called the translation look-aside buffer (TLB) (of associative memory type). Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID uniquely identifies each process and provides address-space protection for that process. TLBs are typically small (64 to 1,024 entries). On a TLB miss, the value is loaded into the TLB for faster access next time. Replacement policies must be considered. Some entries can be wired down for permanent access.

46
Q

Suppose the logical address is 32 bits. The physical memory is 16MB. Pages are 4KB. There is a TLB with 64 lines of associative memory with an access time of 0.5 cycle. The access time of the page table is one cycle. The miss rate of the TLB is ten percent. How big is the page table? What is the average access time of this system? Now consider the content of the page table and TLB as shown below. Suppose logical address D00EF123 is issued, which cannot be found in the TLB. Where in the physical memory is this address mapped? Where will the logical address 12345F05 be mapped in the main memory?

A

Logical address: 32 bits; p = 20 bits, d = 12 bits
Physical address: 24 bits; f = 12 bits, d = 12 bits
Page size: 2^12 bytes = 4KB
The page table is 2^20 lines long.

EAT = 0.5 * 0.9 + 1 * 0.1 = 0.55 cycles

D00EF123 → 56F123
12345F05 → 567F05

47
Q

What is the purpose of using a “valid bit” in the page memory table?

A

The purpose of using a “valid-bit” in the page table is for memory protection purposes. The valid-invalid bit attached to each entry in the page table indicates whether the associated page is in the process’ logical address space and thus is a legal page.

48
Q

What is a “reentrant page?” Usually, reentrant pages belong to some specific processes. Name two examples of such processes.

A

Short answer: Reentrant page is a read-only copy of code shared among processes. Examples are text editors, compilers and windows systems.

Reentrant code is non-self-modifying code: it never changes during execution. Thus, two or more processes can execute the same code at the same time. Each process has its own copy of registers and data storage to hold the data for the process's execution. The data for two different processes will, of course, be different.

Two examples of such processes that reentrant pages can belong to are text editors and compilers. For processes to be shareable, the code must be reentrant.

49
Q

Name three strategies that are used for reducing the size of page tables when the logical address is large.

A

There are three strategies that are used for reducing the size of page tables when the logical address is large:

  1. Hierarchical paging
  2. Hashed page tables
  3. Inverted page tables
50
Q

Suppose that logical address is 32 bits and page offset is 12 bits. Show an example of a hierarchical page address.

A

If we have a logical address of 32 bits and the page offset is 12 bits, that leaves 20 bits for the page number. Since in hierarchical paging the page table is itself paged, the page number is further divided into a 10-bit page number and a 10-bit page offset. This leaves us with a logical address of:

p1 = 10 bit index into the outer page table; p2 = 10 bit displacement within the page of the inner page table; d = 12 bit page offset.

This is known as a forward-mapped page table.
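A small sketch of the 10/10/12 split in Python (the address value is arbitrary):

addr = 0xDEADBEEF                # any 32-bit logical address
p1 = addr >> 22                  # 10-bit index into the outer page table
p2 = (addr >> 12) & 0x3FF        # 10-bit index within the inner page table
d  = addr & 0xFFF                # 12-bit page offset
print(p1, p2, hex(d))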

51
Q

Why is hierarchical page addressing not possible for 64-bit logical addressing? What are the two alternative methods?

A

Even a three-level page table requires a 2^32-entry third-level table. Alternative approaches are hashed page tables and inverted page tables.

52
Q

a) What is an “inverted page table?

b) How can we limit the search process?

A

a) An inverted page table is as big as the number of physical frames: it has one entry per frame. This makes it much smaller than a conventional page table, which needs one entry per virtual page.
b) Hashing can be used to limit the search. For example, the virtual page number can be hashed in three different ways to restrict the search to only three entries of the inverted table.

An inverted page table has one entry for each real frame of memory. Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page. Thus, only one page table is in the system, and it has only one entry for each page of physical memory.

A hash table is used to limit the search process.

53
Q

The 64-bit SPARC Solaris uses two hash tables. What are these two tables? What is a “page walk” in memory mapping process of this system?

A

Short answer: Two hash tables are used. One table is used for the user codes and one table for the OS. A TLB is used for fast search. Page walk: If the address is not in the TLB then the system “walks back” and searches the hash tables.

Solaris running on the SPARC CPU is a 64-bit operating system. It uses two hash tables - one for the kernel and one for all user processes. Each maps memory addresses from virtual to physical memory. Each hash-table entry represents a contiguous area of mapped virtual memory, which is more efficient than having a separate hash-table entry for each page. Each entry has a base address and a span indicating the number of pages the entry represents.

Virtual-to-physical translation would take too long if each address required searching through a hash table, so the CPU implements a TLB that holds translation table entries (TTEs) for fast hardware lookups. A cache of these TTEs resides in a translation storage buffer (TSB), which includes an entry per recently accessed page. When a virtual address reference occurs, the hardware searches the TLB for a translation. If none is found, the hardware walks through the in-memory TSB looking for the TTE that corresponds to the virtual address that caused the lookup. This TLB walk functionality is found on many modern CPUs. If a match is found in the TSB, the CPU copies the TSB entry into the TLB, and the memory translation completes. If no match is found in the TSB, the kernel is interrupted to search the hash table. The kernel then creates a TTE from the appropriate hash table and stores it in the TSB for automatic loading into the TLB by the CPU memory-management unit. Finally, the interrupt handler returns control to the MMU, which completes the address translation and retrieves the requested byte or word from main memory.

54
Q

Some operating systems occasionally deactivate “swapping.” What is the purpose and what are the occasions?

A

Short answer: Swapping requires the transfer of data and code from the main memory to the backing store. The data transfer is time-consuming. In UNIX, swapping is disabled until the number of allocated pages becomes higher than a threshold.

A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. Total physical memory space of processes can then exceed physical memory.

The reason swapping is disabled on some operating systems is that it requires too much swapping time and provides too little execution time to be a reasonable memory-management solution. Swapping is normally disabled on many systems such as UNIX, Linux, and Windows; it is started if more than a threshold amount of memory is allocated, and disabled again once memory demand is reduced below the threshold.

55
Q

Suppose that a process takes 200MB of the main memory. This process has to be swapped by sending it to the backing memory and bringing a new process, of the same size, into the main memory. The transfer rate is 150MB/sec. How long does this context switch take?

A

It will take (200/150) * 2 = 2.667 s to send the memory to the backing store and bring the new process in. This is 2667 ms of total swapping time.

56
Q

What is a “pending I/O” during process swapping? Explain “double buffering” that is used to remedy pending I/O.

A

Short answer: Sometimes a process has to be swapped out to the backing store while it is transferring data from I/O; the swapped-out process would lose that data. The OS can use its own memory space to buffer the I/O data, and when the process is brought back into main memory, the OS delivers the buffered data to the process. The data is moved twice, which is called double buffering.

Swapping is constrained by other factors (besides swapping time). If we want to swap a process, we must be sure it is completely idle. Of particular concern is any pending I/O, i.e. a process may be waiting for an I/O operation when we want to swap that process to free up memory. However, if the I/O is asynchronously accessing the user memory for I/O buffers, then the process cannot be swapped. Assume that the I/O operation is queued because the device is busy. If we were to swap out process P1 and swap in process P2, the I/O operation might then attempt to use memory that now belongs to process P2. There are two main solutions to this problem: never swap a process with pending I/O, or execute I/O operations only into operating-system buffers. Transfers between operating-system buffers and process memory then occur only when the process is swapped in. This is called double buffering (double buffering itself also adds overhead; we now need to copy the data again, from kernel memory to user memory, before the user process can access it).

57
Q

What is the maximum size of a segment in an IA-32 system? What are the differences between LDT and GDT? The logical addresses in IA-32 systems consist of two parts. What are these two parts called and what are their purposes?

A

The maximum size of a segment in an IA-32 architecture is 4GB. The maximum number of segments per process is 16K.

First partition of up to 8K segments are private to process (kept in local descriptor table (LDT)).
Second partition of up to 8K segments shared among all processes (kept in global descriptor table (GDT)).

The logical address consists of two parts: the selector and the offset (selector, offset).

The selector is a 16-bit number:
s = 13 bits; g = 1 bit; p = 2 bits
s = segment number
g = indicates whether segment is in the GDT or LDT
p = deals with protection

The offset is a 32-bit number specifying the location of the byte within the segment in question.

The logical address space consists of up to 8K segments that are private to that process. The second partition consists of up to 8K segments that are shared among all the processes. Information about the first partition is kept in the local descriptor table (LDT); information about the second partition is kept in the global descriptor table (GDT). Each entry in the LDT and GDT consists of an 8-byte segment descriptor with detailed information about a particular segment, including the base location and limit of that segment.
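A sketch of decoding the selector in Python, assuming the layout above with s in the high 13 bits, then g, then p in the low 2 bits (the sample value is hypothetical):

selector = 0b0000000000101_1_10   # hypothetical selector: s = 5, g = 1, p = 2
s = selector >> 3                 # segment number (13 bits)
g = (selector >> 2) & 0x1         # GDT/LDT indicator bit
p = selector & 0x3                # protection bits
print(s, g, p)                    # 5 1 2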

58
Q

What are the five main categories of modern computer systems?

A
Personal Mobile Device (PMD)
Desktop computing
Servers
Clusters/warehouse scale computers
Embedded computers
59
Q

What is the primary task of an operating system?

A

Option 1: The primary task of an operating system is to act as an intermediary between the user of a computer and the computer’s hardware.
Option 2: The primary task of an operating system is to be a resource allocator that manages all resources, decides between conflicting requests for efficient and fair resource use, and controls programs within allocated space.

60
Q

What do we mean by “throughput” of memory?

A

Number of bytes accessed in a time unit (bytes/sec).

61
Q

What is meant by the performance difference between DRAM and CPU?

A

Throughput of DRAM is much smaller than what the CPU requires.

62
Q

How can we overcome the performance difference?

A

By using a memory hierarchy.

63
Q

What is dynamic relocation

A

Dynamic relocation is the process of relocating data or code currently in the computer memory to other parts of the computer, creating more efficient memory storage while a program is still active.

64
Q

a) How could we define efficiency in a “variable partitioning” memory system?
b) What happens when a new process arrives
c) What happens when the process exits?
d) What kind of information does the OS maintain in such a system?

A

a) The percentage of occupied memory when there is no room for a new process is the efficiency. The higher this percentage, the smaller the total size of the useless fragments.
b) When a process arrives, it is allocated memory from a hole large enough to accommodate it.
c) When a process exits, it frees its partition; adjacent free partitions are combined (i.e., creating a bigger hole).
d) The OS maintains information about allocated partitions and free partitions (holes).

65
Q

How does the “paging” system partition a program? Does it have to bring the whole program into memory?

A

A program is partitioned into fixed-size pages (the page size is a power of 2). In paging, only the needed pages of a program or its data have to be brought into memory, not the whole program.

66
Q

Imagine that we have fixed block sizes of 2048 bytes. Also, imagine that the process size is P. Write a C code that, 10000 times, randomly generates an integer P between 1 and 20000. Then, for each value of P, determine the amount of internal fragmentation and calculate its mean value. Copy/paste your code in your solution file. Also, report the average value of internal fragmentation.

A

import random
import statistics

test_size = 10000
rand_min = 1
rand_max = 20000
fragments = []
block_size = 2048

for i in range(test_size):
    P = random.randint(rand_min, rand_max)
    # Unused space in the last allocated block; the outer modulo keeps the
    # result 0 when P is an exact multiple of the block size.
    internal_fragmentation = (block_size - (P % block_size)) % block_size
    fragments.append(internal_fragmentation)

mean = statistics.mean(fragments)
print("Arithmetic mean (average value of internal fragmentation): " + str(mean))

67
Q

The following diagram shows the addressing system of the IA-32. What is the difference between addressing of a 4KB page and 4MB page?

A

For 4KB-pages, IA-32 uses a two-level paging scheme (p1=10bits; p2=10bits; d=12bits). One entry in the page directory is the Page_Size flag, which, if set, indicates the size of the page frame is 4MB and not the standard 4KB. If this flag is set, the page directory points directly to the 4MB page frame, bypassing the inner page table; and the 22 low-order bits in the linear address refer to the offset in the 4MB page frame.

68
Q

Addressing in the ARM architecture is shown below. Explain how this architecture works for 4KB, 16KB, 1MB, and 16MB pages.

A

The paging system in use depends on whether a page or a section is referenced (1MB and 16MB pages are called sections). One-level paging is used for 1MB and 16MB sections; two-level paging is used for 4KB and 16KB pages.

The ARM architecture also supports two levels of TLBs. At the outer level are two micro TLBs - a separate TLB for data and another for instructions. The micro TLB supports ASIDs as well. At the inner level is a single main TLB. Address translation begins at the micro TLB level. In the case of a miss, the main TLB is then checked. If both TLBs yield misses, a page walk must be performed in hardware.

69
Q

What are the advantages of using virtual memory?

A
  1. Only part of the program needs to be in memory for execution.
  2. Logical address space can be much larger than physical space.
  3. Several processes can share address space
  4. More efficient utilization of physical memory
  5. More programs can run concurrently
  6. Less I/O needed to load processes
70
Q

How could we implement virtual memory?

A

Demand paging and demand segmentation.

71
Q

What is the usual practice in the utilization of virtual address space?

A

A program consists of fixed-size code and different types of data. The stack starts at the highest virtual address and grows downward, and the code begins at address zero. Above the code is the data, and above the data the heap of dynamically allocated variables is placed.

The virtual address space of a process refers to the logical (or virtual) view of how a process is stored in memory. Typically, this view is that a process begins at a certain logical address, e.g., address 0, and exists in contiguous memory. We allow the heap to grow upward in memory as it is used for dynamic memory allocation. Similarly, we allow for the stack to grow downward in memory through successive function calls. The large blank space (or hole) between the heap and the stack is part of the virtual address space but will require actual physical pages only if the heap or stack grows. Virtual address spaces that include holes are known as sparse address spaces. Using a sparse address space is beneficial because the holes can be filled as the stack or heap segments grow or if we wish to dynamically link libraries (or possibly other shared objects) during program execution.

72
Q

What is a lazy swapper?

A

Short answer: A lazy swapper never swaps a page into memory unless the page will be needed. A swapper that deals with pages is a pager.

A demand-paging system is similar to a paging system with swapping where processes reside in secondary memory (usually a disk). When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, though, we use a lazy swapper. A lazy swapper never swaps a page into memory unless that page will be needed. A lazy swapper in the context of a demand-paging system is called a pager.

73
Q

What is a pager?

A

Short answer: It performs the swapping operation on pages rather than on segments.

In the context of a demand-paging system, use of the term “swapper” is technically incorrect. A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process. We thus use “pager,” rather than “swapper,” in connection with demand paging.

74
Q

How is the “valid-invalid” bit used in a page table?

A

Short answer: If there were no such bit, the initial contents of the table could be mistaken for valid frame numbers even if all the entries were zero. Hence, initially, all of the valid-invalid bits are zero. The first demand for a page causes a "page fault" because its bit is zero; after the page is read in, the bit is set to valid (one).

When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

With this scheme, we need some form of hardware support to distinguish between the pages that are in memory and the pages that are on the disk. The valid-invalid bit scheme can be used for this purpose. When this bit is set to "valid," the associated page is both legal and in memory. If the bit is set to "invalid," the page either is not valid (that is, not in the logical address space of the process) or is valid but is currently on the disk. The page-table entry for a page that is brought into memory is set as usual, but the page-table entry for a page that is not currently in memory is either simply marked invalid or contains the address of the page on disk. Notice that marking a page invalid will have no effect if the process never attempts to access that page.

75
Q

What is the intention of the following diagram? Write down steps 1 to 6.

A
  1. The CPU references a page that is not in physical memory, and its bit in the page table is "invalid"
  2. A trap (interrupt) occurs, and the OS takes over the execution process
  3. The OS finds the demanded page on the hard disk
  4. The OS brings the demanded page into physical memory
  5. The OS updates the page table's frame number and valid bit
  6. The interrupted instruction is restarted, and the process resumes execution
76
Q

What is zero-fill-on-demand strategy?

A

Short answer: The OS zeroes out the content of a frame before adding it to the list of free frames.

The zero-fill-on-demand strategy is a technique used to allocate free pages. Zero-fill-on-demand pages have been zeroed-out before being allocated, thus erasing the previous contents.

77
Q

Does the zero-fill-on-demand strategy increase the efficiency of demand paging?

A

No. It adds overhead. It does not help to identify free frames, because checking that a frame's content is zero is a lengthy process; it is much faster to test the valid bit of a frame to see if it is free. Hence, the only reason for zeroing the content is security.

78
Q

For a demand paging system, assume the following. The main memory access time is 300 ns and the access time to the hard disk is 11 ms (this includes all the overheads and swap-in and swap-out time). We want the effective access time for such a system to be only 20% larger than the main memory access time. What should be the maximum page fault rate?

A

My answer (maybe not correct):

360 ns = (1 - p) * 300 + 1.1 * 10^7 * p
360 = 300 - 300p + 1.1 * 10^7 * p
60 = 10999700 p
p = 0.000005455
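A quick Python check of the same computation (times in ns; 11 ms = 1.1 * 10^7 ns):

mem, disk = 300, 1.1e7
target = 1.2 * mem                    # EAT must be 20% above the memory access time
p = (target - mem) / (disk - mem)     # solve 360 = (1 - p) * 300 + p * disk
print(p)                              # ~5.455e-06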

79
Q

What is copy-on-write?

A

Copy-on-write allows both parent and child processes to initially share the same pages in memory. Only if either process modifies a shared page is the page copied.

Copy-on-write is a technique that works by allowing the parent and child processes to initially share the same pages. These shared pages are marked as copy-on-write pages, meaning that if either process writes to a shared page, a copy of the shared page is created.

E.g., assume that the child process attempts to modify a page containing portions of the stack, with the pages set to be copy-on-write. The OS will create a copy of this page, mapping it to the address space of the child process. The child process will then modify its copied page and not the page belonging to the parent process. Obviously, when the copy-on-write technique is used, only the pages that are modified by either process are copied; all unmodified pages can be shared by the parent and child processes. Note, too, that only pages that can be modified need be marked as copy-on-write. Pages that cannot be modified (pages containing executable code) can be shared by the parent and child.

80
Q

When a page fault occurs, what happens if there is no free frame?

A

The operating system has to replace a frame so it can bring in the demanded page.

81
Q

What is a modify (dirty) bit? What is the difference between the dirty bit and a valid bit?

A

The valid bit shows whether the content of the page-table entry is valid. The dirty bit shows whether the content of the frame has been changed. If the dirty bit is zero, there is no need to write the frame back to disk.

82
Q

What are the two main methods for implementing LRU?

A
  1. Using a counter next to each frame. Every time that frame is referenced, the counter is incremented. When it comes to replacing a frame, the frame with the smallest counter value is replaced. After each frame replacement, all counters are reset to zero.
  2. We can use a stack. When a page enters the main memory, the page number will be on top of the stack. Every time a page is referenced, its number goes to the top of the stack. When we want to replace a frame, the one at the bottom of the stack is replaced.
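A minimal sketch of the stack method in Python (not from the card), keeping the most recently used page at the end of a list and taking the victim from the front:

def reference(stack, page):
    if page in stack:
        stack.remove(page)    # move an already-resident page to the top
    stack.append(page)

stack = []
for pg in [7, 0, 1, 2, 0, 3]:
    reference(stack, pg)
print(stack)        # [7, 1, 2, 0, 3]
print(stack[0])     # 7 is at the bottom of the stack: the LRU victim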
83
Q

How is the “second chance” (or clock replacement) method implemented?

A

A reference bit (W) is assigned to each frame. Every time a frame is referenced, its W bit is set to 1. When a frame is replaced, all W bits are reset to 0. The FIFO strategy is implemented. When a page fault occurs, the frames are tested in FIFO order. If a frame has a zero W bit, it is replaced. If the frame has a one W bit, it is given a second chance, and its W bit is reset to zero. Then the next frame in FIFO order is checked, and so on.
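A minimal sketch of the victim search in Python (not from the card), assuming each frame carries a reference bit W that is set on every access:

def clock_victim(w, hand):
    """w: list of reference bits; hand: current FIFO/clock pointer."""
    while True:
        if w[hand] == 0:
            return hand              # 0 bit: this frame is the victim
        w[hand] = 0                  # 1 bit: give a second chance, clear the bit
        hand = (hand + 1) % len(w)

w = [1, 0, 1, 1]                     # reference bits of 4 frames
print(clock_victim(w, hand=0))       # frame 1 is replaced; frame 0's bit was cleared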

84
Q

What is enhanced second-chance replacement method?

A

It uses two bits for each frame, a reference bit and a dirty bit.

00 means that the page has not been referenced since the last reset and has not been modified. This page has the highest priority for replacement.
01 means that the page has not been referenced but has been modified. It is the next-best victim, although it needs a write-back to disk before replacement.
10 means that the page has been referenced but not modified. It will probably be used again soon, so it has the second lowest priority for replacement.
11 means that the page has been referenced and has been modified. This page has the lowest priority for replacement.

The frames are tested in FIFO order. If the pair of bits is 00, the page is replaced. Otherwise, the reference bit of the page is set to zero and the next frame is tested. If no frame is found in the first round, a second round is performed.

85
Q

What is thrashing?

A

If a process is using all of its allocated memory space and it needs another page, then a page fault would occur. An active page of this process has to be replaced, which causes another page fault. Hence, each page fault replaces a frame which is being used and causes another page fault. Continuous page faults severely reduce performance. This situation is known as thrashing.

86
Q

What is the relationship between locality and thrashing?

A

If the overall locality of all active processes is larger than the physical memory, then thrashing occurs. This situation means that the active processes need to refer to a more extensive space than the main memory. Hence, one active frame has to be replaced by a new page, and this loop causes thrashing.

87
Q

Windows implements demand-paging using clustering. What is clustering?

A

Besides bringing in the demanded page, it brings in the surrounding pages too. E.g., if page number N of the virtual space is demanded, then pages N-1 and N+1 are brought into main memory as well.

88
Q

What is the working set of a Process Pi?

A

It is the set of all pages referenced in the most recent Δ (delta) references.

89
Q

What happens if the working set consists of a small number of page references?

A

It means that locality of the process is very high; hence, the process has needed to reference only a few pages.

90
Q

Windows uses two concepts of “working set minimum” and “working set maximum.” What are these two concepts and how are they used?

A

Working set minimum is the minimum number of pages the process is guaranteed to have in memory. A process may be assigned as many pages up to its working set maximum.

91
Q

What are the similarities between Windows, Solaris, and Linux in dealing with virtual memory.

A

They all use demand paging and copy-on-write. Each system also uses a variation of LRU approximation known as the clock algorithm.

92
Q

What does Windows do when a process loses frames below its “working set minimum.”

A

Windows will take away frames from processes that have more frames than their working set minimum and give them to processes that have fewer frames than their minimum.

93
Q

What is prepaging?

A

When a process starts, it has high page fault rate because none of its needed pages are resident in the memory. To avoid high page faults, all or most of the pages of the process are brought into the main memory before they are demanded and before generating page faults. After a while, if some of these pages are not being used, they will be replaced.

94
Q

What is page-fault frequency?

A

The number of page faults that occur in a unit of time for a process is called the page-fault frequency. E.g., if the number of faults that arise for a process in every 10,000 references is very low, it means that the process has high locality and is not using all of its allocated frames, and we can take away some of its unused frames. If the page-fault frequency is very high, it means that the process needs more frames, and we should assign more frames to that process.

95
Q

Suppose we are using a stack to implement the LRU replacement method. Consider the string of references shown below. Suppose that we know the content of the stack after point a. What will be the content of the stack after point b?

A

7 → 2 → 1 → 0 → 4

96
Q

Suppose that the following image shows the circular queue of pages in a FIFO page-replacement mechanism. FIFO is now looking at the "next victim." Which frame will be replaced in this list? What will the list of reference bits look like just before the page replacement?

A

Based on the FIFO strategy, this frame should be replaced, but it is given a second chance because it has been referenced and its reference bit is 1. Its bit is set to 0, and the next frame in line is checked. This goes on until a frame with a 0 reference bit is found, which is then replaced.

97
Q

Let us assume that each row of the following matrix is stored in one page. Which of the two pieces of code would cause fewer page faults?

A

Solution: The code in part B) reads, for each value of i, 128 values of j, which is one row of the matrix. For example, data[0][0], …, data[0][127] are all read from one page. Hence, for each row of the matrix, one page fault occurs; overall, the code in part B) causes 128 page faults, one for reading each row of the matrix. The code in part A), on the other hand, causes one page fault for each element of the matrix, totaling 128 x 128 page faults.
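The original codes A) and B) are not reproduced on this card; the sketch below is a Python reconstruction, assuming a 128x128 matrix with row i stored on page i and, pessimistically, a single resident frame, so a fault occurs whenever the referenced page changes:

def faults(row_major):
    last, count = None, 0
    for a in range(128):
        for b in range(128):
            i = a if row_major else b   # the row index selects the page
            if i != last:
                count += 1
                last = i
    return count

print(faults(row_major=True))    # 128 faults: case B), one per row
print(faults(row_major=False))   # 16384 = 128 x 128 faults: case A)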

98
Q

Consider the following figure, which shows a sequence of page references. Why does the working set at time t1 consist of 1, 2, 5, 6, 7?

A

The value of Δ is 10; hence, the list of pages referenced in the past ten steps, at time t1, consists of 1, 2, 5, 6, 7.

99
Q

What is the working set in the following diagram at time t2?

A

{3, 4}

100
Q

Suppose that the main memory consists of 5 frames. The following sequence of frames has been referenced: 1, 3, 2, 4, 5, 2, 1, 3. At the end of this sequence, a page fault occurs, and one of the frames must be replaced using the "clock" strategy. Which frame is replaced?

A

Frame 4 should be replaced. (The row with all 0s should be replaced; every time a page is referenced, you put all 1s in its row and all 0s in its column.)

101
Q

Explain the following diagram. What are p, q, s, r, and d? Suppose the 64-bit logical address F1234567890ABCDE is mapped into the physical address 4FFCDE. What are the values of p, d, and r? Also, give an example of the value of q.

A

We see that pages are 4KB, since the 12 least significant bits of the physical and logical addresses are the same. p is the page number, which is 52 bits: the first 13 hex digits of the address. r is 4FF and d is CDE.

102
Q

What are the contents of the entries of the page table below?

A

see diagram

103
Q

Suppose that we have the following sequence of references to virtual pages. Each number in this sequence of 21 references shows the page number that the CPU has referred to:
1, 4, 1, 6, 1, 1, 1, 1, 6, 1, 1, 1, 1, 6, 1, 1, 1, 1, 6, 1, 1

Using FIFO, how many page faults occur if a) we only have one frame in our main memory? b) we have two frames in the memory? c) we have three frames? d) we have four frames in the memory?

A

If the main memory has only one frame, then each time the sequence accesses a new page, a page fault occurs. For example, if the sequence has four consecutive 1's, then only the first reference to page 1 causes a fault; the other three references to page 1 are hits. Hence, in the above sequence, 11 page faults occur if we only have one frame. If we have 2 frames in the memory, then 4 page faults occur. If we have 3 frames, then 3 faults occur, and with 4 frames we will also have 3 page faults.
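A short Python check of these counts, simulating FIFO with 1-4 frames:

from collections import deque

refs = [1,4,1,6,1,1,1,1,6,1,1,1,1,6,1,1,1,1,6,1,1]

def fifo_faults(refs, nframes):
    frames, faults = deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()    # evict the oldest page
            frames.append(p)
    return faults

print([fifo_faults(refs, n) for n in range(1, 5)])   # [11, 4, 3, 3]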

104
Q

Suppose that we have the following string of references: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. Further, assume that we only have three frames in the main memory. We want to use the FIFO strategy for page replacement. Fill out the frames in the image below with the appropriate page numbers.

A

15 page faults

105
Q

For the following string of references, fill out the image with appropriate page numbers based on Belady's optimal replacement method: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

A

9 page faults

106
Q

For the following string of references, fill out the image with appropriate page numbers based on the LRU (least recently used) replacement method: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

A

12 page faults
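The images for these three cards are not reproduced here, but the fault counts (15 for FIFO, 9 for optimal, 12 for LRU) can be checked with a small Python simulator over the same string with 3 frames (a sketch, not from the cards):

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]

def simulate(refs, nframes, policy):
    frames, faults = [], 0
    for t, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(p)
            continue
        if policy == "FIFO":
            victim = 0    # frames stays in arrival order for FIFO
        elif policy == "LRU":
            # evict the resident page whose last use is oldest
            victim = min(range(nframes),
                         key=lambda i: max(u for u in range(t) if refs[u] == frames[i]))
        else:  # "OPT": evict the page whose next use is farthest away (or never)
            dist = [refs[t+1:].index(q) if q in refs[t+1:] else len(refs) for q in frames]
            victim = dist.index(max(dist))
        frames.pop(victim)
        frames.append(p)
    return faults

print({pol: simulate(refs, 3, pol) for pol in ("FIFO", "OPT", "LRU")})
# {'FIFO': 15, 'OPT': 9, 'LRU': 12}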

107
Q

For a demand paging system, consider that the page fault rate is p. What is the Effective Access Time (EAT) for such a system?

A

EAT = (1 - p) * memory access time + p * hard disk access time

The average access time is: (probability of finding the page in a frame x access time to the frame) + (probability of a page fault x delay to get the page from the disk).

108
Q

What is Belady’s optimal replacement method?

A

It assumes that it knows the future and replaces the page that will not be used for the longest period of time.