CSC 246 Final Flashcards

1
Q

Deadlock Avoidance - simplest technique

A

Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need

The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition

Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes

2
Q

Deadlock avoidance - Safe state (and basic facts)

A

System is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i

That is:
If Pi resource needs are not immediately available, then Pi can wait until all Pj have finished
When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate
When Pi terminates, Pi+1 can obtain its needed resources, and so on

Basic facts:
If a system is in safe state ⇒ no deadlocks

If a system is in unsafe state ⇒ possibility of deadlock

Avoidance ⇒ ensure that a system will never enter an unsafe state.

3
Q

Resource-Allocation Graph Scheme

A

If there is only a single instance of each resource type, use a resource-allocation graph

  • Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line
  • Claim edge converts to request edge when a process requests a resource
  • Request edge converted to an assignment edge when the resource is allocated to the process
  • When a resource is released by a process, assignment edge reconverts to a claim edge
  • Resources must be claimed a priori in the system

Suppose that process Pi requests a resource Rj:
The request can be granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource allocation graph

4
Q

Banker’s Algorithm

A

Used when there are multiple instances of resources; each process must claim its maximum use a priori

Let n = number of processes, and m = number of resource types.

Available – vector of length m: the number of available instances of each resource type

Max – n x m matrix: the maximum demand of each process

Allocation – n x m matrix: the number of instances of each type currently allocated to each process

Need[i,j] = Max[i,j] – Allocation[i,j]

Request algorithm:
Must check that Request <= Need (cannot exceed maximum claim)

Must check that Request <= Available; if not, Pi must wait until the resources become available

Then pretend to allocate the requested resources and run the safety algorithm to produce a safety sequence
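
A compact sketch of the safety algorithm in C, using the Available/Allocation/Need structures defined above (N, M, and the function name are illustrative, not from the cards):

```c
#include <stdbool.h>
#include <string.h>

/* Banker's safety algorithm sketch: N processes, M resource types. */
#define N 5
#define M 3

bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = { false };
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < N; ) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;                  /* Need_i <= Work? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                        /* P_i finishes and releases */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;              /* no safety sequence exists */
    }
    return true;                                  /* all processes can finish */
}
```

A request is granted only if, after pretending to allocate it, is_safe() still returns true; the order in which processes are marked finished is the safety sequence.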

5
Q

Deadlock Detection - single instance of resources

A

Maintain wait-for graph
Nodes are processes
Pi → Pj if Pi is waiting for Pj

Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock

An algorithm to detect a cycle in a graph requires on the order of n² operations, where n is the number of vertices in the graph
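
As a sketch, the cycle check can be done with depth-first search over an adjacency matrix (waits_for, N, and the coloring scheme are illustrative names, not from the cards); the O(n²) bound comes from examining every pair of vertices in the worst case:

```c
#include <stdbool.h>

/* Cycle detection in a wait-for graph: waits_for[i][j] is true
   if process Pi is waiting for process Pj. */
#define N 8

static bool waits_for[N][N];
enum { WHITE, GRAY, BLACK };
static int color[N];

static bool dfs(int u) {
    color[u] = GRAY;                         /* u is on the current DFS path */
    for (int v = 0; v < N; v++) {
        if (!waits_for[u][v]) continue;
        if (color[v] == GRAY) return true;   /* back edge -> cycle -> deadlock */
        if (color[v] == WHITE && dfs(v)) return true;
    }
    color[u] = BLACK;                        /* u fully explored, no cycle here */
    return false;
}

bool has_deadlock(void) {
    for (int i = 0; i < N; i++) color[i] = WHITE;
    for (int i = 0; i < N; i++)
        if (color[i] == WHITE && dfs(i)) return true;
    return false;
}
```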

6
Q

Deadlock Detection algorithm usage issues

A

When, and how often, to invoke depends on:
  • How often a deadlock is likely to occur
  • How many processes will need to be rolled back
    • one for each disjoint cycle

If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and so we would not be able to tell which of the many deadlocked processes “caused” the deadlock.

7
Q

Recovery from Deadlock: Process Termination (order considerations)

A

Abort all deadlocked processes

Abort one process at a time until the deadlock cycle is eliminated

In which order should we choose to abort?

  • Priority of the process
  • How long process has computed, and how much longer to completion
  • Resources the process has used
  • Resources process needs to complete
  • How many processes will need to be terminated
  • Is process interactive or batch?
8
Q

Recovery from Deadlock: Resource Preemption

A

Selecting a victim – minimize cost

Rollback – return to some safe state, restart process for that state

Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor

9
Q

Base and Limit Registers (basic definition)

A

A pair of base and limit registers define the logical address space
CPU must check every memory access generated in user mode to be sure it is between base and limit for that user
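
As a sketch, the hardware check amounts to the following (the register values are arbitrary examples, not from the cards):

```c
#include <stdbool.h>
#include <stdint.h>

/* Base/limit check performed on every user-mode memory access. */
static uint32_t base_reg  = 300040;   /* example base value  */
static uint32_t limit_reg = 120900;   /* example limit value */

bool access_ok(uint32_t addr) {
    /* legal iff base <= addr < base + limit; otherwise trap to the OS */
    return addr >= base_reg && addr < base_reg + limit_reg;
}
```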

10
Q

Address Binding

A

Programs on disk, ready to be brought into memory to execute, form an input queue
Without support, a program must be loaded into address 0000

Further, addresses represented in different ways at different stages of a program’s life

  • Source code addresses usually symbolic
  • Compiled code addresses bind to relocatable addresses
    • e.g., “14 bytes from beginning of this module”
  • Linker or loader will bind relocatable addresses to absolute addresses
    • e.g., 74014
  • Each binding maps one address space to another
11
Q

Address binding of instructions and data to memory addresses can happen at three different stages

A

-Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location changes
-Load time: Must generate relocatable code if memory location is not known at compile time
-Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another
Need hardware support for address maps (e.g., base and limit registers)

12
Q

Logical vs. Physical Address Space (Concepts)

A

-The concept of a logical address space that is bound to a separate physical address space is central to proper memory management
Logical address – generated by the CPU; also referred to as virtual address
Physical address – address seen by the memory unit
-Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in execution-time address-binding scheme

  • Logical address space is the set of all logical addresses generated by a program
  • Physical address space is the set of all physical addresses corresponding to these logical addresses
13
Q

Memory Management Unit (MMU)

A

Hardware device that at run time maps virtual to physical address

To start, consider simple scheme where the value in the relocation register is added to every address generated by a user process at the time it is sent to memory

  • Base register now called relocation register
  • MS-DOS on Intel 80x86 used 4 relocation registers

The user program deals with logical addresses; it never sees the real physical addresses

  • Execution-time binding occurs when reference is made to location in memory
  • Logical address bound to physical addresses
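
A minimal sketch of this simple scheme, using the textbook-style example of a relocation register holding 14000 (the values are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* The MMU adds the relocation register to every logical address. */
static const uint32_t relocation_register = 14000;

uint32_t translate(uint32_t logical) {
    return logical + relocation_register;   /* physical address */
}

int main(void) {
    printf("logical 346 -> physical %u\n", translate(346)); /* 14346 */
    return 0;
}
```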
14
Q

Dynamic Loading (benefits)

A

Routine is not loaded until it is called
Better memory-space utilization; unused routine is never loaded
All routines kept on disk in relocatable load format
Useful when large amounts of code are needed to handle infrequently occurring cases
No special support from the operating system is required
Implemented through program design
OS can help by providing libraries to implement dynamic loading

15
Q

Static vs. Dynamic linking

A
  • Static linking – system libraries and program code combined by the loader into the binary program image
  • Dynamic linking – linking postponed until execution time

Small piece of code, the stub, used to locate the appropriate memory-resident library routine
Stub replaces itself with the address of the routine, and executes the routine
Operating system checks if the routine is in the process’s memory address space
 - If not in address space, add to address space
Dynamic linking is particularly useful for libraries
 - Also known as shared libraries

16
Q

Swapping

A
  • A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
    • Total physical memory space of processes can exceed physical memory
  • Backing store and roll in, roll out
  • Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
  • System maintains a ready queue of ready-to-run processes which have memory images on disk

Does the swapped out process need to swap back in to same physical addresses?

  • Depends on address binding method
  • Plus consider pending I/O to / from process memory space

Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows)
 - Swapping normally disabled
 - Started if more than a threshold amount of memory is allocated
 - Disabled again once memory demand drops below the threshold

17
Q

Backing store (swapping)

A

Fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images

18
Q

Roll out, roll in (swapping)

A

Swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed

19
Q

Context Switch Time including Swapping

A

If the next process to be put on the CPU is not in memory, need to swap out a process and swap in the target process
Context-switch time can then be very high

Can reduce swap time by reducing the amount of memory swapped – by knowing how much memory is really being used
System calls to inform the OS of memory use: request_memory() and release_memory()

Other constraints as well on swapping

  • Pending I/O – can’t swap out as I/O would occur to wrong process
  • Or always transfer I/O to kernel space, then to I/O device
    • Known as double buffering, adds overhead
  • Standard swapping not used in modern operating systems
    • But modified version common
    • Swap only when free memory extremely low
20
Q

Swapping on Mobile Systems

A

Not typically supported

Instead use other methods to free memory if low

  • iOS asks apps to voluntarily relinquish allocated memory
  • Read-only data thrown out and reloaded from flash if needed
  • Failure to free can result in termination
  • Android terminates apps if low free memory, but first writes application state to flash for fast restart
  • Both OSes support paging as discussed below
21
Q

Contiguous Memory Allocation

A

Contiguous allocation is one early method of allocating memory

Main memory is usually divided into two partitions:
 - Resident operating system, usually held in low memory with the interrupt vector
 - User processes then held in high memory
 - Each process contained in a single contiguous section of memory

Relocation registers used to protect user processes from each other, and from changing operating-system code and data

 - Base register contains value of smallest physical address
 - Limit register contains range of logical addresses
 - MMU maps logical address dynamically
 - Can then allow actions such as kernel code being transient and kernel changing size
22
Q

Multiple-partition allocation

A

Multiple-partition allocation:
 - Degree of multiprogramming limited by number of partitions
 - Variable-partition sizes for efficiency (sized to a given process’s needs)
 - Hole – block of available memory; holes of various sizes are scattered throughout memory
 - When a process arrives, it is allocated memory from a hole large enough to accommodate it
 - Process exiting frees its partition; adjacent free partitions are combined
 - Operating system maintains information about: a) allocated partitions  b) free partitions (holes)

Dynamic Storage-Allocation Problem:
How to satisfy a request of size n from a list of free holes?

First-fit: Allocate the first hole that is big enough (see the sketch after this list)

Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
Produces the smallest leftover hole

Worst-fit: Allocate the largest hole; must also search entire list
Produces the largest leftover hole (worst choice for speed and storage)
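
For example, first-fit over a linked list of holes might look like this sketch (the struct and names are illustrative, not from the cards):

```c
#include <stddef.h>

/* One node per free hole in memory. */
struct hole {
    size_t size;
    struct hole *next;
};

struct hole *first_fit(struct hole *free_list, size_t n) {
    for (struct hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= n)   /* first hole big enough wins */
            return h;
    return NULL;            /* no hole can satisfy the request */
}
```

Best-fit and worst-fit differ only in that they must scan the whole list (unless it is kept sorted by size) and track the smallest or largest sufficient hole.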

23
Q

Fragmentation

A

External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous

Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used

First-fit analysis reveals that, given N allocated blocks, another 0.5 N blocks are lost to fragmentation
 - That is, about 1/3 of memory may be unusable -> the 50-percent rule

24
Q

Compaction

A

Reduce external fragmentation by compaction

  • Shuffle memory contents to place all free memory together in one large block
  • Compaction is possible only if relocation is dynamic, and is done at execution time
  • I/O problem
    • Latch job in memory while it is involved in I/O
    • Do I/O only into OS buffers

Now consider that backing store has same fragmentation problems

25
Q

Paging

A

Physical address space of a process can be noncontiguous; process is allocated physical memory whenever the latter is available
 - Avoids external fragmentation
 - Avoids problem of varying-sized memory chunks

Divide physical memory into fixed-sized blocks called frames
 - Size is a power of 2, between 512 bytes and 16 MB
Divide logical memory into blocks of the same size called pages

Keep track of all free frames
To run a program of size N pages, need to find N free frames and load the program
Set up a page table to translate logical to physical addresses
Backing store likewise split into pages
Still have internal fragmentation

26
Q

Address Translation Scheme (for Paging)

A

Address generated by the CPU is divided into:
 - Page number (p) – used as an index into a page table which contains the base address of each page in physical memory
 - Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit

For a given logical address space of size 2^m and page size 2^n, the page number is the high-order m − n bits and the page offset is the low-order n bits

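A minimal sketch of the split, assuming 32-bit logical addresses (m = 32) and 4 KB pages (n = 12), both just example values:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                       /* n = 12 -> 4 KB pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

int main(void) {
    uint32_t logical = 0x00ABCDEF;
    uint32_t p = logical >> PAGE_SHIFT;     /* page number: high m - n bits */
    uint32_t d = logical & PAGE_MASK;       /* page offset: low n bits */
    printf("page %u, offset %u\n", p, d);
    return 0;
}
```
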
27
Q

Swapping vs. Paging

A

In general: swapping refers to copying an entire process to disk
Paging is a memory-management technique
However, the concept of paging also includes copying data to disk
Windows has a page file and a swap file (pagefile.sys, swapfile.sys)
To complicate things, these terms are often used interchangeably!

28
Q

Implementation of Page Table

A

Page table is kept in main memory
Page-table base register (PTBR) points to the page table
Page-table length register (PTLR) indicates size of the page table

In this scheme every data/instruction access requires two memory accesses
 - One for the page table and one for the data/instruction
The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffers (TLBs)

29
Q

Translation look-aside buffers (TLBs)

A

Special fast-lookup hardware cache, also called associative memory, used with page tables to solve the two-memory-access problem

30
Q

Effective Access Time (EAT - Paging)

A

Hit ratio = α – percentage of times that a page number is found in the TLB (associative registers); related to the number of associative registers

Consider α = 80%, 20ns for TLB search, 100ns for memory access
EAT = 0.80 x 100 + 0.20 x 200 = 120ns
(the 200ns term is doubled for the 2 memory accesses on a TLB miss; TLB search time is treated as negligible here)

Consider a more realistic hit ratio: α = 99%, 100ns for memory access
EAT = 0.99 x 100 + 0.01 x 200 = 101ns

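The card's arithmetic as a tiny C function, under the same simplification (a hit costs one memory access, a miss costs two, TLB search time ignored):

```c
#include <stdio.h>

static double eat_ns(double alpha, double mem_ns) {
    return alpha * mem_ns + (1.0 - alpha) * 2.0 * mem_ns;
}

int main(void) {
    printf("alpha=0.80: %.0f ns\n", eat_ns(0.80, 100.0)); /* 120 ns */
    printf("alpha=0.99: %.0f ns\n", eat_ns(0.99, 100.0)); /* 101 ns */
    return 0;
}
```
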
31
Q

Valid-invalid bit (Paging)

A

Valid-invalid bit attached to each entry in the page table:
 - “valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page
 - “invalid” indicates that the page is not in the process’ logical address space
 - Or use page-table length register (PTLR)
Any violations result in a trap to the kernel

Memory protection implemented by associating a protection bit with each frame to indicate if read-only or read-write access is allowed
 - Can also add more bits to indicate page execute-only, and so on

32
Q

Structure of the Page Table (3 different types)

A

Memory structures for paging can get huge using straightforward methods
 - Consider a 32-bit logical address space as on modern computers
 - Page size of 4 KB (2^12)
 - Page table would have 1 million entries (2^32 / 2^12)
 - If each entry is 4 bytes -> 4 MB of physical memory for the page table alone
   - That amount of memory used to cost a lot
   - Don’t want to allocate that contiguously in main memory

Three structures:
 - Hierarchical Paging*
 - Hashed Page Tables
 - Inverted Page Tables*

33
Q

Hierarchical Page Tables

A

Break up the logical address space into multiple page tables
A simple technique is a two-level page table
 - We then page the page table
Known as a forward-mapped page table
 - p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page table

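A sketch of the two-level split, assuming the classic example layout of a 10-bit outer index (p1), 10-bit inner index (p2), and 12-bit offset (d) in a 32-bit address:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t logical = 0x12345678;
    uint32_t p1 = logical >> 22;            /* top 10 bits: outer table index */
    uint32_t p2 = (logical >> 12) & 0x3FF;  /* next 10 bits: inner table index */
    uint32_t d  = logical & 0xFFF;          /* low 12 bits: page offset */
    printf("p1=%u p2=%u d=%u\n", p1, p2, d);
    return 0;
}
```
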
34
Q

Inverted Page Table

A

Rather than each process having a page table and keeping track of all possible logical pages, track all physical pages
One entry for each real page of memory
Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page

Decreases memory needed to store each page table, but increases time needed to search the table when a page reference occurs
Use a hash table to limit the search to one — or at most a few — page-table entries
 - TLB can accelerate access
But how to implement shared memory?
 - Only one mapping of a virtual address to the shared physical address

35
Q

Shared Pages vs. Private Code

A

Shared code
 - One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers)
 - Similar to multiple threads sharing the same process space
 - Also useful for interprocess communication if sharing of read-write pages is allowed

Private code and data
 - Each process keeps a separate copy of the code and data
 - The pages for the private code and data can appear anywhere in the logical address space

36
Q

Segmentation

A

Memory-management scheme that supports the user view of memory
A program is a collection of segments
A segment is a logical unit such as:
 - main program, procedure, function, method, object
 - local variables, global variables, common block
 - stack, symbol table, arrays

37
Q

Segmentation architecture

A

Logical address consists of a two-tuple: <segment-number, offset>

Segment table – maps the two-dimensional logical addresses to physical addresses; each table entry has:
 - base – contains the starting physical address where the segment resides in memory
 - limit – specifies the length of the segment

Segment-table base register (STBR) points to the segment table’s location in memory
Segment-table length register (STLR) indicates number of segments used by a program; segment number s is legal if s < STLR

Protection: with each entry in the segment table associate:
 - validation bit = 0 ⇒ illegal segment
 - read/write/execute privileges
Protection bits associated with segments; code sharing occurs at segment level

Since segments vary in length, memory allocation is a dynamic storage-allocation problem
No internal fragmentation, since segments can be exactly the size needed

38
Q

Virtual Memory (Pros and Cons/background)

A

Code needs to be in memory to execute, but the entire program is rarely used
 - Error code, unusual routines, large data structures
 - Entire program code not needed at the same time

Pros of the ability to execute a partially-loaded program:
 - Program no longer constrained by limits of physical memory
 - Each program takes less memory while running -> more programs run at the same time
   - Increased CPU utilization and throughput with no increase in response time or turnaround time
 - Less I/O needed to load or swap programs into memory -> each user program runs faster

39
Q

Virtual Memory (definition)

A

Virtual memory – separation of user logical memory from physical memory
 - Only part of the program needs to be in memory for execution
 - Logical address space can therefore be much larger than physical address space
 - Allows address spaces to be shared by several processes
 - Allows for more efficient process creation
 - More programs running concurrently
 - Less I/O needed to load or swap processes

40
Q

Virtual Address Space (definition)

A

Virtual address space – logical view of how a process is stored in memory
 - Usually starts at address 0, contiguous addresses until end of space
 - Meanwhile, physical memory organized in page frames
 - MMU (memory management unit) must map logical to physical

Virtual memory can be implemented via:
 - Demand paging
 - Demand segmentation

41
Q

Demand Paging

A

Could bring entire process into memory at load time
Or bring a page into memory only when it is needed:
 - Less I/O needed, no unnecessary I/O
 - Less memory needed
 - Faster response
 - More users

Similar to a paging system with swapping (processes reside in secondary memory, such as a disk)
Page is needed ⇒ reference to it
 - invalid reference ⇒ abort
 - not-in-memory ⇒ bring to memory

Lazy swapper – never swaps a page into memory unless that page will be needed
 - Swapper that deals with pages is a pager

Extreme case – start process with no pages in memory
 - OS sets instruction pointer to first instruction of process, non-memory-resident -> page fault
 - And likewise for every other page on its first access
 - Pure demand paging
Actually, a given instruction could access multiple pages -> multiple page faults
 - Consider fetch and decode of an instruction which adds 2 numbers from memory and stores the result back to memory
 - Pain decreased because of locality of reference

Hardware support needed for demand paging:
 - Page table with valid/invalid bit
 - Secondary memory (swap device with swap space)
 - Instruction restart

42
Q

Page Fault

A

If there is a reference to a page, the first reference to that page will trap to the operating system: page fault

1) Operating system looks at another table to decide:
 - Invalid reference ⇒ abort
 - Just not in memory ⇒ continue
2) Find a free frame
3) Swap page into frame via scheduled disk operation
4) Reset tables to indicate page now in memory; set validation bit = v
5) Restart the instruction that caused the page fault

43
Q

Demand Paging Optimizations

A

Swap space I/O faster than file system I/O even if on the same device
 - Swap allocated in larger chunks, less management needed than file system

Copy entire process image to swap space at process load time, then page in and out of swap space
 - Used in older BSD Unix

Demand page in from program binary on disk, but discard rather than paging out when freeing frame
 - Used in Solaris and current BSD
 - Still need to write to swap space:
   - Pages not associated with a file (like stack and heap) – anonymous memory
   - Pages modified in memory but not yet written back to the file system

Mobile systems
 - Typically don’t support swapping
 - Instead, demand page from file system and reclaim read-only pages (such as code)

44
Q

Copy-on-Write (COW)

A

Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory
 - If either process modifies a shared page, only then is the page copied
COW allows more efficient process creation, as only modified pages are copied

In general, free pages are allocated from a pool of zero-fill-on-demand pages
 - Pool should always have free frames for fast demand-page execution
 - Don’t want to have to free a frame as well as other processing on page fault

vfork() variation on fork() system call has parent suspend and child using copy-on-write address space of parent
 - Designed to have child call exec()
 - Very efficient

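The effect is visible from user space with an ordinary fork(): both processes start from the same pages, and the first write gives each its own copy. A minimal POSIX sketch:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 42;                 /* page holding x is shared after fork (COW) */
    pid_t pid = fork();
    if (pid == 0) {             /* child: first write triggers the page copy */
        x = 99;
        printf("child sees %d\n", x);
        exit(0);
    }
    wait(NULL);
    printf("parent still sees %d\n", x);   /* 42: the copies diverged */
    return 0;
}
```
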
45
Q

Page Replacement (definition and basic steps)

A

Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement
Use modify (dirty) bit to reduce overhead of page transfers – only modified pages are written to disk
Page replacement completes separation between logical memory and physical memory – large virtual memory can be provided on a smaller physical memory

Basic steps:
1) Find the location of the desired page on disk
2) Find a free frame:
 - If there is a free frame, use it
 - If there is no free frame, use a page-replacement algorithm to select a victim frame
 - Write victim frame to disk if dirty
3) Bring the desired page into the (newly) free frame; update the page and frame tables
4) Continue the process by restarting the instruction that caused the trap

Note: now potentially 2 page transfers per page fault – increasing EAT

46
Q

Page and Frame Replacement Algorithms

A

Frame-allocation algorithm determines:
 - How many frames to give each process
 - Which frames to replace

Page-replacement algorithm
 - Want lowest page-fault rate on both first access and re-access
 - Evaluate an algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string
 - Results depend on the number of frames available

Examples: FIFO, Optimal, LRU

47
Q

First-In-First-Out (FIFO) Algorithm (page replacement)

A

Replace the page that has been in memory the longest
 - How to track ages of pages? Just use a FIFO queue
Adding more frames can cause more page faults! This is Belady’s Anomaly

48
Q

Optimal Algorithm (page replacement)

A

Replace the page that will not be used for the longest period of time
 - 9 faults is optimal for the slide example
How do you know this? You can’t read the future
Used for measuring how well your algorithm performs
No Belady’s anomaly

49
Q

Least Recently Used (LRU) Algorithm (page replacement)

A

Use past knowledge rather than future
Replace the page that has not been used for the longest amount of time
Associate time of last use with each page
12 faults on the slide example – better than FIFO but worse than OPT
Generally a good algorithm and frequently used
But how to implement?

Counter implementation
 - Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter
 - When a page needs to be changed, look at the counters to find the smallest value
   - Search through the table needed

Stack implementation (better)
 - Keep a stack of page numbers in a doubly linked form
 - Page referenced: move it to the top
   - Requires 6 pointers to be changed
   - Easier to take from the top, but each update is more expensive
 - No search for replacement

No Belady’s anomaly
LRU needs special hardware and is still slow, so approximations are needed

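A minimal sketch of the counter implementation described above (NFRAMES and the array names are illustrative):

```c
#include <stdint.h>

#define NFRAMES 64

static uint64_t last_used[NFRAMES];  /* logical clock value at last reference */
static uint64_t clock_now;           /* incremented on every memory reference */

void on_reference(int frame) {
    last_used[frame] = ++clock_now;  /* stamp the frame */
}

int lru_victim(void) {
    int victim = 0;
    for (int f = 1; f < NFRAMES; f++)           /* linear search for the */
        if (last_used[f] < last_used[victim])   /* smallest time stamp   */
            victim = f;
    return victim;   /* least recently used frame */
}
```

The full-table search in lru_victim() is exactly the cost the card complains about, which motivates the stack implementation and the hardware approximations on the next card.
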
50
Q

LRU approximation algorithms

A

LRU needs special hardware and is still slow

Reference bit
 - With each page associate a bit, initially = 0
 - When page is referenced, bit set to 1
 - Replace any page with reference bit = 0 (if one exists)
 - We do not know the order, however

Second-chance algorithm (clock replacement)
 - Generally FIFO, plus hardware-provided reference bit
 - If page to be replaced has:
   - reference bit = 0 -> replace it
   - reference bit = 1 -> set reference bit to 0, leave page in memory, and consider the next page, subject to the same rules

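A minimal sketch of second-chance (clock) replacement, assuming a circular scan over NFRAMES frames; ref_bit[] is an illustrative name standing in for the hardware-provided reference bits:

```c
#define NFRAMES 64

static int ref_bit[NFRAMES];
static int hand;   /* clock hand: next frame to examine */

int clock_victim(void) {
    for (;;) {
        if (ref_bit[hand] == 0) {        /* no second chance left */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[hand] = 0;               /* give a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}
```
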
51
Q

Counting Algorithms (page replacement)

A

Keep a counter of the number of references that have been made to each page
 - Not common

Least Frequently Used (LFU) Algorithm: replaces the page with the smallest count
Most Frequently Used (MFU) Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used

52
Q

Fixed frame allocation

A

Each process needs a minimum number of frames

Equal allocation – for example, if there are 100 frames (after allocating frames for the OS) and 5 processes, give each process 20 frames
 - Keep some as a free-frame buffer pool

Proportional allocation – allocate according to the size of the process
 - Dynamic, as degree of multiprogramming and process sizes change

53
Q

Priority frame allocation

A

Use a proportional allocation scheme using priorities rather than size

If process Pi generates a page fault:
 - select for replacement one of its frames, or
 - select for replacement a frame from a process with a lower priority number

54
Q

Global vs. local allocation (replacement)

A

Global replacement – process selects a replacement frame from the set of all frames on the system; one process can take a frame from another
 - But then process execution time can vary greatly
 - But greater throughput, so more common

Local replacement – each process selects from only its own set of allocated frames
 - More consistent per-process performance
 - But possibly underutilized memory

55
Q

Thrashing

A

If a process does not have “enough” pages, the page-fault rate is very high
 - Page fault to get page
 - Replace an existing frame
 - But quickly need the replaced frame back
This leads to:
 - Low CPU utilization
 - Operating system thinking that it needs to increase the degree of multiprogramming
 - Another process added to the system

Thrashing ≡ a process is busy swapping pages in and out

Why does demand paging work? Locality model
 - Process migrates from one locality to another
 - Localities may overlap
Why does thrashing occur? Σ size of localities > total memory size
 - Limit effects by using local or priority page replacement

56
Q

Working-Set Model

A

Δ ≡ working-set window ≡ a fixed number of page references
 - Example: 10,000 instructions

WSSi (working set of process Pi) = total number of pages referenced in the most recent Δ (varies in time)
 - if Δ too small, will not encompass entire locality
 - if Δ too large, will encompass several localities
 - if Δ = ∞ ⇒ will encompass entire program

D = Σ WSSi ≡ total demand frames
 - Approximation of locality
 - if D > m ⇒ thrashing
 - Policy: if D > m, then suspend or swap out one of the processes

Approximate with interval timer + a reference bit
 - Example: Δ = 10,000
 - Timer interrupts after every 5,000 time units
 - Keep in memory 2 bits for each page
 - Whenever the timer interrupts, copy and then set the values of all reference bits to 0
 - If one of the bits in memory = 1 ⇒ page in working set
 - Why is this not completely accurate? (we cannot tell when within the interval the reference occurred)
 - Improvement: 10 bits and interrupt every 1,000 time units

Direct relationship between the working set of a process and its page-fault rate
 - Working set changes over time
 - Peaks and valleys over time

57
Q

Page-fault frequency (establish as fix)

A

More direct approach than WSS
Establish an “acceptable” page-fault frequency (PFF) rate and use a local replacement policy
 - If actual rate too low, process loses a frame
 - If actual rate too high, process gains a frame

58
Q

Memory-mapped files

A

Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory

A file is initially read using demand paging
 - A page-sized portion of the file is read from the file system into a physical page
 - Subsequent reads/writes to/from the file are treated as ordinary memory accesses
Simplifies and speeds file access by driving file I/O through memory rather than read() and write() system calls
Also allows several processes to map the same file, allowing the pages in memory to be shared

But when does written data make it to disk?
 - Periodically and/or at file close() time
 - For example, when the pager scans for dirty pages

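A minimal POSIX sketch of memory-mapped file I/O ("data.bin" is a made-up file name; error handling abbreviated):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map the whole file; reads/writes now go through memory. */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'X';                     /* ordinary memory access, no write() */
    msync(p, st.st_size, MS_SYNC);  /* force dirty pages to disk */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```
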
59
Q

Allocating Kernel Memory

A

Treated differently from user memory
Often allocated from a free-memory pool
 - Kernel requests memory for structures of varying sizes
 - Some kernel memory needs to be contiguous, e.g., for device I/O

60
Q

Buddy System (kernel memory strategy)

A

Allocates memory from a fixed-size segment consisting of physically contiguous pages
Memory allocated using a power-of-2 allocator
 - Satisfies requests in units sized as a power of 2
 - Request rounded up to next-highest power of 2
 - When a smaller allocation is needed than is available, current chunk split into two buddies of next-lower power of 2
   - Continue until an appropriately sized chunk is available

For example, assume a 256KB chunk is available and the kernel requests 21KB:
 - Split into AL and AR of 128KB each
 - One further divided into BL and BR of 64KB
 - One further into CL and CR of 32KB each – one used to satisfy the request

Advantage – can quickly coalesce unused chunks into a larger chunk
Disadvantage – fragmentation

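A sketch of just the size-rounding step that drives the splitting (the function name is illustrative):

```c
#include <stdio.h>

/* Every buddy-system request is rounded up to the next power of 2. */
static unsigned round_up_pow2(unsigned n) {
    unsigned size = 1;
    while (size < n)
        size <<= 1;   /* double until the request fits */
    return size;
}

int main(void) {
    /* The card's example: a 21KB request is served from a 32KB chunk. */
    printf("%u KB -> %u KB\n", 21u, round_up_pow2(21));  /* 21 -> 32 */
    return 0;
}
```
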
61
Q

Slab Allocator (kernel memory strategy)

A

Alternate strategy
Slab is one or more physically contiguous pages
Cache consists of one or more slabs
Single cache for each unique kernel data structure
 - Each cache filled with objects – instantiations of the data structure

When a cache is created, it is filled with objects marked as free
When structures are stored, objects are marked as used
If a slab is full of used objects, the next object is allocated from an empty slab
 - If no empty slabs, a new slab is allocated

Benefits include no fragmentation and fast satisfaction of memory requests

62
Q

Prepaging

A

To reduce the large number of page faults that occur at process startup, prepage all or some of the pages a process will need, before they are referenced
But if prepaged pages are unused, I/O and memory were wasted

Assume s pages are prepaged and a fraction α of them are used:
 - Is the cost of the s * α saved page faults greater or less than the cost of prepaging s * (1 - α) unnecessary pages?
 - α near zero ⇒ prepaging loses

63
Q

Page size selection considerations

A

Page size selection must take into consideration:
 - Fragmentation
 - Page table size
 - Resolution
 - I/O overhead
 - Number of page faults
 - Locality
 - TLB size and effectiveness

Always a power of 2, usually in the range 2^12 (4,096 bytes) to 2^22 (4,194,304 bytes)
On average, growing over time

64
Q

TLB Reach

A

TLB Reach – the amount of memory accessible from the TLB
TLB Reach = (TLB Size) × (Page Size)
 - e.g., 64 entries × 4KB pages = 256KB of reach
Ideally, the working set of each process is stored in the TLB
 - Otherwise there is a high degree of page faults

Increase the page size
 - This may lead to an increase in fragmentation, as not all applications require a large page size
Provide multiple page sizes
 - This allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation

65
Q

I/O Interlock

A

I/O interlock – pages must sometimes be locked into memory
Consider I/O: pages that are used for copying a file from a device must be locked from being selected for eviction by a page-replacement algorithm
This locking of pages into memory is called pinning

66
Q

Virtual Machine Overview

A

Fundamental idea – abstract the hardware of a single computer into several different execution environments
 - Similar to the layered approach, but the layer creates a virtual system (virtual machine, or VM) on which operating systems or applications can run

Formal definition: a VMM provides an environment for programs that is essentially identical to the original machine
 - Programs running within that environment show only minor performance decreases
 - The VMM is in complete control of system resources

Several components:
 - Host – underlying hardware system that runs the VMs
 - Virtual machine manager (VMM) or hypervisor – creates and runs virtual machines by providing an interface that is identical to the host (except in the case of paravirtualization)
 - Guest – process provided with a virtual copy of the host; usually an operating system

A single physical machine can run multiple operating systems concurrently, each in its own virtual machine

67
Q

Virtual Machine Types

A

Type 0 hypervisors (an old idea: “partitions”)
 - Hardware-based solutions that provide support for virtual machine creation and management via firmware
 - IBM LPARs and Oracle LDOMs are examples

Type 1 hypervisors (company data-center OSes)
 - Operating-system-like software built to provide virtualization
 - Including VMware ESX, Joyent SmartOS, and Citrix XenServer
 - Also includes general-purpose operating systems that provide standard functions as well as VMM functions
   - Including Microsoft Windows Server with HyperV and RedHat Linux with KVM

Type 2 hypervisors (use at home)
 - Applications that run on standard operating systems but provide VMM features to guest operating systems
 - Including VMware Workstation and Fusion, Parallels Desktop, and Oracle VirtualBox (www.virtualbox.org)

68
Q

Variations of VMMs

A

Paravirtualization – technique in which the guest operating system is modified to work in cooperation with the VMM to optimize performance
 - VMM provides services that the guest must be modified to use
 - Leads to increased performance
 - Less needed as hardware support for VMs grows

Programming-environment virtualization – VMMs do not virtualize real hardware but instead create an optimized virtual system
 - Used by Oracle Java (JVM) and Microsoft .NET
 - Similar to interpreted languages
 - Programs written in Java run in the JVM no matter the underlying system

Emulators – allow applications written for one hardware environment to run on a very different hardware environment, such as a different type of CPU
 - Performance challenge: must translate all guest instructions from the guest CPU to the native CPU
 - Useful when the host system has one architecture and the guest was compiled for another architecture

Application containment – not virtualization at all, but rather provides virtualization-like features by segregating applications from the operating system, making them more secure and manageable
 - e.g., Oracle containers/zones create a virtual layer between the OS and apps

69
Q

Benefits and features of VMMs

A

Host system protected from VMs, VMs protected from each other
 - e.g., a virus is less likely to spread
 - Sharing is still provided, though, via shared file-system volumes and network communication

Freeze or suspend a running VM; snapshot a given state, able to restore back to that state
 - Clone by creating a copy and running both original and copy
 - Great for OS research and better system-development efficiency

Run multiple, different OSes on a single machine
 - Consolidation, app dev, …

Templating – create an OS + application VM, provide it to customers, use it to create multiple instances of that combination

Live migration – move a running VM from one host to another, with no interruption of user access

All these features taken together -> cloud computing

70
Q

VMM implementation

A

Most VMMs implement a virtual CPU (VCPU) to represent the state of the CPU per guest, as the guest believes it to be
 - When a guest is context-switched onto a CPU by the VMM, information from the VCPU is loaded and stored

Whatever the type, a VM has a lifecycle:
 - Created by the VMM
 - Resources assigned to it (number of cores, amount of memory, networking details, storage details)
   - In a type 0 hypervisor, resources are usually dedicated
   - Other types dedicate or share resources, or a mix
 - When no longer needed, the VM can be deleted, freeing resources

Steps are simpler and faster than a physical machine install
 - Can lead to virtual machine sprawl: lots of VMs whose history and state are difficult to track

CPU scheduling
 - Even single-CPU systems act like multiprocessor ones when virtualized
 - One or more virtual CPUs per guest -> overcommitment of CPU

71
Q

Solid State Disks (SSDs)

A

Nonvolatile memory used like a hard drive; many technology variations
 - Can be more reliable than HDDs
 - More expensive per MB
 - May have a shorter life span
 - Less capacity
 - But much faster
 - Buses can be too slow -> connect directly to PCI, for example
 - No moving parts, so no seek time or rotational latency

72
Q

Network-attached storage (NAS)

A

Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus)
 - Remotely attaching to file systems
NFS and CIFS are common protocols
Implemented via remote procedure calls (RPCs) between host and storage, typically over TCP or UDP on an IP network

73
Q

File Concept (types and attributes)

A

Contiguous logical address space

Types:
 - Data: numeric, character, binary
 - Program
Contents defined by file’s creator; many types
 - Consider text file, source file, executable file

Attributes:
 - Name – only information kept in human-readable form
 - Identifier – unique tag (number) identifies file within file system
 - Type – needed for systems that support different types
 - Location – pointer to file location on device
 - Size – current file size
 - Protection – controls who can do reading, writing, executing
 - Time, date, and user identification – data for protection, security, and usage monitoring

Information about files is kept in the directory structure, which is maintained on the disk
Many variations exist, including extended file attributes such as file checksums

74
Q

Disk structure

A

Disk can be subdivided into partitions
Disks or partitions can be RAID-protected against failure
A disk or partition can be used raw – without a file system – or formatted with a file system
Entity containing a file system is known as a volume
Each volume containing a file system also tracks that file system’s info in a device directory or volume table of contents

75
Q

Tree-structured directories (and Linux commands)

A

Absolute or relative path name
 - .. (one level up)
 - . (current directory)
 - / (separates directory levels in a path; a leading / is the root)

Commands:
 - touch filename (create file in this directory)
 - mkdir (create directory)
 - rm (delete file)
 - pwd (print working directory)
 - cd (change directory)

mount:
 - A file system must be mounted before it can be accessed
 - An unmounted file system is mounted at a mount point

76
Q

File sharing

A

Sharing of files on multi-user systems is desirable
Sharing may be done through a protection scheme
On distributed systems, files may be shared across a network
 - Network File System (NFS) is a common distributed file-sharing method

If multi-user system:
 - User IDs identify users, allowing permissions and protections to be per-user
 - Group IDs allow users to be in groups, permitting group access rights
 - Owner of a file / directory
 - Group of a file / directory

Mode of access: read, write, execute
Three classes of users on Unix / Linux (each gets RWX bits):
 a) owner access  7 ⇒ 111
 b) group access  6 ⇒ 110
 c) public access 1 ⇒ 001

Ask the manager to create a group (unique name), say G, and add some users to the group. For a particular file (say game) or subdirectory, define an appropriate access.

77
Q

Logical file system

A

Logical file system manages metadata information
 - Translates file name into file number, file handle, and location by maintaining file control blocks (inodes in UNIX)
 - Inodes contain info about files – ownership, permissions, location
 - Directory management
 - Protection

78
Q

Boot control block

A

Boot control block contains info needed by the system to boot the OS from that volume
 - Needed if the volume contains an OS; usually the first block of the volume

79
Q

Volume control block (superblock, master file table)

A

Volume control block (superblock, master file table) contains volume details
 - Total # of blocks, # of free blocks, block size, free-block pointers or array

Directory structure organizes the files
 - Names and inode numbers; master file table

80
Q

Partitions and Mounting

A

Partition can be a volume containing a file system (“cooked”) or raw – just a sequence of blocks with no file system

Boot block can point to a boot volume, or to a boot loader – a set of blocks that contain enough code to know how to load the kernel from the file system
 - Or a boot management program for multi-OS booting

Root partition contains the OS; other partitions can hold other OSes, other file systems, or be raw
 - Mounted at boot time
 - Other partitions can mount automatically or manually

At mount time, file system consistency is checked
 - Is all metadata correct?
   - If not, fix it, try again
   - If yes, add to mount table, allow access

81
Q

Interrupts

A

CPU interrupt-request line triggered by an I/O device
 - Checked by the processor after each instruction
Interrupt handler receives interrupts
 - Maskable to ignore or delay some interrupts; some are nonmaskable
Interrupt vector dispatches the interrupt to the correct handler
 - Based on priority
 - Interrupt chaining if more than one device is at the same interrupt number
Context switch at start and end

82
Q

Block and Character Devices

A

Block devices include disk drives
 - Commands include read, write, seek
 - Memory-mapped file access possible

Character devices include keyboards, mice, serial ports
 - Commands include get(), put()
 - Libraries layered on top allow line editing