Operating Systems Flashcards

1
Q

Why do we need operating systems?

A

The OS works as an interface between users, applications and hardware. It hides the technical complexities (abstracts them away) and provides the user with interfaces to actually utilise the hardware

Provides a file system which allows data to be organised into files and folders (also permission management, metadata, searching)

It’s a resource manager: responsible for allocating resources to users and processes while ensuring there is no starvation (every user should be able to access resources without a single user hogging them all), and for allocating resources according to policy (e.g. CPU timeslices can be allocated first-come-first-served)

2
Q

What is the definition of an operating system?

A

Software that acts as an intermediary between a user and hardware. It executes programs which serve users’ needs, makes solving user problems easier, makes the computer convenient to use (e.g. by providing a file system), and uses the resources of the system fairly according to policies

3
Q

Structure of an operating system

A

Users
Application programs - define the way in which the system resources are used to solve the computing problems of the users
OS - controls and coordinates the use of hardware by applications
Hardware

4
Q

What is a kernel?

A

The core of the operating system, a program that is constantly running (in privileged/kernel mode)

5
Q

Privileged mode/user mode

A

Programs in privileged/kernel mode can control other programs’ access to hardware. Because applications run in user mode, the OS can also prevent them from interfering with each other.

6
Q

What is the mode bit?

A

The bit provided by hardware that determines whether code is coming from user mode programs (1) or kernel mode programs (0). Some instructions can only be executed in privileged mode, including managing interrupts, I/O, and terminating processes

7
Q

When does a transition from user mode to kernel mode occur?

A

When a user mode process asks the operating system to execute a function which requires a privileged instruction (system call)
When an interrupt occurs
When an error condition occurs
When an attempt is made to execute a privileged instruction while in user mode

8
Q

How does the OS allow programs to communicate (on the same PC or over a network)?

A

Processes can communicate via shared memory or through message passing (messages moved between processes by the OS, possibly across a network)

9
Q

Error detection

A

The OS needs to be constantly aware of possible errors (which may occur in the CPU and RAM, I/O devices, user programs)
It needs to take the appropriate action to ensure correct and consistent computing
It also provides debugging facilities

10
Q

Protection and security

A

In a multiuser system, the OS should ensure that all access to system resources is controlled, that programs do not interfere with each other, and that users cannot access each other’s personal data

11
Q

CLI vs GUI

A

CLI = command line interface (implemented by the kernel, sometimes by system programs such as cmd.exe). Commands are either built in or the names of programs

GUI = graphical user interface: the user interacts through windows, icons and menus with a pointing device rather than typed commands

12
Q

User goals and system goals

A

User goals: OS should be convenient to use, easy to learn, reliable, safe, fast

System goals: Easy to design, implement and maintain, flexible, error-free, reliable, efficient

13
Q

How are operating systems implemented?

A

Lowest levels in assembly; main body in C; system programs in C, C++, Python, shell scripts

Higher-level languages are easier to port to other hardware, and emulation can allow an OS to run on non-native hardware

14
Q

What is monolithic kernel architecture and how does UNIX use this?

A

An operating system architecture where the entire OS runs in kernel mode

The UNIX OS consisted of two parts: system programs and the kernel, which is anything below the system call interface and above the hardware

15
Q

What is microkernel system architecture and how does Mac OS use this?

A

The kernel is small and only contains the bare minimum amount of software

Communication between modules such as device drivers, application programs and the file system takes place via message passing

16
Q

What does a process contain?

A

Code
Current activity (program counter and content of CPU registers)
Stack (temporary data such as local variables)
Data section (global variables)
Heap (memory allocated while process is running)

17
Q

What is a process control block and what does it contain?

A

Data structure which stores information about a process
Process ID, status, CPU registers, priority, memory usage, I/O status

18
Q

Process states

A

New - the process is being created
Running - instructions are being executed
Waiting - the process is waiting for some event to occur
Ready - the process is ready to be dispatched to the CPU
Terminated - the process has completed its execution, or some other event causing termination

19
Q

Transitions between process states

A

admitted: new=>ready
scheduler dispatch: ready=>running
interrupt: running=>ready
exit: running=>terminated
I/O or event wait: running=>waiting
I/O or event completion: waiting=>ready

20
Q

How are processes stored in memory?

A

Each process has a separate memory space, delimited by a base register and a limit register. The CPU has to check every memory access generated in user mode to make sure it’s within these bounds

21
Q

What does memory consist of?

A

Memory cells: electronic circuits which store one bit of information.

22
Q

Logical vs physical addresses

A

Logical addresses are generated by the CPU, and are also known as virtual addresses
Physical addresses are the ones used in the RAM itself, never known by user programs

23
Q

What is the memory management unit?

A

The hardware device that maps virtual to physical addresses. A simple MMU adds the value in the relocation register to the logical address, to make a physical address
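
For example, if the relocation register holds 14000, a logical address of 346 maps to physical address 14000 + 346 = 14346.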

24
Q

How is main memory partitioned?

A

One partition for kernel processes and another for user processes

25
Q

How are new processes allocated memory?

A

When a process arrives, it is allocated memory from a “memory hole” large enough to accommodate it. The OS needs to maintain information about each partition and its allocated memory and free holes.

26
Q

What is the first-fit strategy for allocating a hole to a new process?

A

Allocate the first hole which is large enough

27
Q

What is the best-fit strategy for allocating a hole to a new process?

A

Allocate the smallest hole which is large enough
Requires you to search the whole list
Produces the smallest leftover hole

28
Q

What is the worst-fit strategy for allocating a hole to a new process?

A

Allocate the largest hole
Requires you to search the whole list
Produces the largest leftover hole

29
Q

Resource sharing: three possible cases

A

The parent and child processes share all resources
The child process shares a subset of the parent’s resources
The parent and child processes share no resources

30
Q

Execution: two possible cases

A

The parent and child execute concurrently
The parent waits until the child terminates

31
Q

Process termination

A

The process asks the OS to delete it using the exit() system call
Status data is returned from the child process to the parent (collected via the parent’s wait() call)
The child process’s resources are de-allocated by the operating system

32
Q

When do parent processes terminate execution of child processes

A

The child process has exceeded its allocated resources
The task assigned to child is no longer required
The parent is itself terminating

33
Q

Zombie and orphan processes

A

Zombie process: a process that has terminated but whose parent has not (yet) invoked wait(), so its entry remains in the process table
Orphan process: a process whose parent terminated without invoking wait()

34
Q

Process scheduling

A

Purpose is to maximise CPU use, and quickly switch processes onto CPU for time sharing
Process scheduler selects process for next execution on CPU

35
Q

Different types of scheduling queue

A

Job queue: Set of all processes
Ready queue: Set of all processes residing in main memory, ready and waiting to execute
Device queue: Set of all processes waiting for an I/O device

36
Q

Process creation

A

Child duplicates parent’s address space for easier communication with the parent
fork() system call creates new process which starts executing from the line below the parent’s fork() call
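
A minimal C sketch (assuming a POSIX system) tying fork(), wait() and exit() together: the child sees fork() return 0, while the parent sees the child’s PID and collects its exit status:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();            /* both processes continue from here */
        if (pid < 0) {                 /* fork failed */
            perror("fork");
            return 1;
        } else if (pid == 0) {         /* child: fork() returned 0 */
            printf("child %d running\n", (int)getpid());
            exit(42);                  /* status collected by the parent */
        } else {                       /* parent: fork() returned child's PID */
            int status;
            wait(&status);             /* block until the child terminates */
            if (WIFEXITED(status))
                printf("child exited with %d\n", WEXITSTATUS(status));
        }
        return 0;
    }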

37
Q

Four components of the kernel

A

Privileged instruction set
Interrupt mechanism
Memory protection
Real-time clock

FLIH manages interrupts
Dispatcher/scheduler switches CPU between processes
Intra-OS communications (e.g. via system bus)

38
Q

First-level interrupt handler (FLIH)

A

Determines the source of an interrupt and what happened, and prioritises the processes involved
Initiates servicing of the interrupt (selects which process should be sent to the dispatcher/scheduler)

39
Q

The dispatcher

A

Assigns processing resource (CPU time) to processes

Is initiated when:
- A current process cannot continue
- An interrupt changes a process state
- A system call results in the current process not being able to continue (e.g. waiting for an input or output)

40
Q

What are threads

A

Unit of execution
Executes a sequence of instructions
Belongs to a process and executes within it
Kernel threads are initiated by the operating system

41
Q

Multithreading

A

When an OS supports multiple threads within a single process

Single threading is where the OS does not recognise the separate concept of threads (e.g. MS-DOS)
One thread per process

Local variables are per thread, allocated on the stack (each thread has its own stack)

Global variables are shared between all threads, allocated in the data section. Need to take into account concurrent access

Dynamically allocated memory can be global or local (programmer chooses)
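
A minimal pthreads sketch (assumes a POSIX system, compiled with -pthread; the counter variable is illustrative), showing that local variables live on each thread’s own stack while globals are shared and need guarding:

    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;                          /* global: shared by all threads */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        (void)arg;
        int local = 0;                        /* local: on this thread's own stack */
        for (int i = 0; i < 100000; i++) {
            local++;
            pthread_mutex_lock(&lock);        /* concurrent access must be guarded */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        printf("local = %d\n", local);        /* always 100000 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);    /* 200000, thanks to the mutex */
        return 0;
    }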

42
Q

Benefits of multithreading

A

Creating, terminating and switching threads is quicker than the equivalent operations on processes
Responsiveness: may allow continued execution if part of a process is blocked (if one tab crashes, the whole browser doesn’t crash)
Resource sharing: threads share the resources of their process, which is easier than shared memory or message passing

Thread switching has lower overhead than full process context switching
A multithreaded process can take advantage of multiple cores

43
Q

Amdahl’s law

A

Identifies the performance gain from adding additional cores to an application that has both serial and parallel components.

S = serial portion
N = processing cores

speedup <= 1/(S + (1-S)/N)
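
For example (figures assumed for illustration): if 25% of a program is serial (S = 0.25), then on N = 4 cores the speedup is at most 1/(0.25 + 0.75/4) ≈ 2.29, and even with unlimited cores it can never exceed 1/S = 4.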

44
Q

Long-term scheduler (job scheduler)

A

Selects which processes should be brought into the ready queue

45
Q

Medium-term scheduler

A

In charge of handling the swapped-out processes

46
Q

Short-term scheduler

A

Selects the process to be executed next and allocates the CPU to it

47
Q

Dispatch latency

A

The time it takes for the dispatcher to stop one process and start another. The dispatcher saves the current process’s state into PCB, and restores the next process’s state.

48
Q

Optimisation criteria for schedulers

A

Maximum CPU utilisation
Maximum throughput (number of processes that complete their execution per unit time)
Minimum turnaround time (time taken to execute a particular process)
Minimum waiting time (total amount of time a process has been waiting in the ready queue)
Minimum response time (time from a request’s submission until it first receives a response, i.e. access to the CPU)

49
Q

First Come, First Served scheduling algorithm

A

The first process to request the CPU is allocated the CPU until it completes. A simple algorithm, but it may result in a convoy effect where short processes have to wait for a long process to finish

50
Q

Shortest Job First

A

Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time (FCFS if they have the same length)
Gives minimum average waiting time
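
Worked example (burst times assumed for illustration): processes with burst times 6, 8, 7 and 3 all arrive at time 0. SJF runs them in order 3, 6, 7, 8, giving waiting times of 0, 3, 9 and 16, so the average waiting time is (0+3+9+16)/4 = 7.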

51
Q

Calculating waiting time and turnaround time

A

Waiting time: (start time - arrival time)
Turnaround time: (end time - arrival time)
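
Worked example (times assumed; this form of the waiting-time formula holds when the process runs in a single stretch): a process arrives at time 2, starts at time 5 and finishes at time 9. Waiting time = 5 - 2 = 3; turnaround time = 9 - 2 = 7. Under preemptive scheduling, waiting time is instead the turnaround time minus the total CPU burst time.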

52
Q

Shortest Remaining Time First

A

If a new process arrives in the ready queue with a CPU burst length less than the remaining time of the executing process, then switch to the new process and then back to the old process when it’s done
Pre-emptive version of SJF

53
Q

Round Robin

A

Each process gets a small unit of CPU time: a time slice usually 10-100 milliseconds
After the time slice has elapsed, the process is pre-empted and added to the end of the ready queue

54
Q

Round Robin performance

A

Performance depends on the size of the time slice
If the time slice is extremely large, the RR policy is the same as the FCFS policy
If the time slice is extremely small, a lot of time will be wasted on context switching
The rule of thumb is that 80% of CPU bursts should be shorter than the time quantum

55
Q

Priority scheduling

A

An algorithm associates a priority number with each process, and CPU time is allocated to the process with the highest priority (smallest number)

56
Q

What is starvation and when can it occur

A

When processes are given CPU time strictly in order of priority, low-priority processes may never execute (starvation). Solutions include aging (increasing a process’s priority as time goes on) or combining priority scheduling with Round Robin (running processes of equal priority in RR fashion)

57
Q

Multilevel queue

A

The ready queue is partitioned into separate queues, each with its own scheduling algorithm. Scheduling must be undertaken between queues: e.g. serving all from the foreground then from the background, or allowing each queue a certain percentage of CPU time

58
Q

Multilevel queue scheduling

A

Separate queue for each priority level
The problem with this is that it’s not flexible as processes cannot change queue

59
Q

Multilevel feedback queue

A

An adaptation of multilevel queue scheduling where processes can move between the various queues, allowing aging to be implemented. Multilevel feedback queue schedulers vary in the number of queues used, the scheduling algorithm used for each queue, the method used for promoting/demoting processes, and the method for deciding which queue a process enters

60
Q

How are scheduling algorithms evaluated?

A

Deterministic modelling: taking a predetermined case and analysing the performance of each algorithm in this case

Little’s formula: n = LW
n = avg. long-term queue length
L = avg. arrival rate for new processes
W = avg. waiting time in queue
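
For example (figures assumed): if processes arrive at L = 7 per second and wait W = 2 seconds on average, then on average n = 7 × 2 = 14 processes are queued.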

Simulation: using a complex model to determine average performance

Post-implementation: Examining the running operating system

61
Q

How does Windows scheduling work

A

The dispatcher uses a queue for each scheduling priority and traverses the set of queues looking for the highest-priority thread to run. When no thread is found in any queue, the dispatcher selects the idle thread for execution.
When a thread’s time slice runs out, it is interrupted, moved to the ready queue, and its priority is lowered
When a thread is released from a wait operation, it is dispatched to the ready queue and its priority is raised

62
Q

Types of memory

A

Cache memory (invisible to operating system)
Main memory (RAM)
Storage memory
Virtual memory

63
Q

Structure of a process’s memory space

A

Base and limit registers define the logical address space. The CPU must check every memory access generated in user mode to be sure it’s between (base) and (base+limit)

64
Q

Logical vs physical address

A

Logical/virtual address: Generated by the CPU
Physical address: Address seen by the memory controller, never seen by programs

65
Q

Memory Management Unit

A

Hardware device which maps virtual to physical addresses. A simple method would be adding the value in the relocation register to every address generated by a user process, to get the physical address.

66
Q

How is main memory partitioned

A

Area for kernel processes and area for user processes

67
Q

How does the MMU convert logical to physical addresses

A
  1. Extract the page number p from the logical address
  2. From the page table, extract the frame number f which is stored at index p
  3. Replace the page number p in the logical address with the frame number f
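
A minimal C sketch of these three steps, assuming 4 KiB (4096-byte) pages and a toy page_table array (both of which are assumptions for illustration):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                       /* 4096-byte pages => 12 offset bits */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    /* Toy page table: page_table[p] holds the frame number f for page p. */
    static const uint32_t page_table[] = {5, 9, 2, 7};

    uint32_t translate(uint32_t logical) {
        uint32_t p = logical >> PAGE_SHIFT;     /* 1. extract page number p */
        uint32_t d = logical & (PAGE_SIZE - 1); /*    and the page offset d */
        uint32_t f = page_table[p];             /* 2. frame number at index p */
        return (f << PAGE_SHIFT) | d;           /* 3. replace p with f, keep d */
    }

    int main(void) {
        /* Page 1, offset 0x123 -> frame 9, same offset. */
        printf("0x%x\n", translate(0x1123));    /* prints 0x9123 */
        return 0;
    }
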
68
Q

Contiguous allocation

A

It’s quicker to read memory contiguously without skipping. Memory allocation strategies must take into account holes (blocks of available memory) and how to allocate memory to newly-arrived processes.

69
Q

First-fit algorithm for memory hole allocation

A

Allocate the first hole which is big enough

70
Q

Best-fit algorithm for memory hole allocation

A

Allocate the smallest hole which is big enough
Produces the smallest leftover hole
Must search the entire list of holes

71
Q

Worst-fit algorithm for memory hole allocation

A

Allocates the largest hole
Produces the largest leftover hole
Must search entire list of holes
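
A minimal C sketch of the first-fit and best-fit searches over a hypothetical array of hole sizes, returning the index of the chosen hole (or -1 if none fits):

    #include <stdio.h>

    int first_fit(const int holes[], int n, int request) {
        for (int i = 0; i < n; i++)
            if (holes[i] >= request)
                return i;                   /* first hole big enough: stop here */
        return -1;
    }

    int best_fit(const int holes[], int n, int request) {
        int best = -1;
        for (int i = 0; i < n; i++)         /* must scan the whole list */
            if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
                best = i;                   /* smallest hole that still fits */
        return best;
    }

    int main(void) {
        int holes[] = {100, 500, 200, 300, 600};
        printf("%d\n", first_fit(holes, 5, 212));  /* 1: the 500-byte hole */
        printf("%d\n", best_fit(holes, 5, 212));   /* 3: the 300-byte hole */
        return 0;
    }

Worst-fit would instead pick the 600-byte hole (index 4), leaving the largest leftover.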

72
Q

External fragmentation

A

There is enough total free memory to satisfy a request, but the free space is not contiguous

73
Q

Internal fragmentation

A

Where memory is allocated successfully but is slightly larger than the requested amount of memory (e.g. request for 27 bytes of memory => allocated to 32-byte block leaving 5 bytes unused)

74
Q

Fragmentation

A

Statistical analysis of First Fit shows that, given N allocated blocks, roughly another 0.5N blocks are lost to fragmentation, making about 1/3 of memory unavailable (the 50-percent rule). We reduce external fragmentation by compaction (shuffling memory contents so all free memory is together in one large block)

75
Q

Frames and pages

A

Physical memory is divided into fixed-size blocks called frames (size is typically between 512 and 8192 bytes)
Logical memory is divided into blocks of the same size called pages

76
Q

Paging

A

To run a program whose size is n pages, we need to find n free frames and load the program. A page table is set up to manage the mapping of logical to physical addresses.
Logical addresses are divided into two parts:
page number (p): index into the page table (which contains the physical base address of each page)
page offset (d): the displacement within the page, appended to the base address to form the physical address

77
Q

Implementing the page table

A

The page table is kept in memory, and a page-table base register points to it
With this method every data/instruction access requires two memory accesses, one for the page table and one for the data/instruction

78
Q

Translation look-aside buffer (TLB)

A

When a logical address is generated by the CPU, the MMU first checks if its page number is present in the TLB. If it is, its frame number is immediately available. If not (TLB miss), the frame number must be obtained from the page table and the pair added to the TLB, overwriting a pre-existing entry if necessary.

79
Q

Effective Access Time

A

a = Hit ratio (percentage of times a page number is found in the TLB)
m = memory access time

Effective Access Time = m(2-a)
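
Where this comes from (assuming the TLB lookup itself takes negligible time): a TLB hit costs one memory access, m, while a miss costs two, 2m (page table, then data), so Effective Access Time = am + (1-a)(2m) = m(2-a). For example, with m = 100ns and a = 0.8, it is 100 × 1.2 = 120ns.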

80
Q

Implementing memory protection with page tables

A

A valid-invalid bit is attached to each entry in the page table indicating whether or not the associated page is in the process’s logical address space

81
Q

What is virtual memory?

A

The OS lets programs address more memory locations than are actually provided in main memory

The logical address space is larger than the physical address space

Pages are swapped (paged) in and out of main memory

The physical address space is shared by several processes

Only part of the process needs to be in the physical address space for execution

A free frame list of the physical address space is maintained by the operating system.

82
Q

Virtual Address Space

A

Logical view of how a process is stored in memory
Each process views the address space as a contiguous block of memory
Consists of:
Code (read-only)
Data (read and write)
Heap: address space in memory that holds data produced during execution of a process (expands and contracts)
Stack: address space which holds data known at the time of a procedure call (parameters, return addresses, local variables). The contents of the stack grow as the process issues nested procedure calls and shrink as the called procedures return.

83
Q

Demand paging

A

Bringing a page into memory only when it’s required by the process during its execution
Reduces: the number of pages to be transferred and the number of frames allocated to a process
Increases: the number of processes that can execute, improving overall response time for a collection of processes

84
Q

How does demand paging work

A

During address translation, if the valid/invalid bit in the referenced page’s table entry is 0, the page is not in memory and a page fault trap occurs

The OS then loads the page into memory and restarts the faulting instruction

85
Q

How is a memory access handled

A

A process requests access to an address within a page
The OS checks the validity of the process’s access to that address
If it’s invalid, the process is terminated and its memory is cleared
If it’s valid and the page has been allocated a frame in main memory, the access proceeds normally
If it’s valid but a page fault trap has occurred, the OS will:
- Identify a free frame from the list
- Read the page from the backing store into the identified frame
- Update the process’s page table
- Restart the instruction interrupted by the page fault trap

86
Q

Performance of demand paging

A

p = page fault rate (0 <= p <= 1)
E = effective access time

E = (1-p)(memory access) + p(page fault overhead + swap page out + swap page in)

Page fault overhead = context switching when relinquishing the CPU, time waiting in the paging device queue, time waiting in the ready queue, and context switching when regaining the CPU
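
Worked example (figures assumed): with a memory access time of 200ns, a total page-fault service time of 8ms and p = 0.001, E = 0.999 × 200 + 0.001 × 8,000,000 ≈ 8,200ns; one fault per thousand accesses slows memory down by a factor of about 40.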

87
Q

What happens when all main memory frames available to a process are already allocated?

A

Page replacement: Find a page in memory which isn’t really in use, and swap it out
The page which is swapped is determined by an algorithm

88
Q

Basic page replacement

A

Find the location of the desired page on the disk
If there is a free frame, use it
Otherwise, use a page-replacement algorithm to select a victim page
Write the victim’s page to the disk, update the process’s page table and the free frame list accordingly
Read the desired page into the newly-freed frame, update the process’s page table and restart the user process

89
Q

First in first out (FIFO) algorithm

A

Associate each page with the time it was brought into memory, and let the oldest page be the victim page.
Example:
Max frames in memory: 3, Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
Page frames: [7], [7, 0], [7, 0, 1], [2, 0, 1], [2, 0, 1], [2, 3, 1], etc.
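
A minimal C sketch of FIFO replacement for the example above (3 frames; the circular next index stands in for the FIFO queue, since pages are always replaced in insertion order):

    #include <stdio.h>

    #define NFRAMES 3

    int main(void) {
        int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
        int n = sizeof refs / sizeof refs[0];
        int frames[NFRAMES] = {-1, -1, -1};     /* -1 marks an empty frame */
        int next = 0, faults = 0;               /* next = oldest frame, the victim */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < NFRAMES; j++)
                if (frames[j] == refs[i]) hit = 1;
            if (!hit) {                         /* page fault: evict the oldest page */
                frames[next] = refs[i];
                next = (next + 1) % NFRAMES;
                faults++;
            }
        }
        printf("%d page faults\n", faults);     /* 15 for this reference string */
        return 0;
    }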

90
Q

Belady’s anomaly

A

“For some page fault algorithms, the page fault rate may increase as the number of allocated frames increases”
FIFO suffers from this problem

91
Q

Optimal algorithm for page replacement

A

Replace the page which will not be needed for the longest period of time. This algorithm always results in the fewest page faults but is also impossible to implement as it requires seeing into the future

92
Q

Least Recently Used algorithm for page replacement

A

Associate with each page the time it was last used
The victim page will be the one that has not been used for the longest period of time
LRU belongs to a class of page replacement algorithms known as stack algorithms.
These algorithms can never exhibit Belady’s anomaly as a set of pages in memory for n frames is always a subset of the set of pages that would be in memory with n+1 frames.

93
Q

How is the LRU algorithm implemented?

A

Counter implementation: every page entry has a counter, and every time that page is referenced the time is recorded

Stack implementation: Keep a stack of page numbers in a doubly-linked form. When a page is referenced, move it to the top. The tail pointer points to the bottom of the stack (the LRU page)

94
Q

Counting-based page replacement algorithms

A

Keep a counter of the number of references that have been made to each page

Least Frequently Used: Replaces page with smallest count

Most Frequently Used: Replaces page with the greatest count

95
Q

What is thrashing?

A

Thrashing is when a process is spending more time paging than doing useful work, usually because it doesn’t have enough pages. This leads to low CPU utilisation and the OS thinking that it needs to increase the degree of multiprogramming.

96
Q

What is prepaging?

A

Bringing a group of pages into memory at once, before they are individually demanded

The pages in each process’s working set are tracked. If a process is suspended, we remember its working set, and when it resumes we automatically bring the entire working set back into memory before restarting the process.

97
Q

Ways of allocating frames

A

Global replacement: Selecting a replacement frame from the set of all frames (one process can take a frame from another)

Local replacement: Selecting from only the process’s own set of allocated frames

Fixed allocation: allocate a fixed number of frames per process (e.g. an equal split, or proportional to the size of the process)
Priority allocation: Select a victim frame from a process with a lower priority number

98
Q

Page-Fault Frequency Scheme

A

An alternative approach to prevent thrashing. It establishes an acceptable page-fault rate, and then gives or takes frames from processes depending on their actual page fault rate.

99
Q

Disk performance parameters

A

Seek time: The time it takes to move the head to the target track
Rotational delay/latency: The time taken for the target sector to move under the head
Access time = Seek time + latency

100
Q

Disk scheduling

A

The OS maintains a queue of disk I/O requests. An idle disk can service a request immediately; on a busy disk, requests must queue, and scheduling optimisations only make sense when such a queue exists. Several algorithms exist to schedule the servicing of disk I/O requests; we illustrate them with a request queue (e.g. 98, 183, 37, 122, 14, 124, 65, 67, with the head at cylinder 53)

101
Q

FCFS disk scheduling algorithm

A

The requests in the queue are fulfilled from oldest to newest
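
Using the queue above (head at 53), the head visits 98, 183, 37, 122, 14, 124, 65, 67 in that order, for a total head movement of 45+85+146+85+108+110+59+2 = 640 cylinders.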

102
Q

SSTF disk scheduling algorithm

A

Shortest Seek Time First
This algorithm selects the request with the minimum seek time from the current head position. This algorithm may cause starvation of some requests.

103
Q

SCAN/Elevator disk scheduling algorithm

A

The disk arm moves towards one end of the disk, servicing requests along the way, before “bouncing back” towards the other end and servicing more requests along the way.

104
Q

C-SCAN disk scheduling algorithm

A

The head moves from its starting point towards one end of the disk, servicing requests along the way. On reaching that end it jumps straight back to the other end without servicing any requests, then sweeps forward again, servicing requests until it returns to where it started.
This treats the cylinders as a circular list.

105
Q

(C-)LOOK disk scheduling algorithm

A

A variant of (C-)SCAN where the arm only goes as far as the last request in each direction before bouncing back.

106
Q

Choosing a disk scheduling algorithm

A

SSTF is common
(C-)SCAN performs better for systems that place a heavy load on the disk
The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary
Either SSTF or LOOK is a reasonable choice for the default algorithm

107
Q

Storage array

A

A collection of disks
Ports connect hosts to array
Memory and controlling software
RAID, hot spares, hot swap
Shared storage => More efficiency

108
Q

Storage Area Network

A

Common in large storage environments. Multiple hosts attached to multiple storage arrays - flexible

109
Q

Network-Attached Storage (NAS)

A

Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus)
Implemented via remote procedure calls (RPCs) between host and storage, typically over TCP or UDP on an IP network

110
Q

Redundant Array of Inexpensive Disks (RAID)

A

Multiple disk drives provide reliability via redundancy
RAID schemes improve performance and improve the reliability of the storage system by storing redundant data
Mirroring or shadowing keeps duplicate of each disk
Block interleaved parity uses much less redundancy

111
Q

What is deadlock?

A

A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause

112
Q

System model for deadlock characterisation

A

System contains resources of types R_1, R_2, …, R_m (e.g. CPU cycles, memory space, I/O devices)
Each resource type R_i has W_i instances
Each process utilises a resource as follows: request, use, release

113
Q

Deadlock characterisation

A

Mutual exclusion: only one process at a time can use a resource

Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes

No preemption: A resource can be released only voluntarily by the process holding it, after that process has completed its task

Circular wait: There exists a set {P_0, P_1, …, P_n} of waiting processes such that P_0 is waiting for a resource that is held by P_1, P_1 is waiting for a resource that is held by P_2, …, and P_n is waiting for a resource held by P_0

114
Q

Resource allocation graph

A

A set of vertices V and a set of edges E
V is partitioned into two types: P = {P1, P2, …, P_n} consisting of all processes, and R = {R1, R2, …, R_m} consisting of all resource types in the system
Request edge = directed edge P_i => R_j
Assignment edge = directed edge R_j => P_i

115
Q

How to determine if a deadlock can/will occur?

A

If the graph contains no cycles => no deadlock
If the graph contains a cycle and there’s only one instance per resource type => deadlock
Otherwise => Possibility of deadlock

116
Q

Dealing with deadlocks

A
  1. Pretend there is no problem (reasonable if deadlocks are rare and the cost of prevention is high)
  2. Deadlock prevention: ensure at least one of the four conditions (mutual exclusion, hold and wait, no preemption, circular wait) cannot hold; e.g. for hold and wait, guarantee that whenever a process requests a resource, it does not hold any other resources
  3. Deadlock avoidance: each process must declare the maximum number of resources of each type that it may need
117
Q

Safe state

A

When a process requests an available resource, system must decide if immediate allocation leaves the system in a safe state

This is when there exists a sequence <P1, P2, …, P_n> of all the processes such that for each P_i, the resources that P_i can still request can be satisfied by currently available resources + resources held by all P_j, j < i

118
Q

RAG scheme

A

Claim edge P_i => R_j (dashed line) indicates P_i may request R_j. This changes to a request edge when the process actually requests the resource. A request edge is converted to an assignment edge when the resource is allocated to the process, and turns back into a claim edge when the resource is released.

119
Q

RAG algorithm

A

If P_i requests R_j, the request can only be granted if converting the request edge to an assignment edge does not form a cycle

120
Q

Banker’s algorithm

A

Let n = number of processes, m = number of resource types

Available: vector of length m, represents number of available resources of type R_j
Max: n x m matrix where the cell (i,j) represents the max number of resources of type R_j that could be accessed by P_i
Allocation: n x m matrix where the cell (i,j) represents the number of resources of type R_j currently allocated to P_i
Need: Max - Allocation

121
Q

Banker’s safety algorithm

A
  1. Let Work and Finish be vectors of length m and n respectively
    Work = Available
    Finish[i] = false for i from 0 to n-1
  2. Find an i such that both Finish[i] = false and Need_i <= Work
    If none exists, go to step 4
  3. Work = Work + Allocation_i
    Finish[i] = true
    Go to step 2
  4. If Finish[i] == true for all i, then the system is in a safe state

A request from P_i can only be considered if Request_i <= Available

122
Q

Banker’s Resource-Request Algorithm

A

Request_i = request vector for process P_i. Request_i[j] = k means process P_i wants k instances of resource type R_j
1. If Request_i <= Need_i go to step 2, otherwise raise error condition
2. If Request_i <= Available, go to step 3, otherwise P_i must wait since resources are not available
3. Pretend to allocate resources to P_i by modifying the state as follows:
Available = Available - Request_i
Allocation_i = Allocation_i + Request_i
Need_i = Need_i - Request_i
4. If this is safe, allocate the resources, otherwise P_i must wait (undo the changes from step 3)
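
A compact C sketch of the safety algorithm and the resource-request algorithm together (NPROC, NRES and the global matrices are illustrative stand-ins for the real system state):

    #include <stdbool.h>
    #include <string.h>

    #define NPROC 5
    #define NRES  3

    int Available[NRES];
    int Allocation[NPROC][NRES];
    int Need[NPROC][NRES];                      /* Need = Max - Allocation */

    /* Safety algorithm: returns true if the current state is safe. */
    bool is_safe(void) {
        int Work[NRES];                         /* step 1: Work = Available */
        bool Finish[NPROC] = {false};
        memcpy(Work, Available, sizeof Work);

        for (;;) {
            int i, j;
            for (i = 0; i < NPROC; i++) {       /* step 2: find P_i with      */
                if (Finish[i]) continue;        /* Finish[i] == false and     */
                for (j = 0; j < NRES && Need[i][j] <= Work[j]; j++)
                    ;
                if (j == NRES) break;           /* Need_i <= Work             */
            }
            if (i == NPROC) break;              /* none found: go to step 4   */
            for (j = 0; j < NRES; j++)          /* step 3: P_i can finish and */
                Work[j] += Allocation[i][j];    /* release what it holds      */
            Finish[i] = true;                   /* then repeat step 2         */
        }
        for (int i = 0; i < NPROC; i++)         /* step 4: safe iff all could */
            if (!Finish[i]) return false;       /* finish                     */
        return true;
    }

    /* Resource-request algorithm: grant P_i's request only if the state
       stays safe; returns false when the request must wait (or is invalid). */
    bool request(int i, const int Request[NRES]) {
        for (int j = 0; j < NRES; j++) {
            if (Request[j] > Need[i][j]) return false;   /* error: exceeds max */
            if (Request[j] > Available[j]) return false; /* not available: wait */
        }
        for (int j = 0; j < NRES; j++) {        /* pretend to allocate */
            Available[j] -= Request[j];
            Allocation[i][j] += Request[j];
            Need[i][j] -= Request[j];
        }
        if (is_safe()) return true;             /* safe: keep the allocation */
        for (int j = 0; j < NRES; j++) {        /* unsafe: undo, P_i must wait */
            Available[j] += Request[j];
            Allocation[i][j] -= Request[j];
            Need[i][j] += Request[j];
        }
        return false;
    }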