Operating Systems Flashcards

1
Q

define operating system

A

a program that acts as an intermediary between a user and hardware

2
Q

OS goals

A
  • execute user programs
  • make solving user problems easier
  • make the computer system convenient to use
  • use system resources efficiently and fairly (priority and scheduling)
  • manage processes
3
Q

OS is a …. (roles and responsibilities)

A
  • resource allocator: responsible for management of all computer resources
  • kernel: the one program running at all times on the computer
  • control program: controls execution of user programs and operation of I/O devices
4
Q

what is a process

A

a unit of execution that uses its associated resources to execute a collection of instructions, completing a reasonable task

5
Q

OS process management roles

A
  • creating and deleting processes
  • suspending and resuming processes
  • mechanisms for process synchronisation (priority, scheduling)
6
Q

process includes

A
  • code: text section
  • heap: memory allocated during program execution
  • stack: temporary data e.g. local variables, function parameters, return addresses
  • data section: global variables
  • current activity: through PC and CPU register contents
7
Q

information of process stored as:

A

a process control block (PCB). holds info on:

  • unique identifier: PID (process ID)
  • state
  • CPU utilisation
  • CPU scheduling info e.g. priority
  • memory usage
  • other info e.g. owner, description
8
Q

process state (~status)

A
  • new: process is being created
  • running: instructions are being executed
  • waiting: process is waiting for some event to occur
  • ready: process is ready to be dispatched to CPU
  • terminated: process has completed execution or some other process is causing termination
9
Q

process state cycle

A

new –(admitted)–> ready –(scheduler dispatch)–> running
running –(interrupt)–> ready
running –(I/O or event wait)–> waiting –(event completion or I/O)–> ready
running –(exit)–> terminated
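The cycle above can be sketched as a small transition table (a minimal Python sketch; the state and event names follow the arrows on this card, not any real OS API):

```python
# Allowed process state transitions, keyed by (state, event).
TRANSITIONS = {
    ("new", "admitted"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "interrupt"): "ready",
    ("running", "io_or_event_wait"): "waiting",
    ("waiting", "io_or_event_completion"): "ready",
    ("running", "exit"): "terminated",
}

def step(state, event):
    """Return the next state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")

# Walk one process through its lifetime: admitted, dispatched,
# interrupted, re-dispatched, then it exits.
state = "new"
for event in ["admitted", "dispatch", "interrupt", "dispatch", "exit"]:
    state = step(state, event)
print(state)  # terminated
```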

10
Q

process creation

A

a process (the parent) can create a number of child processes, which can in turn create processes themselves, forming a tree of processes.

11
Q

process creation: sharing resources

A

parent and child processes can either:

  • share all resources
  • child shares a subset of the parent's resources
  • share no resources
12
Q

process creation: execution

A

parent and child either:

  • execute concurrently
  • parent waits for child process to terminate
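The "parent waits" option can be sketched with Python's subprocess module standing in for a real fork/wait pair (the child here is just another interpreter printing a message; names are illustrative):

```python
import subprocess
import sys

# Parent creates a child process; parent and child exist concurrently
# until the parent blocks in communicate(), i.e. waits for the child.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child done')"],
    stdout=subprocess.PIPE,
    text=True,
)
output, _ = child.communicate()  # parent waits for child to terminate
result = output.strip()          # data output from child to parent
print(result, child.returncode)  # child done 0
```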
13
Q

process termination

A

process executes final instruction and requests OS to delete it:

  • output data passed from child to parent process
  • child process's resources deallocated by OS
14
Q

why would parent process terminate execution of child process?

A
  • child process exceeded its allocated resources
  • child task no longer necessary
  • parent itself is terminating (cascading termination)
15
Q

aim of kernel

A

provide an environment in which processes can exist

16
Q

four essential components of a kernel

A
  • privileged instruction set
  • interrupt mechanism
  • clock
  • memory protection mechanisms
17
Q

kernel consists of

A
  • FLIH (first level interrupt handler): manage interrupts
  • dispatcher: switches the CPU between processes
  • intra OS communications via system bus
18
Q

what is an interrupt

A

a signal from either hardware or software that an event has occurred, causing a change in process:

e.g. hardware: triggers an interrupt by sending a signal to the CPU via the system bus (e.g. I/O event: printing complete)
software: triggers an interrupt by making a system call asking for some action by the OS (e.g. asking for the time)

19
Q

interrupt routines

A

OS routines executed whenever an interrupt has occurred

20
Q

function of FLIH

A

determine source of interrupt

initiate servicing of interrupt: select suitable dispatcher process

21
Q

privileged instruction set

A

some instructions are accessible only to the OS:

  • managing I/O
  • halting processes
  • interrupt management
22
Q

dual mode

A
  • distinguishes between user-defined code and OS code; prevents the user from executing instructions that could cause harm
    1) user mode 2) kernel mode (also called supervisor/system/privileged mode)
23
Q

when switch from user to kernel mode?

A
  • an interrupt occurs (hardware)
  • an error condition occurs in a user process (software)
  • user process calls on OS to execute a function requiring privileged instruction set
  • attempt to execute privileged instruction while in user mode
24
Q

what does dispatcher do

A

assigns the CPU and other resources to processes

25
Q

when is dispatcher later initiated

A
  • when the current process can no longer continue
  • when the CPU would be better used elsewhere, e.g. after an interrupt changes a process's state, after a system call leaves the current process unable to continue, or after an error causes the process to suspend
26
Q

advantages of multiprogramming

A

convenience- single user can run many tasks
computational speed up- multiple CPUs and cores
info sharing- shared files
modularity- programming e.g. OOP

27
Q

multiprogramming is…

A

when there are many processes in memory concurrently. when one process is waiting, the CPU is switched to one of the processes in the ready queue (which moves to running).
CPU scheduling is the method used to support multiprogramming in process management

28
Q

factors affecting multiprogramming

A

number of CPUs
cores vs CPUs
threading vs hyperthreading

29
Q

thread

A

basic unit of CPU utilisation (like a subset of a process). threads share attributes (e.g. priority) with peer threads but may execute different code

30
Q

benefits of using threading

A

responsiveness: process continues running even if part of it (i.e. a thread) is blocked or carrying out a lengthy operation
resource sharing: threads share the resources of their common process
performance: cooperation of multiple threads in the same job –> increased throughput + improved performance
economy: allocating memory and resources for process creation is costly; threads are cheaper to create
multiprocessor architecture: a single-threaded process can run on only one processor, whereas multiple threads can run in parallel across processors
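Resource sharing between peer threads can be sketched with Python's threading module (a minimal sketch; the thread names and counts are illustrative, and a lock makes the shared update safe):

```python
import threading

shared = []                # data shared by all threads of this process
lock = threading.Lock()

def worker(name, count):
    for i in range(count):
        with lock:         # peer threads share the process's memory
            shared.append((name, i))

# Two peer threads execute the same code over the same shared list.
threads = [threading.Thread(target=worker, args=(f"t{n}", 3)) for n in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()               # wait for both threads to finish

print(len(shared))  # 6
```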

31
Q

thread management in user programs supported by which libraries

A

Java threads

win32 (windows API) threads

32
Q

thread management in OS programs supported by…

A

a kernel module

33
Q

multithreading models:

A
  • one to one
  • many to one
  • many to many
34
Q

multithreading model: one to one

A

one user application thread to one kernel thread

35
Q

multithreading model: many to one

A

many user application threads to one kernel thread. programmer controls number of user application threads

36
Q

multithreading model: many to many

A

N user application threads to <= N kernel threads

defined by capabilities of OS

37
Q

how does Windows schedule threads

A

priority-based, pre-emptive scheduling algorithm. the highest priority thread is always running.
the thread selected by the dispatcher runs until:
- a higher priority thread pre-empts it
- it terminates (e.g. completed, or another event causes it to terminate)
- its time quantum ends
- a system call is made (privileged)

38
Q

Windows priority scheme

A

32-level priority scheme: the higher the level, the higher the priority.
priorities split into 2 classes:
- variable class: levels 1-15, priorities can change
- real-time class: levels 16-31
- level 0: used in memory management

39
Q

when thread’s time quantum runs out

A

thread is interrupted and dispatched to the ready queue. its priority is lowered, but never below the base priority

40
Q

when thread released from wait operation

A

thread is dispatched to the ready queue. its priority is increased depending on what the thread was waiting for.

41
Q

CPU scheduling improves

A
  • utilisation of the CPU
  • speed of the computer's response to its users

42
Q

types of memory

A
  • cache
  • main memory (volatile e.g. RAM)
  • storage memory (non-volatile e.g. flash memory in memory sticks, SSDs)
  • virtual memory
43
Q

primary memory

A

cache and main memory

44
Q

cache is managed by

A

the CPU; the cache hardware is built into the CPU itself

45
Q

similarity between cache and main memory

A

same memory management approaches

46
Q

what is memory made up of

A

memory cells that each store one bit (0/1), depending on whether the cell's voltage reaches a predetermined threshold

47
Q

logical address aka virtual address

A

an address generated by the CPU

48
Q

physical address

A

address on the physical memory

49
Q

memory management unit

A

translates logical addresses to physical addresses

50
Q

user application view of memory

A

deals only with logical addresses; it never sees the real physical addresses

51
Q

memory is split into two partitions

A

kernel processes and user processes in separate partitions. each partition is contained in a single contiguous section of memory

52
Q

when is memory read most efficiently

A

when memory is contiguous, read in sequence and faster

53
Q

memory allocation strategy has to be based on

A
  • holes (block of available memory)
  • holes of various size scattered across memory
  • when a process arrives, it is allocated a hole large enough to accommodate it
54
Q

memory management: OS needs to store information about

A

each partition: its allocated and free holes

55
Q

memory hole allocation strategies

A

-first fit: allocate process to first available hole (best for speed)
-best fit: allocate process to smallest (but large enough) of available holes (best for storage utilisation)
-worst fit: allocate process to largest of available holes
(for best/worst fit, must search entire list)
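The three strategies can be sketched over a list of hole sizes (a minimal sketch; hole sizes are the classic textbook list, and addresses/splitting are ignored):

```python
def first_fit(holes, size):
    # First hole large enough wins: fast, no full scan needed.
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # Smallest adequate hole: best storage utilisation, full scan.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Largest hole: leaves the biggest leftover, full scan.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]  # hole sizes scattered across memory
print(first_fit(holes, 212))  # 1 (500 is the first that fits)
print(best_fit(holes, 212))   # 3 (300 wastes the least)
print(worst_fit(holes, 212))  # 4 (600 is the largest)
```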

56
Q

external fragmentation

A

memory space that is able to satisfy a request but is not contiguous. can reduce these occurrences by compaction: shuffling memory so free space is contiguous/in one block

57
Q

internal fragmentation

A

allocated memory may be slightly larger than the requested memory; the unused difference is internal fragmentation. occurs when memory is allocated in block sizes that are powers of 2 (2, 4, 8, 16 etc.), so the memory allocated is rounded up to the next power of 2.
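The rounding and the resulting waste can be sketched directly (a minimal sketch; the 600-byte request is an illustrative number):

```python
def round_up_pow2(n):
    """Smallest power of 2 that is >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

request = 600                       # bytes requested
allocated = round_up_pow2(request)  # block actually allocated
waste = allocated - request         # internal fragmentation
print(allocated, waste)  # 1024 424
```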

58
Q

describe paging

A

physical memory is split into fixed-size blocks called frames, with a size that is a power of 2 (typically between 512 and 8192 bytes).
logical memory is split into same-sized blocks called pages.
to run a program of n pages, the OS needs to find n free frames: this is paging.

59
Q

page table

A

manages the mapping of logical –> physical addresses, using an address translation

60
Q

logical memory address generated by the CPU consists of

A

p: page number: used as an index into the page table, whose entries contain the base address of each page in physical memory
d: page offset: combined with the base address to define the physical memory address sent to the memory unit
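The p/d split can be sketched with integer division and remainder (a minimal sketch; the page size and page-table contents are made up for illustration):

```python
PAGE_SIZE = 4096                  # must be a power of 2
page_table = {0: 5, 1: 9, 2: 1}   # page number -> frame number

def translate(logical):
    p = logical // PAGE_SIZE      # page number: index into the page table
    d = logical % PAGE_SIZE       # page offset: unchanged by translation
    frame = page_table[p]
    return frame * PAGE_SIZE + d  # physical address sent to the memory unit

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 36868
```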

61
Q

virtual memory address mapping

A

mapping of addresses between main memory and secondary storage: same technique as mapping b/w logical and physical address

62
Q

memory protection bit

A

a valid/invalid protection bit is associated with each frame, attached to each entry in the page table.
valid bit: the associated page is in the process’s logical address space
invalid bit: the associated page is not in the process’s logical address space

63
Q

virtual memory

A

capability of the OS that allows programs to address more memory locations than are available in main memory, freeing the developer from the burden of memory management so they can focus on application development

64
Q

how virtual memory works

A

there is more virtual memory than physical memory. many processes share the same physical address space. only part of a process needs to be in physical memory for it to execute. the OS maintains a free-frame list of physical memory.

65
Q

how does process view address space

A

contiguous block of memory holding objects it needs to execute:

  • code (read only)
  • data (read and write)
  • heap (address space in memory which expands and contracts during execution. it holds data produced during execution)
  • stack (address space which holds data and instructions known at time of procedure call. grows when nested procedure calls are issued and shrinks as procedures return)
66
Q

when is a page brought into memory

A

only when required by a process during execution:
this reduces: the number of pages to be transferred and the number of frames to be allocated to a process
this increases: the number of processes executing and the overall response time for a collection of processes

67
Q

when reference made to page’s address…

A

if invalid reference: abort. if not-in-memory: bring to memory
valid/invalid bit associated with each page entry: if 1- in memory, if 0- not in memory –> page fault trap
initially, the valid/invalid bit is set to 0 on all entries

68
Q

a process requests access to an address within a page

A

Validity of request checked. If invalid request:
- process terminated, all previously allocated main memory cleared, "invalid memory access" message thrown
If valid request:
- if the page has already been allocated a frame in memory, the access proceeds
- else (page fault trap): identify a free frame from the free frame list, read the page into the identified frame, update the process's page table, then restart the instruction interrupted by the page fault trap
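The steps above can be sketched as one function (a minimal sketch; `page_table` mapping a page to a frame or to None for "valid but not in memory", and the free-frame list, are illustrative stand-ins):

```python
def access_page(page, page_table, free_frames):
    if page not in page_table:
        raise MemoryError("invalid memory access")  # invalid request: abort
    if page_table[page] is not None:
        return page_table[page]                     # already in a frame
    frame = free_frames.pop()                       # page fault: take a free frame
    page_table[page] = frame                        # read page in, update table
    return frame

table = {0: 3, 1: None}   # page 1 is valid but not yet in memory
free = [5, 8]
print(access_page(0, table, free))  # 3 (no fault)
print(access_page(1, table, free))  # 8 (fault serviced from free list)
```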

69
Q

page fault rate

A

between 0-1. 0 = no page faults, 1 = every reference is a fault

70
Q

effective access time

A

(1 − p) × memory access time + p × page fault overhead, where p is the probability that a memory reference causes a page fault (page not in memory)
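A quick numerical sketch (the numbers are illustrative: a 200 ns memory access, an 8 ms fault service time, and one fault per 1000 references):

```python
def effective_access_time(p, mem_access_ns, fault_overhead_ns):
    # Weighted average of the fast path (in memory) and the fault path.
    return (1 - p) * mem_access_ns + p * fault_overhead_ns

eat = effective_access_time(0.001, 200, 8_000_000)
print(eat)  # ~8199.8 ns: one fault per 1000 references slows memory ~40x
```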

71
Q

page fault overhead includes

A
  • context switching when relinquishing CPU
  • time waiting in paging device queue
  • time waiting in the ready queue
  • context switching when regaining the CPU
72
Q

no free frame

A
  • when all memory frames available to a process are currently allocated
  • page replacement algorithm decides page in memory but not really in use, and swaps it out –> want to minimise number of page faults
73
Q

basic page replacement

A
  • find location of the desired page on disk
  • find a free frame: if there is a free frame, use it, else use page replacement algorithm to find a victim page. write victim page to disk, update the process’ page table and free frame list accordingly
  • read desired page into the newly freed frame and update the page table and free frame list
  • restart the user process
74
Q

how to evaluate replacement algorithms

A
  • run each algorithm on a particular string of memory references (reference string)
  • compute the number of page faults per algorithm on that string
  • compare the number of page faults: aiming for the lowest possible number of page faults given the reference string
75
Q

FIFO algorithm- page replacement

A
  • easy to implement as FIFO queue
  • associated with each page is the time it was brought into memory
  • no need to track ageing, victim is oldest page.
76
Q

beladys anomaly (suffered by FIFO)

A

for some page replacement algorithms, the page fault rate may increase as the number of allocated frames increases.
usually increasing memory improves memory response and throughput, but some combinations of page demands (reference strings) produce fewer page faults with fewer available frames
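The anomaly can be demonstrated by counting FIFO faults on the classic reference string (a minimal sketch of FIFO replacement; the string is the standard textbook example):

```python
from collections import deque

def fifo_faults(reference_string, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:       # no free frame: evict oldest
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, MORE faults
```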

77
Q

optimal page (OPT) fault algorithm

A

the page that is replaced is the one that will not be needed for the longest period of time –> fewest page faults
this requires knowing the future = impossible to implement

78
Q

Least recently used (LRU) algorithm

A
  • associate each page with time of its last use. victim page = page used least recently
  • stack algorithm
  • stack + counter algorithms can never exhibit Belady’s anomaly: set of pages in memory for n frames is always a subset of the set of pages that would be in memory for n+1 frames
79
Q

counter implementation

A

each page entry has a counter. whenever a page is referenced, the time is recorded in its counter. when a page needs to be replaced, examine the counters to find the least recently used page.

80
Q

stack implementation

A

keep a stack of page numbers, as a doubly linked list. no searching required, LRU page (bottom of stack) is pointed at by tail pointer
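The stack of page numbers can be sketched with an ordered dict standing in for the doubly linked list (a sketch; CPython's OrderedDict is itself backed by a doubly linked list, so no searching is needed):

```python
from collections import OrderedDict

class LRUTracker:
    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.pages = OrderedDict()  # oldest (LRU) entry first

    def reference(self, page):
        """Record a reference; return the victim page if one is evicted."""
        if page in self.pages:
            self.pages.move_to_end(page)  # move to top of the stack
            return None
        victim = None
        if len(self.pages) == self.n_frames:
            victim, _ = self.pages.popitem(last=False)  # LRU at the bottom
        self.pages[page] = True
        return victim

lru = LRUTracker(3)
for p in [1, 2, 3, 1]:   # 1 referenced again, so 2 is now least recent
    lru.reference(p)
print(lru.reference(4))  # 2 evicted
```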

81
Q

LRU approximation algorithms: additional reference bit

A
  • each page associated with a bit, initially 0
  • when a page is referenced, bit is set to 1
  • replace one of the pages with the bit set to 0
  • replace random page when none set to 0
82
Q

issue with additional ref bit (LRU)

A

no way of knowing whether the page is LRU, MRU or in between

83
Q

LRU approximation algorithms: second chance algorithm

A
  • variation of FIFO queue, visualised as a circular queue
  • page enters main memory, joins tail of queue and ref bit set to 1
  • when the queue is full and a page needs selecting: if the page's ref bit = 0, it is the victim page; else (bit = 1) set the ref bit to 0 and move on to the next page in the queue
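The victim search can be sketched over a circular queue of (page, ref bit) pairs (a minimal sketch; the queue contents are illustrative):

```python
def second_chance_victim(pages):
    """pages: list of [page, ref_bit]; clears bits as it scans,
    returns the victim page (first page found with ref bit 0)."""
    i = 0
    while True:
        page, bit = pages[i]
        if bit == 0:
            return page           # ref bit 0: this is the victim
        pages[i][1] = 0           # give the page a second chance
        i = (i + 1) % len(pages)  # move on round the circular queue

queue = [[7, 1], [0, 1], [1, 0], [2, 1]]
print(second_chance_victim(queue))  # 1 (first page with ref bit 0)
```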
84
Q

different ways of allocating frames

A
  • global replacement
  • local replacement
  • fixed allocation: equal allocation/proportional allocation
  • priority allocation
85
Q

allocating frames: global replacement + local replacement

A

global: select replacement frame from the set of all frames; one process can take a frame from another
local: select replacement frame from the process's own set of allocated frames

86
Q

allocating frames: fixed allocation

A

equal allocation

proportional allocation: allocate according to size of process
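Proportional allocation gives process i of size s_i roughly a_i = s_i / S × m of the m available frames (a minimal sketch; remainders are simply truncated here, so a frame or two may go unallocated):

```python
def proportional_allocation(sizes, total_frames):
    total_size = sum(sizes)
    # Each process gets frames in proportion to its size.
    return [s * total_frames // total_size for s in sizes]

# 62 frames shared by a 10-page and a 127-page process.
print(proportional_allocation([10, 127], 62))  # [4, 57]
```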

87
Q

allocating frames: priority allocation

A

when a page fault occurs, select a replacement frame from a process with a lower priority

88
Q

thrashing

A

when a process spends more time paging than doing useful work.

page-fault rate is high if process doesn’t have enough pages

89
Q

pre-paging

A

bring into memory at one time all the pages that will be needed: for systems using the working-set model

  • keep with each process the list of pages in its working set
  • if a process is suspended, remember its working set
  • when the process resumes: bring the entire working set back into memory before restarting the process
90
Q

point of pre-paging

A

to prevent thrashing; but the cost of pre-paging must be less than the cost of servicing the page faults it avoids

91
Q

page-fault frequency scheme

A

approach to prevent thrashing: establish an 'acceptable' page fault rate.
rate too low? the process loses frames
rate too high? the process gains frames