Unit 2 - Operating System Flashcards
what is an operating system
software needed to manage communication with computer hardware
how is an operating system booted
the boot loader in ROM loads the OS into RAM when the computer is switched on
functions of an operating system
- user interface
- memory management
- interrupt service routines
- processor scheduling
- backing store management
- management of all I/Os
why does an operating system have memory management
programs and their data need to be loaded into RAM before they can be used. the OS must manage the allocation of RAM to different programs, as there may not be sufficient RAM for all desired processes to be completely loaded at once
what is paging
- available memory is divided into fixed-size pieces called pages
- each page has an address
- a process loaded into RAM is allocated sufficient pages, but those pages may not be contiguous (next to each other) in physical terms
- a page table maps between the logical memory locations and the physical memory locations
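a minimal sketch (invented page size and table contents, not any real OS) of how a page table might map a logical address onto a physical one:

```python
PAGE_SIZE = 1024  # illustrative page size in bytes

# page table for one process: logical page number -> physical frame number
# (the frames are deliberately non-contiguous)
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE   # which page the address falls in
    offset = logical_address % PAGE_SIZE         # position within that page
    frame_number = page_table[page_number]       # look up the physical frame
    return frame_number * PAGE_SIZE + offset

print(translate(2100))  # page 2, offset 52 -> frame 7 -> physical address 7220
```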
what is segmentation
- a logical method of memory allocation where memory is divided into segments which can be of different lengths
- segments can relate to parts of a program, for example a particular function or subroutine may occupy a segment
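a minimal sketch, assuming a simple segment table holding a base address and limit (length) for each variable-sized segment - the addresses and segment contents are invented for illustration:

```python
# segment number -> (base address, limit) for one process
segment_table = {
    0: (4000, 1200),  # e.g. main program code
    1: (9000, 400),   # e.g. a particular subroutine
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise ValueError("illegal access outside the segment")
    return base + offset

print(translate(1, 100))  # physical address 9100
```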
what is virtual memory
- a computer has a fixed amount of RAM so the demand for memory will often exceed this amount
- a designated area of secondary storage is used as if it were main memory
- some of the pages of a current process are stored in virtual memory until they are needed, at which point they are swapped into RAM
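a toy sketch of the idea, with invented page contents - a page held in the designated area of secondary storage is only brought into RAM when it is actually accessed:

```python
ram = {}                             # page number -> contents held in RAM
swap = {3: "data for page 3"}        # pages currently held in virtual memory

def access(page):
    if page not in ram:              # page fault: the page is not in RAM
        ram[page] = swap.pop(page)   # swap it in from secondary storage
    return ram[page]

print(access(3))
```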
what is disk thrashing
- if many processes are running and the computer has insufficient RAM, lots of time is spent swapping pages in and out of virtual memory
- repeatedly swapping pages can noticeably slow down the computer
- this is known as disk thrashing
what are some advantages of paging
- simplified memory management: programs use logical addresses, so they do not have to deal with physical memory addresses
- efficient memory usage
what are interrupts
- it is vital that the CPU can be interrupted when necessary
- a signal generated by a source such as an I/O device or system software that causes a break in the execution of the current routine
- control passes to another routine in such a way that the original routine can be resumed after the interrupt
what are some examples of interrupts
- an I/O device sends an interrupt signal
- the printer runs out of paper
- an error occurs in a program
- a scheduled interrupt from the internal clock
- power failure
how does the interrupt service routine work
- the CPU checks at the end of each fetch-decode-execute cycle whether there are any interrupts to be processed
- when an interrupt is detected, the processor stops fetching instructions and instead pushes the current content of its registers onto a stack
- the CPU uses an interrupt service routine to process the interrupt
- when processing has finished, the values can be popped from the stack and reloaded into the CPU
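a simplified sketch of this save/restore sequence (real CPUs do this in hardware; the register names and the ISR here are invented):

```python
stack = []
registers = {"PC": 100, "ACC": 42}   # current contents of the CPU's registers

def handle_interrupt(isr):
    stack.append(dict(registers))    # push the current register contents
    isr()                            # run the interrupt service routine
    registers.update(stack.pop())    # pop the values back into the registers

def printer_isr():
    registers["PC"] = 9000           # the ISR is free to use the registers
    print("servicing printer interrupt")

handle_interrupt(printer_isr)
print(registers)                     # original values restored: PC is 100 again
```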
what is interrupt priority
- interrupts have different priorities, and will be processed in order of priority
- interrupts can themselves be interrupted if the new interrupt has a higher priority
- if a higher priority interrupt occurs whilst an interrupt is being processed, the original interrupt’s registers will be pushed onto the stack as well
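a toy sketch using a priority queue to show pending interrupts being serviced in priority order (the sources and priority values are made up; a lower number means a higher priority here):

```python
import heapq

pending = []
heapq.heappush(pending, (3, "printer out of paper"))
heapq.heappush(pending, (1, "power failure"))
heapq.heappush(pending, (2, "timer interrupt"))

while pending:
    priority, source = heapq.heappop(pending)   # highest priority first
    print(f"servicing {source} (priority {priority})")
```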
advantages of interrupts
- simplifies I/O operations by allowing devices to communicate directly with the CPU
- increases the efficiency of the CPU as there is less waiting time between tasks
- enables multitasking by allowing the CPU to switch between tasks
why does an operating system need processor scheduling
a single CPU can only process instructions for one application at a time. the OS must schedule when each application can use the CPU, which gives the illusion of multi-tasking
what is the aim of processor scheduling
- to provide an acceptable response time to all users
- to maximise the time the CPU is usefully engaged
- to ensure fairness on a multi-user system
name some examples of how processor scheduling can be organised
- round robin
- first come first served
- shortest remaining time
- shortest job first
- multi-level feedback queues
how does round robin work
each job is allocated (by FIFO) a time slice during which it can use the CPU’s resources. if the job has not been completed by the end of its time slice, it is moved to the back of the queue and the next job is allocated a time slice
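a minimal round robin sketch - jobs sit in a FIFO queue and each gets a fixed time slice, with unfinished jobs going to the back of the queue (the time slice and burst times are made-up values):

```python
from collections import deque

TIME_SLICE = 2
queue = deque([("A", 5), ("B", 3), ("C", 1)])   # (job, remaining time)

while queue:
    job, remaining = queue.popleft()
    run = min(TIME_SLICE, remaining)
    print(f"{job} runs for {run}")
    if remaining - run > 0:
        queue.append((job, remaining - run))    # not finished: back of the queue
    else:
        print(f"{job} completed")
```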
advantages of using round robin
- each process gets an equal share of CPU time
- cyclic = less starvation
disadvantages of round robin
- setting the time slice too short increases the overhead and lowers CPU efficiency
- setting the time slice too long may cause a poor response to short processes and RR degrades to FCFS
- average waiting time = long
how does first come first served work
the first job to arrive is executed until it completes, then the CPU moves on to the next job in the queue
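a minimal FCFS sketch - jobs run to completion in arrival order (the burst times are illustrative):

```python
jobs = [("A", 4), ("B", 2), ("C", 6)]   # (job, burst time), in arrival order

clock = 0
for name, burst in jobs:
    print(f"{name} starts at {clock}, completes at {clock + burst}")
    clock += burst
```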
advantages of FCFS
- simple and easy to understand
- fairness = all processes have an equal opportunity to run
- every process will eventually get a chance to execute (if the system has all the resources)
- low scheduling overhead = no frequent context switches or complex scheduling decisions
- well-suited for long-running processes or workloads without time constraints
disadvantages of FCFS
- processes with shorter execution times suffer
- favours CPU-bound processes over I/O-bound processes
- average waiting time = long
- lower CPU and device utilization
- not suited for multiprogramming systems
how does shortest remaining time work
the time to completion is estimated when each new job arrives. the job with the shortest remaining time to completion is executed. it is pre-emptive, meaning that the process being executed can be stopped in order to run another job with a smaller remaining time
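a toy tick-by-tick sketch of shortest remaining time - at each tick the arrived job with the least remaining time runs, so a newly arrived shorter job pre-empts the current one (arrival and burst times are invented):

```python
jobs = {"A": {"arrival": 0, "remaining": 7},
        "B": {"arrival": 2, "remaining": 4},
        "C": {"arrival": 4, "remaining": 1}}

clock = 0
while any(j["remaining"] > 0 for j in jobs.values()):
    # jobs that have arrived and still have work left
    ready = {n: j for n, j in jobs.items()
             if j["arrival"] <= clock and j["remaining"] > 0}
    if ready:                                    # idle tick if nothing has arrived yet
        name = min(ready, key=lambda n: ready[n]["remaining"])
        jobs[name]["remaining"] -= 1
        print(f"t={clock}: running {name}")
    clock += 1
```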
advantages of shortest remaining time
- shortest jobs are favoured
- gives the minimum average waiting time for a given set of processes
disadvantages of shortest remaining time
starvation - the large jobs with the longest time to completion may never be executed as new jobs constantly flow in
how does shortest job first work
the total execution time of each job is estimated by the user. the waiting job with the smallest total execution time is executed when the current job is completed. (not pre-emptive)
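a minimal non-pre-emptive SJF sketch - whenever the CPU becomes free, the waiting job with the smallest estimated execution time runs to completion (the estimates are invented):

```python
waiting = [("A", 6), ("B", 2), ("C", 4)]   # (job, estimated total execution time)

clock = 0
while waiting:
    waiting.sort(key=lambda job: job[1])   # shortest waiting job first
    name, burst = waiting.pop(0)
    print(f"{name} starts at {clock}, completes at {clock + burst}")
    clock += burst
```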
disadvantages of shortest job first
- starvation if processes keep coming
- it cannot be implemented at the level of short-term CPU scheduling, because the total execution time of a job cannot be known in advance
how do multi-level feedback queues work
multiple queues are created with different priority levels. if a job uses too much CPU time it is moved to a lower priority queue. processes can be moved to a higher priority queue if they have waited a long time
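a toy sketch with two priority levels - a job that uses up its whole time slice is demoted to the lower-priority queue (promotion of long-waiting jobs is left out for brevity; all values are made up):

```python
from collections import deque

TIME_SLICE = 2
high = deque([("A", 5), ("B", 1)])   # new jobs start in the high-priority queue
low = deque()

while high or low:
    queue = high if high else low    # serve the high-priority queue first
    job, remaining = queue.popleft()
    run = min(TIME_SLICE, remaining)
    print(f"{job} runs for {run}")
    if remaining - run > 0:
        low.append((job, remaining - run))   # used its full slice: demote it
    else:
        print(f"{job} completed")
```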
advantages of multi-level feedback queues
- provides a good mechanism where the relative importance of each process may be precisely defined
- scheduling allows for the assignment of different priorities to processes based on their importance, urgency, or other criteria
disadvantages of multi-level feedback queues
- if a high-priority process uses a lot of CPU time, lower-priority processes may starve and be postponed indefinitely
- it can be difficult to decide which priority level each process should be assigned
how does a distributed operating system work
an OS where the software is spread over a collection of independent, networked, communicating and physically separate nodes. it coordinates the processing of a single job across multiple computers
- the user can run a program that uses data or resources from any other computer
- coordinated by the OS passing instructions between computers
advantages of a distributed OS
- the user can access more computational power with the illusion of working with a single processor
- no need for extra user training or for programs to be written differently
disadvantages of distributed OS
- the programmer has no control over the task distribution as this is entirely handled by OS
how does a multi-tasking OS work
a single processor can appear to carry out more than one task simultaneously by scheduling processor time between tasks
- some systems use a powerful computer = mainframe
- lots of users with their own terminals access the mainframe’s CPU and each get a time slice
- each terminal is also running multiple processes