U16 system software Flashcards
purpose of an operating system
- the operating system provides an interface between the user and the hardware
resources managed by the operating system
- CPU
- memory
- I/O devices
resource management
- focuses on allocating the resources and maximizing their use
- deals with I/O operations
direct memory access (DMA)
- a DMA controller is used to give hardware direct access to main memory, independently of the CPU
- frees up the CPU, allowing it to carry out other tasks
- the DMA controller carries out the data transfer while the CPU performs other tasks
- once the data transfer is complete, the DMA controller sends an interrupt signal to the CPU
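The sequence above can be illustrated with a short simulation. The sketch below is not part of the original cards: a background thread merely stands in for the DMA controller, and the timings and names are invented. It shows the CPU setting up a transfer, carrying on with other work, and receiving the completion "interrupt".

```python
# Illustrative sketch only: a thread plays the role of the DMA controller.
import threading
import time

def dma_transfer(source, destination, on_complete):
    """Stand-in for the DMA controller: moves the data while the CPU loop keeps working."""
    destination.extend(source)   # the "transfer"
    time.sleep(0.1)              # pretend the transfer takes some time
    on_complete()                # raise the completion "interrupt"

def interrupt_handler():
    print("interrupt: DMA transfer complete, CPU can now use the data")

source = [1, 2, 3, 4]
destination = []

# the CPU sets up the transfer, then immediately carries on with other tasks
controller = threading.Thread(target=dma_transfer,
                              args=(source, destination, interrupt_handler))
controller.start()

for task in range(3):
    print(f"CPU carrying out other task {task}")   # CPU is not tied up by the copy

controller.join()
print("data now in memory:", destination)
```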
kernel
- responsible for communication between hardware, software and memory
- responsible for process, device and memory management
how does the operating system hide the complexities of the hardware from the user
- provides an interface, e.g. a GUI, which simplifies use of the hardware
- uses device drivers to communicate with the hardware
difference between program and process
- program is written code
- process is executing code
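A minimal sketch of this distinction, using Python's subprocess module as the example environment (the command is just an illustration): the program is code, and a process only exists once the OS executes it.

```python
# Illustrative sketch: program (code) vs process (that code being executed).
import subprocess
import sys

program = [sys.executable, "-c", "print('hello from a process')"]  # the program: just code

proc = subprocess.Popen(program)   # the OS creates a process to execute the program
print("process id:", proc.pid)     # the process has an identity and state of its own
proc.wait()                        # the process runs and then terminates
```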
multitasking
- to ensure multitasking operates correctly, scheduling is used to decide which process should be carried out next
- ensures the best use of computer resources by monitoring the state of each process
- kernel overlaps the execution of each process based on scheduling algorithms
preemptive
- while the CPU is allocated to one process, a higher-priority process may arrive; the CPU is then reallocated to the higher-priority process
nonpreemptive
- takes no action until the running process has terminated
features of preemptive
- resources are allocated to a process for a limited time
- the process can be interrupted while it is running
- more flexible form of scheduling
features of nonpreemptive
- once the resources are allocated to a process, the process retains them until it has completed its burst time
- process cannot be interrupted while running
- more rigid form of scheduling
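The difference can be shown with a small simulation. The sketch below is illustrative only (the job names and burst times are invented): the non-preemptive version lets each process run to completion, while the preemptive version interrupts a process when its time slice expires and returns it to the ready queue.

```python
# Illustrative sketch: non-preemptive vs preemptive (round robin) scheduling.
from collections import deque

def non_preemptive(bursts):
    """Each process keeps the CPU until its whole burst is finished."""
    order = []
    for name, burst in bursts:
        order.extend([name] * burst)        # runs to completion, cannot be interrupted
    return order

def preemptive_round_robin(bursts, time_slice=2):
    """Each process is interrupted when its time slice expires."""
    queue = deque(bursts)
    order = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(time_slice, remaining)
        order.extend([name] * ran)
        if remaining - ran > 0:
            queue.append((name, remaining - ran))   # back to the ready queue
    return order

jobs = [("A", 4), ("B", 3), ("C", 2)]
print("non-preemptive:", non_preemptive(jobs))
print("preemptive:    ", preemptive_round_robin(jobs))
```

Running this shows A finishing completely before B starts in the non-preemptive case, whereas the preemptive case interleaves A, B and C as their time slices expire.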
why does an operating system need to use scheduling algorithms
- to allow multitasking to take place
- to ensure fair usage of the processor
- to minimize the amount of time users must wait for their results
- to keep the CPU busy at all times
- to ensure fair usage of memory
- to ensure higher priority tasks are executed sooner
ready state
- process is not being executed
- process is in the queue
- waiting for the processor’s attention/time slice
running state
- process is being executed
- process is currently using its allocated processor time/time slice
blocked state
- process is waiting for an event
- so it cannot be executed at the moment
- e.g: input/output
ready -> running transition conditions
- current process no longer running // processor is available
- process was at the head of the ready queue // process has the highest priority
- OS allocates the processor to the process so that it can execute
running -> ready transition conditions
- when process is executing it is allocated a time slice
- when time slice is completed, interrupt occurs and process can no longer use processor even though it is capable of further processing
running -> blocked transition conditions
- process is executing when it needs to perform an I/O operation, so it is placed in the blocked state until the I/O operation is completed
why is blocked -> running not possible
- when the I/O operation is completed for a process in the blocked state
- the process is transferred to the ready state
- the OS then decides when to allocate the processor to it
why can a process not move directly from ready to blocked state
- to be in the blocked state, a process must initiate an I/O operation
- to initiate the I/O operation, the process must be executing
- if the process is in the ready state it cannot be executing
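A compact way to capture this three-state model is a small state machine. The sketch below is illustrative (the class and names are made up): only the four transitions described above are permitted, so attempting ready -> blocked or blocked -> running raises an error.

```python
# Illustrative sketch of the three-state process model and its allowed transitions.
VALID_TRANSITIONS = {
    ("ready", "running"),    # OS allocates the processor / time slice
    ("running", "ready"),    # time slice expires, interrupt occurs
    ("running", "blocked"),  # process initiates an I/O operation
    ("blocked", "ready"),    # I/O operation completes
}

class Process:
    def __init__(self, name):
        self.name = name
        self.state = "ready"             # new processes wait in the ready queue

    def move_to(self, new_state):
        if (self.state, new_state) not in VALID_TRANSITIONS:
            raise ValueError(f"{self.state} -> {new_state} is not allowed")
        self.state = new_state

p = Process("P1")
p.move_to("running")   # dispatched by the low level scheduler
p.move_to("blocked")   # performs I/O
p.move_to("ready")     # I/O complete, back to the ready queue
# p.move_to("blocked") would raise an error here: a ready process is not
# executing, so it cannot initiate the I/O that would block it.
```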
high level scheduler
- decides which processes are to be loaded from backing store into the ready queue
low level scheduler
- decides which of the processes in the ready state should get use of the processor
- i.e. which process is moved into the running state
- based on its position in the queue or its priority
first come first served scheduling
- non-preemptive
- based on arrival time
- uses the FIFO (first in, first out) principle
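A minimal sketch of first come first served scheduling (the process names, arrival times and burst times are invented examples): jobs are served strictly in arrival order and each runs to completion before the next starts, which is why it is non-preemptive.

```python
# Illustrative sketch of first come first served (FCFS) scheduling.
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time); served in arrival order."""
    time = 0
    schedule = []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)                       # CPU may be idle until the process arrives
        schedule.append((name, start, start - arrival))  # (name, start time, waiting time)
        time = start + burst                             # runs to completion (non-preemptive)
    return schedule

jobs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]
for name, start, wait in fcfs(jobs):
    print(f"{name}: starts at {start}, waited {wait}")
```

Note how P3, despite needing only 1 unit of CPU time, waits for the two earlier arrivals to finish: FCFS ignores burst length and priority and considers arrival time only.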