U16 system software Flashcards
purpose of an operating system
operating system
provides interface between users and hardware
resources list
- CPU
- memory
- I/O devices
resource management
- focuses on allocating the resources to processes and maximizing their use
- deals with I/O operations
direct memory access (DMA)
- a DMA controller is used to give direct access to memory; it allows hardware to access main memory independently of the CPU
- frees up the CPU to allow it to carry out other tasks
- DMA initiates data transfer while CPU carries out other tasks
- once the data transfer is complete an interrupt signal is sent to the CPU from the DMA
kernel
- responsible for communication between hardware, software and memory
- responsible for process, device and memory management
how does the operating system hide the complexities of the hardware from the user
- provides an interface, e.g. a GUI, which helps the user operate the hardware
- uses device drivers to communicate with and control the hardware
difference between program and process
- program is written code
- process is executing code
multitasking
- to ensure multitasking operates correctly scheduling is used to decide which processes should be carried out
- ensures the best use of computer resources by monitoring the state of each process
- kernel overlaps the execution of each process based on scheduling algorithms
preemptive
if the CPU is allocated to a process and a higher-priority process arrives, the processor is taken away and allocated to the higher-priority process
nonpreemptive
takes no action until the running process terminates (or blocks); the process cannot be forced to give up the processor
features of preemptive
- resources are allocated to a process for a limited time
- the process can be interrupted while it is running
- more flexible form of scheduling
features of nonpreemptive
- once the resources are allocated to a process, the process retains them until it has completed its burst time
- process cannot be interrupted while running
- more rigid form of scheduling
why does an operating system need to use scheduling algorithms
- to allow multitasking to take place
- to ensure fair usage of the processor
- to minimize the amount of time users must wait for their results
- to keep cpu busy at all times
- to ensure fair usage of memory
- to ensure higher priority tasks are executed sooner
ready state
- process is not being executed
- process is in the queue
- waiting for the processor’s attention/time slice
running state
- process is being executed
- process is currently using its allocated processor time/time slice
blocked state
- process is waiting for an event
- so it cannot be executed at the moment
- e.g: input/output
ready -> running transition conditions
- current process no longer running // processor is available
- process was at the head of ready queue // process has highest priority
- OS allocates processor to process so that process can execute
running -> ready transition conditions
- when process is executing it is allocated a time slice
- when time slice is completed, interrupt occurs and process can no longer use processor even though it is capable of further processing
running -> blocked transition conditions
- process is executing when it needs to perform I/O operation and it is placed in blocked state until I/O operation is completed
why is blocked -> running not possible
- when I/O operation completed for process in blocked state
- process is transferred to ready state
- the OS then decides when to allocate the processor to it
why can a process not move directly from ready to blocked state
- to be in blocked state, process must initiate I/O operation
- to initiate operation process must be executing
- if the process is in ready state it cannot be executing
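The allowed and forbidden state changes above can be sketched as a small table of legal transitions; a minimal illustration (all names are mine, not from the cards):

```python
# Legal process-state transitions, as described in the cards above.
ALLOWED = {
    ("ready", "running"),    # scheduler allocates the processor
    ("running", "ready"),    # time slice expires (interrupt)
    ("running", "blocked"),  # process initiates an I/O operation
    ("blocked", "ready"),    # I/O operation completes
}

def transition(current, target):
    """Return the new state, or raise if the move is not permitted."""
    if (current, target) not in ALLOWED:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = "ready"
state = transition(state, "running")   # dispatched by the scheduler
state = transition(state, "blocked")   # waits for I/O
state = transition(state, "ready")     # I/O done
# transition("ready", "blocked") would raise: a ready process is not
# executing, so it cannot initiate an I/O operation.
```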
high level scheduler
- decides which processes are to be loaded from backing store
- into ready queue
low level scheduler
- decides which of the processes in the ready state
- should get use of the processor, i.e. which process is put into the running state
- based on its position in the queue or its priority
first come first served scheduling
- non-preemptive
- based on arrival time
- uses fifo principle
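A rough sketch of FCFS (my own helper, not from the cards), computing how long each process waits in the ready queue:

```python
def fcfs(processes):
    """First-come-first-served: run in arrival order, non-preemptive.
    processes: list of (name, arrival_time, burst_time), sorted by arrival."""
    time, waits = 0, {}
    for name, arrival, burst in processes:
        time = max(time, arrival)      # CPU may idle until the process arrives
        waits[name] = time - arrival   # time spent waiting in the ready queue
        time += burst                  # run to completion before the next one
    return waits

print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]))
# P1 waits 0, P2 waits 4, P3 waits 6
```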
shortest job first scheduling
- non-preemptive
- burst time of a process should be known in advance
- processing requiring the least cpu time is executed first
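A sketch of non-preemptive SJF under the assumption (stated on the card) that burst times are known in advance:

```python
def sjf(processes):
    """Shortest job first: of the processes that have arrived, run the
    one with the least CPU time, to completion (non-preemptive).
    processes: list of (name, arrival, burst)."""
    remaining = list(processes)
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst time first
        remaining.remove(job)
        order.append(job[0])
        time += job[2]                         # run to completion
    return order

print(sjf([("P1", 0, 5), ("P2", 1, 2), ("P3", 2, 1)]))
# P1 runs first (only one arrived), then P3 (burst 1), then P2
```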
shortest remaining time first
- preemptive
- the processes are placed in ready queue as they arrive
- but when a process with a shorter burst time arrives
- existing process is removed
- shorter process is then executed first
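SRTF can be simulated one time unit at a time; picking the smallest remaining burst each tick gives the preemption described above (names are mine):

```python
def srtf(processes):
    """Shortest remaining time first, simulated tick by tick.
    processes: list of (name, arrival, burst); returns finish times."""
    remaining = {name: burst for name, _, burst in processes}
    arrivals = {name: arr for name, arr, _ in processes}
    time, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrivals[n] <= time]
        if not ready:
            time += 1                  # CPU idle, wait for an arrival
            continue
        # A newly arrived shorter job wins here, preempting the old one.
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    return finish

print(srtf([("P1", 0, 8), ("P2", 1, 4)]))
# P2 preempts P1 at time 1 and finishes at 5; P1 finishes at 12
```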
round robin
- preemptive
- a fixed time slice is given to each process, this is known as time quantum
- the ready queue is worked through by giving each process its time slice in turn (if a process completes before the end of its time slice, the next process is started early)
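Round robin maps naturally onto a queue; a minimal sketch assuming all processes arrive at time 0:

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst); quantum is the fixed time slice."""
    queue = deque(processes)
    time, finish = 0, {}
    while queue:
        name, burst = queue.popleft()
        run = min(quantum, burst)      # use at most one time quantum
        time += run
        if burst > run:
            queue.append((name, burst - run))  # back of the ready queue
        else:
            finish[name] = time        # completed within its slice
    return finish

print(round_robin([("P1", 5), ("P2", 3)], quantum=2))
# slices alternate: P2 finishes at time 7, P1 at time 8
```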
interrupt
- a signal sent to the OS by a device connected to the computer; some interrupts originate within the computer itself
- the processor checks for interrupt signals and switches to kernel mode if any of the following are received: device interrupt, exception, software interrupt
IDT
- interrupt dispatch table
- used to determine the correct response to each interrupt
IPL
- interrupt priority level
- numbered (0-31)
interrupt handling
- when an interrupt is received, other interrupts are disabled so the process that deals with the interrupt cannot itself be interrupted
- state of current task/process is saved on the kernel stack
- when the source of interrupt is identified, the priority of the interrupt is checked
- system now jumps to the ISR
- once completed, the state of the interrupted process is restored using the values stored on the kernel stack
- after an interrupt has been handled, interrupts need to be re-enabled so that any further interrupts can be dealt with
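The steps above can be sketched in order; this is purely illustrative (the dispatch table and ISR names are hypothetical), with a dict standing in for the interrupt dispatch table:

```python
kernel_stack = []                  # stands in for the kernel stack
isr_table = {                      # stands in for the interrupt dispatch table
    0: lambda: "handled timer",
    1: lambda: "handled keyboard",
}

def handle_interrupt(irq, cpu_state):
    interrupts_enabled = False     # 1. disable further interrupts
    kernel_stack.append(cpu_state) # 2. save the current process state
    result = isr_table[irq]()      # 3-4. identify the source, jump to the ISR
    restored = kernel_stack.pop()  # 5. restore the interrupted process state
    interrupts_enabled = True      # 6. re-enable interrupts
    return result, restored

print(handle_interrupt(0, {"pc": 100}))
```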
page replacement
- occurs when a requested page is not in memory
- when paging in/out from memory, it is necessary to consider how the computer can decide which pages to replace to allow the requested page to be loaded
- when a new page is requested but is not in memory, a page fault occurs
optimal page replacement
looks forward in time to see which page will not be used for the longest period, and replaces that page in the event of a page fault
longest resident
the page that has been resident in memory for the longest time is swapped out
least used
the page that has been used the least is swapped out
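Longest-resident replacement is just a FIFO over the occupied frames; a sketch (my own function, not from the cards) counting page faults for a reference string:

```python
from collections import deque

def longest_resident(pages, frames):
    """Count page faults using longest-resident (FIFO) replacement.
    pages: sequence of requested page numbers; frames: RAM frame count."""
    memory, faults = deque(), 0
    for page in pages:
        if page in memory:
            continue               # page already in RAM: no fault
        faults += 1                # page fault: page must be loaded
        if len(memory) == frames:
            memory.popleft()       # evict the longest-resident page
        memory.append(page)
    return faults

print(longest_resident([1, 2, 3, 1, 4, 1], frames=3))
# 3 compulsory faults, one hit on 1, then 4 evicts 1 so 1 faults again
```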
internal fragmentation
when a process is allocated more memory than it requires, unused space is left inside the allocated block
paging
- a page is a fixed size block of memory
- since the block size is fixed, it is possible that blocks may not be fully used, and this can lead to internal fragmentation
- the user provides a single value; this means the hardware decides the actual page size
- procedures cannot be separated when using paging
segmentation
- a segment is a variable size block of memory
- memory blocks are a variable size, this increases the risk of external fragmentation
- the user will supply the segment number and segment size
- procedures can be separated when using segmentation
virtual memory
- secondary storage used to extend the RAM
- so cpu can access more memory space than available RAM
- only part of program in use needs to be in RAM
- data is swapped between RAM and disk
how is paging used to manage virtual memory
- divide RAM into frames
- divide virtual memory into blocks of the same size called pages
- frames/pages are a fixed size
- set up a page table to translate logical to physical addresses
- keep track of all free frames
- swap pages in memory with new pages from disk when needed
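The page-table step above amounts to splitting a logical address into page number and offset; a minimal sketch assuming a 1024-byte page size and a made-up page table:

```python
PAGE_SIZE = 1024                   # assumed page/frame size in bytes
page_table = {0: 5, 1: 2, 2: 7}    # page number -> frame number (hypothetical)

def translate(logical_address):
    """Translate a logical address to a physical one via the page table."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        raise LookupError("page fault: page not in RAM, must be swapped in")
    return page_table[page] * PAGE_SIZE + offset

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```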
disk thrashing
- pages are required back in RAM as soon as they are moved to disk
- there is continuous swapping of the same pages
- no useful processing happens
- because pages that are in RAM and on disk are interdependent
- nearly all processing time is used for swapping pages