Unit 2 Systems Software Flashcards
Operating Systems
The term ‘operating system’ refers to a collection of programs that work together to provide an interface between the user and the computer. Operating systems enable the user to communicate with the computer and perform low-level tasks involving the management of computer memory and resources. Therefore they are essential in devices such as laptops, mobile phones and games consoles. Examples of popular desktop operating systems include Windows and macOS, while popular mobile phone operating systems include iOS and Android.
Operating systems are essential to a computer system as they provide the following features:
- Memory management (paging, segmentation, virtual memory)
- Resource management (scheduling)
- File management (moving, editing, deleting files and folders)
- Input/output management (device drivers)
- Interrupt management
- Utility software (disk defragmenter, backup, formatting etc.)
- Security (firewall)
- User interface
Paging
Paging is when memory is split up into equal-sized sections known as pages, with programs being made up of a certain number of equally sized pages. These can then be swapped between main memory and the hard disk as needed. Pages are typically 4 KB in size.
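A minimal sketch of the idea, assuming a 4 KB page size: a program is divided into fixed-size pages, and a page table maps each page number to a frame. The function names and the example page table below are illustrative, not part of any real operating system.

```python
PAGE_SIZE = 4 * 1024  # 4 KB pages, as assumed above

def split_into_pages(program_size):
    """Return the number of equal-sized pages needed to hold a program."""
    return (program_size + PAGE_SIZE - 1) // PAGE_SIZE  # round up

def translate(logical_address, page_table):
    """Map a logical address to (frame, offset) using a page table."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page_number]   # the frame may be in RAM or swapped to disk
    return frame, offset

# Example: a 10 KB program needs 3 pages (the last one only partly used)
print(split_into_pages(10 * 1024))          # 3
print(translate(5000, {0: 7, 1: 2, 2: 9}))  # page 1 -> frame 2, offset 904
```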
Segmentation
Segmentation is the splitting up of memory into logical divisions, known as segments, which vary in size. These are representative of the structure and logical flow of the program, with segments being allocated to blocks of code such as conditional statements or loops.
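As a rough illustration (the segment names, base addresses and lengths below are made up), each segment can be described by a base address and a length, and an offset is only valid if it falls within the segment:

```python
# Hypothetical segment table: name -> (base address, length in bytes)
segment_table = {
    "main_loop": (0,    1200),   # a loop's code
    "if_branch": (1200, 300),    # a conditional block
    "data":      (1500, 5000),   # program data
}

def segment_address(segment, offset):
    """Translate (segment, offset) into a physical address, checking the limit."""
    base, length = segment_table[segment]
    if offset >= length:
        raise ValueError("offset outside segment")  # protection check
    return base + offset

print(segment_address("if_branch", 50))  # 1250
```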
Virtual Memory
Virtual memory uses a section of the hard drive to act as RAM when the space in main
memory is insufficient to store programs being used. Sections of programs that are not currently in use are temporarily moved into virtual memory through paging, freeing up memory for other programs in RAM.
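A toy sketch of the swapping idea, assuming pages are simply moved between a `ram` list and a `disk` list when RAM is full. A real operating system would use a proper page-replacement policy (for example least recently used); the FIFO eviction here is only for illustration.

```python
RAM_CAPACITY = 4        # assume RAM can hold only 4 pages
ram, disk = [], []      # pages currently in RAM / swapped out to disk

def load_page(page):
    """Bring a page into RAM, swapping an old page out to disk if necessary."""
    if page in ram:
        return                      # already resident
    if len(ram) >= RAM_CAPACITY:
        victim = ram.pop(0)         # evict the oldest page (FIFO for simplicity)
        disk.append(victim)         # "page out" to virtual memory on disk
    if page in disk:
        disk.remove(page)           # "page in" from disk
    ram.append(page)

for p in ["A", "B", "C", "D", "E"]:
    load_page(p)
print(ram, disk)   # ['B', 'C', 'D', 'E'] ['A']
```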
Issues with Virtual Memory
The key issue with using these three techniques is disk thrashing. This is when the computer ‘freezes’, and it occurs as a result of pages being swapped too frequently between the hard disk and main memory. As a result, more time is spent transferring these pages between main memory and the hard disk than is spent actually running the program. This issue becomes progressively worse as virtual memory fills up.
Interrupts
Interrupts are signals generated by software or hardware to indicate to the processor that a process needs attention. Different types of interrupt have different priorities, and the operating system must take how urgent they are into account when allocating processor time. Interrupts are stored in order of their priority within an abstract data structure called a priority queue, or flagged in a special register known as the interrupt register. It is the job of the operating system to ensure interrupts are serviced fairly by the processor through the interrupt service routine.
Interrupt Service Routine
The processor checks the contents of the interrupt register at the end of each Fetch-Decode-Execute cycle. If an interrupt exists that is of a higher priority than the process being executed, the current contents of the special purpose registers in the CPU are temporarily transferred onto a stack. The processor then responds to the interrupt by loading the appropriate interrupt service routine (ISR) into RAM. A flag is set to signal that the ISR has begun; once the interrupt has been serviced, the flag is reset. The interrupt queue is checked again for further interrupts of a higher priority than the process that was originally being executed.
If there are more interrupts to be serviced, the process described above is repeated until all higher-priority interrupts have been serviced. If there are no more interrupts, or the remaining interrupts are of a lower priority than the current process, the contents of the stack are transferred back into the special purpose registers and the Fetch-Decode-Execute cycle resumes as before.
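A simplified sketch of that cycle, assuming pending interrupts sit in a priority queue where a lower number means a higher priority (Python's heapq gives exactly this ordering). The register contents and interrupt names are placeholders.

```python
import heapq

interrupt_queue = []    # priority queue of (priority, name); lower number = more urgent
saved_registers = []    # stack used to preserve register contents while an ISR runs

def raise_interrupt(priority, name):
    heapq.heappush(interrupt_queue, (priority, name))

def fetch_decode_execute(current_priority):
    """One simplified cycle: run the current process, then check for interrupts."""
    # ... fetch, decode and execute one instruction here ...
    while interrupt_queue and interrupt_queue[0][0] < current_priority:
        saved_registers.append({"PC": "...", "ACC": "..."})   # push registers onto the stack
        priority, name = heapq.heappop(interrupt_queue)
        print(f"Servicing ISR for {name}")                    # flag set, ISR runs, flag reset
        saved_registers.pop()                                 # restore registers afterwards

raise_interrupt(1, "keyboard")
raise_interrupt(0, "power failure")
fetch_decode_execute(current_priority=2)   # services power failure first, then keyboard
```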
Scheduling algorithms can either be:
- Pre-emptive
Jobs are actively made to start and stop by the operating system.
For example: Multilevel Feedback Queues, Shortest Remaining Time, Round Robin
- Non-pre-emptive
Once a job is started, it is left alone until it is completed.
For example: First Come First Served, Shortest Job First
Round robin
Each job is given a section of processor time - known as a time slice - within which it is allowed to execute. Once each job in the queue has used its first time slice, the operating system again grants each job an equal slice of processor time. This continues until a job has been completed, at which point it is removed from the queue. Although Round Robin ensures each job is seen to, longer jobs will take a much longer time for completion due to their execution being inefficiently split up into multiple cycles. This algorithm also does not take into account job priority.
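A minimal Round Robin simulation, assuming each job is just a name and a remaining run time, with a time slice of 2 units (both figures are arbitrary):

```python
from collections import deque

def round_robin(jobs, time_slice=2):
    """jobs: list of (name, run_time). Each job gets an equal slice in turn."""
    queue = deque(jobs)
    while queue:
        name, remaining = queue.popleft()
        remaining -= time_slice              # job uses its slice of processor time
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back of the queue
        else:
            print(f"{name} completed")

round_robin([("A", 5), ("B", 2), ("C", 4)])
# B finishes first; A and C are repeatedly paused and resumed
```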
First come first served
Jobs are processed in chronological order by which they entered the queue. Although this is straightforward to implement, FCFS again does not allocate processor time based on priority.
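For comparison, a sketch of First Come First Served using the same (name, run_time) job format as above:

```python
def first_come_first_served(jobs):
    """Jobs run to completion in the order they arrived, regardless of length."""
    elapsed = 0
    for name, run_time in jobs:
        elapsed += run_time
        print(f"{name} completed at time {elapsed}")

first_come_first_served([("A", 5), ("B", 2), ("C", 4)])
# A at 5, B at 7, C at 11 - the short job B waits behind the longer job A
```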
Multilevel feedback queues
This makes use of multiple queues, each of which is ordered based on a different priority. It can be difficult to implement because the scheduler must decide which job to prioritise based on a combination of priorities.
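One way to picture it (a simplified sketch, not a full implementation): several queues are held in priority order, the scheduler always takes a job from the highest non-empty queue, and a job that uses up its whole time slice is demoted to a lower queue. The three-queue set-up and demotion rule below are assumptions for illustration.

```python
from collections import deque

# Hypothetical set-up: queue 0 is highest priority, queue 2 is lowest
queues = [deque(), deque(), deque()]
queues[0].extend([("A", 5), ("B", 1)])      # new jobs start in the top queue

def pick_next_job():
    """Take the front job from the highest-priority non-empty queue."""
    for level, q in enumerate(queues):
        if q:
            return level, q.popleft()
    return None, None

def run_slice(time_slice=2):
    level, job = pick_next_job()
    if job is None:
        return
    name, remaining = job
    remaining -= time_slice
    if remaining > 0:                        # used its whole slice: demote it
        queues[min(level + 1, 2)].append((name, remaining))
    else:
        print(f"{name} completed")

for _ in range(4):
    run_slice()   # A is demoted twice before finishing; B completes quickly
```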
Shortest job first
The queue storing jobs to be processed is ordered according to the time required for completion, with the longest jobs being serviced at the end. This type of scheduling is most suited to batch systems, where shorter jobs are given preference to minimise waiting time. However, it requires the processor to know or calculate how long each job will take, and this is not always possible. There is also a risk of processor starvation for longer jobs if short jobs continue being added to the job queue.
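A sketch of Shortest Job First, assuming the run time of each job is known in advance (which, as noted above, is often not realistic):

```python
def shortest_job_first(jobs):
    """Run jobs to completion in ascending order of their expected run time."""
    elapsed = 0
    for name, run_time in sorted(jobs, key=lambda job: job[1]):
        elapsed += run_time
        print(f"{name} completed at time {elapsed}")

shortest_job_first([("A", 5), ("B", 2), ("C", 4)])
# B at 2, C at 6, A at 11 - short jobs jump ahead of long ones
```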
Shortest remaining time
The queue storing jobs to be processed is ordered according to the time left for completion, with the jobs with the least time to completion being serviced first. Again, there is a risk of processor starvation for longer jobs if short jobs are added to the job queue.
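Shortest Remaining Time can be sketched as the pre-emptive version of the same idea: at every time step the scheduler picks the job with the least time left, so a newly arrived short job can pre-empt a longer running one. The arrival times below are made up for illustration.

```python
def shortest_remaining_time(jobs):
    """jobs: list of [name, arrival_time, remaining_time]; one time unit per step."""
    time = 0
    while any(remaining > 0 for _, _, remaining in jobs):
        ready = [j for j in jobs if j[1] <= time and j[2] > 0]
        if ready:
            job = min(ready, key=lambda j: j[2])   # least time remaining wins
            job[2] -= 1
            if job[2] == 0:
                print(f"{job[0]} completed at time {time + 1}")
        time += 1

shortest_remaining_time([["A", 0, 5], ["B", 1, 2], ["C", 2, 1]])
# A is pre-empted by the shorter jobs B and C, so it finishes last
```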
Distributed
This is a type of operating system which is run across multiple devices, allowing the load to be spread across multiple computer processors when a task is run.
Embedded
Built to perform a small range of specific tasks, this type of operating system is catered towards a specific device. Embedded operating systems are limited in their functionality and hard to update, although they consume significantly less power than other types of OS.
Multi-tasking
Multi-tasking operating systems enable the user to carry out tasks seemingly simultaneously. This is done by using time slicing to switch quickly between programs and applications in memory.
Multi-user
Multiple users make use of one computer, typically a supercomputer, within a multi-user system. Therefore a scheduling algorithm must be used to ensure processor time is shared fairly between jobs. Without a suitable scheduling algorithm, there is a risk of processor starvation, which is when a process is not given adequate processor time to execute and complete.
Real Time
Commonly used in time-critical computer systems, a real time OS is designed to perform a task within a guaranteed time frame. Examples of use include the management of control rods at a nuclear power station or within self-driving cars: any situation where a response within a certain time period is crucial to safety.
BIOS
The Basic Input Output System is the first program that runs when a computer system is switched on. The Program Counter register points to the location of the BIOS upon each start-up of the computer, as the BIOS is responsible for running various key tests before the operating system is loaded into memory.