Operating Systems Flashcards
Why do we need operating systems?
The OS works as an interface between users, applications and hardware. It hides the technical complexities (abstracts them away) and provides the user with interfaces to actually utilise the hardware
Provides a file system which allows data to be organised into files and folders (also permission management, metadata, searching)
It’s a resource manager: responsible for allocating resources to users and processes while ensuring there is no starvation (no single user or process should be able to hog all the resources), and allocates resources according to policy (e.g. CPU timeslices can be allocated first-come-first-served)
What is the definition of an operating system?
Software that acts as an intermediary between a user and hardware, executes programs which serve users’ needs, makes solving user problems easier, makes the computer convenient to use (e.g. by providing a file system) and uses the resources of the system fairly according to policies
Structure of an operating system
Users
Application programs - define the way in which the system resources are used to solve the computing problems of the users
OS - controls and coordinates the use of hardware by applications
Hardware
What is a kernel?
The core of the operating system, a program that is constantly running (in privileged/kernel mode)
Privileged mode/user mode
Programs in privileged/kernel mode can decide other programs’ access to hardware. The OS also prevents applications from interfering with each other.
What is the mode bit?
The bit provided by hardware that indicates whether the code currently executing comes from a user mode program (1) or a kernel mode program (0). Some instructions can only be executed in privileged mode, including managing interrupts, performing I/O, and halting or terminating processes
When does a transition from user mode to kernel mode occur?
When a user mode process asks the operating system to execute a function which requires a privileged instruction (system call)
When an interrupt occurs
When an error condition occurs
When an attempt is made to execute a privileged instruction while in user mode
How does the OS allow programs to communicate (on the same PC or over a network)?
Processes can communicate via shared memory or through message passing (packages moved by the OS across a network)
Error detection
The OS needs to be constantly aware of possible errors (which may occur in the CPU and RAM, I/O devices, user programs)
It needs to take the appropriate action to ensure correct and consistent computing
It also provides debugging facilities
Protection and security
In a multiuser system, the OS should ensure that all access to system resources is controlled, programs do not interfere with each other, users cannot access each other’s personal data
CLI vs GUI
CLI = command line interface (implemented by the kernel, sometimes by system programs such as cmd.exe). Commands are either built in or the names of programs
GUI = graphical user interface: a desktop metaphor with windows, icons, menus and a pointing device
User goals and system goals
User goals: OS should be convenient to use, easy to learn, reliable, safe, fast
System goals: Easy to design, implement and maintain, flexible, error-free, reliable, efficient
How are operating systems implemented?
Lowest levels in assembly, main body in C, system programs in C, C++, Python, shell scripts
Higher level languages are easier to port to other hardware, emulation can allow an OS to run on non-native hardware
What is monolithic kernel architecture and how does UNIX use this?
An operating system architecture where the entire OS runs in kernel mode
The UNIX OS consisted of two parts: system programs and the kernel, which is anything below the system call interface and above the hardware
What is microkernel system architecture and how does Mac OS use this?
The kernel is small and only contains the bare minimum amount of software
Communication between modules such as device drivers, application programs and the file system takes place via message passing
What does a process contain?
Code
Current activity (program counter and content of CPU registers)
Stack (temporary data such as local variables)
Data section (global variables)
Heap (memory allocated while process is running)
What is a process control block and what does it contain?
Data structure which stores information about a process
Process ID, status, CPU registers, priority, memory usage, I/O status
Process states
New - the process is being created
Running - instructions are being executed
Waiting - the process is waiting for some event to occur
Ready - the process is ready to be dispatched to the CPU
Terminated - the process has completed its execution, or some other event causing termination
Transitions between process states
admitted: new=>ready
scheduler dispatch: ready=>running
interrupt: running=>ready
exit: running=>terminated
I/O or event wait: running=>waiting
I/O or event completion: waiting=>ready
How are processes stored in memory?
Each process has a separate memory space, delimited by a base register and a limit register. The CPU has to check every memory access generated in user mode to make sure it’s within these bounds
What does memory consist of?
Memory cells: electronic circuits which store one bit of information.
Logical vs physical addresses
Logical addresses are generated by the CPU, and are also known as virtual addresses
Physical addresses are the ones used in the RAM itself, never known by user programs
What is the memory management unit?
The hardware device that maps virtual to physical addresses. A simple MMU adds the value in the relocation register to the logical address, to make a physical address
How is main memory partitioned?
One partition for kernel processes and another for user processes
How are new processes allocated memory?
When a process arrives, it is allocated memory from a “memory hole” large enough to accommodate it. The OS needs to maintain information about each partition and its allocated memory and free holes.
What is the first-fit strategy for allocating a hole to a new process?
Allocate the first hole which is large enough
What is the best-fit strategy for allocating a hole to a new process?
Allocate the smallest hole which is large enough
Requires you to search the whole list
Produces the smallest leftover hole
What is the worst-fit strategy for allocating a hole to a new process?
Allocate the largest hole
Requires you to search the whole list
Produces the largest leftover hole
Resource sharing: three possible cases
The parent and child processes share all resources
The child process shares a subset of the parent’s resources
The parent and child processes share no resources
Execution: two possible cases
The parent and child execute concurrently
The parent waits until the child terminates
Process termination
The process asks the OS to delete it using the exit() system call
The child can return status data to its parent, which the parent retrieves via the wait() system call
The child process’s resources are de-allocated by the operating system
When do parent processes terminate execution of child processes
The child process has exceeded its allocated resources
The task assigned to child is no longer required
The parent is itself terminating
Zombie and orphan processes
Zombie process: a process that has terminated but whose parent has not (yet) invoked wait(), so its entry remains in the process table
Orphan process: a process whose parent terminated without invoking wait()
Process scheduling
Purpose is to maximise CPU use, and quickly switch processes onto CPU for time sharing
Process scheduler selects process for next execution on CPU
Different types of scheduling queue
Job queue: Set of all processes
Ready queue: Set of all processes residing in main memory, ready and waiting to execute
Device queue: Set of all processes waiting for an I/O device
Process creation
Child duplicates parent’s address space for easier communication with the parent
fork() system call creates a new process which starts executing at the instruction after the parent’s fork() call (fork() returns 0 in the child and the child’s PID in the parent)
Four components of the kernel
Privileged instruction set
Interrupt mechanism
Memory protection
Real-time clock
FLIH manages interrupts
Dispatcher/scheduler switches CPU between processes
Intra-OS communications (e.g. via system bus)
First-level interrupt handler (FLIH)
Determines the source of the interrupt, what happened, and the interrupt’s priority
Initiates servicing of the interrupt (selects which process should be sent to the dispatcher/scheduler)
The dispatcher
Assigns processing resource (CPU time) to processes
Is initiated when:
- A current process cannot continue
- An interrupt changes a process state
- A system call results in the current process not being able to continue (e.g. waiting for an input or output)
What are threads
Unit of execution
Lists a sequence of instructions that execute
Belongs to a process and executes within it
Kernel threads are initiated by the operating system
Multithreading
When an OS supports multiple threads within a single process
Single threading is where the OS does not recognise the separate concept of threads (e.g. MS-DOS): there is one thread per process
Local variables are per thread, allocated on the stack (each thread has its own stack)
Global variables are shared between all threads, allocated in the data section. Need to take into account concurrent access
Dynamically allocated memory can be global or local (programmer chooses)
Benefits of multithreading
Creating, terminating and switching threads is quicker than doing the same thing but with processes
Responsiveness: May allow continued execution if part of process is blocked (if a tab crashes then the whole browser doesn’t crash)
Resource sharing: Threads share resources of process, easier than shared memory or message passing
Thread switching has lower communication overhead than context switching
Scalability: threads of one process can run in parallel on multiple cores
Amdahl’s law
Identifies the performance gain from adding cores to an application that has both serial and parallel components.
S = serial portion
N = number of processing cores
speedup <= 1/(S + (1-S)/N)
Long-term scheduler (job scheduler)
Selects which processes should be brought into the ready queue
Medium-term scheduler
In charge of handling the swapped-out processes
Short-term scheduler
Selects the processes to be executed next and allocated to the CPU
Dispatch latency
The time it takes for the dispatcher to stop one process and start another. The dispatcher saves the current process’s state into PCB, and restores the next process’s state.