Processes and Threads Flashcards
the process model
Allows the OS to simplify:
- Resource allocation
- Resource accounting
- Resource limiting
OS maintains information on the resources and the internal state of every single process in the system
processes in the process model
- single physical program counter
- each process resides at its own (unique) location in memory
- CPU switches back and forth from process to process
- Each process has own flow of control (own logical program counter)
- Each time we switch processes, we save the program counter of the first process and restore that of the second
- All processes make progress, but only one is actually running at any given instant
concurrent processes
- The CPU can be allocated in turns to different processes
- OS normally offers no timing or ordering guarantees
process hierarchies
- 1 init process
- parent can create many child processes
- tree structure with process groups
- child can have some or all resources from the parent
parent -> child (address space)
- the child is a duplicate of the parent ⇒ it has the same data as the parent (e.g. after fork())
- or a new program is loaded into the child (e.g. after exec())
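In UNIX terms these two options correspond to fork() (duplicate the parent) followed optionally by exec() (load a new program); a minimal illustrative sketch:

    /* fork() duplicates the parent's address space; execlp() then replaces
       the child's image with a new program ("ls" here, purely as an example). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                  /* child starts as a duplicate of the parent */
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);   /* load a new program into the child */
            perror("execlp");                /* reached only if exec fails */
            exit(1);
        }
        wait(NULL);                          /* parent waits for the child to finish */
        printf("parent %d: child %d done\n", (int)getpid(), (int)pid);
        return 0;
    }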
Four principal events that cause processes to be created
- System initialisation
- Execution of a process creation system call by a running process
- A user request to create a new process
- Initiation of a batch job
process termination
all the resources are deallocated after termination
a parent may terminate its child because
- the child has exceeded its usage of the resources allocated to it (the parent must have some mechanism to inspect the state of its children)
- the task assigned to the child is no longer required
- the parent itself is terminating (in some OSs a child is not allowed to continue when its parent terminates)
typical conditions which terminate a process:
- Normal exit (voluntary)
- Error exit (voluntary)
- Fatal error (involuntary)
- Killed by another process (involuntary)
○ in some systems, only the parent may kill its child processes
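A small UNIX sketch of how a parent can observe these outcomes (normal/error exit vs. being killed) via the wait status macros:

    /* waitpid() lets the parent learn how the child terminated:
       exit code for voluntary exits, signal number if it was killed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0)
            exit(42);                                /* child: voluntary (error) exit */

        int status;
        waitpid(pid, &status, 0);                    /* parent collects the exit status */
        if (WIFEXITED(status))
            printf("child exited with code %d\n", WEXITSTATUS(status));
        else if (WIFSIGNALED(status))
            printf("child was killed by signal %d\n", WTERMSIG(status));
        return 0;
    }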
information associated with a process
ID (PID), User (UID), Group (GID)
memory address space
hardware registers (e.g. pc)
open files
signals
Where is information associated with a process stored?
in the operating system’s Process Table
PCB
process control block; consists of:
- process ID (process number)
- process state
- program counter
- registers (CPU registers)
- CPU scheduling information
- memory limits and other memory-management information
- list of open files
- accounting information
- I/O information
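A toy C struct collecting these fields; the names and sizes are purely hypothetical, not any real kernel's PCB layout:

    #include <stdint.h>

    enum proc_state { P_READY, P_RUNNING, P_BLOCKED };

    struct pcb {
        int             pid;                  /* process ID / number        */
        enum proc_state state;                /* process state              */
        uintptr_t       program_counter;      /* saved program counter      */
        uintptr_t       regs[16];             /* saved CPU registers        */
        int             priority;             /* CPU scheduling information */
        uintptr_t       mem_base, mem_limit;  /* memory-management info     */
        int             open_files[16];       /* list of open files         */
        uint64_t        cpu_time_used;        /* accounting information     */
        int             pending_io_ops;       /* I/O information            */
    };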
process states
- Running (actually using the CPU at that instant)
- Ready (runnable; temporarily stopped to let another process run)
- Blocked (unable to run until some external event happens)
process blocks for input
running -> blocked
scheduler picks another process
running -> ready
scheduler picks this process
ready -> running
input becomes available
blocked -> ready
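The same four transitions written out as tiny, purely illustrative C helpers:

    enum state { RUNNING, READY, BLOCKED };

    enum state blocks_for_input(enum state s)    { return s == RUNNING ? BLOCKED : s; }  /* 1 */
    enum state scheduler_preempts(enum state s)  { return s == RUNNING ? READY   : s; }  /* 2 */
    enum state scheduler_picks(enum state s)     { return s == READY   ? RUNNING : s; }  /* 3 */
    enum state input_available(enum state s)     { return s == BLOCKED ? READY   : s; }  /* 4 */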
Which layer handles interrupts and scheduling
the lowest layer of a process-structured operating system
scheduling queues
- job queue: all processes in the system
- ready queue: processes in the main memory and waiting for the CPU
job -> ready -> CPU
context switches
state save (of the old process)
state restore (of the new process)
Why is context switch time pure overhead?
because the system does no useful work while switching
Interrupt vector
- Associated with each I/O device and interrupt line
- Part of the interrupt descriptor table (IDT)
- Contains the start address of an OS-provided internal procedure (interrupt handler)
Who continues process execution after interrupts?
interrupt handler
interrupt types
software, hardware, device (asynchronous), exceptions
what the lowest level of the operating system does when an interrupt occurs
- Hardware stacks program counter, etc.
- Hardware loads new program counter from interrupt vector.
- Assembly language procedure saves registers.
- Assembly language procedure sets up new stack.
- C interrupt service runs (typically reads and buffers input).
- Scheduler decides which process is to run next.
- C procedure returns to the assembly code.
- Assembly language procedure starts up new current process.
Every time an interrupt occurs, who gets control and acts as a mediator?
the scheduler
thread
unit of execution within a process
threading +
+ allows space- and time-efficient parallelism
+ allows simple communication and synchronisation
+ responsiveness
+ resource sharing
+ economy
+ multiprocessor utilisation
dispatcher and worker thread
dispatcher: accepts incoming requests and passes them to workers
worker: handles the requests
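A rough POSIX-threads sketch of this pattern; get_next_request() and handle_request() are hypothetical placeholders for the actual server logic:

    #include <pthread.h>
    #include <stdlib.h>

    int  get_next_request(void);          /* assumed: blocks until a request arrives */
    void handle_request(int req);         /* assumed: does the actual work */

    static void *worker(void *arg) {      /* worker thread: handles one request */
        int req = *(int *)arg;
        free(arg);
        handle_request(req);
        return NULL;
    }

    void dispatcher(void) {               /* dispatcher: accepts requests and hands them off */
        for (;;) {
            int *req = malloc(sizeof *req);
            *req = get_next_request();
            pthread_t tid;
            pthread_create(&tid, NULL, worker, req);
            pthread_detach(tid);          /* don't wait; the worker cleans up on its own */
        }
    }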
threads
parallelism, blocking syscalls
single-threaded process
no parallelism, blocking syscalls
finite-state machine / event-driven process
parallelism, nonblocking syscalls, interrupts
Blocking syscalls
pause execution until completion, causing the process to wait
Nonblocking syscalls
return immediately, allowing the process to continue executing even if the requested operation isn’t ready yet
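A small POSIX sketch contrasting the two on stdin; O_NONBLOCK makes the same read() return immediately with EAGAIN if no data is ready:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[64];

        /* blocking version (commented out): the process would sleep here until input arrives
           ssize_t n = read(STDIN_FILENO, buf, sizeof buf); */

        int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
        fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);   /* make reads nonblocking */

        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            printf("no data ready yet; the process keeps running\n");
        else
            printf("read %zd bytes\n", n);
        return 0;
    }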
user threads
managed without kernel support
kernel threads
managed directly by the kernel
thread model many to one
many user-level threads are mapped onto one kernel thread
if one thread blocks → all threads of the process block
only one thread can access the kernel at a time ⇒ no multiprocessor utilisation
thread model one to one
costly: every user-level thread needs its own kernel thread, which is heavy on the system
thread model many to many
multiple user-level threads to be multiplexed over multiple kernel-level threads
threads in a process
- threads reside in the same address space of a single process ⇒ all threads can access the same memory (fast access)
- All information exchange is via data shared between the threads
- Threads synchronise via simple primitives
- Each thread has its own stack, hardware registers, and state
- Thread table/switch: a lighter process table/switch
- Each thread may call any OS-supported system call on behalf of the process to which it belongs
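A minimal pthreads sketch of two threads sharing the same global data and synchronising with a simple primitive (a mutex); compile with -pthread:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                               /* shared: same address space */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *work(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);                     /* simple synchronisation primitive */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;                                       /* each thread ran on its own stack/registers */
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, work, NULL);
        pthread_create(&t2, NULL, work, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);                /* 200000: both threads updated the same data */
        return 0;
    }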
per process items
- address space
- global variables
- open files
- child processes
- pending alarms
- signals and signal handlers
- accounting information
per thread items
- ID
- program counter
- registers
- stack
- state
user-level thread package
the kernel has no knowledge of the threads → if one thread blocks in the kernel, the kernel cannot switch to another thread of that process
a thread package managed by the kernel
the kernel maintains the threads; if one thread blocks, the kernel can schedule another thread of the process
user threads +/-
+ fast thread switching (no mode switch)
+ scalability, customisable scheduling
- lack of transparency (a blocking syscall blocks the whole process)
- no true parallelism on multiprocessors
event driven servers
Implement the server as a finite-state machine that responds to events, using
asynchronous (nonblocking) system calls
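A rough skeleton of such an event loop using select(); handle_event() is a hypothetical handler that advances one connection's state machine without blocking:

    #include <sys/select.h>

    #define MAX_FD 64
    void handle_event(int fd);            /* assumed: nonblocking work for this connection */

    void event_loop(int listen_fd) {
        fd_set readable;
        for (;;) {
            FD_ZERO(&readable);
            FD_SET(listen_fd, &readable); /* a real server would also add every client fd */
            select(MAX_FD, &readable, NULL, NULL, NULL);   /* wait for the next event */
            for (int fd = 0; fd < MAX_FD; fd++)
                if (FD_ISSET(fd, &readable))
                    handle_event(fd);     /* do a little work, then back to the loop */
        }
    }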
When to schedule?
- Process exits
- Process blocks on I/O, Semaphore, etc.
- When a new process is created
- When an interrupt occurs:
○ I/O, clock, syscall, etc.
preemptive scheduling
the running process can be suspended (“kicked out”) at any time, e.g. when its quantum expires or a higher-priority process becomes ready
non-preemptive scheduling
the process keeps the CPU until it blocks or voluntarily releases it
Categories of Scheduling Algorithms
batch
interactive
real time
scheduling algorithm goals
fairness: giving each process a fair share of the CPU
policy enforcement: seeing that stated policy is carried out
balance: keeping all parts of the system busy
batch systems
Throughput: maximise jobs per hour
Turnaround time: minimise time between submission and termination
CPU utilisation: keeping the CPU busy all the time
interactive systems
response time: respond to requests quickly
proportionality: meet user expectations
policy vs mechanism
Important principle: separate what may be done (policy) from how it is done (mechanism)
Here: the kernel provides the scheduling algorithm (mechanism), but its parameters can be filled in by user processes (policy)
For instance, a parent can give some of its child processes higher priority than others
real time systems
meeting deadlines: avoid losing data
predictability: avoid quality degradation in multimedia systems
timing plays an essential role
soft real time vs. hard real time
periodic and aperiodic tasks
static or dynamic schedules
schedulable formula: m periodic events, where event i occurs with period Pi and needs Ci seconds of CPU time, can only be handled if C1/P1 + C2/P2 + ... + Cm/Pm ≤ 1
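That test as a tiny C function (c[i] and p[i] stand for the CPU time and period of task i):

    /* Schedulability check for m periodic tasks: total CPU load must not exceed 1. */
    int schedulable(int m, const double c[], const double p[]) {
        double load = 0.0;
        for (int i = 0; i < m; i++)
            load += c[i] / p[i];          /* fraction of the CPU that task i consumes */
        return load <= 1.0;
    }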
scheduling criteria - all systems
- Fairness—giving each process a fair share of the CPU
- Policy enforcement—seeing that stated policy is carried out
- Balance—keeping all parts of the system busy
scheduling criteria - batch
- Throughput—maximize jobs per hour
- Turnaround time—minimize time between submission and termination
- CPU utilization—keep the CPU busy all the time
scheduling criteria -interactive
- Response time—respond to requests quickly
- Proportionality—meet users’ expectations
scheduling criteria - real-time
- Meeting deadlines—avoid losing data
- Predictability—avoid quality degradation in multimedia systems
scheduling algorithms for batch systems
FCFS (first-come, first-served)
Shortest job first
Shortest remaining time next
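A tiny shortest-job-first sketch with made-up job lengths, showing how running the short jobs first lowers the average turnaround time:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int jobs[] = {8, 4, 4, 4};                  /* run times; all arrive at t = 0 */
        int n = sizeof jobs / sizeof jobs[0];
        qsort(jobs, n, sizeof jobs[0], cmp);        /* SJF: shortest job first */

        int t = 0, total = 0;
        for (int i = 0; i < n; i++) {
            t += jobs[i];                           /* this job finishes at time t */
            total += t;                             /* so its turnaround time is t */
        }
        printf("average turnaround = %.2f\n", (double)total / n);   /* 11.00 (vs. 14.00 in FCFS order) */
        return 0;
    }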
scheduling algorithms for interactive systems
round-robin
priority
multiple queues
shortest process next
guaranteed scheduling
lottery scheduling
fair-share
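A round-robin sketch with made-up burst times and quantum; cycling over the array stands in for the ready queue:

    #include <stdio.h>

    int main(void) {
        int remaining[] = {5, 3, 8};                /* CPU time still needed per process */
        int n = 3, quantum = 2, t = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] <= 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                t += slice;                         /* process i runs for one quantum (or less) */
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    printf("process %d finishes at t=%d\n", i, t);
                    left--;
                }
            }
        }
        return 0;
    }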
scheduling for real time systems
soft vs hard
periodic vs aperiodic
static or dynamic