03 - Scheduling Flashcards
PROCESS SCHEDULING
(FCFS) First-Come, First-Served
Processes are executed in the order they arrive in the ready queue
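A minimal non-preemptive FCFS sketch in Python (the process list, field names, and burst values are hypothetical, just for illustration):

def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time), ordered by arrival."""
    time, schedule = 0, []
    for name, arrival, burst in processes:
        start = max(time, arrival)      # CPU may sit idle until the process arrives
        finish = start + burst
        schedule.append((name, start, finish))
        time = finish
    return schedule

print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]))
# [('P1', 0, 5), ('P2', 5, 8), ('P3', 8, 16)]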
PROCESS SCHEDULING
(SJF) Shortest Job First
Processes with the shortest execution time are executed first
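A minimal non-preemptive SJF sketch in Python (hypothetical process tuples; the scheduler always picks the shortest burst among the processes that have already arrived):

def sjf(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    remaining = list(processes)
    time, schedule = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                         # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])  # shortest burst first
        remaining.remove(job)
        schedule.append((job[0], time, time + job[2]))
        time += job[2]
    return schedule

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
# [('P1', 0, 7), ('P3', 7, 8), ('P2', 8, 12)]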
PROCESS SCHEDULING
(RR) Round Robin
Processes are executed in a cyclic manner
Each process is assigned a fixed time quantum
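A minimal Round Robin sketch in Python, assuming all processes are in the ready queue at time 0 (names, bursts, and the quantum are hypothetical):

from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst_time)."""
    queue, time, timeline = deque(processes), 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)              # run for at most one quantum
        timeline.append((name, time, time + run))
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # unfinished: back to the tail of the queue
    return timeline

print(round_robin([("P1", 5), ("P2", 3), ("P3", 8)], quantum=2))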
PROCESS SCHEDULING
Priority Scheduling
Processes are assigned priorities; the CPU is allocated to the highest-priority process
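A minimal non-preemptive priority scheduling sketch in Python, assuming a lower number means higher priority and all processes are ready at time 0 (values are hypothetical):

import heapq

def priority_schedule(processes):
    """processes: list of (priority, name, burst_time)."""
    heap = list(processes)
    heapq.heapify(heap)                   # min-heap: lowest number = highest priority
    time, schedule = 0, []
    while heap:
        priority, name, burst = heapq.heappop(heap)
        schedule.append((name, time, time + burst))
        time += burst
    return schedule

print(priority_schedule([(2, "P1", 4), (1, "P2", 6), (3, "P3", 2)]))
# [('P2', 0, 6), ('P1', 6, 10), ('P3', 10, 12)]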
PROCESS SCHEDULING
Multilevel Queue Scheduling/Multilevel Feedback Queue Scheduling
Processes are divided into queues based on characteristics such as priority or process type
Feedback Scheduling:
Ability for processes to move between queues based on their behaviour
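A very simplified two-level feedback queue sketch in Python (quanta and process values are hypothetical): queue 0 is high priority with a short quantum, and a process that does not finish within that quantum is demoted to queue 1, which runs FCFS whenever queue 0 is empty.

from collections import deque

def mlfq(processes, quantum0=2):
    """processes: list of (name, burst_time), all ready at time 0."""
    q0, q1 = deque(processes), deque()
    time, timeline = 0, []
    while q0 or q1:
        if q0:
            name, remaining = q0.popleft()
            run = min(quantum0, remaining)
            timeline.append(("Q0", name, time, time + run))
            time += run
            if remaining > run:
                q1.append((name, remaining - run))   # demoted: did not finish in its quantum
        else:
            name, remaining = q1.popleft()           # lower queue runs FCFS to completion
            timeline.append(("Q1", name, time, time + remaining))
            time += remaining
    return timeline

print(mlfq([("P1", 5), ("P2", 2), ("P3", 7)]))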
CPU Bursts
Occur when a process performs a significant amount of computation without requiring much I/O or waiting time.
The process actively uses the CPU to execute instructions continuously.
I/O bursts
Occur when a process requires input/output operations
The process spends most of its time waiting for I/O operations to complete, with brief bursts of CPU activity when processing data.
Preemptive Scheduling vs Nonpreemptive Scheduling
Preemptive scheduling: The operating system can interrupt a program currently running to let another one run.
Non-preemptive scheduling: The operating system lets a program complete its execution before switching to another one.
Race Conditions
Occur when the outcome of a program depends on the timing/sequence of events
Multiple threads and processes compete for shared resources without proper synchronisation
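A minimal Python demonstration of an unsynchronised read-modify-write on a shared counter (whether updates are actually lost on a given run depends on how the interpreter interleaves the threads, but the access pattern is a race either way):

import threading

counter = 0

def increment(n):
    global counter
    for _ in range(n):
        counter += 1          # read, add, write back: not atomic

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # may be less than 200000 without a lock around the update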
Scheduling Criteria
CPU utilisation – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time from when a request was submitted until the first response is produced
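A small worked example in Python (classic textbook-style burst values, all processes arriving at time 0) computing waiting and turnaround time for an FCFS order:

bursts = {"P1": 24, "P2": 3, "P3": 3}

time, waiting, turnaround = 0, {}, {}
for name, burst in bursts.items():
    waiting[name] = time              # time spent in the ready queue before running
    turnaround[name] = time + burst   # submission to completion
    time += burst

print(waiting)                               # {'P1': 0, 'P2': 24, 'P3': 27}
print(turnaround)                            # {'P1': 24, 'P2': 27, 'P3': 30}
print(sum(waiting.values()) / len(waiting))  # average waiting time = 17.0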
Processor affinity
When a thread has been running on one processor, that processor's cache holds the memory recently accessed by the thread
benefit:
improved cache performance.
NUMA - Non-Uniform Memory Access
Computer architecture where multiple processors (or nodes) each have their own local memory and can also access memory belonging to other nodes (remote memory).
benefit:
Improves memory access times for local memory.