Scheduling Flashcards
What does a scheduler do? What are its main components? What happens when a task is scheduled?
The scheduler puts threads on the CPU for execution.
Main components: a run queue and a scheduling algorithm.
When a task is scheduled: perform a context switch, enter user mode, set the PC, execute.
What is a ready queue and how do tasks end up there?
A ready queue holds tasks that are ready to be put on the CPU. Tasks end up there when:
- An I/O request they initiated completes
- Their time slice on the CPU expires
- They are forked by a parent thread
- A wait they initiated completes
What is run to completion scheduling?
Once a task has been scheduled on the CPU, it runs until it completes (no preemption).
What is First-Come-First-Served?
Tasks are scheduled onto the CPU in the order they arrive. Uses a FIFO queue.
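The FIFO behavior can be sketched in a few lines (a minimal simulation, not a real kernel scheduler; the `(name, burst)` task tuples are an assumed toy representation):

```python
from collections import deque

def fcfs(tasks):
    """tasks: list of (name, burst_time) in arrival order.
    Returns (name, completion_time) pairs; each task runs to completion."""
    queue = deque(tasks)               # FIFO run queue
    order, clock = [], 0
    while queue:
        name, burst = queue.popleft()  # always serve the head of the queue
        clock += burst
        order.append((name, clock))    # record when the task finished
    return order
```

Note that a long first job delays everything behind it (the "convoy effect"), which is the usual argument against FCFS.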
What is Shortest Job First?
Exactly what it sounds like: schedule the task with the shortest (expected) running time next. The run queue can be an ordered queue or a tree.
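The "ordered queue" option can be sketched with a min-heap keyed on expected runtime (a toy sketch; the `(expected_runtime, name)` tuples are an assumed representation):

```python
import heapq

def sjf(tasks):
    """tasks: list of (expected_runtime, name). Returns the run order."""
    heap = list(tasks)
    heapq.heapify(heap)    # ordered queue keyed on expected runtime
    order = []
    while heap:
        runtime, name = heapq.heappop(heap)  # shortest job runs next
        order.append(name)
    return order
```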
What is preemptive scheduling?
Unlike run-to-completion scheduling, tasks can be interrupted. (Preemptive SJF: if a shorter job enters the queue, the current task is interrupted.)
How does the scheduler know how long a task will take to run?
It doesn't know exactly; it uses heuristics based on history: the last run time, or a windowed average of the last n run times.
What is priority scheduling?
Run the task with the highest priority next; preempt the current task if a higher-priority one enters the queue. Can be implemented with a separate run queue per priority level.
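The per-priority-level queues can be sketched like this (a toy model; lower number = higher priority is an assumed convention):

```python
from collections import deque

class PriorityScheduler:
    """One FIFO queue per priority level; serve the highest non-empty level."""
    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, task, priority):
        self.queues[priority].append(task)   # 0 is the highest priority

    def next_task(self):
        for q in self.queues:                # scan from highest priority down
            if q:
                return q.popleft()
        return None                          # nothing runnable
```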
What is thread starvation?
When a lower priority task is stuck in the run queue because higher priority tasks keep showing up
What is priority aging?
A technique used to combat thread starvation: make the effective priority a function of (assigned priority, time spent in queue), so a task's priority rises the longer it waits.
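One possible such function, as a sketch (the linear boost and the lower-number-is-higher-priority convention are illustrative assumptions):

```python
def effective_priority(assigned, wait_time, boost_per_tick=1):
    """Priority aging: the longer a task waits, the higher (numerically
    lower) its effective priority becomes, so it eventually gets scheduled."""
    return assigned - wait_time * boost_per_tick
```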
What is priority inversion?
When a lower-priority thread holds a mutex needed by a higher-priority thread. The higher-priority thread is kept waiting while the lower-priority thread runs and finishes first, inverting the intended priority order.
How to combat priority inversion?
Priority inheritance: temporarily boost the priority of the mutex owner (the lower-priority task) to match that of the waiting higher-priority thread, until the mutex is released.
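The boost itself is just taking the better of the two priorities (a sketch, assuming the lower-number-is-higher-priority convention):

```python
def boosted_priority(owner_prio, waiter_prio):
    """Priority inheritance: while it holds the mutex, the owner runs at
    the best (numerically lowest) of its own and the waiter's priority."""
    return min(owner_prio, waiter_prio)
```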
What is round-robin scheduling?
Every task gets an equal time slice and the scheduler places them on the CPU in order.
Pros and Cons of RR Scheduling with time slices
+ shorter tasks finish sooner (without needing runtime heuristics), more responsive, lengthy I/O ops are initiated sooner
- high overheads from context switching each time a timeslice expires
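The timeslice mechanics can be sketched as a small simulation (ignoring context-switch cost; the `(name, burst)` tuples are an assumed toy representation):

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: list of (name, burst_time). Each task runs for at most one
    quantum, then goes to the back of the queue. Returns finish order."""
    queue = deque(tasks)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)                      # completes in this slice
        else:
            queue.append((name, remaining - quantum))  # preempted, requeued
    return finished
```

With `quantum=2`, a 2-unit task B finishes before a 4-unit task A that arrived first, illustrating the "shorter tasks finish sooner" point.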
How to choose a timeslice based on I/O vs CPU bound
Want timeslice >> time(context switching), so switching overhead is amortized. CPU-bound tasks: want larger time slices (better throughput). I/O-bound tasks: want smaller time slices (better responsiveness).
What data structure can we use to determine if a task is CPU or I/O bound and assign it an appropriate timeslice?
Use Multi-level Feedback Queue (MLFQ)
What is a multi-level feedback queue?
Has different levels with different time slices; tasks enter at the level with the smallest slice. If a task uses its entire slice without yielding (CPU-bound), it is pushed down to a level with a longer slice; if it yields before the slice expires (I/O-bound), it stays at its level (or is moved back up).
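A minimal sketch of that demotion logic (the class shape, the `ran_full_slice` flag, and the three slice lengths are illustrative assumptions, not the real MLFQ algorithm's full rule set):

```python
from collections import deque

class MLFQ:
    """Level 0 has the smallest timeslice; higher levels have longer ones."""
    def __init__(self, slices=(1, 2, 4)):
        self.slices = slices
        self.queues = [deque() for _ in slices]

    def admit(self, task):
        self.queues[0].append(task)          # new tasks enter at the top level

    def step(self, ran_full_slice):
        """Run the front-most task of the highest non-empty level.
        Demote it if it burned its whole slice; otherwise it stays put."""
        for level, q in enumerate(self.queues):
            if q:
                task = q.popleft()
                if ran_full_slice and level + 1 < len(self.queues):
                    self.queues[level + 1].append(task)  # CPU-bound: demote
                else:
                    q.append(task)           # yielded (or bottom level): stays
                return task, level
        return None, None
```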
Describe the O(1) Linux Scheduler
Has two priority ranges (real-time and user/timesharing). Timeslice is based on priority, and priority feedback is based on past behavior. Real-time tasks always have higher priority than user tasks and get longer slices. Longer sleep times mean the thread is more interactive, so its priority is boosted (and with it, its future timeslice).
Describe the run queue of the O(1) Scheduler
Has two run-queue arrays: active and expired. Tasks stay in the active array until their timeslice is completely used up (which might take multiple runs), then move to the expired array. When there are no more tasks in the active array, the two arrays are swapped.
Why was the O(1) Scheduler replaced?
It did not work well for real-time applications, and as apps moved toward more and more interactive tasking, the jitter became a problem.
What is the Completely Fair Scheduler (CFS)?
Scheduler for Linux non-real-time tasks, based on virtual runtime (vruntime). Tasks are kept in a red-black tree ordered by vruntime; the task with the smallest vruntime gets scheduled on the CPU. Once it is no longer the smallest, it is switched out for another. vruntime increases more slowly for higher-priority tasks, so they accumulate more CPU time.
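The vruntime idea can be sketched with a min-heap standing in for the red-black tree (a toy model; the `weight` field and `delta / weight` accounting are simplified assumptions, not CFS's actual weight tables):

```python
import heapq

def pick_and_run(heap, delta=1.0):
    """heap entries: (vruntime, name, weight). Pop the task with the
    smallest vruntime, 'run' it for delta, and charge it weighted time:
    higher weight (priority) means vruntime grows more slowly."""
    vruntime, name, weight = heapq.heappop(heap)
    vruntime += delta / weight
    heapq.heappush(heap, (vruntime, name, weight))
    return name
```

Over many ticks, the higher-weight task is picked more often, since its vruntime climbs more slowly and keeps it near the left of the tree.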
Types of CPUs
Shared Memory Multiprocessor (SMP): each CPU has its own private cache; the last-level cache (LLC) may or may not be shared between CPUs.
Multi-core systems: each CPU has internal cores with their own caches; the LLC is shared by all cores.
Scheduling on multi-core systems
Want to maximize cache affinity (keep a task on the same CPU so its cache stays hot): give each CPU its own run queue and scheduler.
What is hyperthreading?
Hardware support for multiple execution contexts on one core, so the hardware can switch threads very cheaply. Want to co-schedule a CPU-bound and an I/O-bound (memory-bound) thread together so they don't contend for the same resources.
How to make sure to schedule a CPU- and I/O Bound thread together (hyperthreading)
Use hardware counters, e.g., cycles per instruction (CPI): high CPI == memory-bound, low CPI == CPU-bound.