Scheduling Policies Flashcards
What is a “preemptive scheduler”?
A scheduler that can interrupt a running job (e.g. when its time slice expires) rather than waiting for the job to yield the CPU voluntarily
Describe the MLFQ briefly
- Uses multiple queues for scheduling; with queues we can prioritize jobs
- “Learn from the past to predict the future”: adjust a job’s priority based on its observed CPU vs. I/O behavior (I/O-bound jobs keep high priority, CPU-bound jobs sink)
What is “game the scheduler” in the context of the MLFQ?
An attack that lets a process monopolize the CPU: just before its time slice is used up, the process issues a short I/O request. The scheduler then keeps it in a high-priority queue and grants it a fresh time slice.
What are important parameters to consider when implementing MLFQ?
- Number of queues
- Time-slice size per queue
- A separate queue for OS processes?
- At what interval does the priority boost occur?
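The rules above can be turned into a toy simulation. This is a minimal sketch, not a real kernel scheduler: the queue count, slice sizes, and boost interval are invented illustrative parameters, and every job is assumed to be CPU-bound (it always uses its full slice and is demoted).

```python
from collections import deque

class MLFQ:
    """Toy MLFQ: queue 0 is the highest priority; a job that uses up its
    whole time slice is demoted one level; a periodic priority boost
    moves every job back to the top queue."""

    def __init__(self, num_queues=3, slices=(2, 4, 8), boost_interval=20):
        self.queues = [deque() for _ in range(num_queues)]
        self.slices = slices              # time-slice size per queue
        self.boost_interval = boost_interval
        self.clock = 0

    def add(self, job):
        self.queues[0].append(job)        # new jobs start at the highest priority

    def boost(self):
        # Priority boost: push every job back into queue 0
        for q in self.queues[1:]:
            while q:
                self.queues[0].append(q.popleft())

    def run(self, total_time):
        schedule = []                     # trace of (start time, job, queue level)
        while self.clock < total_time:
            if self.clock and self.clock % self.boost_interval == 0:
                self.boost()
            level = next((i for i, q in enumerate(self.queues) if q), None)
            if level is None:
                self.clock += 1           # idle tick
                continue
            job = self.queues[level].popleft()
            schedule.append((self.clock, job, level))
            self.clock += self.slices[level]
            # Job used its full slice -> demote it one level
            next_level = min(level + 1, len(self.queues) - 1)
            self.queues[next_level].append(job)
        return schedule
```

Running two CPU-bound jobs shows them drifting down to lower queues, which is exactly the behavior the periodic boost is there to undo.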
What is proportional share scheduling and how can it be implemented?
- Give each job a guaranteed proportion (share) of the CPU time
- Measure CPU time per job and divide it fairly (difficult to implement!)
- Use a randomized (lottery) scheduler
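The “random scheduler” idea is lottery scheduling: each job holds tickets in proportion to its share, and each time slice one winning ticket is drawn. A minimal sketch (the ticket counts are made up for illustration):

```python
import random

def lottery_pick(jobs, rng=random):
    """jobs: dict mapping job name -> ticket count.
    Draws one winning ticket; a job's chance of being scheduled is
    proportional to its share of the total tickets."""
    total = sum(jobs.values())
    winner = rng.randrange(total)      # winning ticket number in [0, total)
    counter = 0
    for job, tickets in jobs.items():
        counter += tickets
        if winner < counter:
            return job
```

No per-job accounting is needed: over many draws, the CPU time each job receives converges to its ticket proportion (here, roughly 75% / 25%).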
What is the Linux Completely Fair Scheduler (CFS)?
- Highly efficient scheduler for Linux
- Basic idea: the job with the lowest virtual runtime (vruntime, tracked on the fly) runs next
- Weights: uses Unix nice levels as weights, which scale both a job’s time slice and how fast its vruntime accumulates
- Parameters: sched_latency (target period within which every runnable job should run once) and min_granularity (lower bound on the time slice, so context switches do not become too frequent)
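The weight mechanics can be sketched as follows. This is a simplification, not the kernel code: the helper names are mine, and the weight formula only approximates Linux’s nice-to-weight table (nice 0 maps to 1024, and each nice step changes the weight by roughly 1.25x).

```python
def weight(nice):
    # Approximation of Linux's table: nice 0 -> 1024, each step ~1.25x
    return 1024 / (1.25 ** nice)

def time_slice(job_weight, total_weight, sched_latency=48, min_granularity=6):
    # Each job gets a weight-proportional share of sched_latency (ms),
    # but never less than min_granularity
    return max(sched_latency * job_weight / total_weight, min_granularity)

def vruntime_delta(runtime, job_weight):
    # vruntime advances more slowly for heavier (higher-priority) jobs,
    # so they accumulate "virtual" time at a lower rate and run more often
    return runtime * weight(0) / job_weight
```

For example, a nice -5 job accumulates vruntime about 3x more slowly than a nice 0 job, so CFS ends up giving it about 3x the CPU time.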
CFS: What about sleeping jobs or I/O?
While a job sleeps (e.g. blocked on I/O), its vruntime does not advance. If it kept its old, low vruntime it would monopolize the CPU after waking, so on wake-up CFS sets its vruntime to (at least) the minimum vruntime currently among the runnable jobs.
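The wake-up rule from this card can be sketched in a few lines (the function name and representation are hypothetical, not the kernel’s):

```python
def on_wakeup(job_vruntime, runnable_vruntimes):
    """Clamp a woken job's vruntime up to the current minimum, so a job
    that slept for a long time cannot starve the jobs that kept running."""
    floor = min(runnable_vruntimes)
    return max(job_vruntime, floor)
```

A job that slept briefly keeps its own (already higher) vruntime; a job that slept for ages gets pulled up to the floor instead of winning every pick.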
Multiprocessor scheduling: What are some problems that can happen regarding the CPU cache?
- Cache coherence: all CPU caches must present a consistent view of memory, which requires hardware synchronization (a coherence protocol)
- Cache affinity: once a process has run on a CPU, that CPU’s cache is filled with its data. Scheduling the same process on that CPU again gives a performance boost because the cache is still warm.
Multiprocessor scheduling: single-level vs. multi-level queues: name two problems of each
Single-level queue shared by all CPUs:
1. Synchronization (locking) overhead on the shared queue
2. Extra work required to preserve cache affinity
Multi-level (e.g. one queue per CPU):
1. More difficult to implement
2. Load imbalance: how to divide the workload properly across all CPUs