Scheduling Policies Flashcards

1
Q

What is a “preemptive scheduler”?

A

A scheduler that can interrupt (preempt) a running job and switch to another, instead of waiting for the job to yield the CPU voluntarily

2
Q

Describe the MLFQ (multi-level feedback queue) briefly

A
  • Uses multiple queues, each with its own priority level
  • “Learn from the past to predict the future”: adjusts a job’s priority based on its observed CPU vs. I/O behavior (CPU-bound jobs sink to lower queues, interactive jobs stay high)
3
Q

What is “game the scheduler” in the context of the MLFQ?

A

An attack that monopolizes the CPU: just before its time slice expires, the process issues a short I/O request. The scheduler then treats it as interactive, so the process keeps its high priority and receives a fresh time slice afterwards.

4
Q

What are important parameters to consider when implementing an MLFQ?

A
  • Number of queues
  • Time-slice length (possibly different per queue)
  • A separate queue for the OS?
  • At what interval should the priority boost occur?
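These parameters can be made concrete in a toy sketch. This is not a real kernel implementation; the queue count, slice lengths, and boost interval below are arbitrary example values:

```python
from collections import deque

class MLFQ:
    """Toy MLFQ. Jobs that use up their whole slice sink one level;
    a periodic boost moves everything back to the top queue."""

    def __init__(self, num_queues=3, slice_ms=(10, 20, 40), boost_every_ms=100):
        self.queues = [deque() for _ in range(num_queues)]
        self.slice_ms = slice_ms            # time-slice length per priority level
        self.boost_every_ms = boost_every_ms
        self.clock_ms = 0

    def add(self, job):
        self.queues[0].append(job)          # new jobs start at the highest priority

    def pick(self):
        """Return (job, level) from the highest-priority non-empty queue."""
        for level, q in enumerate(self.queues):
            if q:
                return q.popleft(), level
        return None

    def used_full_slice(self, job, level):
        """The job exhausted its slice: demote it one level."""
        self.queues[min(level + 1, len(self.queues) - 1)].append(job)

    def tick(self, ms):
        """Advance the clock and apply the periodic priority boost."""
        boosts_before = self.clock_ms // self.boost_every_ms
        self.clock_ms += ms
        if self.clock_ms // self.boost_every_ms > boosts_before:
            for q in self.queues[1:]:       # move every demoted job back up
                while q:
                    self.queues[0].append(q.popleft())
```

A driver loop would pick a job, run it for up to `slice_ms[level]` ms, then call `used_full_slice` to demote it, or re-enqueue it at the same level if it blocked on I/O early.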
5
Q

What is proportional share scheduling and how can it be implemented?

A
  • Give each job CPU time in proportion to its assigned share (e.g. its number of tickets)
  • Deterministic: measure the CPU time per job and split it fairly (difficult to implement!)
  • Randomized: use a lottery scheduler that picks the next job at random, weighted by its tickets
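The randomized variant is lottery scheduling; a minimal sketch (the ticket counts in the usage note are example values, not from the source):

```python
import random

def lottery_pick(tickets, rng=random):
    """Pick the next job to run: each job wins with probability
    proportional to its ticket count (its CPU share)."""
    winner = rng.randrange(sum(tickets.values()))  # draw the winning ticket number
    counter = 0
    for job, count in tickets.items():
        counter += count
        if winner < counter:    # the winning ticket falls in this job's range
            return job
```

With `{"A": 75, "B": 25}`, job A is picked about 75% of the time, so over many time slices the CPU shares converge to the ticket proportions.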
6
Q

What is the Linux Completely Fair Scheduler (CFS)?

A
  • The default Linux scheduler, built to be highly efficient
  • Basic idea: the job with the lowest virtual runtime (vruntime) runs next; vruntime is updated on the fly while a job runs
  • Weights: uses Unix nice levels as weights; a higher-priority job’s vruntime grows more slowly, so it effectively receives longer time slices
  • Parameters: sched_latency (target period within which every runnable job runs once) and min_granularity (lower bound on the time slice, capping context-switch overhead)
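A toy sketch of the vruntime idea. The real kernel uses a red-black tree and a precomputed nice-to-weight table; the 1.25-per-nice-step factor here is only an approximation of that table:

```python
import heapq

NICE0_WEIGHT = 1024  # weight of a nice-0 job, as in the kernel

def weight(nice):
    # Approximation: each nice step changes the weight by roughly 25%.
    return int(NICE0_WEIGHT / (1.25 ** nice))

class ToyCFS:
    def __init__(self):
        self.runqueue = []  # min-heap ordered by vruntime

    def add(self, name, nice=0, vruntime=0.0):
        heapq.heappush(self.runqueue, (vruntime, name, nice))

    def run_next(self, wall_ms):
        """Run the job with the lowest vruntime for wall_ms milliseconds.
        Its vruntime advances inversely proportionally to its weight."""
        vruntime, name, nice = heapq.heappop(self.runqueue)
        vruntime += wall_ms * NICE0_WEIGHT / weight(nice)
        heapq.heappush(self.runqueue, (vruntime, name, nice))
        return name
```

A nice −5 job's vruntime advances about 3× slower than a nice 0 job's (1.25⁵ ≈ 3), so it is selected correspondingly more often.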
7
Q

CFS: What about sleeping jobs or I/O?

A

A job’s vruntime does not accumulate while it sleeps or waits for I/O. To keep a waking job from monopolizing the CPU with its far-behind vruntime, CFS sets it to the minimum vruntime currently found among the runnable jobs.

8
Q

Multiprocessor scheduling: What problems can occur with respect to the CPU caches?

A
  • Cache coherence: all CPU caches must present a consistent view of shared memory, which requires hardware synchronization (e.g. bus snooping)
  • Cache affinity: once a process has run on a CPU, that CPU’s cache is warm with its data. Scheduling the process on the same CPU again yields a performance boost
9
Q

Multiprocessor scheduling: Single- and multi-level queues: name 2 problems of each

A
Single-level queue shared by all CPUs:
1. Synchronization: all CPUs contend for one queue lock
2. Extra work required to preserve cache affinity
Multi-level queues (e.g. one queue per CPU):
1. More difficult to implement
2. Load imbalance: the workload must be divided (and migrated) properly across all CPUs