Scheduling Flashcards
What’s batch scheduling?
– When a job is scheduled to run, it gets its own cores
– Cores are dedicated (not shared)
– Each job runs to completion (until it terminates)
– Only after it terminates are its cores allocated to other waiting jobs
– “Non-preemptive”
What’s wait time?
waitTime = startTime - submitTime
The shorter the average wait time, the better the performance
What’s response time?
– responseTime = terminationTime - submitTime
– responseTime = waitTime + runTime
Prove that:
for batch schedulers, the difference between average wait time and average response time of a given schedule is a constant
– The constant is the average runtime of all jobs
(In our context, we assume that job runtimes, and thus their average, are a given; they stay the same regardless of the scheduler)
Proof:
page 10 of scheduling lecture.
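A sketch of the argument, using only the definitions above (the lecture may phrase it differently): for every job i, responseTime_i = waitTime_i + runTime_i, so averaging over the n jobs gives
    avgResponse = (1/n) * Σ (waitTime_i + runTime_i) = avgWait + avgRuntime
Since the runtimes (and hence avgRuntime) do not depend on the scheduler, avgResponse - avgWait is the same constant for every batch schedule.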
What’s slowdown (or expansion factor)?
slowdown = responseTime / runTime = Ti/Ri (the response time of job i divided by its runtime)
The greater the slowdown, the longer the job is delayed relative to its own length
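A minimal Python sketch of these three metrics (the field and function names are illustrative, not from the lecture):

    # Illustrative helper: per-job scheduling metrics (names are assumptions, not lecture code)
    from dataclasses import dataclass

    @dataclass
    class Job:
        submit: float       # submission time
        start: float        # time the job first gets its cores
        termination: float  # time the job finishes

    def metrics(job: Job):
        run = job.termination - job.start          # runTime (batch: runs to completion)
        wait = job.start - job.submit              # waitTime = startTime - submitTime
        response = job.termination - job.submit    # responseTime = terminationTime - submitTime
        slowdown = response / run                  # slowdown = responseTime / runTime
        return wait, response, slowdown

    # Example: submitted at t=0, started at t=5, finished at t=15
    print(metrics(Job(submit=0, start=5, termination=15)))   # (5, 15, 1.5)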
What’s utilization?
Percentage of time the resource (CPU in our case) is busy
What’s throughput?
How much work is done in one time unit
What’s FCFS (first come, first served)? What are the pros and cons?
• Jobs are scheduled by their arrival time
– If there are enough free cores, a newly arriving job starts to run immediately
– Otherwise it waits, sorted by arrival time, until enough cores are freed
• Pros:
– Easy to implement (FIFO wait queue)
– Typically perceived as most fair
• Cons:
– Creates fragmentation (unutilized cores)
– Small/short jobs might wait for a long, long while..
(we disallow “I just have a quick question” line-cutting)
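A hedged Python sketch of the FCFS rule on a multi-core machine; the event-driven structure and the (submitTime, cores, runTime) job format are assumptions for illustration, not lecture code:

    # Illustrative FCFS simulator (assumes every job fits on the machine)
    import heapq
    from collections import deque

    def fcfs(jobs, total_cores):
        pending = deque(sorted(jobs))            # jobs not yet submitted, by submit time
        waiting = deque()                        # FIFO wait queue
        running = []                             # min-heap of (endTime, cores)
        free, t, schedule = total_cores, 0.0, []
        while pending or waiting or running:
            # jump to the next event: a submission or a termination
            next_sub = pending[0][0] if pending else float("inf")
            next_end = running[0][0] if running else float("inf")
            t = min(next_sub, next_end)
            while running and running[0][0] <= t:        # reclaim cores of finished jobs
                free += heapq.heappop(running)[1]
            while pending and pending[0][0] <= t:        # newly submitted jobs join the queue
                waiting.append(pending.popleft())
            # FCFS: only the head of the queue may start; nobody jumps over it
            while waiting and waiting[0][1] <= free:
                submit, cores, run = waiting.popleft()
                free -= cores
                heapq.heappush(running, (t + run, cores))
                schedule.append((submit, cores, run, t))  # startTime = t, waitTime = t - submit
        return schedule

    # 4 cores: the 1-core job submitted at t=2 waits behind the 3-core head job
    # until t=10, even though a core is free the whole time (fragmentation)
    print(fcfs([(0, 3, 10), (1, 3, 5), (2, 1, 2)], total_cores=4))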
What’s backfilling?
The “backfilling” optimization
– A short waiting job can jump over the head of the wait queue
– Provided it doesn’t delay the job @ head of the wait queue
What’s EASY scheduling?
EASY is FCFS + Backfilling.
Whenever a job arrives (= is submitted) or terminates:
1. Try to start the job @ head of the wait queue (FCFS)
2. Then, iterate over the rest of the waiting jobs (in FCFS order) and try to backfill them (“fill in the holes”)
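A hedged Python sketch of one EASY pass (invoked on every arrival/termination). The shadow-time reservation follows the standard EASY description; the data layout and the start_job callback are illustrative assumptions:

    # Illustrative EASY pass (not lecture code)
    # running: list of (endTime, cores) of jobs on the machine
    # waiting: FCFS-ordered list of (cores, estimatedRuntime); start_job is a callback
    def easy_pass(now, free, running, waiting, start_job):
        # 1. FCFS part: start jobs from the head of the wait queue while they fit
        while waiting and waiting[0][0] <= free:
            cores, est = waiting.pop(0)
            free -= cores
            running.append((now + est, cores))
            start_job(cores, est)
        if not waiting:
            return
        # The head doesn't fit: compute its reservation ("shadow time"), i.e. the earliest
        # time it is guaranteed enough cores, and the "extra" cores it won't need even then
        head_cores = waiting[0][0]
        avail, shadow, extra = free, float("inf"), 0
        for end, cores in sorted(running):           # assumes the head fits on the machine
            avail += cores
            if avail >= head_cores:
                shadow, extra = end, avail - head_cores
                break
        # 2. Backfilling: traverse the rest of the queue in FCFS order and start any job
        #    that fits now and provably does not delay the head's reservation
        still_waiting = waiting[:1]
        for cores, est in waiting[1:]:
            ends_before_shadow = now + est <= shadow
            if cores <= free and (ends_before_shadow or cores <= extra):
                free -= cores
                if not ends_before_shadow:
                    extra -= cores                   # it will still hold cores at the shadow time
                running.append((now + est, cores))
                start_job(cores, est)
            else:
                still_waiting.append((cores, est))
        waiting[:] = still_waiting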
What are the pros & cons of EASY?
• Pros:
– Better utilization (less fragmentation)
– Narrow/short jobs have a better chance to run sooner
• Cons:
– Must know runtimes in advance
• To know width of holes
• To know if backfill candidates are short enough to fit holes
What’s SJF (shortest job first)?
Instead of ordering jobs (or processes) by their arrival time (FCFS), order them by their (typically estimated) runtime
Optimality of SJF for average wait time
Claim
– Given: a 1-core system where all jobs are serial
– If: (a) all processes arrive together and (b) their runtimes are known
– Then: the average wait time of SJF is equal to or smaller than the
average wait time of any other batch scheduling order S
Proof:
page 22 of scheduling lecture.
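Not a proof, just a quick empirical illustration of the claim in Python (1-core system, all jobs arrive together at t=0, runtimes known):

    # Illustrative check: SJF minimizes average wait among all batch orders
    from itertools import permutations

    def avg_wait(order):                       # 1 core, all jobs submitted at t=0
        t, waits = 0, []
        for run in order:
            waits.append(t)                    # waitTime = startTime - submitTime = t - 0
            t += run
        return sum(waits) / len(waits)

    runtimes = [7, 1, 4, 2]
    sjf = sorted(runtimes)                     # shortest job first
    assert all(avg_wait(sjf) <= avg_wait(p) for p in permutations(runtimes))
    print(avg_wait(sjf))                       # 2.75, vs. e.g. avg_wait([7, 1, 4, 2]) == 6.75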
What’s SJBF (Shortest-job Backfilled First)?
– Exactly like EASY in terms of servicing the head of the wait queue in FCFS order (and not allowing anyone to delay it)
– But the backfilling traversal is done in SJF order
What’s LXF (Largest eXpansion Factor) scheduling?
– LXF is similar to EASY, but instead of ordering the wait queue in FCFS order, it orders jobs by their current slowdown (greater slowdown means higher priority)
– The backfilling activity is done relative to the job with the largest current slowdown (= the head of the LXF wait queue)
– Note that every scheduling decision (when jobs arrive/finish) requires a re-computation of the slowdowns (because wait times have changed)
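A minimal Python sketch of the LXF priority computation; the job tuple layout and the use of estimated runtimes are illustrative assumptions:

    # Illustrative LXF priority (not lecture code); recomputed at every scheduling decision
    # waiting: list of (submitTime, cores, estimatedRuntime)
    def lxf_order(waiting, now):
        def current_slowdown(job):
            submit, _cores, estimated = job
            wait_so_far = now - submit
            return (wait_so_far + estimated) / estimated   # slowdown if the job started right now
        # largest expansion factor first; backfilling is then done relative to the new head
        return sorted(waiting, key=current_slowdown, reverse=True)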
What’s preemption?
The act of suspending one job (process) in favor of another
– Even though it is not finished yet
What’s a quantum?
The maximal amount of time a process is allowed to run
before it’s preempted
What’s the difference between response time in batch scheduling and in preemptive scheduling? And why?
In batch: responseTime = waitTime + runTime
In preemptive: responseTime ≥ waitTime + runTime
Because
• Processes can be preempted, plus context switches have a price
What’s RR scheduling?
Processes are arranged in a cyclic ready-queue
– The head process runs until its quantum is exhausted
– The head process is then preempted (suspended)
– The scheduler resumes the next process in the circular list
– When we’ve cycled through all processes in the run-list (and we reach the head process again), we say that the current “epoch” is over, and the next epoch begins
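A minimal Python sketch of RR on one core (all processes arrive together, no context-switch cost; names are illustrative, not lecture code):

    # Illustrative RR simulation with a fixed quantum
    from collections import deque

    def round_robin(runtimes, quantum):
        ready = deque(enumerate(runtimes))     # cyclic ready-queue: (pid, remaining)
        t, finish = 0, {}
        while ready:
            pid, remaining = ready.popleft()   # head process runs...
            slice_ = min(quantum, remaining)   # ...until its quantum is exhausted (or it ends)
            t += slice_
            if remaining > slice_:
                ready.append((pid, remaining - slice_))   # preempt, go to the back of the queue
            else:
                finish[pid] = t                # all arrive at t=0, so responseTime = finish[pid]
        return finish

    print(round_robin([3, 1, 2], quantum=1))   # {1: 2, 2: 5, 0: 6}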
Is there RR in parallel systems?
Yes! It’s called “gang scheduling”
– Time is divided into slots (seconds, or minutes)
– Every job has a “native” time slot
– The algorithm attempts to fill holes in a time slot with jobs from other jobs’ native slots (for those jobs, these are “alternative” slots)
Batching vs. preemption Theorem
Claim
– Let avgResp(X) be the average response time under algorithm X
– Assume a single core system, and that all processes arrive together
– Assume X is a preemptive algorithm with context switch price = 0
– Then there exists a non-preemptive (batch) algorithm Y such that
avgResp(Y) ≤ avgResp(X)
Proof: page 35
Connection between RR and SJF Theorem
Claim
– Assume:
• 1-core system
• All processes arrive together (and only use CPU, no I/O)
• Quantum is identical for all processes + no context switch overhead
– Then
• avgResponse(RR) ≤ 2 × avgResponse(SJF)
Proof: page 38
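Not the lecture’s proof, just an empirical sanity check of the bound in Python, reusing the same simple RR loop as above (1 core, all jobs arrive at t=0, unit quantum, zero context-switch cost):

    # Illustrative check of avgResponse(RR) <= 2 * avgResponse(SJF)
    import random
    from collections import deque

    def avg_response_rr(runtimes, quantum=1):
        ready = deque(enumerate(runtimes))
        t, total = 0, 0
        while ready:
            pid, remaining = ready.popleft()
            s = min(quantum, remaining)
            t += s
            if remaining > s:
                ready.append((pid, remaining - s))
            else:
                total += t                     # job finishes: its response time is t - 0
        return total / len(runtimes)

    def avg_response_sjf(runtimes):
        t, total = 0, 0
        for run in sorted(runtimes):           # shortest job first, each runs to completion
            t += run
            total += t
        return total / len(runtimes)

    random.seed(0)
    for _ in range(1000):
        jobs = [random.randint(1, 20) for _ in range(random.randint(1, 8))]
        assert avg_response_rr(jobs) <= 2 * avg_response_sjf(jobs)
    print("bound held on 1000 random workloads")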
What’s SRTF (Shortest-Remaining-Time First)?
SRTF is just like SJF, but
– Is allowed to use preemption
– Hence, it’s “optimal” for average response time (assuming a zero context-switch cost etc.)
• Whenever a new job arrives, or an old job terminates
– SRTF schedules the job with the shortest remaining time
– Thereby making an optimal decision
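A small Python sketch of SRTF on one core (zero context-switch cost; the (submitTime, runTime) job format is an illustrative assumption, not lecture code):

    # Illustrative SRTF simulator: preempt whenever a job with shorter remaining time exists
    import heapq

    def srtf(jobs):                            # jobs: list of (submitTime, runTime)
        jobs = sorted(jobs)
        ready, t, i, finish = [], 0.0, 0, {}   # ready: min-heap of (remainingTime, pid)
        while i < len(jobs) or ready:
            if not ready:                      # idle until the next submission
                t = max(t, jobs[i][0])
            while i < len(jobs) and jobs[i][0] <= t:
                heapq.heappush(ready, (jobs[i][1], i))
                i += 1
            remaining, pid = heapq.heappop(ready)      # shortest remaining time first
            # run until this job finishes or the next job arrives (a new scheduling point)
            next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
            run_for = min(remaining, next_arrival - t)
            t += run_for
            if remaining - run_for > 0:
                heapq.heappush(ready, (remaining - run_for, pid))  # preempted, back to the pool
            else:
                finish[pid] = t
        return finish

    # Example: the long job submitted first is preempted by a short one arriving later
    print(srtf([(0, 10), (2, 3)]))             # {1: 5, 0: 13}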
What’s selfish RR?
• New processes wait in a FIFO queue
– Not yet scheduled
• Older processes are scheduled using RR
• New processes are scheduled when either:
1. No ready-to-run “old” processes exist, or
2. “Aging” promotes them: some per-process counter is increased over time; when the counter passes a certain threshold, the “new” process becomes “old” and is transferred to the RR queue
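A rough Python sketch of the selfish-RR bookkeeping; the aging counter, the tick() granularity, and the threshold are illustrative assumptions, not values from the lecture:

    # Illustrative selfish-RR queues (not lecture code)
    from collections import deque

    class SelfishRR:
        def __init__(self, threshold):
            self.new = deque()        # FIFO queue of not-yet-scheduled ("new") processes
            self.old = deque()        # cyclic RR ready-queue of "old" processes
            self.age = {}             # per-process aging counter
            self.threshold = threshold

        def submit(self, pid):        # a newly created process waits in the FIFO queue
            self.new.append(pid)
            self.age[pid] = 0

        def tick(self):               # aging: bump counters; promote "new" -> "old" past the threshold
            for pid in list(self.new):
                self.age[pid] += 1
                if self.age[pid] >= self.threshold:
                    self.new.remove(pid)
                    self.old.append(pid)

        def pick_next(self):          # "old" processes get the CPU (RR); "new" ones only if no "old" exist
            if self.old:
                pid = self.old.popleft()
                self.old.append(pid)  # after its quantum it returns to the back of the RR queue
                return pid
            return self.new[0] if self.new else None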