Operating Systems (Scheduling) Flashcards
3 Basic concepts of OS Scheduling?
1) Max CPU utilization through multiprogramming.
2) CPU-I/O Burst Cycle
3) CPU burst distribution
CPU Scheduler selects from processes that are __ (2) when a process __(4)
1) In memory and ready to execute.
- 1) Switches from running to waiting state.
- 2) Switches from running to ready state.
- 3) Switches from waiting to ready.
- 4) Terminates.
Dispatcher gives control of CPU to __ (1) and this involves __ (3).
1) The process that was selected by the CPU scheduler.
- 1) switching context
- 2) switching to user mode
- 3) jumping to the proper location in the user program
Scheduling criteria (5)
1) CPU utilization
2) Throughput
3) Turnaround time
4) Waiting time
5) Response time
Optimization criteria (5)
- Max CPU utilisation
- Max throughput
- Min turnaround time
- Min waiting time
- Min response time
How can we (try to) determine the length of the next CPU burst?
We estimate the length using exponential averaging:
tn = actual length of the nth CPU burst; τn+1 = predicted value of the next CPU burst.
For 0 ≤ α ≤ 1:
τn+1 = α·tn + (1 − α)·τn
View the diagram in the notes.
Examples of Exponential Averaging
• α = 0: τn+1 = τn; recent history does not count.
• α = 1: τn+1 = tn; only the actual last CPU burst counts.
• Expanding the formula:
τn+1 = α·tn + (1 − α)·α·tn−1 + … + (1 − α)^j·α·tn−j + … + (1 − α)^(n+1)·τ0
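The recursive estimate above can be sketched in a few lines of Python; the burst history, initial guess, and α value below are illustrative, not from the notes.

```python
# Sketch of exponential averaging for next-CPU-burst prediction.

def predict_next_burst(bursts, tau0, alpha=0.5):
    """Return the predicted length of the next CPU burst.

    bursts: actual lengths t_0..t_n of past CPU bursts
    tau0:   initial guess used for the first prediction
    alpha:  weight on the most recent actual burst, 0 <= alpha <= 1
    """
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau  # tau_{n+1} = a*t_n + (1-a)*tau_n
    return tau

# alpha = 0: the prediction never changes (recent history ignored).
# alpha = 1: the prediction is always the last actual burst.
print(predict_next_burst([6, 4, 6, 4], tau0=10, alpha=0.5))  # -> 5.0
```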
Priority scheduling:
- 2 Schemes
- SJF
- Problem and solution
1) Preemptive and Non-preemptive
2) SJF is a special case of priority scheduling
3) Problem - starvation;
Solution - aging
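A minimal sketch of aging as a fix for starvation: each waiting process's effective priority improves the longer it waits. The base priorities and aging rate are illustrative assumptions.

```python
# Priority scheduling with aging (lower number = higher priority).

def schedule(base, ticks):
    """Run `ticks` scheduling decisions; return the execution order.

    base: dict name -> static base priority (lower value = higher priority)
    """
    wait = {p: 0 for p in base}
    order = []
    for _ in range(ticks):
        # effective priority = base priority minus time waited (aging)
        chosen = min(base, key=lambda p: base[p] - wait[p])
        order.append(chosen)
        for p in base:
            wait[p] = 0 if p == chosen else wait[p] + 1
    return order

# Without aging, low-priority B would starve behind A forever;
# with aging, B is selected once its waiting time offsets the gap.
print(schedule({"A": 0, "B": 3}, 8))
```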
Round Robin (RR)
- Explanation
- What happens after this time has elapsed? (2)
- Performance (3)
- RR vs SJF
1) Each process gets a small unit of CPU time (time quantum q).
2) After this time has elapsed, the process is preempted and added to the end of the ready queue.
3) q must be large with respect to context-switch time:
q too large => FIFO,
q too small => high overhead
4) RR typically has higher average turnaround than SJF, but better response time (see diagram).
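The RR mechanics above can be sketched as a small simulation; the burst lengths and quantum are illustrative, and all processes are assumed to arrive at time 0.

```python
from collections import deque

def round_robin(bursts, q):
    """Return per-process completion times under round robin.

    bursts: dict name -> CPU burst length (all arrive at time 0)
    q:      time quantum
    """
    remaining = dict(bursts)
    ready = deque(bursts)           # FIFO ready queue
    time, finish = 0, {}
    while ready:
        p = ready.popleft()
        run = min(q, remaining[p])  # run for one quantum or until done
        time += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = time
        else:
            ready.append(p)         # preempted: back of the ready queue
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 2}, q=2))
# -> {'P3': 6, 'P2': 9, 'P1': 10}
```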
Multi-level queue:
1) Ready queue partitioning (2)
2) Queue scheduling algorithms
3) How is scheduling done between queues?
• Ready queue is partitioned into separate queues:
- foreground (interactive)
- background (batch)
• Each queue has its own scheduling algorithm:
- foreground – RR
- background – FCFS
• Scheduling must be done between the queues:
- Fixed priority scheduling: foreground first, then background; possibility of starvation.
- Time slice: each queue gets a certain amount of CPU time, e.g. 80% to foreground (RR) and 20% to background (FCFS).
See the diagram of queue priorities in the notes.
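A minimal sketch of the fixed-priority scheme between the two queues; the process names are illustrative. It also makes the starvation risk visible: the background queue is only consulted when the foreground queue is empty.

```python
# Fixed-priority selection between a foreground (RR) and background
# (FCFS) queue.

def next_process(foreground, background):
    """Pick the next process under fixed-priority queue scheduling."""
    if foreground:
        return foreground.pop(0)   # an RR queue would rotate this entry
    if background:
        return background.pop(0)   # FCFS: strict arrival order
    return None

fg, bg = ["vim", "shell"], ["batch_job"]
print(next_process(fg, bg))  # -> vim (foreground always first)
print(next_process(fg, bg))  # -> shell
print(next_process(fg, bg))  # -> batch_job (only now, queue was empty)
```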
Multilevel Feedback Queue (2)
- Separation based on CPU bursts
- Process may move between queues
Diagram.
Thread Scheduling (2:2)
• Process Contention Scope
1) Local scheduling
2) Thread library schedules user threads onto available kernel threads
• System Contention Scope
1) Global scheduling
2) Kernel schedules kernel threads onto the CPU
Multiple-Processor Scheduling (6: 1, 2, 1, 1, 1, 2)
• Asymmetric multiprocessing
1) Master server - only one processor
accesses the system data structures,
alleviating the need for data sharing
• Symmetric multiprocessing
1) Each processor is self-scheduling
2) Multiple physical processors
• Processor Affinity
1) Hard / Soft
• Load balancing
1) Push / Pull
• Multiprocessing
1) Symmetric / Asymmetric
• Multithreaded Multicore Processors
1) Hardware thread
2) Appears as logical processor
Real-Time CPU Scheduling (2:1)
• Hard real-time systems
1) required to complete a critical
task within a guaranteed amount
of time
• Soft real-time computing
1) requires that critical processes
receive priority over less
fortunate ones
Real-time CPU scheduling (5)
- Minimizing Latency
- Priority-Based Scheduling
- Rate-monotonic Scheduling
- Earliest-Deadline-First Scheduling
- Proportional Share Scheduling
Minimize latency (2:2)
• Interrupt latency
1) Complete the current instruction, determine the interrupt type, save state
2) To keep it low: don't disable interrupts for too long
• Dispatch latency
1) Conflict phase and dispatch phase
2) To keep it low: provide preemptive kernels
Priority-Based scheduling (2)
• A scheduler for a real-time system must support a priority-based algorithm.
• A preemptive, priority-based scheduler only guarantees soft real-time functionality.
Hard real-time systems (2)
- Must further guarantee that real-time tasks will be serviced in accordance with their deadline requirements.
- additional scheduling features required.
Characteristics of processes (3: 2, 0, 1)
• Periodic
1) rate 1/p
2) require CPU at constant intervals
• Fixed processing time t
• Deadline d
1) if announced to scheduler:
admission-control algorithm admits or rejects request
Hard real-time scheduling (3: 1)
• Rate-monotonic scheduling
1) Static priority (∝ 1/p) with preemption
• Earliest-deadline-first scheduling
1) A process must announce its deadline when it becomes runnable
• Proportional share scheduling
1) Assign shares; needs admission control
Rate-Monotonic scheduling (1:4)
• Static priority policy with preemption
1) Periods: P1 = 50, P2 = 100
2) Processing times: t1 = 20, t2 = 35
3) Deadlines: start of next period
4) CPU utilisation: 20/50 + 35/100 = 75%
Diag.
Rate-monotonic (2: 0, 4)
- Optimal (static scheduling)
- CPU utilisation is bounded: worst-case N(2^(1/N) − 1), about 69% as N grows
1) Periods: P1 = 50, P2 = 80
2) Processing times: t1 = 25, t2 = 35
3) Deadlines: start of next period
4) CPU utilisation: 25/50 + 35/80 = 94% (yet P2 misses its deadline under RMS)
Diag.
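In place of the diagram, a tick-by-tick sketch of rate-monotonic scheduling for the two example task sets; the simulator is a simplification (integer time, deadline checked at each release).

```python
# RMS: static priorities, shortest period = highest priority.

def rms_misses_deadline(tasks, horizon):
    """Simulate RMS tick by tick; return True if any deadline is missed.

    tasks: list of (period, burst); deadline = start of the next period.
    """
    tasks = sorted(tasks)                 # shortest period first
    remaining = [0] * len(tasks)
    for t in range(horizon):
        for i, (period, burst) in enumerate(tasks):
            if t % period == 0:
                if remaining[i] > 0:      # previous job still unfinished
                    return True
                remaining[i] = burst      # release a new job
        for i in range(len(tasks)):       # run highest-priority ready job
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return False

print(rms_misses_deadline([(50, 25), (100, 35)], 200))  # 75% case: False
print(rms_misses_deadline([(50, 25), (80, 35)], 200))   # 94% case: True
```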
Earliest-deadline-first (6: 0, 4, 2, 1, 1, 1)
• Assign priorities according to deadline
• Priorities are not fixed, but adjusted to reflect the deadline of a newly runnable process
1) Periods: P1 = 50, P2 = 80
2) Processing times: t1 = 25, t2 = 35
3) Deadlines: start of next period
4) CPU utilisation: 25/50 + 35/80 = 94%
• A process does not
1) need to be periodic
2) have to require a constant amount of CPU time
per burst
• Process must
1) announce its deadline when it becomes runnable
• Theoretically
1) possible to achieve 100% CPU utilisation
• In practice
1) Not achievable, due to the cost of context switching
Diag.
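In place of the diagram, a sketch of EDF on the task set that RMS could not schedule (P1: period 50, burst 25; P2: period 80, burst 35); same simplified integer-time simulator as assumed for the RMS card.

```python
# EDF: dynamic priorities, the job with the nearest deadline runs first.

def edf_misses_deadline(tasks, horizon):
    """Simulate EDF tick by tick; return True if any deadline is missed.

    tasks: list of (period, burst); each job's deadline is the start of
    the task's next period.
    """
    remaining = [0] * len(tasks)
    deadline = [0] * len(tasks)
    for t in range(horizon):
        for i, (period, burst) in enumerate(tasks):
            if t % period == 0:
                if remaining[i] > 0:
                    return True           # old job still unfinished
                remaining[i] = burst      # release a new job
                deadline[i] = t + period
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            nxt = min(ready, key=lambda i: deadline[i])  # earliest deadline
            remaining[nxt] -= 1
    return False

# The 94% task set is schedulable under EDF (utilisation <= 100%).
print(edf_misses_deadline([(50, 25), (80, 35)], 400))  # -> False
```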
Proportional Share Scheduling (3: 0, 0, 1)
• Allocate T shares among all
applications
• An application assigned N shares
will have N/T of the processor time
• Admission-Control policy
1) Admit a client only if sufficient
shares available to satisfy request
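A minimal sketch of the admission-control policy; T = 100 total shares and the individual requests are illustrative values.

```python
# Proportional-share admission control: T shares split among clients.

class ShareScheduler:
    def __init__(self, total):
        self.total = total        # T: total shares on this processor
        self.allocated = 0

    def admit(self, shares):
        """Admit a client only if enough unallocated shares remain."""
        if self.allocated + shares > self.total:
            return False          # admission control rejects the request
        self.allocated += shares  # client gets shares/total of the CPU
        return True

s = ShareScheduler(100)
print(s.admit(50))   # -> True  (this client gets 50/100 of the CPU)
print(s.admit(30))   # -> True  (30/100)
print(s.admit(40))   # -> False (only 20 shares remain)
```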
Operating System Examples
• Systems using the 1:1 model use only SCS
1) Linux
2) Windows
3) Solaris
Linux Scheduling (2: 2, 1)
• Two scheduling classes
1) Default class, using CFS
2) Real-time class, using the POSIX standard for real-time scheduling
• Scheduler
1) selects highest-priority task
belonging to highest-priority class
CFS (3: 1)
• Assigns a proportion (based on nice value)
of CPU processing time to each task
1) nice value = -20 (high) … 0 (default) …
19 (low)
• Identifies a targeted latency
1) an interval of time during which every
runnable task should run at least once
• Maintains a virtual run time for each task
1) associated with a decay factor based on
the priority of the task
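A rough sketch of the CFS idea: each task accumulates virtual run time, scaled so that lower-priority (higher nice) tasks age faster, and the scheduler always runs the task with the smallest vruntime. The weight function below is a simplification (each nice step changes the weight by roughly 25%), not the kernel's actual nice-to-weight table; task names and runtimes are illustrative.

```python
def weight(nice):
    # assumption: ~25% weight change per nice step, approximating Linux
    return 1024 / (1.25 ** nice)

def pick_next(tasks):
    """tasks: dict name -> vruntime; run the task furthest behind."""
    return min(tasks, key=tasks.get)

def charge(tasks, name, runtime_ms, nice):
    """Add weighted virtual runtime after `name` ran for runtime_ms."""
    tasks[name] += runtime_ms * 1024 / weight(nice)

tasks = {"editor": 0.0, "batch": 0.0}
for _ in range(4):
    nxt = pick_next(tasks)
    charge(tasks, nxt, 10, nice=0 if nxt == "editor" else 5)
# The nice-5 batch task's vruntime grows faster, so after equal wall
# time the nice-0 editor lags in vruntime and is picked next.
print(pick_next(tasks))  # -> editor
```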
Linux Scheduling (2:2,0)
• Two separate ranges of priorities
1) 0-99 for real-time tasks
2) 100-139 for normal tasks
• These two ranges map into a global priority scheme
Solaris Scheduling (2:0,6)
- Priority-based scheduling
- Six classes
1) Time sharing (default)
2) Interactive
3) Real Time
4) System
5) Fair share
6) Fixed priority
Algorithm Evaluation (4)
- Deterministic modeling
- Queueing models
- Simulations
- Implementation
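Deterministic modeling takes a fixed, predetermined workload and computes each algorithm's performance on it exactly. A sketch comparing average waiting time under FCFS and SJF; the burst lengths are illustrative.

```python
def avg_wait_fcfs(bursts):
    """Average waiting time when bursts run in the given arrival order."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed     # this process waited for all earlier bursts
        elapsed += b
    return wait / len(bursts)

def avg_wait_sjf(bursts):
    """Average waiting time when bursts run shortest-first."""
    return avg_wait_fcfs(sorted(bursts))

bursts = [10, 29, 3, 7, 12]
print(avg_wait_fcfs(bursts))  # -> 28.0
print(avg_wait_sjf(bursts))   # -> 13.0
```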