C191-Terms-Chapter-4 Flashcards
Long-term scheduling
Decides when a process should enter the ready state and start competing for the CPU.
A process entering the ready list (RL) from the suspended list is subject to long-term scheduling. The OS decides whether the process should be re-admitted to the current mix of processes to compete for the CPU.
Short-term scheduling
Decides which of the ready processes should run next on the CPU.
When a running process becomes blocked and later reenters the RL, short-term scheduling is again applied to let the process compete for the CPU.
Whenever a running process is not allowed to continue running, the OS moves it back to the RL. The decision to stop and remove the process is subject to short-term scheduling. Likewise, moving the process back to the RL is also a short-term decision.
non-preemptive scheduling algorithm
Allows a running process to continue until the process terminates or blocks on a resource.
When a process blocks and leaves the RL, a new process must be chosen to run.
preemptive scheduling algorithm
May stop the currently running process and choose another process to run. The decision is made whenever:
A new process enters the ready list.
A previously blocked or suspended process re-enters the RL.
The OS periodically interrupts the currently running process to give other processes a chance to run.
priority of a process (or thread)
Is a numerical value that indicates the importance of the process relative to other processes. The scheduler uses this value to decide which process to run next.
The priority can be a constant value assigned when the process is created or can change dynamically based on some combination of the following parameters.
arbitration rule
Decides which process should proceed if two or more processes have the same priority.
Arrival time
A common parameter used to compute short-term priority.
The point in time when the process enters the RL.
Departure time
A common parameter used to compute short-term priority.
The point in time when the process leaves the RL by entering the blocked or suspended state, or by terminating all work.
Attained CPU time
A common parameter used to compute short-term priority.
The amount of CPU time used by the process since arrival.
Real time in system
A common parameter used to compute short-term priority.
The amount of actual time the process has spent in the system since arrival.
The real time keeps increasing regardless of the state the process is in.
Total CPU time
A common parameter used to compute short-term priority.
The amount of CPU time the process will consume between arrival and departure. For short-term scheduling, total CPU time is sometimes called the CPU burst.
External priority
A common parameter used to compute short-term priority.
A numeric priority value assigned to the process explicitly at the time of creation.
Deadline
A common parameter used to compute short-term priority.
A point in time by which the work of the process must be completed.
Period
A common parameter used to compute short-term priority.
A time interval during which a periodically repeating computation must be completed. The end of each period is the implicit deadline for the current computation.
Other considerations
A common parameter used to compute short-term priority.
The resource requirements of a process, such as the amount of memory used, or the current load on the system.
batch process
Performs a long-running and generally repetitive task that does not require any intervention from the user. Ex: Payroll, insurance claims processing, weather prediction, scientific calculations.
The main objective in scheduling batch processes is to maximize the number of processes completed per unit of time.
The order in which processes are executed cannot reduce the total CPU times of the individual processes but can reduce the waiting times.
In modern computing, any long-running computation that does not require interaction with a user is called a batch job or batch process.
FIFO (First-In-First-Out) algorithm
Also known as FCFS (First-Come-First-Served), schedules processes strictly according to the process arrival time.
The earlier the arrival, the higher the priority.
Theoretically, multiple processes could have the same arrival time, in which case the arbitration rule can pick a process at random. In practice, all requests are processed sequentially and thus all processes have different arrival times.
FIFO is non-preemptive.
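A minimal sketch of FIFO (the process names, arrival times, and CPU times below are made up): processes are sorted by arrival time and each runs to completion.

```python
# FIFO/FCFS sketch: earlier arrival = higher priority, non-preemptive.
# Each process is (name, arrival_time, total_cpu_time); the data is illustrative.
processes = [("A", 0, 3), ("B", 1, 5), ("C", 2, 2)]

def fifo_schedule(procs):
    order = sorted(procs, key=lambda p: p[1])   # sort by arrival time
    time, schedule = 0, []
    for name, arrival, cpu in order:
        start = max(time, arrival)              # CPU may idle until the process arrives
        schedule.append((name, start, start + cpu))
        time = start + cpu                      # run to completion (non-preemptive)
    return schedule

print(fifo_schedule(processes))  # [('A', 0, 3), ('B', 3, 8), ('C', 8, 10)]
```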
SJF (Shortest Job First) algorithm
Also known as SJN (Shortest Job Next), schedules processes according to the total CPU time requirements.
The shorter the required CPU time, the higher the priority. If multiple processes have the same CPU time requirement, then the arbitration rule can select a process based on the arrival times.
SJF is non-preemptive. A more appropriate name for SJF would be “Shortest Total CPU Time First”. Good for a batch OS.
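A minimal non-preemptive SJF sketch (process data is made up): at each decision point the ready process with the shortest total CPU time runs to completion, with arrival time as the arbitration rule.

```python
# SJF sketch: shortest total CPU time first, non-preemptive.
# Each process is (name, arrival_time, total_cpu_time); the data is illustrative.
processes = [("A", 0, 6), ("B", 1, 2), ("C", 2, 4)]

def sjf_schedule(procs):
    remaining, time, schedule = list(procs), 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                                   # CPU idles until the next arrival
            time = min(p[1] for p in remaining)
            continue
        p = min(ready, key=lambda q: (q[2], q[1]))      # shortest CPU time, then arrival
        schedule.append((p[0], time, time + p[2]))
        time += p[2]
        remaining.remove(p)
    return schedule

print(sjf_schedule(processes))  # [('A', 0, 6), ('B', 6, 8), ('C', 8, 12)]
```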
SRT (Shortest Remaining Time) algorithm
Schedules processes according to the remaining CPU time needed to complete the work.
The shorter the remaining CPU time, the higher the priority. If multiple processes have the same remaining time requirement, then the arbitration rule can select a process based on the arrival times.
SRT is the preemptive version of SJF.
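A simple SRT sketch, simulated one time unit at a time with made-up process data; note how the long process A is preempted once shorter work arrives.

```python
# SRT sketch: shortest remaining CPU time first, preemptive.
# Each process: name -> (arrival_time, total_cpu_time); the data is illustrative.
processes = {"A": (0, 6), "B": (2, 2), "C": (3, 1)}

def srt_schedule(procs):
    remaining = {name: cpu for name, (_, cpu) in procs.items()}
    time, timeline = 0, []
    while any(remaining.values()):
        ready = [n for n in remaining if procs[n][0] <= time and remaining[n] > 0]
        if not ready:
            time += 1
            continue
        # Shortest remaining time wins; arrival time is the arbitration rule.
        current = min(ready, key=lambda n: (remaining[n], procs[n][0]))
        timeline.append(current)
        remaining[current] -= 1
        time += 1
    return "".join(timeline)

print(srt_schedule(processes))  # AABBCAAAA -> A is preempted by B and C
```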
turnaround time
The turnaround time of a process is the time between arrival and departure, and is the sum of the total CPU time and the waiting time.
average turnaround time (ATT)
For a set of n processes, the ATT is the mean of the n individual turnaround times.
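A small worked example, reusing the arrivals and departures from the FIFO sketch above: turnaround is departure minus arrival, and ATT is the mean of the individual turnaround times.

```python
# Turnaround/ATT sketch with illustrative numbers taken from the FIFO example.
# Each entry is (name, arrival_time, departure_time).
completed = [("A", 0, 3), ("B", 1, 8), ("C", 2, 10)]

turnarounds = [departure - arrival for _, arrival, departure in completed]
att = sum(turnarounds) / len(turnarounds)
print(turnarounds, att)   # [3, 7, 8] 6.0
```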
Starvation
Another important objective in batch scheduling is to guarantee fairness.
Starvation is the indefinite postponement of a process while other processes are allowed to proceed. Both SJF and SRT can lead to starvation.
interactive process
Communicates with the user in the form of a dialog by receiving commands or data from the keyboard or a pointing device and responding by generating output on the user’s terminal or another output device.
The primary goal in scheduling interactive processes is to respond promptly to each input.
Consequently, interactive processes must time-share the CPU using preemptive scheduling to give each process a chance to make progress in a timely manner.
time quantum
The time quantum, Q, is a small amount of time (typically 10 to 100 milliseconds) during which a process is allowed to use the CPU.
round-robin (RR) algorithm
Uses a single queue of processes. The priority is determined solely by a process’s position within the queue.
RR scheduling treats all processes as equals. External priorities can be used to divide processes into groups based on importance.
The process at the head of the queue has the highest priority and is allowed to run for Q time units. When Q ends, the process is moved to the tail of the queue and the next process now at the head of the queue is allowed to run for Q time units.
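A minimal RR sketch with a single queue, made-up processes, and a made-up quantum Q; an unfinished process goes back to the tail of the queue.

```python
from collections import deque

# Round-robin sketch: the head of the queue runs for at most Q time units.
# The (name, remaining_cpu_time) pairs and Q are illustrative.
Q = 2
queue = deque([("A", 5), ("B", 3), ("C", 1)])

timeline = []
while queue:
    name, remaining = queue.popleft()    # highest priority = head of the queue
    run = min(Q, remaining)
    timeline.append((name, run))
    remaining -= run
    if remaining > 0:
        queue.append((name, remaining))  # unfinished: back to the tail

print(timeline)  # [('A', 2), ('B', 2), ('C', 1), ('A', 2), ('B', 1), ('A', 1)]
```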
ML scheduling algorithm
Multilevel (ML) scheduling maintains a separate queue of processes at each priority level. Within each level, processes are scheduled using RR.
ML is prone to starvation because a queue at a given priority level is served only when all higher-priority queues are empty.
If a stream of processes continues arriving at some priority level k, then no process at any level below k will be able to run, leading to starvation of all processes at levels below k.
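A minimal ML selection sketch with made-up queues: the head of the highest non-empty queue runs, which is why a steady stream of arrivals at a high level starves every lower level.

```python
from collections import deque

# ML sketch: one RR queue per priority level (higher number = higher priority).
# The levels and process names are illustrative.
queues = {3: deque(["A", "B"]), 2: deque(["C"]), 1: deque(["D"])}

def ml_pick(queues):
    for level in sorted(queues, reverse=True):   # scan from the highest level down
        if queues[level]:
            return level, queues[level][0]       # head of the highest non-empty queue
    return None

print(ml_pick(queues))  # (3, 'A') -- D at level 1 runs only when levels 2 and 3 are empty
```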
multilevel feedback (MLF) algorithm
Multilevel feedback scheduling is similar to ML but addresses the problems of starvation and fairness by:
Using a different time quantum at each priority level.
Changing the priority of every process dynamically.
Under the multilevel feedback (MLF) algorithm a newly arriving process enters the highest-priority queue, N, and is allowed to run for Q time units.
When Q is exceeded, the process is moved to the next lower priority queue, N-1, and is allowed to run for 2Q time units. The quantum size is doubled with each decreasing priority level. Thus at priority level L, the maximum time allowed is 2^(N-L) Q time units.
MLF automatically favors short-running processes while processes with long running times gradually migrate to lower priority levels.
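A small sketch of the quantum-doubling rule, with made-up values for the number of levels N and the base quantum Q.

```python
# MLF quantum sketch: a process enters at the highest level N with quantum Q;
# the allowed time at level L is 2**(N - L) * Q. N and Q are illustrative.
N, Q = 4, 10   # 4 priority levels, base quantum of 10 ms

for level in range(N, 0, -1):
    print(f"level {level}: quantum = {2 ** (N - level) * Q} ms")
# level 4: quantum = 10 ms
# level 3: quantum = 20 ms
# level 2: quantum = 40 ms
# level 1: quantum = 80 ms
```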
response time
Of a process is the elapsed time from the submission of a request (pressing the Enter key or clicking a mouse button) until the response begins to arrive. Guaranteeing adequate response time is the primary goal in the scheduling of interactive processes.
For a process running alone, the response time depends on the type of request.
When multiple processes time-share the CPU, the response time increases with the number of processes, the length of the time quantum, and the time it takes to perform a context switch.
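A rough back-of-the-envelope sketch of that effect, under the simplifying assumption that a request arrives just as its process moves to the tail of the RR queue, so every other process runs one quantum (each preceded by a context switch) first; the numbers are made up.

```python
# Illustrative worst-case wait before a process runs again under RR.
n = 10       # processes sharing the CPU (assumed)
Q = 0.050    # time quantum in seconds (assumed)
S = 0.001    # context switch time in seconds (assumed)

worst_case_wait = (n - 1) * (Q + S)
print(round(worst_case_wait, 3))   # 0.459 s -> grows with n, Q, and S
```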
real-time process
Is characterized by continual input, which must be processed fast enough to generate nearly instantaneous output. Each arriving input item is subject to a deadline. Ex: Streaming of audio or video, processing and displaying radar data, control of robots or of fly-by-wire aircraft.
period
Is a time interval (typically in milliseconds or even microseconds) within which each input item must be processed. The end of each period is the implicit deadline for processing the current item.
rate monotonic (RM) algorithm
Schedules processes according to the period. The shorter the period, the higher the priority.
RM is preemptive.
RM is not guaranteed to produce a feasible schedule, but empirical evidence shows that RM is likely to produce a feasible schedule if the CPU utilization U is less than approximately 0.7.
The actual value depends on the particular set of processes.
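A minimal RM sketch with made-up periodic tasks: priorities are fixed by period (shortest period first), and the utilization sum gives a quick hint about feasibility relative to the ~0.7 rule of thumb.

```python
# RM sketch. Each task is (name, cpu_time_per_period, period); data is illustrative.
tasks = [("audio", 2, 10), ("video", 10, 40), ("sensor", 5, 50)]

rm_order = sorted(tasks, key=lambda t: t[2])        # shorter period = higher priority
U = sum(cpu / period for _, cpu, period in tasks)   # CPU utilization

print([name for name, _, _ in rm_order], round(U, 2))
# ['audio', 'video', 'sensor'] 0.55 -> below ~0.7, so an RM schedule is likely feasible
```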
earliest deadline first (EDF) algorithm
Schedules processes according to the shortest remaining time until the deadline. The shorter the remaining time, the higher the priority.
EDF is preemptive.
EDF is an optimal algorithm in that a feasible schedule is always guaranteed as long as U ≤ 1.
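A minimal EDF decision sketch, using a made-up ready list of (name, absolute deadline) pairs: the process with the earliest deadline runs next.

```python
# EDF sketch: pick the ready process whose deadline is closest. Data is illustrative.
def edf_pick(ready_list):
    return min(ready_list, key=lambda p: p[1])   # earliest deadline first

ready = [("video_frame", 33), ("audio_block", 20), ("telemetry", 50)]
print(edf_pick(ready))   # ('audio_block', 20)
```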
feasible
A schedule is feasible if the deadlines of all processes can be met.
CPU utilization (U)
Is the sum of the individual fractions of CPU time used by each process.
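A small worked example with made-up numbers, assuming each fraction is a process's CPU time per period divided by the length of its period.

```python
# CPU utilization sketch: U is the sum of per-process CPU fractions (illustrative data).
fractions = [20 / 100, 10 / 100, 15 / 100]   # three hypothetical periodic processes
U = sum(fractions)
print(round(U, 2))   # 0.45 -> U <= 1, so EDF is guaranteed a feasible schedule
```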