Chapter 6: CPU Scheduling Flashcards
Which of the following is true of cooperative scheduling?
A) It requires a timer.
B) A process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
C) It incurs a cost associated with access to shared data.
D) A process switches from the running state to the ready state when an interrupt occurs.
B- A PROCESS KEEPS THE CPU UNTIL IT RELEASES THE CPU EITHER BY TERMINATING OR BY SWITCHING TO THE WAITING STATE
\_\_\_\_ is the number of processes that are completed per time unit. A) CPU utilization B) Response time C) Turnaround time D) Throughput
D- THROUGHPUT
\_\_\_\_ scheduling is approximated by predicting the next CPU burst with an exponential average of the measured lengths of previous CPU bursts. A) Multilevel queue B) RR C) FCFS D) SJF
D- SJF
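The exponential average mentioned above can be sketched in a few lines. This is an illustrative sketch (the function name, the initial guess tau0, and alpha = 0.5 are assumptions for the example, not fixed by the question); the recurrence is tau_next = alpha * t_n + (1 - alpha) * tau_n.

```python
# Sketch of SJF burst prediction by exponential averaging:
#   tau_next = alpha * t_n + (1 - alpha) * tau_n
# alpha = 0.5 weights the last measured burst and past history equally.
def predict_next_burst(measured_bursts, tau0=10.0, alpha=0.5):
    """Return the exponential-average prediction of the next CPU burst."""
    tau = tau0  # initial guess, used before any bursts have been measured
    for t in measured_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# Predictions step through 8.0, 6.0, 6.0, then 5.0 for the next burst:
print(predict_next_burst([6, 4, 6, 4]))  # 5.0
```

Recent bursts dominate the estimate because each older measurement is discounted by another factor of (1 - alpha).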
The \_\_\_\_ scheduling algorithm is designed especially for time-sharing systems. A) SJF B) FCFS C) RR D) Multilevel queue
C- RR
Which of the following scheduling algorithms must be nonpreemptive? A) SJF B) RR C) FCFS D) priority algorithms
C- FCFS
Which of the following is true of multilevel queue scheduling?
A) Processes can move between queues.
B) Each queue has its own scheduling algorithm.
C) A queue cannot have absolute priority over lower-priority queues.
D) It is the most general CPU-scheduling algorithm.
B- EACH QUEUE HAS ITS OWN SCHEDULING ALGORITHM
The idea behind \_\_\_\_ is to create multiple logical processors on the same physical processor, presenting a view of several logical processors to the operating system. A) SMT B) SMP C) PCB D) PCP
A- SMT
The default scheduling class for a process in Solaris is \_\_\_\_. A) time sharing B) system C) interactive D) real time
A- TIME SHARING
In Linux, tasks that are not real-time have dynamic priorities that are based on their \_\_\_\_ values plus or minus the value 5. A) share B) active C) nice D) runqueue
C- NICE
In Little’s formula n = λ × W, “λ” represents the ____.
A) average waiting time in the queue
B) average arrival rate for new processes in the queue
C) average queue length
D) average CPU utilization
B- AVERAGE ARRIVAL RATE FOR NEW PROCESSES IN THE QUEUE
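Little’s formula relates the three queue quantities directly: n = λ × W, where n is the average queue length, λ the average arrival rate, and W the average waiting time. A minimal worked example (the function name and the sample numbers are illustrative assumptions):

```python
# Little's formula: n = lam * w
#   lam: average arrival rate (processes per second)
#   w:   average waiting time in the queue (seconds)
#   n:   average number of processes in the queue
def little_queue_length(lam, w):
    return lam * w

# If 7 processes arrive per second and each waits 2 seconds on average,
# the queue holds 14 processes on average:
print(little_queue_length(7, 2))  # 14
```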
Explain the concept of a CPU – I/O burst cycle.
The lifetime of a process can be viewed as a sequence of bursts in two alternating states. Every process consists of CPU execution and I/O operations, so a process can be modeled as switching back and forth between bursts of CPU execution and bursts of I/O wait.
What role does the dispatcher play in CPU scheduling?
The dispatcher gives control of the CPU to the process selected by the short-term scheduler. To perform this task, a context switch, a switch to user mode, and a jump to the proper location in the user program are all required. Because the dispatcher is invoked during every process switch, it should be as fast as possible. The time lost to the dispatcher is termed dispatch latency.
Explain the difference between a response time and a turnaround time. These times are both used to measure the effectiveness of scheduling schemes.
Turnaround time is the sum of the periods a process spends waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. Turnaround time essentially measures the amount of time it takes to execute a process. Response time, on the other hand, is a measure of the time that elapses between a request and the first response produced.
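The difference is easy to see with a worked example. A minimal sketch, assuming three processes that all arrive at time 0 and run FCFS in order (the function name and burst values are illustrative):

```python
# FCFS schedule; all processes arrive at time 0, listed in arrival order.
# Response time  = time until the process first gets the CPU.
# Turnaround time = time until the process completes (arrival time is 0).
def fcfs_metrics(bursts):
    """Return (response_times, turnaround_times), one entry per process."""
    clock = 0
    response, turnaround = [], []
    for b in bursts:
        response.append(clock)    # first gets the CPU here
        clock += b
        turnaround.append(clock)  # finishes here
    return response, turnaround

r, t = fcfs_metrics([24, 3, 3])
print(r)  # [0, 24, 27] -- response times
print(t)  # [24, 27, 30] -- turnaround times
```

Note how the last process has a good burst-relative turnaround but a poor response time: it waits 27 time units before producing anything at all.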
What effect does the size of the time quantum have on the performance of an RR algorithm?
At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS policy. If the time quantum is extremely small, the RR approach is called processor sharing and creates the appearance that each of n processes has its own processor running at 1/n the speed of the real processor.
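The large-quantum extreme can be checked with a small simulation. This is a sketch that ignores context-switch overhead and assumes all processes arrive at time 0 (the function name and burst values are illustrative):

```python
from collections import deque

# Round-robin simulation, no context-switch cost, all arrivals at t = 0.
def rr_completion_times(bursts, quantum):
    """Return the completion time of each process under RR."""
    remaining = list(bursts)
    done = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)   # not finished: back to the tail of the queue
        else:
            done[i] = clock
    return done

bursts = [24, 3, 3]
# A quantum larger than every burst degenerates to FCFS:
print(rr_completion_times(bursts, 100))  # [24, 27, 30]
# A quantum of 4 lets the short processes finish early:
print(rr_completion_times(bursts, 4))    # [30, 7, 10]
```

Shrinking the quantum improves response for the short processes, but in a real system each extra preemption also pays the context-switch cost the simulation leaves out.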
Explain the process of starvation and how aging can be used to prevent it.
Starvation occurs when a process is ready to run but waits indefinitely for the CPU. This can happen, for example, when higher-priority processes prevent a low-priority process from ever getting the CPU. Aging prevents this by gradually increasing the priority of a process the longer it waits, so that even a low-priority process eventually reaches a high enough priority to execute.
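One way to picture aging is as a periodic priority bump for everything still waiting. A minimal sketch, assuming higher numbers mean higher priority and a bump of 1 per tick (both are illustrative conventions, not fixed by the text):

```python
# Aging sketch: each scheduling tick, every waiting process gets a
# small priority boost so that no process starves indefinitely.
# Convention assumed here: higher number = higher priority.
def age_priorities(waiting, increment=1):
    """Return new priorities after one aging step for all waiting processes."""
    return {pid: prio + increment for pid, prio in waiting.items()}

prios = {"p1": 0, "p2": 5}
for _ in range(10):            # after 10 ticks spent waiting...
    prios = age_priorities(prios)
print(prios)  # {'p1': 10, 'p2': 15}
```

Even the lowest-priority process ("p1") climbs steadily, so it will eventually outrank any newly arriving process that starts at a fixed base priority.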