Chapter 4: CPU Scheduling Flashcards

1
Q

Which of the following is true of cooperative scheduling?
A) It requires a timer.
B) A process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
C) It incurs a cost associated with access to shared data.
D) A process switches from the running state to the ready state when an interrupt occurs.

A

B- A PROCESS KEEPS THE CPU UNTIL IT RELEASES THE CPU EITHER BY TERMINATING OR BY SWITCHING TO THE WAITING STATE

2
Q
____ is the number of processes that are completed per time unit.
A) CPU utilization
B) Response time
C) Turnaround time
D) Throughput
A

D- THROUGHPUT

3
Q
____ scheduling is approximated by predicting the next CPU burst with an exponential average of the measured lengths of previous CPU bursts.
A) Multilevel queue
B) RR
C) FCFS
D) SJF
A

D- SJF

SJF (Shortest Job First) scheduling, also known as Shortest Job Next, selects the process with the smallest predicted next CPU burst. In its non-preemptive form, the selected process keeps the CPU until its burst completes; the preemptive variant, shortest-remaining-time-first, preempts the running process if a newly arrived process has a shorter remaining burst.
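The exponential average itself fits in a few lines: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the most recent measured burst and tau_n the previous prediction. The burst lengths, alpha = 0.5, and the initial guess of 10 below are illustrative values, not anything from the card.

```python
# Predicting the next CPU burst with an exponential average (illustrative).
def predict_next_burst(prev_prediction, measured_burst, alpha=0.5):
    # tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
    return alpha * measured_burst + (1 - alpha) * prev_prediction

tau = 10.0                      # arbitrary initial guess
for t in [6, 4, 6, 4]:          # measured burst lengths (made up)
    tau = predict_next_burst(tau, t)
print(tau)                      # the prediction converges toward recent bursts
```

With alpha = 0.5 the prediction is just the average of the last measurement and the previous estimate, so recent history and older history get equal weight.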

4
Q
The ____ scheduling algorithm is designed especially for time-sharing systems.
A) SJF
B) FCFS
C) RR
D) Multilevel queue
A

C- RR

5
Q
Which of the following scheduling algorithms must be nonpreemptive?
A) SJF 
B) RR 
C) FCFS 
D) priority algorithms
A

C- FCFS

FCFS (First-Come, First-Served)

In FCFS, the processes are executed in the order they arrive, forming a queue. When a process completes its execution or is blocked for some reason, the next process in the queue is selected and given control of the CPU.

Non-preemptive: Once a process is assigned the CPU, it continues executing until it completes or is blocked voluntarily (e.g., due to an I/O operation).
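As a sketch of FCFS behavior, each process's waiting time is simply the sum of the bursts queued ahead of it. The burst values below are the classic convoy-effect numbers, used here only for illustration.

```python
# FCFS waiting times: each process waits for every burst queued ahead of it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# One long job arriving first delays everyone behind it (the convoy effect).
print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27]
```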

6
Q

Which of the following is true of multilevel queue scheduling?
A) Processes can move between queues.
B) Each queue has its own scheduling algorithm.
C) A queue cannot have absolute priority over lower-priority queues.
D) It is the most general CPU-scheduling algorithm.

A

B- EACH QUEUE HAS ITS OWN SCHEDULING ALGORITHM

7
Q
The idea behind ____ is to create multiple logical processors on the same physical processor, presenting a view of several logical processors to the operating system.
A) SMT 
B) SMP 
C) PCB 
D) PCP
A

A- SMT

SMT (Simultaneous Multithreading) allows multiple hardware threads, presented to the operating system as logical processors, to share the resources of a single physical core. By issuing instructions from several threads concurrently, it improves utilization of the core's execution units; Intel's implementation is marketed as hyper-threading.

8
Q
The default scheduling class for a process in Solaris is \_\_\_\_.
A) time sharing 
B) system 
C) interactive 
D) real time
A

A- TIME SHARING

9
Q
In Linux, tasks that are not real-time have dynamic priorities that are based on their \_\_\_\_ values plus or minus the value 5.
A) share 
B) active 
C) nice 
D) runqueue
A

C- NICE

10
Q

In Little’s formula, “λ” represents the ____.

A) average waiting time in the queue
B) average arrival rate for new processes in the queue
C) average queue length
D) average CPU utilization

A

B- AVERAGE ARRIVAL RATE FOR NEW PROCESSES IN THE QUEUE
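Little’s formula states n = λ × W: the average queue length equals the average arrival rate times the average waiting time. A minimal check, with made-up numbers:

```python
# Little's formula: n = lam * W (steady-state queue).
def little_queue_length(arrival_rate, avg_wait):
    # arrival_rate (lambda): average arrival rate for new processes
    # avg_wait (W): average time a process spends in the queue
    return arrival_rate * avg_wait

# If 7 processes arrive per second and each waits 2 seconds on average,
# about 14 processes are in the queue at any moment.
print(little_queue_length(7, 2))   # 14
```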

11
Q

Explain the concept of a CPU – I/O burst cycle.

A

The lifecycle of a process can be considered to consist of a number of bursts belonging to two different states. All processes consist of CPU cycles and I/O operations. Therefore, a process can be modeled as switching between bursts of CPU execution and I/O wait.

12
Q

What role does the dispatcher play in CPU scheduling?

A

The dispatcher gives control of the CPU to the process selected by the short-term scheduler. Performing this task requires a context switch, a switch to user mode, and a jump to the proper location in the user program. The dispatcher should be as fast as possible; the time it takes the dispatcher to stop one process and start another is termed dispatch latency.

13
Q

Explain the difference between a response time and a turnaround time. These times are both used to measure the effectiveness of scheduling schemes.

A

Turnaround time is the sum of the periods a process spends waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O; it essentially measures how long it takes to execute a process. Response time, on the other hand, measures the time that elapses between the submission of a request and the production of the first response.
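A tiny worked timeline makes the distinction concrete; the timestamps below are invented for illustration.

```python
# One process: submitted at t=0, produces first output at t=2, finishes at t=9.
submit, first_output, finish = 0, 2, 9

response_time = first_output - submit    # time until the first response: 2
turnaround_time = finish - submit        # time until the process completes: 9
print(response_time, turnaround_time)
```

An interactive system is tuned for the first number, a batch system for the second.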

14
Q

What effect does the size of the time quantum have on the performance of an RR algorithm? (At extremes)

A

At one extreme, if the time quantum is extremely large, the RR policy degenerates into the FCFS policy. If the time quantum is extremely small, the RR approach is called processor sharing and creates the appearance that each of n processes has its own processor running at 1/n the speed of the real processor.
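The large-quantum extreme can be checked with a small RR simulator (a sketch: all processes are assumed to arrive at time 0 and context-switch cost is ignored). With a quantum larger than every burst, completion times match FCFS exactly.

```python
from collections import deque

def rr_completion(bursts, quantum):
    """Completion time of each process under round-robin (all arrive at t=0)."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    time, done = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        time += run
        remaining[i] -= run
        if remaining[i]:
            queue.append(i)     # burst unfinished: back to the end of the queue
        else:
            done[i] = time
    return done

# Quantum larger than any burst: identical to FCFS.
print(rr_completion([5, 3, 8], quantum=100))   # [5, 8, 16]
```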

15
Q

Explain the process of starvation and how aging can be used to prevent it.

A

Starvation occurs when a process is ready to run but waits indefinitely for the CPU, for example when higher-priority processes continually prevent a low-priority process from ever being scheduled. Aging prevents this by gradually increasing the priority of a process the longer it waits, so that even a low-priority process eventually reaches a high enough priority to execute.
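A minimal aging sketch: on every scheduling tick, boost the priority of each process that did not get the CPU. The boost value, starting priorities, and dictionary representation are all illustrative assumptions.

```python
# Aging sketch: higher number = higher priority; boost of 1 per tick is arbitrary.
def tick(ready, running_pid, boost=1):
    for proc in ready:
        if proc["pid"] != running_pid:
            proc["priority"] += boost   # waiting processes age upward

ready = [{"pid": 1, "priority": 0}, {"pid": 2, "priority": 10}]
for _ in range(11):              # pid 2 keeps winning, but pid 1 ages each tick
    tick(ready, running_pid=2)
print(ready[0]["priority"])      # pid 1 now outranks pid 2 and will finally run
```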

16
Q

Explain the fundamental difference between asymmetric and symmetric multiprocessing.

A

In asymmetric multiprocessing (AMP), all scheduling decisions, I/O, and other system activities are handled by a single processor, whereas in symmetric multiprocessing (SMP), each processor is self-scheduling.

Asymmetric Multiprocessing (AMP):
In AMP systems, one processor, often called the master processor, is responsible for managing the system and distributing tasks to other processors, known as slave processors. The master processor handles operating system functions, scheduling, and task allocation, while the slave processors primarily focus on executing the assigned tasks.

The master processor typically handles system-level operations, I/O operations, and task management, while the slave processors concentrate on computation-intensive tasks or specific workloads. This arrangement can be useful when different processors have varying capabilities or when specific tasks require dedicated resources.

Symmetric Multiprocessing (SMP):
Each processor has an identical architecture and is capable of executing any task. In SMP, the operating system and workload are designed to be parallelized and distributed evenly across all processors.

SMP systems provide a shared memory space, allowing multiple processors to access the same memory, devices, and resources. Tasks are distributed dynamically among the processors, and each processor can execute any available task simultaneously. This parallel execution capability allows for efficient utilization of resources and improved overall system performance.

17
Q

Describe two general approaches to load balancing.

A

With push migration, a specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving processes from overloaded to idle or less-busy processors. Pull migration occurs when an idle processor pulls a waiting task from a busy processor. Push and pull migration are often implemented in parallel on load-balancing systems.

18
Q

What is deterministic modeling and when is it useful in evaluating an algorithm?

A

Deterministic modeling takes a particular predetermined workload and defines the performance of each algorithm for that workload. Deterministic modeling is simple, fast, and gives exact numbers for comparison of algorithms. However, it requires exact numbers for input, and its answers apply only in those cases. The main uses of deterministic modeling are in describing scheduling algorithms and providing examples to indicate trends.
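Deterministic modeling in miniature: take one predetermined workload and compute each algorithm's exact average waiting time. The burst values are illustrative, and the SJF helper relies on the fact that non-preemptive SJF over a batch arriving at t=0 is just FCFS over the sorted bursts.

```python
# Exact average waiting time for a fixed workload, all processes at t=0.
def avg_wait_fcfs(bursts):
    total, elapsed = 0, 0
    for burst in bursts:
        total += elapsed        # this process waited for all earlier bursts
        elapsed += burst
    return total / len(bursts)

def avg_wait_sjf(bursts):
    # non-preemptive SJF on a t=0 batch = FCFS over the sorted bursts
    return avg_wait_fcfs(sorted(bursts))

workload = [10, 1, 2]
print(avg_wait_fcfs(workload))  # 7.0
print(avg_wait_sjf(workload))   # lower: the short jobs no longer wait for the long one
```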

19
Q

Describe how trace tapes are used in distribution-driven simulations.

A

In a distribution-driven simulation, the frequency distribution indicates only how many instances of each event occur; it does not indicate anything about the order of their occurrence. Trace tapes can correct this problem. A trace tape is created to monitor the real system and record the sequence of actual events. This sequence then drives the simulation. Trace tapes provide an excellent way to compare two algorithms on exactly the same set of real inputs.

20
Q

In preemptive scheduling, the sections of code affected by interrupts must be guarded from simultaneous use (T/F)

A

TRUE

In preemptive scheduling, the scheduler can interrupt the execution of a running process or thread to allocate CPU time to another process or thread with higher priority. This interruption can occur at any point during the execution of a process or thread, including within critical sections of code.

To ensure data integrity and prevent issues such as race conditions or data corruption, sections of code that are affected by interrupts must be guarded from simultaneous use. This is typically done by using synchronization mechanisms such as locks, semaphores, or mutexes.
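In user-space terms the same idea looks like this (a Python sketch using a mutex; the counter value and thread count are arbitrary): without the lock, the read-modify-write on the shared counter could be interleaved by a preemption and updates lost.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:              # guard the shared read-modify-write
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                  # always 40000 with the lock held
```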

21
Q

In RR scheduling, the time quantum should be small with respect to the context-switch time (T/F)

A

FALSE

The time quantum should be large with respect to the context-switch time; otherwise a large fraction of CPU time is wasted on context-switch overhead.

22
Q

The most complex scheduling algorithm is the multilevel feedback-queue algorithm (T/F)

A

TRUE
It involves using multiple queues with different priorities and dynamically adjusting a process’s priority based on its behavior.

23
Q

Load balancing is typically only necessary on systems with a common run queue. (T/F)

A

FALSE

Load balancing is typically necessary only on systems where each processor has its own private queue of runnable processes. With a common run queue, an idle processor simply takes the next eligible process, so no imbalance can build up.

24
Q

Systems using a one-to-one model (such as Windows XP, Solaris 9, and Linux) schedule threads using process-contention scope (PCS). (T/F)

A

FALSE

Systems using a one-to-one model, such as Windows XP, Solaris 9, and Linux, schedule threads using system-contention scope (SCS), not process-contention scope (PCS).

In a one-to-one threading model, each user thread maps to its own kernel-level thread, so the kernel schedules every thread directly. With SCS, each thread competes for the CPU against all other threads in the system.

PCS applies to many-to-one and many-to-many models, where the thread library schedules user-level threads onto available kernel threads (LWPs). With PCS, competition for the CPU takes place only among threads belonging to the same process.