Operating Systems: CPU Scheduling Flashcards

1
Q

What alternating cycle do almost all programs exhibit?

A

Almost all programs alternate between CPU number crunching (computation) and waiting for some kind of I/O.

2
Q

Why is even a simple fetch from memory significant relative to CPU speeds?

A

Even a simple fetch from memory takes a long time compared to the rapid speed of CPU operations.

3
Q

What happens to CPU cycles when a process waits for I/O?

A

CPU cycles that elapse while a process waits for I/O are lost forever; they represent wasted potential processing time.

4
Q

How does CPU scheduling help when a process is waiting for I/O?

A

CPU scheduling allows another process to use the CPU while one is waiting for I/O, thereby making use of otherwise lost CPU cycles.

5
Q

What is the main challenge in scheduling for an operating system?

A

The challenge is to make the overall system as “efficient” and “fair” as possible.

6
Q

What factors must be considered given that efficiency and fairness are subjective?

A

The system must adapt to varying and dynamic conditions and may be influenced by shifting priority policies.

7
Q

How is CPU utilisation maximized in a multiprogramming environment?

A

By overlapping CPU bursts with I/O bursts, ensuring the CPU is busy processing while other processes wait for I/O.

8
Q

What is the CPU–I/O burst cycle in process execution?

A

Process execution consists of alternating cycles of a CPU burst (active processing) followed by an I/O burst (waiting for I/O).

9
Q

What aspect of the CPU burst is of main concern in scheduling?

A

The distribution of CPU bursts is a main concern because it affects how efficiently processes are scheduled.

10
Q

How are data transfers from disk to memory handled, and what does this imply for the CPU?

A

Data transfers between disk and memory are carried out over the system bus (typically by DMA), so the processor remains free to execute other work during disk I/O.

11
Q

What is the role of the short-term scheduler?

A

The short-term scheduler selects which process in the ready queue should execute next and allocates the CPU. It is invoked frequently (every few milliseconds) and must be very fast.

12
Q

What is the function of the long-term scheduler?

A

The long-term scheduler decides which processes should be brought into the ready queue and controls the degree of multiprogramming. It is invoked less frequently (every few seconds or minutes).

13
Q

How are processes classified with respect to the long-term scheduler?

A

I/O-bound processes: Spend more time doing I/O than computations, resulting in many short CPU bursts.

CPU-bound processes: Spend more time doing computations, with few very long CPU bursts.

14
Q

What is one of the goals of the long-term scheduler in terms of process mix?

A

The long-term scheduler strives to achieve a good mix of I/O-bound and CPU-bound processes.

15
Q

When is a medium-term scheduler used in an operating system?

A

It is used when the degree of multiprogramming needs to decrease; it swaps processes out of memory to disk and back in to manage active process counts.

16
Q

What does the medium-term scheduler do during swapping?

A

It removes a process from memory (swap out) and later brings it back (swap in) so that execution can continue.

17
Q

What is the function of the dispatcher in CPU scheduling?

A

The dispatcher transfers control of the CPU to the process selected by the scheduler, performing context switching, switching to user mode, and jumping to the proper location in the new program.

18
Q

Why must the dispatcher be as fast as possible?

A

Because it is executed on every context switch; its time consumption (dispatch latency) directly affects overall system performance.

19
Q

When does the short-term scheduler make CPU scheduling decisions?

A
CPU scheduling decisions may take place when a process:
  1. Switches from running to a waiting state.
  2. Switches from running to the ready state.
  3. Switches from waiting to the ready state.
  4. Terminates.
20
Q

Which scheduling actions are considered non-pre-emptive?

A

Scheduling triggered by a process switching from running to waiting (case 1) and upon termination (case 4) are non-pre-emptive; a new process must be selected.

21
Q

What does pre-emptive scheduling allow regarding CPU allocation?

A

It allows a process to be interrupted so that the CPU can be reallocated to another process, offering the choice to continue with the current process or switch.

22
Q

What characterizes a non-pre-emptive (cooperative) scheduling system?

A

A process continues to run until it voluntarily blocks or terminates; scheduling occurs only when it switches to waiting or terminates.

23
Q

What enables pre-emptive scheduling, and what is its main risk?

A

Pre-emptive scheduling is enabled by hardware that supports a timer interrupt. The main risk is that a process might be interrupted while updating shared data structures, potentially causing inconsistency.

24
Q

What are some key criteria that CPU scheduling aims to optimize?

A

Criteria include maximizing CPU utilization, ensuring fairness among processes, minimizing waiting time, and maintaining efficiency under dynamic conditions.

25
What are the CPU Scheduling Criteria?
They include: * CPU Utilisation – Keep the CPU as busy as possible. * Throughput – The number of processes that complete execution per time unit. * Turnaround Time – The total time taken to execute a particular process. * Waiting Time – The total time a process spends waiting in the ready queue. * Response Time – The time from when a request is submitted until the first response is produced (especially important in time-sharing environments).
26
What are the Scheduling Algorithm Optimisation Criteria?
They aim to: * Maximise CPU utilisation. * Maximise throughput. * Minimise turnaround time. * Minimise waiting time. * Minimise response time.
27
What are the User-Oriented Scheduling Criteria?
They focus on performance from the user’s perspective, including: * Turnaround Time. * Response Time. * Meeting Deadlines. * Predictability of performance.
28
What are the System-Oriented Scheduling Criteria?
They emphasize overall system performance, including: * Throughput. * CPU Utilisation. * Fairness in resource allocation. * Enforcing Priorities. * Balancing Resources among processes.
29
What is the main challenge when designing CPU Scheduling Algorithms?
No single algorithm can optimise all criteria simultaneously; each algorithm has its own strengths and weaknesses. Some algorithms are non-preemptive (making decisions when no process is running), while others are preemptive (capable of interrupting a running process).
30
List the common CPU Scheduling Algorithms.
* FCFS (First Come First Served) * SJF (Shortest Job First) * SRT (Shortest Remaining Time) * Priority Scheduling * HRRN (Highest Response Ratio Next) * RR (Round Robin) * Multi-Level Queue Scheduling
31
How does the First Come First Served (FCFS) scheduling algorithm work?
FCFS schedules processes in the order in which they arrive in the ready queue; once a process starts, it runs to completion without interruption (non-pre-emptive).
32
In FCFS scheduling, what are the typical steps for process execution?
1. As each process becomes ready, it is placed in the ready queue. 2. When the CPU is free, the process at the head of the queue is selected. 3. The selected process runs until it finishes (non-pre-emptive execution).
33
FCFS Example: Given processes A–E with arrival times 0, 2, 4, 6, 8 and service times 3, 6, 4, 5, 2 respectively, what are the finish times and turnaround times?
* Finish Times: A = 3, B = 9, C = 13, D = 18, E = 20. * Turnaround Times: A = 3, B = 7, C = 9, D = 12, E = 12. * Additionally, the turnaround-to-service time ratios (Tr/Ts) are approximately: 1.00, 1.17, 2.25, 2.40, 6.00 with an average ratio of about 2.56.
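The FCFS example above can be checked with a short simulation (a minimal sketch; the function name and tuple layout are illustrative, not from the source):

```python
def fcfs(procs):
    """Run processes to completion in arrival order.

    procs: list of (name, arrival, service) tuples.
    Returns {name: (finish_time, turnaround_time)}.
    """
    t, result = 0, {}
    for name, arrival, service in sorted(procs, key=lambda p: p[1]):
        t = max(t, arrival) + service       # wait for the CPU, then run to completion
        result[name] = (t, t - arrival)
    return result

jobs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]
print(fcfs(jobs))  # short job E waits behind everything that arrived earlier
```

Note how E's ratio (Tr/Ts = 6.00) reflects FCFS's bias against short jobs arriving late.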
34
What is Shortest Job First (SJF) scheduling?
SJF is a non-pre-emptive scheduling algorithm that selects the process with the shortest expected processing time (next CPU burst) to run next, thereby reducing the bias of FCFS against shorter jobs. Since actual burst lengths are not usually known in advance, estimates (often via exponential averaging) are used.
35
How is the next CPU burst predicted in SJF scheduling?
By using exponential averaging with the formula: τₙ₊₁ = α·Tₙ + (1 – α)·τₙ, where τₙ₊₁ is the predicted next burst time, Tₙ is the actual length of the last burst, and 0 < α < 1 is a weighting factor.
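The exponential-averaging formula can be sketched directly (the values of α and the initial guess τ₀ here are illustrative, not prescribed):

```python
def predict_burst(bursts, alpha=0.5, tau0=10.0):
    """Fold observed CPU burst lengths into a prediction using
    tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_burst([6, 4, 6, 4]))  # recent bursts dominate as alpha approaches 1
```

With α = 0 the history never updates; with α = 1 only the most recent burst counts.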
36
SJF Example: With processes A–E having arrival times 0, 2, 4, 6, 8 and service times 3, 6, 4, 5, 2 respectively, what are the finish and turnaround times?
* Finish Times: A = 3, B = 9, C = 15, D = 20, E = 11. * Turnaround Times: A = 3, B = 7, C = 11, D = 14, E = 3. * The turnaround-to-service time ratios (Tr/Ts) are approximately: 1.00, 1.17, 2.75, 2.80, 1.50 with an average of about 1.84.
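The SJF example above can be reproduced with a small simulation (a sketch; names and data layout are illustrative):

```python
def sjf(procs):
    """Non-pre-emptive SJF: whenever the CPU frees up, pick the ready
    process with the shortest service time and run it to completion.

    procs: list of (name, arrival, service) tuples.
    Returns {name: (finish_time, turnaround_time)}.
    """
    pending = sorted(procs, key=lambda p: p[1])
    t, result = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                       # CPU idle until the next arrival
            t = pending[0][1]
            continue
        p = min(ready, key=lambda x: x[2])  # shortest service time first
        pending.remove(p)
        t += p[2]
        result[p[0]] = (t, t - p[1])
    return result

jobs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]
print(sjf(jobs))  # E jumps ahead of C and D once A and B are done
```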
37
What is Shortest Remaining Time (SRT) scheduling?
SRT is the preemptive version of SJF. When a new process arrives in the ready queue, its expected running time is compared to the remaining time of the currently running process. If the new process requires less time, the current process is interrupted and the new process is scheduled.
38
SRT Example: Given processes A–E with arrival times 0, 2, 4, 6, 8 and service times 3, 6, 4, 5, 2 respectively, what are the finish and turnaround times?
* Finish Times: A = 3, B = 15, C = 8, D = 20, E = 10. * Turnaround Times: A = 3, B = 13, C = 4, D = 14, E = 2. * The turnaround-to-service time ratios (Tr/Ts) are approximately: 1.00, 2.17, 1.00, 2.80, 1.00 with an average ratio of about 1.59.
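The SRT schedule above can be traced one time unit at a time (a sketch; the unit-step approach and tie-break rule are simplifying assumptions):

```python
def srt(procs):
    """Pre-emptive shortest-remaining-time, simulated one time unit at
    a time; ties go to the earlier-listed process.

    procs: list of (name, arrival, service) tuples in arrival order.
    Returns {name: (finish_time, turnaround_time)}.
    """
    arrival = {n: a for n, a, s in procs}
    remaining = {n: s for n, a, s in procs}
    t, result = 0, {}
    while len(result) < len(procs):
        ready = [n for n in remaining
                 if arrival[n] <= t and n not in result]
        if not ready:
            t += 1
            continue
        n = min(ready, key=lambda x: remaining[x])  # least time left wins
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            result[n] = (t, t - arrival[n])
    return result

jobs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]
print(srt(jobs))  # C and E pre-empt B, which finishes late
```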
39
What is Priority Scheduling?
Priority Scheduling assigns each process a priority, and the process with the highest priority is selected for execution next. While SJF and SRT are special cases where priority is determined by processing time, general Priority Scheduling can use other criteria. A major challenge is that low-priority processes can experience starvation.
40
How is starvation addressed in Priority Scheduling?
By using the technique of Aging, where the priority of a process is gradually increased the longer it waits, ensuring that even low-priority processes eventually get executed.
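One way to sketch an aging rule (the step size, interval, and the "larger number = higher priority" convention are all assumptions for illustration, not a standard):

```python
def aged_priority(base, waited, step=1, interval=10):
    """Effective priority after `waited` time units in the ready queue:
    the base priority improves by `step` for every `interval` waited
    (assuming larger numbers mean higher priority)."""
    return base + step * (waited // interval)

# a low-priority process eventually overtakes a freshly arrived
# higher-priority one, so it cannot starve indefinitely
print(aged_priority(1, 35), aged_priority(3, 0))
```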
41
What is Highest Response Ratio Next (HRRN) scheduling?
HRRN is a scheduling algorithm that selects the process with the highest response ratio. The response ratio is calculated as: Response Ratio = (Waiting Time + Service Time) / Service Time. This method gives higher priority to processes that have waited longer, thereby reducing the chance of starvation.
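The selection step can be sketched in a few lines (the tuple layout is illustrative):

```python
def hrrn_pick(now, ready):
    """Select the ready process with the highest response ratio,
    (waiting + service) / service.

    ready: list of (name, arrival, service) tuples."""
    return max(ready, key=lambda p: ((now - p[1]) + p[2]) / p[2])

# at t = 13, E has waited 5 units for a 2-unit job: ratio 3.5 beats D's 2.4
print(hrrn_pick(13, [("D", 6, 5), ("E", 8, 2)]))
```

Because the ratio starts at 1.0 and grows with waiting time, a long-waiting job eventually outranks any newly arrived short job.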
42
What is the formula for calculating the response ratio in HRRN scheduling?
Response Ratio = (Waiting Time + Service Time) / Service Time. The ratio starts at 1.0 and grows the longer a process waits.
43
HRRN Example: For processes A–E with arrival times 0, 2, 4, 6, 8 and service times 3, 6, 4, 5, 2 respectively, what are the finish and turnaround times?
* Finish Times: A = 3, B = 9, C = 13, D = 20, E = 15. * Turnaround Times: A = 3, B = 7, C = 9, D = 14, E = 7. * The turnaround-to-service time ratios (Tr/Ts) are: A = 1.00, B = 1.17, C = 2.25, D = 2.80, E = 3.50, with an average ratio of approximately 2.14.
44
What is Round Robin (RR) scheduling?
RR is a turn-taking scheduling algorithm that assigns a fixed time quantum (slice) to each process. When a process’s time quantum expires, an interrupt is generated, the process is moved to the end of the ready queue, and the next process is scheduled. With a large quantum, RR behaves like FCFS; with a small quantum, context switch overhead becomes significant.
45
Round Robin Example (q = 4): Given processes A–E with arrival times 0, 2, 4, 6, 8 and service times 3, 6, 4, 5, 2 respectively, what are the finish and turnaround times?
* Finish Times: A = 3, B = 17, C = 11, D = 20, E = 19. * Turnaround Times: A = 3, B = 15, C = 7, D = 14, E = 11. * The turnaround-to-service time ratios (Tr/Ts) are approximately: A = 1.00, B = 2.50, C = 1.75, D = 2.80, E = 5.50, with an average of about 2.71.
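The Round Robin example above can be reproduced with a queue-based simulation (a sketch; it assumes newcomers that arrive during a slice join the queue ahead of the pre-empted process, which matches the numbers quoted here):

```python
from collections import deque

def round_robin(procs, q):
    """Round robin with time quantum q.

    procs: list of (name, arrival, service) tuples in arrival order.
    Returns {name: (finish_time, turnaround_time)}.
    """
    remaining = {n: s for n, a, s in procs}
    arrival = {n: a for n, a, s in procs}
    ready, result = deque(), {}
    t, i = 0, 0
    while len(result) < len(procs):
        while i < len(procs) and procs[i][1] <= t:
            ready.append(procs[i][0])       # admit arrivals up to time t
            i += 1
        if not ready:                       # idle until the next arrival
            t = procs[i][1]
            continue
        n = ready.popleft()
        run = min(q, remaining[n])
        t += run
        remaining[n] -= run
        while i < len(procs) and procs[i][1] <= t:
            ready.append(procs[i][0])       # arrivals during the slice
            i += 1
        if remaining[n] == 0:
            result[n] = (t, t - arrival[n])
        else:
            ready.append(n)                 # pre-empted: back of the queue
    return result

jobs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]
print(round_robin(jobs, 4))
```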
46
What is Multi-Level Queue Scheduling?
In Multi-Level Queue Scheduling, the ready queue is partitioned into separate queues (e.g., foreground and background). Each process is permanently assigned to one queue based on characteristics like process type. Each queue may use a different scheduling algorithm (e.g., foreground might use Round Robin while background uses FCFS). Scheduling between queues is done by fixed priority or by allocating time slices to each queue, which can lead to potential starvation of lower-priority queues.
47
How does Multi-Level Queue Scheduling differentiate between process types?
Processes are categorized into distinct queues such as: * Foreground (higher priority) * Background (lower priority) Additionally, system processes, interactive processes, batch processes, and student processes can be assigned to different queues based on their priorities and scheduling needs.
48
What is Multi-Level Feedback Queue Scheduling?
Multi-Level Feedback Queue Scheduling lets processes move between queues based on their CPU usage and behaviour. For example, one configuration might use a quantum of 8 time units in higher-level queues and 16 time units in lower-level queues, with the lowest level using FCFS scheduling.
49
What practical considerations are involved in implementing CPU scheduling algorithms?
In practice, scheduling algorithms are often a combination of basic methods. Complexities arise with multiprocessor/multicore systems and real-time operating systems, which have strict timing constraints. For further details, refer to standard texts like “Operating System Concepts” (Chapter 6).
50
What additional challenges arise in Multiprocessor Scheduling?
Multiprocessor scheduling is more complex due to: * The presence of multiple CPUs. * Homogeneous processors that share a common ready queue. * Asymmetric multiprocessing, where one processor handles system data structures, reducing data sharing issues. * Symmetric multiprocessing (SMP), where each processor is self-scheduling or has its own private ready queue. * Processor affinity issues (soft or hard affinity) and the use of processor sets.
51
How is load balancing achieved in Multiprocessor Scheduling?
Load balancing ensures that all CPUs are utilized efficiently. Techniques include: * Push Migration – Periodically checking processor loads and pushing tasks from an overloaded CPU to a less busy one. * Pull Migration – Idle processors pull waiting tasks from busier CPUs.
52
What defines Real-Time Scheduling in operating systems?
Real-Time Scheduling is used in environments where the correctness of computations depends not only on the logical results but also on the time at which these results are produced. It requires meeting strict deadlines and ensuring predictable behaviour.
53
What are the unique requirements of Real-Time Scheduling?
They include: * Determinism – Predictable behaviour and guaranteed deadlines. * Responsiveness – Immediate handling of critical events with low latency. * User Control – Fine-grained control over priorities, task scheduling, and deadlines. * Reliability – Fault tolerance with no single point of failure, graceful degradation under failure conditions, and mechanisms for fail-soft operation.
54
Summarise the key points of CPU scheduling discussed in this module.
* CPU scheduling is the short-term scheduling mechanism in an operating system. * A variety of algorithms exist (e.g., FCFS, Priority-based, Round Robin, HRRN, Multi-Level Queues) each with unique trade-offs. * Multiprocessor and multicore systems introduce additional complexities such as load balancing and processor affinity. * Real-time systems impose strict timing and predictability requirements on scheduling.
55
What is "CPU Utilisation" in scheduling criteria?
It is a measure of keeping the CPU as busy as possible.
56
What does "Throughput" refer to in CPU scheduling?
It is the number of processes that complete their execution per time unit.
57
Define "Turnaround Time" in CPU scheduling.
Turnaround Time is the total time required to execute a particular process from start to finish.
58
What is "Waiting Time" in the context of CPU scheduling?
It is the total time a process spends waiting in the ready queue.
59
Define "Response Time" in CPU scheduling.
Response Time is the time from when a request is submitted until the first response is produced, not including final output.
60
What is the main goal for CPU scheduling optimization?
To maximise CPU utilisation and throughput while minimising turnaround, waiting, and response times.
61
What is a key system-oriented scheduling criterion?
Throughput, to maximise the number of completed processes.
62
Which system-oriented criterion ensures efficient use of CPU resources?
CPU Utilisation, by keeping the processor busy.
63
What system-oriented factor helps maintain balanced resource distribution?
Fairness, ensuring no process is unduly disadvantaged.
64
How can priorities be enforced in system-oriented scheduling?
By assigning and respecting process priorities during scheduling decisions.
65
What is the purpose of balancing resources in scheduling?
To ensure that all processes get an equitable share of the system's computing power.
66
What challenge does the range of CPU scheduling algorithms present?
No single algorithm can optimise for all scheduling criteria simultaneously.
67
Differentiate non-pre-emptive and pre-emptive scheduling.
Non-pre-emptive scheduling makes choices when no process is running; pre-emptive scheduling can interrupt a running process to allocate the CPU to another.
68
What is FCFS scheduling?
First Come First Served (FCFS) schedules processes in the order they arrive in the ready queue.
69
How does FCFS scheduling determine process execution?
Processes are executed until completion once they start running, with no interruption.
70
What is a major drawback of FCFS scheduling?
It may favour long processes and lead to higher average turnaround time for shorter jobs.
71
What scheduling algorithm selects the job with the shortest expected processing time next?
Shortest Job First (SJF).
72
Why is SJF scheduling challenging to implement perfectly?
Because the actual job lengths are not usually known in advance and must be estimated.
73
What technique is often used to predict CPU burst lengths in SJF?
Exponential averaging based on past performance.
74
What distinguishes SRT scheduling from SJF scheduling?
Shortest Remaining Time (SRT) is a preemptive version that can interrupt a running process if a new process has a shorter remaining time.
75
How does SRT scheduling decide to preempt the current process?
It compares the new process's expected running time to the remaining time of the current process and preempts if the new one is shorter.
76
What is the main idea behind Priority Scheduling?
Processes are assigned priorities, and the process with the highest priority is scheduled next.
77
What is a potential problem with Priority Scheduling?
It can lead to indefinite blocking or starvation for low-priority processes.
78
How is starvation mitigated in Priority Scheduling?
By using aging, which gradually increases a waiting process's priority over time.
79
What does HRRN stand for?
Highest Response Ratio Next.
80
How is the response ratio in HRRN calculated?
Response Ratio = (Waiting Time + Service Time) / Service Time.
81
What benefit does HRRN provide compared to SJF?
It prevents starvation by increasing a job's priority the longer it waits.
82
What is Round Robin (RR) scheduling?
RR is a turn-taking algorithm that allocates a fixed time quantum to each process in the ready queue.
83
What happens when a process's time quantum expires in RR scheduling?
An interrupt is generated, and the process is moved to the end of the ready queue.
84
How does the size of the time quantum affect RR performance?
A large quantum makes RR behave like FCFS, while a small quantum increases context switch overhead.
85
What is Multi-Level Queue Scheduling?
It partitions the ready queue into multiple separate queues (e.g., foreground and background), each with its own scheduling algorithm.
86
How are processes assigned in Multi-Level Queue Scheduling?
Processes are permanently assigned to a specific queue based on characteristics like process type or priority.
87
What is a common scheduling policy between queues in Multi-Level Queue Scheduling?
Fixed priority scheduling, where processes in higher-priority queues are always served before those in lower-priority queues.
88
What potential problem exists with fixed priority in Multi-Level Queue Scheduling?
It may lead to starvation for processes in lower-priority queues.
89
What is Multi-Level Feedback Queue Scheduling?
A scheduling method where processes can move between queues based on their CPU usage and behaviour.
90
How do processes start in a Multi-Level Feedback Queue system?
They all begin in the top-level queue.
91
What happens if a process uses too much CPU time in a Multi-Level Feedback Queue?
It is moved to a lower-level queue with possibly longer time quanta.
92
How can time slices be adjusted in a Multi-Level Feedback Queue system to prevent starvation?
Time slices can vary by queue level, with each level receiving a proportionate amount of CPU time.
93
What is an example of quantum settings in a Multi-Level Feedback Queue?
Higher-level queues might have a quantum of 8 time units, and lower-level queues might have 16 time units.
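The demotion rule under such a configuration can be sketched as follows (the three-level layout and the "drop one level per exhausted quantum" rule are illustrative assumptions, not a fixed standard):

```python
# hypothetical three-level configuration: quantum None means FCFS
LEVELS = [{"quantum": 8}, {"quantum": 16}, {"quantum": None}]

def demote(level):
    """A process that exhausts its quantum drops one level;
    the bottom (FCFS) level is as far as it can fall."""
    return min(level + 1, len(LEVELS) - 1)

print(demote(0), demote(1), demote(2))  # 1 2 2
```

Short interactive bursts stay in the top queue with its small quantum, while CPU-bound processes sink toward the FCFS level.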
94
What practical factors affect the implementation of scheduling algorithms?
Multiprocessor/multicore complexities, real-time constraints, and the need to combine basic algorithms in various ways.
95
What is the focus of Multiprocessor Scheduling?
It addresses the complexity of scheduling when multiple CPUs are available and includes techniques like load balancing and processor affinity.
96
What is processor affinity in multiprocessor scheduling?
It is the tendency of a process to remain on the same processor, which can be managed as soft or hard affinity.
97
How is load balancing achieved in multiprocessor systems?
Through techniques like push migration (moving tasks from overloaded CPUs) and pull migration (idle CPUs fetching tasks from busy ones).
98
What defines Real-Time Scheduling in operating systems?
It is used when system correctness depends on both logical results and the timing of those results, with strict deadlines.
99
Name one unique requirement of Real-Time Scheduling.
Determinism – predictable behaviour and guaranteed deadlines.
100
What is another key requirement for Real-Time Scheduling?
Responsiveness – immediate handling of critical events with low latency.
101
What additional feature is critical in Real-Time Scheduling for user control?
Fine-grained control over priorities, task scheduling, and deadlines.
102
What reliability aspect is essential in Real-Time Scheduling?
Fault tolerance with no single point of failure and graceful degradation under failure conditions.
103
What is a core summary point about CPU scheduling?
CPU scheduling is the short-term mechanism used to decide which process in the ready queue gets the CPU next.
104
List some common CPU scheduling algorithms.
FCFS, SJF, SRT, Priority Scheduling, HRRN, Round Robin, Multi-Level Queue, and Multi-Level Feedback Queue.
105
How do multiprocessor systems complicate CPU scheduling?
They require managing load balancing, processor affinity, and often a combination of private and shared ready queues.
106
What special scheduling requirements do real-time systems impose?
They require predictable, timely responses with strict deadlines, and often specialized scheduling algorithms to ensure these constraints are met.
107
What is Multiprocessor Scheduling?
It is the CPU scheduling process designed for systems with multiple processors, where the complexity increases due to the need to allocate tasks among several CPUs.
108
What challenge does Multiprocessor Scheduling address?
It deals with efficiently distributing processes across multiple CPUs while managing issues like workload distribution and data sharing.
109
What is the difference between Asymmetric and Symmetric Multiprocessing?
* Asymmetric Multiprocessing: Only one processor accesses the system data structures, reducing data sharing issues. * Symmetric Multiprocessing (SMP): Each processor is self-scheduling; processes may share a common ready queue or have private queues, and SMP is the most common configuration.
110
What is Processor Affinity in multiprocessor systems?
Processor Affinity is the tendency of a process to continue executing on the same processor, which can be managed as soft affinity (preference) or hard affinity (strict assignment), helping to reduce cache misses and improve performance.
111
What is the purpose of Load Balancing in Multiprocessor Scheduling?
Load balancing aims to keep all CPUs efficiently utilized by distributing the workload evenly among processors.
112
Define Push Migration in the context of multiprocessor scheduling.
Push Migration is a load balancing technique where a periodic task monitors processor loads and pushes tasks from an overloaded CPU to one with lighter load.
113
What is Pull Migration in multiprocessor scheduling?
Pull Migration is a strategy where idle processors actively retrieve (pull) waiting tasks from busier processors to balance the workload.
114
How is Real-Time Scheduling defined?
Real-Time Scheduling refers to systems where the correctness of operations depends not only on producing the correct logical result but also on producing that result within a strict time deadline.
115
What does Determinism mean in Real-Time Scheduling?
Determinism means that the system behaves in a predictable manner, ensuring that deadlines are reliably met.
116
What is required for Responsiveness in Real-Time Scheduling?
Responsiveness requires the system to immediately handle critical events with very low latency, ensuring that high-priority tasks are addressed without delay.
117
Why is Fine-Grained User Control important in Real-Time Scheduling?
It allows users to adjust priorities, task scheduling, and deadlines at a very detailed level, ensuring that time-critical tasks receive appropriate attention.
118
What reliability aspects are crucial in Real-Time Scheduling?
Real-Time systems must be fault-tolerant (with no single point of failure), support fail-soft operation, and exhibit graceful degradation to maintain core functions under failure conditions.
119
What is the primary function of CPU Scheduling in an operating system?
It is the short-term scheduling mechanism that selects which process from the ready queue should be given control of the CPU next.
120
Name some common CPU scheduling algorithms.
Common algorithms include FCFS, Shortest Job First (SJF), Shortest Remaining Time (SRT), Priority Scheduling, Highest Response Ratio Next (HRRN), Round Robin (RR), Multi-Level Queue, and Multi-Level Feedback Queue.
121
How do multiprocessor or multicore systems add complexity to scheduling?
They require managing additional factors such as load balancing, processor affinity, and coordination among multiple ready queues, which complicate the scheduling decisions.
122
What specific challenges do Real-Time Systems introduce to CPU Scheduling?
Real-Time Systems impose strict timing constraints, requiring predictable, responsive, and reliable scheduling to ensure that deadlines are met.
123
Summarise the role of Load Balancing in multiprocessor scheduling.
Load Balancing ensures all processors are kept busy by evenly distributing the workload, using techniques like push and pull migration to avoid performance bottlenecks.
124
Why is processor affinity beneficial in a multiprocessor system?
Processor affinity reduces overhead by keeping a process on the same CPU, thus maintaining cache efficiency and minimizing data transfer between processors.
125
How does Symmetric Multiprocessing (SMP) typically schedule processes?
In SMP, each processor is self-scheduling, and processes may either be drawn from a common ready queue or from individual private queues, which increases overall system throughput.
126
What is the key difference between traditional and real-time CPU scheduling?
Traditional scheduling focuses on efficiency and throughput, while real-time scheduling must also guarantee that tasks complete within strict timing constraints, emphasizing predictability and responsiveness.