Processor Management Flashcards
What is processor scheduling
Processor scheduling refers to how we allocate CPU cores to processes. In other words, how to manage transitions of processes between the ready and running states to ensure efficient resource utilization.
What are the top 4 most common scheduling algorithms (1 processor)
1. First come first served
2. Round robin
3. Shortest process next
4. Multilevel queuing
What is the first come first served scheduling algorithm
Allocates the processor based on the creation time of processes.
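As a minimal illustration (not part of the original cards), a first come first served schedule can be simulated in a few lines of Python; the process names, arrival times, and service times below are invented:

```python
# Hypothetical (name, arrival_time, service_time) triples, already sorted by arrival.
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

clock = 0
for name, arrival, service in processes:
    start = max(clock, arrival)   # the processor may sit idle until the process arrives
    waiting = start - arrival     # time spent in the ready state
    clock = start + service       # run to completion, no preemption
    print(f"{name}: waited {waiting}, finished at {clock}")
```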
What are the pros of the first come first served scheduling algorithm
- There is no unnecessary switching between processes
- Every process is eventually guaranteed processing time
What are the cons of the first come first served scheduling algorithm
The average waiting time (the total time spent in the ready state) is often long
What is the round-robin scheduling algorithm
Identical to first come first served, except that no process can occupy the processor longer than a predefined time length (the time quantum).
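A minimal round-robin sketch in Python; the time quantum and process times are invented for illustration:

```python
from collections import deque

QUANTUM = 2  # hypothetical time quantum

# Hypothetical (name, remaining_service_time) pairs, all ready at time 0.
ready = deque([("P1", 5), ("P2", 3), ("P3", 8)])

clock = 0
while ready:
    name, remaining = ready.popleft()
    run = min(QUANTUM, remaining)        # run for at most one quantum
    clock += run
    remaining -= run
    if remaining > 0:
        ready.append((name, remaining))  # interrupted: back of the circular queue
    else:
        print(f"{name} completed at time {clock}")
```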
What are the pros of the round robin scheduling algorithm
Distributes resources in a “fair” manner
A quick process can pass through relatively quickly
What are the cons of the round robin scheduling algorithm
Long average waiting time when processes require multiple time quanta
Performance depends heavily on time quantum
What is the shortest process next scheduling algorithm
The shortest process next scheduling algorithm shares the processor on the basis of shortest predicted execution time.
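A minimal sketch of the non-preemptive variant in Python, using a min-heap keyed on predicted execution time; the values are invented:

```python
import heapq

# Hypothetical (predicted_service_time, name) pairs, all ready at time 0.
# A min-heap always yields the shortest predicted job first.
ready = [(3, "P2"), (5, "P1"), (8, "P3")]
heapq.heapify(ready)

clock = 0
while ready:
    service, name = heapq.heappop(ready)  # shortest predicted execution time next
    clock += service                      # non-preemptive: run to completion
    print(f"{name} (predicted {service}) completed at time {clock}")
```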
What two forms can the shortest process next algorithm take
Preemptive (processes may be interrupted before they are done)
Non-preemptive (processes may not be interrupted before they are done)
What are the pros of the shortest process next algorithm
Gives the minimum average waiting time for a given set of processes
What are the cons of the shortest process next
Execution time has to be estimated
Longer processes may have to wait a very long time
What are the 3 design choices for Multilevel Queuing
Design choices:
(a) How do we map processes to queues? (see the sketch after this list)
(b) How do we determine the relative priority of the queues?
(c) Which scheduling algorithm to use within each queue?
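A toy sketch of design choice (a) in Python; the queue names follow the cards below, but the classification rule and the io_ratio field are invented:

```python
from collections import deque

# Three hypothetical queues, one per process class.
queues = {
    "interactive": deque(),  # lots of I/O
    "normal": deque(),       # e.g. system services
    "batch": deque(),        # no I/O
}

def classify(process):
    """Map a process to a queue (design choice (a)); the thresholds are invented."""
    if process["io_ratio"] > 0.5:
        return "interactive"
    if process["io_ratio"] > 0.0:
        return "normal"
    return "batch"

for p in [{"name": "editor", "io_ratio": 0.9},
          {"name": "logger", "io_ratio": 0.2},
          {"name": "render", "io_ratio": 0.0}]:
    queues[classify(p)].append(p["name"])

print(queues)
```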
What are the different specific queues for multilevel queueing
- Interactive processes (processes with lots of I/O)
- Normal processes (e.g., system services)
- Batch processes (processes without I/O)
What are the two ways to determine the relative priority of the queues
- Fixed priority scheduling
- Time slicing
What is fixed priority scheduling
Queues are served in a strict priority order: a lower-priority queue only runs when all higher-priority queues are empty. Example: interactive processes always get priority over batch processes.
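A toy sketch of fixed priority selection in Python; the queue contents are invented:

```python
from collections import deque

# Queues listed in fixed priority order: interactive beats normal, normal beats batch.
queues = [("interactive", deque(["editor"])),
          ("normal", deque()),
          ("batch", deque(["render"]))]

def pick_next():
    """Always serve the highest-priority non-empty queue."""
    for name, q in queues:
        if q:
            return name, q.popleft()
    return None

print(pick_next())  # ('interactive', 'editor')
```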
What is time-slicing in Multilevel queueing
Allocates CPU time based on a specific percentage to different types of processes.
Example: 80% of the CPU time to interactive processes, 10% to normal processes, 10% to batch processes.
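A toy sketch of time slicing in Python using the 80/10/10 split from the example; choosing the queue by a weighted random draw is one possible implementation, not the only one:

```python
import random

# Hypothetical CPU-time shares from the example: 80% / 10% / 10%.
shares = {"interactive": 0.80, "normal": 0.10, "batch": 0.10}

def pick_queue():
    """Choose which queue gets the next time slice, in proportion to its share."""
    return random.choices(list(shares), weights=list(shares.values()), k=1)[0]

# Over many slices the allocation approaches the configured percentages.
counts = {q: 0 for q in shares}
for _ in range(10_000):
    counts[pick_queue()] += 1
print(counts)
```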
In the round robin scheduling algorithm, what happens once a process is given the processor
It will either 1) be interrupted and placed at the back of the (circular) queue, or 2) complete before its time quantum runs out.
What are the pros of multilevel queueing
Can accommodate a range of different performance objectives
What are the cons of multilevel queueing
Complex and difficult to calibrate.
What are user oriented scheduling criteria
- Turnaround time
- Response time
What are system oriented scheduling criteria
- Throughput
- Processor utilization
Define turnaround time
The time between the submission of a process and its completion
Define response time
The time between the submission of a process and its first response
Define throughput
The number of completed processes per unit time
Define processor utilisation
The percentage of time that the processor is busy
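A small sketch computing the four criteria from a hypothetical log of completed processes (all numbers invented):

```python
# Hypothetical record of completed processes: submission, first-response and completion times.
processes = [
    {"name": "P1", "submitted": 0, "first_response": 1, "completed": 5},
    {"name": "P2", "submitted": 2, "first_response": 6, "completed": 9},
]
busy_time = 10       # total time the processor was busy (assumed)
observed_time = 12   # total observation window (assumed)

turnaround = [p["completed"] - p["submitted"] for p in processes]
response = [p["first_response"] - p["submitted"] for p in processes]
throughput = len(processes) / observed_time
utilization = busy_time / observed_time

print("avg turnaround:", sum(turnaround) / len(turnaround))
print("avg response:", sum(response) / len(response))
print("throughput:", throughput, "processes per unit time")
print("utilization:", f"{utilization:.0%}")
```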
What is a key characteristic of multiprocessors
Each processor/core has its own cache memory.
Name the two approaches to multiprocessor scheduling.
- A common ready queue
- Private queues
What happens in a common ready queue approach?
When a processor becomes available, it is assigned a new process from a common queue.
What happens in a private queues approach?
When a processor becomes available, it is assigned a new process from its own private queue.
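A toy sketch contrasting the two approaches in Python; the queue contents are invented:

```python
from collections import deque

# Approach 1: one common ready queue shared by all processors.
common_queue = deque(["P1", "P2", "P3", "P4"])

def next_from_common(common):
    """Any processor that becomes free takes the head of the shared queue."""
    return common.popleft() if common else None

# Approach 2: one private ready queue per processor.
private_queues = {0: deque(["P1", "P3"]), 1: deque(["P2", "P4"])}

def next_for_cpu(cpu):
    """A free processor only takes work from its own queue."""
    q = private_queues[cpu]
    return q.popleft() if q else None

print(next_from_common(common_queue))  # P1, regardless of which processor asked
print(next_for_cpu(1))                 # P2, from processor 1's private queue
```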
What are two methods for improving multiprocessor scheduling performance?
- Load balancing
- Processor affinity
What is processor affinity
A process is kept running on the same processor to keep the cache warm.
What is load balancing
With load balancing, processes are distributed evenly among the processors. Common ready queues automatically enforce load balancing.
True or False: Load balancing and processor affinity can interact negatively.
True
Fill in the blank: The time between the submission of a process and its completion is known as _______.
[turnaround time]
Fill in the blank: The percentage of time that the processor is busy is known as _______.
[processor utilization]
What are the interaction effects between load balancing and processor affinity
Load balancing counteracts processor affinity and vice versa. With soft processor affinity, processes are only moved if there is a good reason to do so.
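A toy sketch of soft processor affinity in Python; the imbalance threshold and the "good reason" rule are invented for illustration:

```python
# Keep the process on its previous core unless the load imbalance exceeds a threshold.
IMBALANCE_THRESHOLD = 2  # invented value

def choose_core(last_core, queue_lengths):
    """Prefer the cache-warm core; migrate only if another core is much less loaded."""
    least_loaded = min(queue_lengths, key=queue_lengths.get)
    if queue_lengths[last_core] - queue_lengths[least_loaded] > IMBALANCE_THRESHOLD:
        return least_loaded   # load balancing wins: there is a good reason to move
    return last_core          # affinity wins: keep the cache warm

print(choose_core(0, {0: 5, 1: 1}))  # moves to core 1 (imbalance 4 > 2)
print(choose_core(0, {0: 2, 1: 1}))  # stays on core 0
```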
What does it mean to keep the cache warm
To keep the cache contents relevant to the current tasks/processes: a warm cache holds relevant data, a cold cache does not.
In interaction effects why might some processes be moved and some not
Cores often have different characteristics, so processes may be queued by core type and certain tasks are not moved between them:
- Performance cores: handle heavy, demanding workloads; the most powerful cores, but they require the most energy.
- Efficiency cores: handle everyday tasks; less powerful and consume less energy.
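A toy sketch of mapping work to performance vs efficiency cores in Python; the classification rule and the workloads are invented:

```python
# Hypothetical mapping of processes to core types on a heterogeneous CPU.
def assign_core_type(process):
    """Heavy, compute-bound work goes to performance cores; light work to efficiency cores."""
    return "performance" if process["cpu_heavy"] else "efficiency"

workloads = [
    {"name": "video_encode", "cpu_heavy": True},
    {"name": "mail_sync", "cpu_heavy": False},
]
for p in workloads:
    print(p["name"], "->", assign_core_type(p), "core")
```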