Processor Management Flashcards

1
Q

What is processor scheduling

A

Processor scheduling refers to how we allocate CPU cores to processes. In other words, it is about managing the transitions of processes between the ready and running states so as to ensure efficient resource utilization.

2
Q

What are the 4 most common scheduling algorithms (single processor)

A

1. First come first served
2. Round robin
3. Shortest process next
4. Multilevel queuing

3
Q

What is the first come first served scheduling algorithm

A

Allocates the processor to processes in the order in which they arrive (i.e., become ready).
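A minimal Python sketch of this behaviour, assuming each process is described by an illustrative (name, arrival time, burst time) tuple; the data and function name are invented for the example:

```python
# Minimal FCFS sketch: processes are served strictly in arrival order.
# The process data and function name are illustrative, not from the cards.

def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time) tuples."""
    time, waits = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)      # the CPU may sit idle until the process arrives
        waits[name] = time - arrival   # waiting time = time spent in the ready state
        time += burst                  # run to completion; no preemption
    return waits

# One long process arriving first makes every later process wait (the FCFS downside).
print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
# -> {'P1': 0, 'P2': 23, 'P3': 25}, i.e. an average wait of 16
```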

4
Q

What are the pros of the first come first served scheduling algorithm

A
  • There is no unnecessary switching between processes
  • Every process is eventually guaranteed some processing time (no process waits forever)
5
Q

What are the cons of the first come first served scheduling algorithm

A

The average waiting time (the total time spent in the ready state) is often long

6
Q

What is the round-robin scheduling algorithm

A

Identical to first come first served, except that no process can occupy the processor longer than a predefined time length (the time quantum).
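A minimal Python sketch of the idea, assuming all processes are ready at time 0; the quantum value and process data are invented for the example:

```python
from collections import deque

# Minimal round-robin sketch, assuming all processes are ready at time 0.
# The quantum value and process data are illustrative, not from the cards.

def round_robin(bursts, quantum):
    """bursts: dict mapping process name -> required burst time."""
    queue = deque(bursts)                     # circular ready queue
    remaining = dict(bursts)
    time, finish = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run for at most one time quantum
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time               # completed before the quantum ran out
        else:
            queue.append(name)                # interrupted: back of the circular queue
    return finish

print(round_robin({"P1": 5, "P2": 2, "P3": 4}, quantum=2))
# -> {'P2': 4, 'P3': 10, 'P1': 11}
```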

7
Q

What are the pros of the round robin scheduling algorithm

A

Distributes resources in a “fair” manner

A quick process can pass through relatively quickly

8
Q

What are the cons of the round robin scheduling algorithm

A

Long average waiting time when processes require multiple time quanta

Performance depends heavily on time quantum

9
Q

What is the shortest process next scheduling algorithm

A

The shortest process next scheduling algorithm allocates the processor to the process with the shortest predicted execution time.
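A minimal Python sketch of the non-preemptive variant, assuming the predicted burst times are already known; the data and function name are invented for the example:

```python
# Minimal non-preemptive shortest-process-next sketch. Predicted burst times
# are supplied directly; a real scheduler would have to estimate them.
# The process data and function name are illustrative.

def spn(processes):
    """processes: list of (name, arrival_time, predicted_burst) tuples."""
    pending = sorted(processes, key=lambda p: p[1])   # not yet arrived, by arrival time
    ready, order, time = [], [], 0
    while pending or ready:
        while pending and pending[0][1] <= time:      # admit everything that has arrived
            ready.append(pending.pop(0))
        if not ready:                                 # CPU idles until the next arrival
            time = pending[0][1]
            continue
        ready.sort(key=lambda p: p[2])                # shortest predicted burst first
        name, _, burst = ready.pop(0)
        order.append(name)
        time += burst                                 # non-preemptive: run to completion
    return order

print(spn([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 1)]))
# -> ['P1', 'P3', 'P2']  (P1 runs first because nothing else has arrived yet)
```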

10
Q

What two forms can the shortest process next algorithm take

A

Preemptive (processes may be interrupted before they are done)

Non-preemptive (processes may not be interrupted before they are done)

11
Q

What are the pros of the shortest process next scheduling algorithm

A

Gives the minimum average waiting time for a given set of processes

12
Q

What are the cons of the shortest process next scheduling algorithm

A

Execution time has to be estimated

Longer processes may have to wait a very long time

13
Q

What are the 3 design choices for Multilevel Queuing

A

Design choices:
(a) How to map processes to queues?
(b) How to determine the relative priority of queues?
(c) Which scheduling algorithm to use within each queue?

14
Q

What specific queues are typically used in multilevel queueing

A

interactive processes (processes with lots of I/O)

normal processes (e.g., system services)

batch processes (processes without I/O)

15
Q

What are the two ways of determining relative priority between queues

A
  • Fixed priority scheduling
  • Time slicing
16
Q

What is fixed priority scheduling

A

Queues are given a strict priority order, so processes in a higher-priority queue are always scheduled before processes in a lower-priority queue. Example: interactive processes always get priority over batch processes.
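A minimal Python sketch of picking the next process under fixed priority; the queue names and contents are invented for the example:

```python
from collections import deque

# Minimal sketch of fixed-priority selection between queues. A lower-priority
# queue is only served when every higher-priority queue is empty.
# Queue names and processes are illustrative.

queues = {
    "interactive": deque(["browser", "editor"]),
    "normal":      deque(["system_service"]),
    "batch":       deque(["backup_job"]),
}

def pick_next(queues):
    for name in ("interactive", "normal", "batch"):   # fixed priority order
        if queues[name]:
            return queues[name].popleft()
    return None                                       # nothing is ready

print(pick_next(queues))  # -> 'browser': interactive always beats normal and batch
```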

17
Q

What is time-slicing in Multilevel queueing

A

Allocates CPU time based on a specific percentage to different types of processes.

Example: 80% of the CPU time to interactive processes, 10% to normal processes, 10% to batch processes.
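A minimal Python sketch of the idea, using the 80/10/10 split above. Approximating the split with a weighted random choice is an assumption made for brevity, not how the mechanism is defined in the cards:

```python
import random

# Minimal sketch of time slicing between queues using the 80/10/10 split from
# the example above. The split is approximated by a weighted random choice of
# which queue to serve next; a real scheduler would account for the CPU time
# actually consumed. Queue names are illustrative.

QUEUES = ["interactive", "normal", "batch"]
SHARES = [0.80, 0.10, 0.10]

def next_queue():
    return random.choices(QUEUES, weights=SHARES, k=1)[0]

# Over many scheduling decisions, roughly 80% should go to the interactive queue.
picks = [next_queue() for _ in range(10_000)]
print({q: round(picks.count(q) / len(picks), 2) for q in QUEUES})
```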

18
Q

In the round-robin scheduling algorithm, what can happen to a process once it is running on the processor

A

The process will either: 1) be interrupted when its time quantum expires and placed at the back of the (circular) ready queue, or 2) complete before it runs out of time.

19
Q

What are the pros of multilevel queueing

A

Flexible: can accommodate a range of different performance objectives

20
Q

What are the cons of multilevel queueing

A

Complex and difficult to calibrate.

21
Q

What are user oriented scheduling criteria

A
  • Turnaround time
  • Response time
22
Q

What are system oriented scheduling criteria

A
  • Throughput
  • Processor utilization
23
Q

Define turnaround time

A

The time between the submission of a process and its completion

24
Q

Define response time

A

The time between the submission of a process and its first response

25
Q

Define throughput

A

The number of completed processes per unit time

26
Q

Define processor utilisation

A

The percentage of time that the processor is busy
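A small worked example tying together the four criteria defined in the cards above; all numbers are invented for illustration:

```python
# Toy numbers for the four scheduling criteria; all values are illustrative.

# A single process: submitted at t=0, produces its first response at t=2,
# and completes at t=9.
submitted, first_response, completed = 0, 2, 9
turnaround_time = completed - submitted        # 9 time units
response_time = first_response - submitted     # 2 time units

# System-wide view: 3 processes finished during a 10-second window in which
# the CPU was busy for 8 of the 10 seconds.
throughput = 3 / 10                            # 0.3 completed processes per second
processor_utilization = 8 / 10 * 100           # the CPU was busy 80% of the time

print(turnaround_time, response_time, throughput, processor_utilization)
# -> 9 2 0.3 80.0
```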

27
Q

What is the key characteristic of multiprocessors (from a scheduling perspective)

A

Each processor/core has its own cache memory.

28
Q

Name the two approaches to multiprocessor scheduling.

A
  • A common ready queue
  • Private queues
29
Q

What happens in a common ready queue approach?

A

When a processor becomes available, it is assigned a new process from a common queue.

30
Q

What happens in a private queues approach?

A

When a processor becomes available, it is assigned a new process from its own private queue.

31
Q

What are two methods for improving multiprocessor scheduling performance?

A
  • Load balancing
  • Processor affinity
32
Q

What is processor affinity

A

A process is kept running on the same processor to keep the cache warm.

33
Q

What is load balancing

A

With load balancing, processes are evenly balanced among processors. Common ready queues automatically enforce load balancing.
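A minimal Python sketch of balancing private per-core queues, with invented core names and processes; it also hints at why balancing and affinity conflict:

```python
# Minimal sketch of load balancing across private per-core ready queues.
# Core names and processes are illustrative. Every migration gives up the
# moved process's warm cache, i.e. it works against processor affinity.

queues = {"core0": ["P1", "P2", "P3", "P4"], "core1": []}

def balance(queues):
    busiest = max(queues, key=lambda c: len(queues[c]))
    idlest = min(queues, key=lambda c: len(queues[c]))
    # migrate processes until the queue lengths are roughly even
    while len(queues[busiest]) - len(queues[idlest]) > 1:
        queues[idlest].append(queues[busiest].pop())   # migration breaks affinity
    return queues

print(balance(queues))  # -> {'core0': ['P1', 'P2'], 'core1': ['P4', 'P3']}
```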

34
Q

True or False: Load balancing and processor affinity can interact negatively.

A

True (load balancing counteracts processor affinity; see the interaction effects card below).

35
Q

Fill in the blank: The time between the submission of a process and its completion is known as _______.

A

[turnaround time]

36
Q

Fill in the blank: The percentage of time that the processor is busy is known as _______.

A

[processor utilization]

37
Q

What are the interaction effects between load balancing and processor affinity

A

Load balancing counteracts processor affinity and vice versa. With soft processor affinity, processes are only moved if there is a good reason to do so.

38
Q

What does it mean to keep the cache warm

A

To keep the cache contents relevant to the running tasks/processes. Warm = the cached data is relevant; cold = the cached data is not relevant.

39
Q

Considering interaction effects, why might some processes be moved between cores and others not

A

Cores often have different characteristics, so we may queue processes according to core type and avoid moving certain tasks between them:
- Performance cores: handle heavy workloads, require more energy, and are the most powerful cores.
- Efficiency cores: handle more everyday tasks, are not as powerful, and do not consume as much energy.