Modul 3 - Scheduling Flashcards

1
Q

What is turnaround time? (Omloppstiden)

A
  • Turnaround time = Finish time - Arrival time
  • The time it takes for a process to finish execution minus the time when the process entered the system.
  • It’s the time required for a particular process to complete, from submission to completion; it equals the sum of waiting time and execution time.
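The formula is simple enough to sketch in Python (the process times below are made up for illustration):

```python
def turnaround(finish_time, arrival_time):
    """Turnaround time = finish time - arrival time."""
    return finish_time - arrival_time

# A process that arrives at t = 10 ms and finishes at t = 45 ms
# has a turnaround time of 35 ms (waiting time + execution time).
print(turnaround(45, 10))  # 35
```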
2
Q

What is response time? (Responstiden)

A
  • Response time = First run time (first scheduled) - Arrival time
  • The time a process first gets to run minus the time the process entered the system
  • Response time - the time from the issuance of a command to the commencement of a response to that command (i.e., the interval between submission of a request and the first response to it, not the final output).
3
Q

Scheduling algorithms?

A
  • First in first out (FIFO)
  • Shortest job first (SJF)
  • Shortest time to completion first (STCF)
  • Round Robin (RR)
  • Multilevel feedback queue (MLFQ)
4
Q

Explain First in first out (FIFO) scheduling algorithm

A

The processes run to completion in the order in which they came in.
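A minimal FIFO sketch (hypothetical jobs given as (arrival, runtime) pairs) shows how running jobs to completion in arrival order determines the average turnaround time:

```python
def fifo_avg_turnaround(jobs):
    """jobs: list of (arrival, runtime) in arrival order.
    Each job runs to completion before the next one starts."""
    time = 0
    total = 0
    for arrival, runtime in jobs:
        time = max(time, arrival) + runtime   # run to completion
        total += time - arrival               # turnaround = finish - arrival
    return total / len(jobs)

# A long job arriving first delays everyone behind it (the convoy effect):
print(fifo_avg_turnaround([(0, 30), (0, 20), (0, 10)]))  # about 46.7 ms
```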

5
Q

Explain Shortest job first (SJF) scheduling algorithm

A

Runs the process with the shortest execution time first. It’s a non-preemptive scheduler.
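A sketch of the idea, assuming for simplicity that all jobs arrive at t = 0:

```python
def sjf_avg_turnaround(runtimes):
    """Non-preemptive SJF for jobs that all arrive at t = 0:
    run jobs in order of increasing runtime."""
    time = 0
    total = 0
    for runtime in sorted(runtimes):
        time += runtime
        total += time   # arrival is 0, so turnaround == finish time
    return total / len(runtimes)

# Running the shortest jobs first minimizes the average:
print(sjf_avg_turnaround([30, 10, 20]))  # about 33.3 ms
```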

6
Q

Explain Shortest time to completion first (STCF) scheduling algorithm

A

Like SJF, it runs the process with the shortest execution time first, but it can stop a running process and alternate between processes. It’s a preemptive scheduler.

Solves the problem SJF has when a long process arrives just before a shorter one: the shorter process no longer has to wait for the long one to finish.

So any time a new process enters the system the STCF scheduler determines which of the remaining jobs (including the new job) has the least time left, and schedules that one.

For example, if process1 takes 30 ms and arrives at time 0 ms, and process2 takes 10 ms and arrives at time 10 ms, then process1 is paused and process2 runs until it completes; then process1 resumes (unless another, shorter process is waiting to run).

  • Bad response time.
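The example above can be simulated with a 1 ms time step (a toy sketch, not how a real scheduler is written):

```python
def stcf_finish_times(jobs):
    """Preemptive STCF simulated in 1 ms steps.
    jobs: list of (arrival, runtime); returns finish time per job index."""
    remaining = {i: runtime for i, (_, runtime) in enumerate(jobs)}
    finish = {}
    time = 0
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= time]
        if not ready:
            time += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # least time left wins
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            del remaining[i]
            finish[i] = time
    return finish

# process1: 30 ms, arrives at 0; process2: 10 ms, arrives at 10.
# process1 is preempted at t = 10, process2 finishes at 20, process1 at 40.
print(stcf_finish_times([(0, 30), (10, 10)]))
```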
7
Q

Explain Round Robin (RR) scheduling algorithm

A

  • Runs processes in time slices.

For example, if we have three processes that each take 40 ms to complete, we can improve the response time by giving each process a time slice of 10 ms, so the scheduler alternates between the processes every 10 ms.

  • Good for response time but really bad for turnaround time (the worst policy for turnaround time).
  • What RR is doing is stretching out each job as long as it can, by only running each job for a short bit before moving to the next. Because turnaround time only cares about when jobs finish, RR is nearly pessimal, even worse than simple FIFO in many cases.
  • Making the time slice too short is problematic: suddenly the cost of context switching will dominate overall performance.
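The trade-off shows up in a small simulation (jobs all arrive at t = 0; the 40 ms jobs and 10 ms slice follow the example above):

```python
def round_robin(runtimes, time_slice):
    """Round robin over jobs arriving at t = 0. Returns the average
    turnaround time and the average response time."""
    remaining = list(runtimes)
    first_run = [None] * len(runtimes)
    finish = [0] * len(runtimes)
    time = 0
    while any(remaining):
        for i, left in enumerate(remaining):
            if left == 0:
                continue
            if first_run[i] is None:
                first_run[i] = time          # response time = first run - 0
            run = min(time_slice, left)
            time += run
            remaining[i] -= run
            if remaining[i] == 0:
                finish[i] = time             # turnaround = finish - 0
    return sum(finish) / len(finish), sum(first_run) / len(first_run)

# Three 40 ms jobs, 10 ms slices: average response time is only 10 ms,
# but average turnaround is 110 ms (FIFO would give 80 ms).
print(round_robin([40, 40, 40], 10))  # (110.0, 10.0)
```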
8
Q

What are the problems with scheduling algorithms such as Shortest job first (SJF) and Shortest time to completion first (STCF) and how can we address the problem?

A
  • The problem with algorithms such as SJF and STCF is that we cannot know in advance how long a program will run.
  • The multilevel feedback queue (MLFQ) addresses this problem.
9
Q

How does Multilevel feedback queue (MLFQ) scheduling algorithm work?

A
  • It decides which processes run first by observing their behavior and assigning different priorities. Processes with high priority are chosen to run first; processes with the same priority run with round-robin (RR) scheduling.
  • MLFQ has a number of distinct queues, each assigned to a different priority level.
  • MLFQ tries to optimize turnaround time and minimize response time (to make programs feel more interactive).
  • If a process uses the CPU for long stretches => low priority.
  • If a process gives up the CPU often, for example while waiting for input from the keyboard => keeps its high priority (as this is how an interactive process might behave).
  • MLFQ will try to learn about processes as they run, and thus use the history of a job to predict its future behavior.
  • Because it doesn’t know how long a process will run, it assumes the process is short (sets it to high priority first) and then lowers the priority if the process consumes all the time it is given at each priority level.
10
Q

Assume we have a scheduler that implements “shortest job first”, i.e. it is not able to preempt jobs. If we have three jobs that will take 10 ms, 20 ms and 30 ms, show why this is a better strategy than taking the jobs in random order. (Exam question)

A

Answer:

  • Shortest first gives turnaround time = (10 + 30 + 60) / 3 ≈ 33 ms
  • The worst order (longest first) gives turnaround time = (30 + 50 + 60) / 3 ≈ 47 ms

where turnaround time is: Finish time - Arrival time

11
Q

What happens to a process if it initiates an I/O operation? (Scheduling)

A
  • An I/O operation will take time to complete, and the CPU could do some useful work while the process is waiting.
  • That is why a process is descheduled if it is preempted or if it initiates an I/O operation: it enters the blocked state. When the I/O completes, it enters the ready state.
12
Q

What is the convoy effect?

A

It’s when a number of relatively short potential consumers of a resource get queued behind a heavyweight resource consumer.

For example, suppose we use FIFO (first in, first out) with two processes: process1 (takes 5 ms to complete) and process2 (takes 20 ms to complete). If process2 arrives first, we get the convoy effect: process1 is stuck waiting behind it.

To fix this, use SJF (shortest job first).

13
Q

Preemptive vs non-preemptive schedulers?

A
  • Non-preemptive schedulers run each job to completion before considering whether to run a new job.
  • Preemptive schedulers are quite willing to stop one process from running in order to run another. Virtually all modern schedulers are preemptive.

This implies that the scheduler employs a context switch, stopping one running process temporarily and resuming (or starting) another.

14
Q

What is the cost of a context switch? (switching which process runs on the CPU)

A

The cost of context switching does not arise solely from the OS actions of saving and restoring a few registers. When programs run, they build up a great deal of state in CPU caches, TLBs, branch predictors, and other on-chip hardware. Switching to another job causes this state to be flushed and new state relevant to the currently running job to be brought in, which may exact a noticeable performance cost.

15
Q

Which rules has the Multilevel feedback queue (MLFQ) scheduling policy?

A

MLFQ rules:

  • Rule 1: If Priority(A) > Priority(B) => A is scheduled for execution.
  • Rule 2: If Priority(A) = Priority(B) => A and B are scheduled in round-robin (RR), i.e., in time slices.
  • Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue), i.e., a newly created job starts with the highest priority.
  • Rule 4: A job is given an allotted time to consume at each priority level; a job that has consumed its allotment (regardless of how many times it has given up the CPU) is moved to a lower priority. With this protection in place, regardless of the I/O behavior of a process, it slowly moves down the queues and thus cannot gain an unfair share of the CPU.
  • Rule 5: After some time period S, move all jobs to the highest priority. (Solves the starvation problem: with too many short jobs, a long-running job would otherwise never finish.)
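The rules can be compressed into a toy simulation (the queue count, 10 ms quantum and 100 ms boost period are made-up parameters, and for simplicity every slice counts as a full allotment at that level):

```python
from collections import deque

NUM_QUEUES = 3      # queue 0 is the highest priority
QUANTUM = 10        # allotment per priority level, in ms
BOOST_PERIOD = 100  # Rule 5: every S ms, move everything to the top

def mlfq(runtimes):
    """Simulate MLFQ for jobs that all arrive at t = 0.
    Returns finish time per job index."""
    queues = [deque() for _ in range(NUM_QUEUES)]
    remaining = dict(enumerate(runtimes))
    queues[0].extend(remaining)               # Rule 3: start at the top
    finish = {}
    time = 0
    while remaining:
        if time > 0 and time % BOOST_PERIOD == 0:
            for q in queues[1:]:              # Rule 5: priority boost
                while q:
                    queues[0].append(q.popleft())
        level = next(l for l, q in enumerate(queues) if q)
        job = queues[level].popleft()         # Rules 1-2: top queue, RR order
        run = min(QUANTUM, remaining[job], BOOST_PERIOD - time % BOOST_PERIOD)
        time += run
        remaining[job] -= run
        if remaining[job] == 0:
            del remaining[job]
            finish[job] = time
        else:                                 # Rule 4: allotment used, demote
            queues[min(level + 1, NUM_QUEUES - 1)].append(job)
    return finish

print(mlfq([30, 30]))  # the two jobs alternate and finish at 50 and 60 ms
```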

16
Q

How are time-slices usually distributed in MLFQ scheduling?

A
  • Most MLFQ variants allow the time-slice length to vary across queues. The highest-priority queues are usually given short time slices: they consist of interactive jobs, so alternating quickly between them makes sense for better response time. The low-priority queues get long time slices for long-running, CPU-bound jobs.
17
Q

How is the performance of MLFQ?

A
  • MLFQ can deliver excellent overall performance (similar to SJF/STCF) for short-running interactive jobs, and is fair and makes progress for long-running CPU-intensive workloads. Many systems use some form of MLFQ.
18
Q

What is proportional-share scheduler (fair-share scheduler)?

A

Proportional-share (fair-share) scheduling is an approach that tries to guarantee that each job obtains a certain percentage of CPU time, instead of optimizing for turnaround or response time.

19
Q

Name a few proportional/fair-share schedulers?

A
  • Lottery scheduling (random/ non-deterministic)
  • Stride scheduling (deterministic)
  • Completely Fair Scheduler (CFS) (Used in Linux)
20
Q

Basic idea of the lottery scheduling?

A

The basic idea is quite simple: every so often, hold a lottery to determine which process should get to run next; processes that should run more often should be given more chances to win the lottery.

21
Q

What are tickets in lottery scheduling?

A

Tickets are used to represent the share of a resource that a process (or user, or whatever) should receive. The percentage of tickets that a process holds represents its share of the system resource in question.

For example, if process A has 75 tickets and process B has 25 tickets, then A receives 75% of the CPU and B the remaining 25% (in a case where there are only two processes).

22
Q

How does the lottery scheduler determine which process to run?

A

Each process is given a number of tickets, and every so often (say, every time slice) a lottery is held: the scheduler picks a winning ticket at random, and the process holding the winning ticket gets to run.

(The lottery scheduler keeps track of how many tickets have been distributed.)
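A minimal sketch of one lottery round (the rng parameter is only there to make the draw reproducible):

```python
import random

def lottery_pick(tickets, rng=random.random):
    """tickets: dict mapping process name -> ticket count.
    Draw a winning ticket uniformly at random; its holder runs next."""
    total = sum(tickets.values())
    winner = int(rng() * total)      # winning ticket number in [0, total)
    counter = 0
    for name, count in tickets.items():
        counter += count
        if winner < counter:
            return name

# With 75 vs 25 tickets, A should win about 75% of the lotteries:
tickets = {"A": 75, "B": 25}
wins = sum(lottery_pick(tickets) == "A" for _ in range(10_000))
print(wins / 10_000)  # roughly 0.75
```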

23
Q

Flexibility with the lottery scheduler?

A
  • A new job can be given a set of tickets as long as we keep track of how many tickets exist in total.
  • We can give a user a set of tickets and allow the user to distribute them among their jobs (ticket currency).
  • Each user can have local tickets and then hold a local lottery.
  • We could allow each user to create new tickets, i.e. inflation, if the processes trust each other (ticket inflation).
24
Q

Explain the mechanism to manipulate tickets called ticket currency?

A

Currency allows a user with a set of tickets to allocate tickets among their own jobs in whatever currency they would like; the system then automatically converts said currency into the correct global value.

For example, assume users A and B have each been given 100 tickets. User A is running two jobs, A1 and A2, and gives them each 500 tickets (out of 1000 total) in User A’s own currency. User B is running only one job and gives it 10 tickets (out of 10 total). The system will convert A1’s and A2’s allocations from 500 each in A’s currency to 50 each in the global currency; similarly, B1’s 10 tickets will be converted to 100 tickets. The lottery will then be held over the global ticket currency (200 total) to determine which job runs.

User A –> 500 (A’s currency) to A1 –> 50 (global currency)
–> 500 (A’s currency) to A2 –> 50 (global currency)

User B -> 10 (B’s currency) to B1 -> 100 (global currency)
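The conversion is a simple proportion; a sketch of the example above:

```python
def to_global(user_global_tickets, local_allocations):
    """Convert one user's local-currency ticket allocations into
    global tickets, proportional to each job's local share."""
    local_total = sum(local_allocations.values())
    return {job: local * user_global_tickets // local_total
            for job, local in local_allocations.items()}

# Users A and B each hold 100 global tickets:
print(to_global(100, {"A1": 500, "A2": 500}))  # {'A1': 50, 'A2': 50}
print(to_global(100, {"B1": 10}))              # {'B1': 100}
```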

25
Q

Explain the mechanism to manipulate tickets called ticket transfer?

A

With ticket transfer, a process can temporarily hand off its tickets to another process. This ability is especially useful in a client/server setting, where a client process sends a message to a server asking it to do some work on the client’s behalf. To speed up the work, the client can pass the tickets to the server and thus try to maximize the performance of the server while the server is handling the client’s request.

26
Q

Explain the mechanism to manipulate tickets called ticket inflation?

A

With ticket inflation, a process can temporarily raise or lower the number of tickets it owns. It can be applied in an environment where a group of processes trust one another: if a process knows it needs more CPU time, it can boost its ticket value as a way to reflect that need to the system, all without communicating with any other processes.

27
Q

Basic idea of the stride scheduling?

A

It’s a deterministic fair-share scheduler where at any given time, the process with the lowest pass value so far is picked to run, and then its pass value is incremented by its stride value.

  • Each job is given a stride value, the higher the stride the lower the priority.
  • Each job keeps a pass value initially set to 0.
  • In each round the job with the lowest pass value is selected and …
  • … the pass value is incremented by its stride value.
28
Q

How is the stride value determined?

A

The stride value is inversely proportional to the number of tickets each process has.

For example, with jobs A, B, and C, with 100, 50, and 250 tickets, respectively, we can compute the stride of each by dividing some large number by the number of tickets each process has been assigned. For example, if we divide 10,000 by each of those ticket values, we obtain the following stride values for A, B, and C: 100, 200, and 40. We call this value the stride of each process.

29
Q

What is the pass value and how is it incremented?

A
  • Every time a process runs, its counter (the pass value) is incremented by its stride, tracking its global progress.
30
Q

How does the stride scheduler determine which job to run?

A

The stride scheduler uses the stride and pass value to determine which process should run next.

The basic idea is simple: at any given time, pick the process to run that has the lowest pass value so far; when you run a process, increment its pass counter by its stride.
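Both steps fit in a short sketch (tickets and the large constant 10,000 follow the earlier example; ties are broken by dictionary order here, which a real implementation would define more carefully):

```python
def stride_schedule(tickets, rounds, big_number=10_000):
    """tickets: dict name -> ticket count. Make `rounds` scheduling
    decisions; return how many times each process ran."""
    stride = {name: big_number // n for name, n in tickets.items()}
    pass_value = {name: 0 for name in tickets}
    runs = {name: 0 for name in tickets}
    for _ in range(rounds):
        name = min(pass_value, key=pass_value.get)  # lowest pass runs next
        pass_value[name] += stride[name]            # then advance its pass
        runs[name] += 1
    return runs

# A:100, B:50, C:250 tickets -> strides 100, 200, 40.
# Over 8 rounds the 2:1:5 ticket ratio comes out exactly:
print(stride_schedule({"A": 100, "B": 50, "C": 250}, 8))  # {'A': 2, 'B': 1, 'C': 5}
```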

31
Q

Stride scheduler vs lottery scheduler?

A
  • Lottery scheduling achieves the proportions probabilistically over time; stride scheduling gets them exactly right at the end of each scheduling cycle.
  • Stride scheduling has a global state which makes it harder to incorporate new processes in a sensible manner. If a new job enters in the middle of our stride scheduling what should its pass value be? Should it be set to 0? If so, it will monopolize the CPU.
  • With lottery scheduling, there is no global state per process; we simply add a new process with whatever tickets it has, update the single global variable tracking how many total tickets exist, and go from there. That is why it is easier to incorporate new processes in a sensible manner in lottery scheduling.
32
Q

Which new requirements are introduced in real-time system scheduling?

A

Things should be completed within a given time period.

Two types:

  • Hard : all deadlines should be met, missing a deadline is a failure.
  • Soft : deadlines could be missed but the application should be notified and be able to take actions.
33
Q

What is known about a task/job beforehand in real-time scheduling?

A

In hard real-time systems, tasks are known beforehand and described by a triplet (e, d, p):

  • e: the worst case execution time for the task.
  • d: the deadline, when in the future do we need to finish.
  • p: the period, how often should the task be scheduled.

d < p: constrained deadline; d = p: the default; d > p: several instances of the task may be outstanding at once.

34
Q

Real-time system strategies?

A
  • Rate Monotonic Scheduling (RMS)
  • Earliest Deadline First (EDF)

35
Q

How does rate monotonic scheduling (RMS) work?

A

Given that period (p) = deadline (d) i.e. a task must be completed within its period.

  • Schedule the available task with the shortest period
  • Always works if utilization is below about 69%; could still work for higher loads.
  • RMS is simpler to reason about, easy to implement.
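The ~69% figure is the limit of the schedulability bound n(2^(1/n) - 1); a quick check (the task sets are made up):

```python
def rms_guaranteed(tasks):
    """tasks: list of (execution_time, period) with deadline == period.
    RMS is guaranteed to meet all deadlines when total utilization
    stays under n * (2^(1/n) - 1); above that it may or may not work."""
    n = len(tasks)
    utilization = sum(e / p for e, p in tasks)
    bound = n * (2 ** (1 / n) - 1)   # ~0.828 for n = 2, -> ~0.693 as n grows
    return utilization <= bound

# 1 ms every 4 ms plus 2 ms every 6 ms: U ~ 0.58, under the bound.
print(rms_guaranteed([(1, 4), (2, 6)]))  # True
```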
36
Q

How does Earliest Deadline First (EDF) scheduling work?

A

Given that period (p) = deadline (d) i.e. a task must be completed within its period.

  • Schedule based on the deadline, more freedom to choose tasks.
  • Always works if utilization is < 100%.
  • Used by Linux in the real-time extension (not in the regular system)
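The decision itself is just "earliest absolute deadline first" (the task names and deadlines below are invented):

```python
def edf_pick(deadlines):
    """deadlines: dict mapping ready task -> absolute deadline (ms).
    EDF always runs the ready task whose deadline is closest."""
    return min(deadlines, key=deadlines.get)

print(edf_pick({"audio": 20, "video": 33, "logger": 500}))  # audio
```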
37
Q

How does the Completely Fair Scheduler (CFS) work?

A
  • Similar to stride scheduler but uses a red-black tree to order processes.
  • Will keep processes on the same core if it thinks it’s a good choice.
  • Scheduling classes:
  • SCHED_FIFO, SCHED_RR: high priority classes (often called real-time processes)
  • SCHED_NORMAL: all the regular interactive processes
  • SCHED_BATCH: processes that only run if there are no interactive processes available.
  • SCHED_IDLE: if we’ve got nothing else to do.
38
Q

Goal with the Completely Fair Scheduler?

A
Its goal is simple: to fairly divide a CPU evenly among all competing processes. It does so through a simple counting-based technique known as virtual runtime (vruntime). As each process runs, it accumulates vruntime. In the most basic case, each process’s vruntime increases at the same rate, in proportion with physical (real) time. When a scheduling decision occurs, CFS will pick the process with the lowest vruntime to run next.
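The decision rule is easy to sketch; real CFS keeps processes sorted in a red-black tree so the lookup is O(log n), while a dict and min() stand in for it here:

```python
def cfs_pick(vruntimes):
    """vruntimes: dict mapping process -> accumulated virtual runtime.
    CFS runs the process that has had the least virtual runtime so far."""
    return min(vruntimes, key=vruntimes.get)

# B has accumulated the least vruntime, so it runs next:
print(cfs_pick({"A": 120, "B": 80, "C": 100}))  # B
```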