Operating Systems (Scheduling) Flashcards

1
Q

3 Basic concepts of OS Scheduling?

A

1) Max CPU utilization through multiprogramming.
2) CPU-I/O Burst Cycle
3) CPU burst distribution

2
Q

The CPU scheduler selects from processes that are __ (2) when a process __ (4).

A

1/2) In memory and ready to execute.

1) Switches from running to waiting state.
2) Switches from running to ready state.
3) Switches from waiting to ready state.
4) Terminates.
3
Q

Dispatcher gives control of CPU to __ (1) and this involves __ (3).

A

1) The process that was selected by the CPU scheduler.

1) switching context
2) switching to user mode
3) jumping to the proper location in the user program
4
Q

Scheduling criteria (5)

A

1) CPU utilization
2) Throughput
3) Turnaround time
4) Waiting time
5) Response time

5
Q

Optimization criteria (5)

A
  • Max CPU utilisation
  • Max throughput
  • Min turnaround time
  • Min waiting time
  • Min response time
6
Q

How can we (try to) determine the length of the next CPU burst?

A

We estimate the length using exponential averaging:

tn = actual length of the nth CPU burst
τn+1 = predicted value of the next CPU burst

For 0 ≤ α ≤ 1:
τn+1 = α·tn + (1 − α)·τn

View the diagram in the notes.

8
Q

Examples of Exponential Averaging

A
• α = 0
  • τn+1 = τn
  • Recent history does not count
• α = 1
  • τn+1 = tn
  • Only the actual last CPU burst counts
• If we expand the formula, we get:
  τn+1 = α·tn + (1 − α)·α·tn−1 + … + (1 − α)^j·α·tn−j + … + (1 − α)^(n+1)·τ0
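
A minimal sketch of this prediction in Python (the burst history and α value below are illustrative, not from the notes):

```python
def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
    """Exponentially averaged prediction of the next CPU burst.

    bursts: observed burst lengths t1..tn (illustrative values)
    alpha:  weight of the most recent actual burst, 0 <= alpha <= 1
    tau0:   initial guess τ0
    """
    tau = tau0
    for t in bursts:
        # τn+1 = α·tn + (1 − α)·τn
        tau = alpha * t + (1 - alpha) * tau
    return tau

# α = 0 would ignore the actual bursts entirely; α = 1 would use only the last one.
print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))
```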
9
Q

Priority scheduling:

  • 2 Schemes
  • SJF
  • Problem and solution
A

1) Preemptive and Non-preemptive
2) SJF is a special case of priority scheduling

3) Problem - starvation;
Solution - aging

10
Q

Round Robin (RR)

  • Explanation
  • What happens after this time has elapsed? (2)
  • Performance (3)
A

1) Each process gets a small unit of CPU time (time quantum q).
2) After this time has elapsed, the process is preempted and added to the end of the ready queue.

3) q must be large w.r.t. context-switch time:
q too large => FIFO,
q too small => high overhead

11
Q

Round Robin (RR)

  • Explanation
  • What happens after this time has elapsed? (2)
  • Performance (3)
  • RR vs SJF
A

1) Each process gets a small unit of CPU time (time quantum q).
2) After this time has elapsed, the process is preempted and added to the end of the ready queue.

3) q must be large w.r.t. context-switch time:
q too large => FIFO,
q too small => high overhead

4) RR typically has a higher average turnaround time than SJF, but better response time.

(diag.)
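
A minimal round-robin simulation in Python, assuming all processes arrive at time 0; the burst lengths and quantum are a classic textbook example, not values from these notes:

```python
from collections import deque

def round_robin(bursts, q):
    """Simulate RR for processes that all arrive at time 0; return average waiting time."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    time = 0
    finish = [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])   # run for one quantum or until the burst finishes
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)          # preempted: back to the end of the ready queue
        else:
            finish[i] = time
    waiting = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waiting) / len(waiting)

print(round_robin([24, 3, 3], q=4))  # ≈ 5.67
```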
12
Q

Multi-level queue:

1) Ready queue partitioning (2)
2) Queue scheduling algorithms
3) How is scheduling done between queues?

A
• Ready queue is partitioned into separate queues
     • foreground (interactive)
     • background (batch)
• Each queue has its own scheduling algorithm
     • foreground – RR
     • background – FCFS
• Scheduling must be done between the queues
     • Fixed priority scheduling
          • foreground first, then background: possibility of starvation
     • Time slice: each queue gets a certain amount of CPU time (see the sketch below), e.g.
          • 80% to foreground in RR and
          • 20% to background in FCFS

See the diagram with process priorities.
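
A small sketch of the time-slice option, assuming Python; the queue names, running totals, and the 80/20 split are illustrative:

```python
def pick_queue(fg_ready, bg_ready, fg_time, bg_time, fg_share=0.8):
    """Choose which queue runs next so the foreground (RR) queue gets roughly
    fg_share of the CPU time and the background (FCFS) queue the remainder."""
    total = fg_time + bg_time
    if fg_ready and (not bg_ready or total == 0 or fg_time / total < fg_share):
        return "foreground"   # next quantum goes to the RR queue
    if bg_ready:
        return "background"   # next slot goes to the FCFS batch queue
    return None               # nothing is runnable

# Example: foreground has used 70 of 100 time units so far -> still under 80%.
print(pick_queue(fg_ready=True, bg_ready=True, fg_time=70, bg_time=30))  # foreground
print(pick_queue(fg_ready=True, bg_ready=True, fg_time=85, bg_time=15))  # background
```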

13
Q

Multilevel Feedback Queue (2)

A
  • Separation based on CPU bursts
  • Process may move between queues

Diagram.

14
Q

Thread Scheduling (2:2)

A
• Process Contention Scope
1) Local scheduling
2) Thread library schedules user threads on available kernel threads
• System Contention Scope
1) Global scheduling
2) Kernel schedules kernel threads onto the CPU
15
Q

Multiple-Processor Scheduling (6: 1, 2, 1, 1, 1, 2)

A

• Asymmetric multiprocessing
1) Master server – only one processor accesses the system data structures, alleviating the need for data sharing

• Symmetric multiprocessing

1) Each processor is self-scheduling
2) Multiple physical processors

• Processor Affinity
1) Hard / Soft

• Load balancing
1) Push / Pull

• Multiprocessing
1) Symmetric / Asymmetric

• Multithreaded Multicore Processors

1) Hardware thread
2) Appears as logical processor

16
Q

Real-Time CPU Scheduling (2:1)

A

• Hard real-time systems
1) Required to complete a critical task within a guaranteed amount of time

• Soft real-time computing
1) Requires that critical processes receive priority over less fortunate ones

17
Q

Real-time CPU scheduling (5)

A
  • Minimizing Latency
  • Priority-Based Scheduling
  • Rate-monotonic Scheduling
  • Earliest-Deadline-First Scheduling
  • Proportional Share Scheduling
18
Q

Minimize latency (2:2)

A
• Interrupt latency
1) Complete the current instruction, determine the interrupt type, save state
2) To keep it low – don't disable interrupts for too long
• Dispatch latency
1) Conflict phase and dispatch phase
2) To keep it low – provide preemptive kernels
19
Q

Priority-Based scheduling (2)

A
• The scheduler for a real-time system must support a priority-based algorithm
• A preemptive, priority-based scheduler only guarantees soft real-time functionality
20
Q

Hard real-time systems (2)

A
  • Must further guarantee that real-time tasks will be serviced in accordance with their deadline requirements.
  • Additional scheduling features are required.
21
Q

Characteristics of processes (3: 2, 0, 1)

A

• Periodic

1) Rate 1/p, where p is the period
2) Requires the CPU at constant intervals

• Fixed processing time t

• Deadline d
1) If announced to the scheduler, an admission-control algorithm either admits or rejects the request

22
Q

Hard real-time scheduling (3: 1)

A
• Rate-monotonic scheduling
1) Static priority (1/p) with preemption
• Earliest-deadline-first scheduling
1) Process must announce its deadline when it becomes runnable
• Proportional share scheduling
1) Assign shares; needs admission control
23
Q

Rate-Monotonic scheduling (1:4)

A

• Static priority policy with preemption

1) Periods: p1 = 50, p2 = 100
2) Processing times: t1 = 20, t2 = 35
3) Deadlines: start of the next period
4) CPU utilisation: 20/50 + 35/100 = 75%

Diag.

24
Q

Rate-monotonic (2: 0, 4)

A
  • Optimal (static scheduling)
  • CPU utilisation is bounded

1) Periods: p1 = 50, p2 = 80
2) Processing times: t1 = 25, t2 = 35
3) Deadlines: start of the next period
4) CPU utilisation: 25/50 + 35/80 ≈ 94%

Diag.
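
A quick check of these utilisation figures (a small Python sketch; the task sets are the two from these cards, and the comment quotes the standard two-task RMS bound 2(2^(1/2) − 1)):

```python
def cpu_utilisation(tasks):
    """tasks: list of (processing_time, period) pairs; returns total CPU utilisation."""
    return sum(t / p for t, p in tasks)

print(cpu_utilisation([(20, 50), (35, 100)]))  # 0.75   -> the 75% example
print(cpu_utilisation([(25, 50), (35, 80)]))   # 0.9375 -> the ~94% example
print(2 * (2 ** 0.5 - 1))                      # ≈ 0.83, RMS utilisation bound for two tasks
```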

25
Q

Earliest-deadline-first (6: 0, 4, 2, 1, 1, 1)

A

• Assign priorities according to deadline

• Priorities are not fixed, but adjusted to reflect the deadline of a newly runnable process
1) Periods: p1 = 50, p2 = 80
2) Processing times: t1 = 25, t2 = 35
3) Deadlines: start of the next period
4) CPU utilisation: 25/50 + 35/80 ≈ 94%

• Processes do not
1) need to be periodic
2) have to require a constant amount of CPU time per burst

• Process must
1) announce its deadline when it becomes runnable

• Theoretically
1) possible to achieve 100% CPU utilisation

• In practice
1) Impossible due to context switching

Diag.
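
A minimal illustration of the EDF priority rule in Python (always dispatch the runnable task with the nearest deadline; the task names and deadlines are hypothetical):

```python
import heapq

# Each entry is (absolute_deadline, task_name); the values are made up.
runnable = [(100, "P2"), (50, "P1"), (80, "P3")]
heapq.heapify(runnable)   # a min-heap keyed on deadline

# EDF: the task with the earliest deadline has the highest priority right now.
deadline, task = heapq.heappop(runnable)
print(f"run {task} next (deadline {deadline})")   # run P1 next (deadline 50)
```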

26
Q

Proportional Share Scheduling (3: 0, 0, 1)

A

• Allocate T shares among all applications

• An application assigned N shares will have N/T of the processor time

• Admission-control policy
1) Admit a client only if sufficient shares are available to satisfy its request
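
A tiny sketch of that admission check in Python (T = 100 and the share numbers are made up):

```python
T = 100          # total shares
allocated = 85   # shares already granted (illustrative)

def admit(requested):
    """Admit a new client only if enough unallocated shares remain."""
    global allocated
    if allocated + requested <= T:
        allocated += requested
        return True      # client will receive requested/T of the processor time
    return False         # rejected by the admission-control policy

print(admit(10))  # True  -> 95/100 shares now allocated
print(admit(10))  # False -> would exceed T
```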

27
Q

Operating System Examples

A

• Systems using the 1:1 threading model use only SCS

1) Linux
2) Windows
3) Solaris

28
Q

Linux Scheduling (2: 2, 1)

A
• Two scheduling classes
1) Default class, using CFS
2) Real-time class, using the POSIX standard for real-time scheduling

• Scheduler
1) Selects the highest-priority task belonging to the highest-priority class

29
Q

CFS (3: 1)

A

• Assigns a proportion (based on nice value) of CPU processing time to each task
1) nice value = -20 (high) … 0 (default) … 19 (low)

• Identifies a targeted latency
1) an interval of time during which every runnable task should run at least once

• Maintains a virtual run time for each task
1) associated with a decay factor based on the priority of the task
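
A toy illustration of the virtual-run-time idea in Python. This is not the kernel's code: the weight() function below is only a stand-in for CFS's nice-to-weight table, and the task values are made up.

```python
def weight(nice):
    # Stand-in for the kernel's nice-to-weight table: lower nice => larger weight.
    return 1024 / (1.25 ** nice)

def charge(vruntime, actual_ms, nice):
    """Advance a task's virtual run time; higher-priority (lower nice) tasks
    accumulate vruntime more slowly, so they are picked again sooner."""
    return vruntime + actual_ms * 1024 / weight(nice)

tasks = {"A": 12.0, "B": 7.5}                          # task -> vruntime (hypothetical)
print(min(tasks, key=tasks.get))                       # CFS-style pick: "B" runs next
tasks["B"] = charge(tasks["B"], actual_ms=4, nice=0)   # B ran for 4 ms at nice 0
```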

30
Q

Linux Scheduling (2:2,0)

A

• Two separate ranges of priorities

1) 0-99 for real-time tasks
2) 100-139 for normal tasks

• These two ranges map into a global priority scheme

31
Q

Solaris Scheduling (2:0,6)

A
  • Priority-based scheduling
  • Six classes

1) Time sharing (default)
2) Interactive
3) Real Time
4) System
5) Fair share
6) Fixed priority

32
Q

Algorithm Evaluation (4)

A
  • Deterministic modeling
  • Queueing models
  • Simulations
  • Implementation