P3L1 Flashcards

1
Q

3 Options for scheduler

A
  1. Assign tasks immediately (FCFS)
  2. Assign simple tasks first (SJF)
  3. Assign complex tasks first
2
Q

FCFS

A

First Come First Serve (FIFO)

3
Q

SJF

A

Shortest Job First

4
Q

CPU Scheduler runs when

A
  1. CPU becomes idle
  2. New task becomes ready
  3. Timeslice expires (timeout)
5
Q

runqueue is equal to

A

The scheduling algorithm: the run queue data structure is designed around, and tightly coupled to, the scheduling algorithm

6
Q

“Run-to-completion” scheduling

A

No preemption

7
Q

Metrics for schedulers

A

throughput
avg job completion time
avg job wait time
cpu utilization

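The metrics above can be made concrete with a toy FCFS run; this sketch assumes three jobs of 10, 1, and 1 time units arriving together (made-up values):

```python
# Sketch: the four metrics for FCFS on a toy workload. Job lengths
# (10, 1, 1 time units, all arriving at t=0) are made-up example values.
jobs = [10, 1, 1]

start = 0
wait_times = []
completion_times = []
for length in jobs:                  # FCFS: run in arrival order
    wait_times.append(start)         # time spent waiting before running
    start += length
    completion_times.append(start)   # time from arrival (t=0) to finish

throughput = len(jobs) / completion_times[-1]   # jobs per time unit
avg_completion = sum(completion_times) / len(jobs)
avg_wait = sum(wait_times) / len(jobs)
```

Running the long job first drags both averages up (avg completion 11, avg wait 7); SJF order [1, 1, 10] would give 5 and 1 instead, which is why average completion/wait time favor shortest-job-first.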
8
Q

Data structure for FCFS

A

Queue

9
Q

Data structure for SJF

A

Ordered queue or tree

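The ordered-queue variant can be sketched with a min-heap keyed by expected runtime (task names and runtimes are illustrative):

```python
import heapq

# Sketch: SJF ready queue as a min-heap ordered by expected runtime.
# Task names and runtimes are illustrative.
ready = []
for runtime, name in [(10, "T1"), (1, "T2"), (1, "T3")]:
    heapq.heappush(ready, (runtime, name))

# popping always yields the shortest remaining job first
order = [heapq.heappop(ready)[1] for _ in range(len(ready))]
```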
10
Q

SJF + Preemption

A

estimate the job's running time with heuristics based on history (e.g., a windowed average of its past execution times), then preempt when a shorter job arrives

11
Q

Priority Scheduling

A

Run highest priority next

12
Q

Data structure(s) for Priority Scheduling

A

Queues for each priority

Tree with priority

13
Q

How to deal with Priority Scheduling Starvation

A

priority aging: effective priority = f(actual priority, time spent in run queue), so the longer a task waits, the higher its effective priority

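One possible shape for such an aging function, sketched below; the Linux-style "lower number = higher priority" convention and the aging_rate parameter are illustrative assumptions, not a specific scheduler's formula:

```python
# Sketch of a hypothetical aging function. Assumes Linux-style numbering
# where a lower number means higher priority; aging_rate is made up.
def effective_priority(actual_priority, time_in_runqueue, aging_rate=10):
    # each aging_rate time units spent waiting boosts priority one level
    return actual_priority - time_in_runqueue // aging_rate
```

A priority-130 task that has waited 50 units competes like a priority-125 task, so even low-priority tasks eventually run.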
14
Q

What is Priority Inversion

A

A lower-priority task holding a lock blocks a higher-priority task that needs it, effectively inverting their priorities

15
Q

Solution to Priority Inversion

A

Temp boost priority of mutex owner, lower on release

16
Q

Round Robin Scheduling

A

Pick the first task from the queue to run (a task only yields to wait on I/O)

17
Q

Round Robin w/ Priorities

A

Include preemption

18
Q

Round Robin w/ interleaving

A

Timeslicing

19
Q

Timeslice

A

(aka Time Quantum)

maximum amount of uninterrupted time given to a task
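A toy round-robin loop shows the timeslice in action (task names and work amounts are made-up):

```python
from collections import deque

# Sketch: round-robin with a timeslice. Tasks are (name, remaining work);
# the names and work amounts are illustrative.
def round_robin(tasks, timeslice):
    queue = deque(tasks)
    finish_order = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= min(timeslice, remaining)   # run for at most one slice
        if remaining == 0:
            finish_order.append(name)            # shorter tasks finish first
        else:
            queue.append((name, remaining))      # slice expired: back of queue
    return finish_order
```

With `round_robin([("T1", 10), ("T2", 1), ("T3", 1)], timeslice=1)` the short tasks finish first; a huge timeslice degenerates to FCFS.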

20
Q

Advantages of timeslice

A
  • Shorter tasks finish first
  • More responsive
  • Lengthy I/O ops initiated sooner
21
Q

Timeslice needs to be much greater than

A

context switch time (t_slice >> t_ctx_switch keeps overheads low)

22
Q

CPU bound tasks prefer

A

larger timeslices

23
Q

I/O Bound tasks prefer

A

smaller timeslices

24
Q

CPU Bound Timeslices

A

larger timeslices

a) limits context switching overheads
b) keeps CPU utilization and throughput high

25
Q

I/O Bound Timeslices

A

shorter timeslices

a) I/O bound tasks can issue I/O ops earlier
b) keeps CPU + device utilization high
c) better user perceived performance

26
Q

Multi-Level Feedback Queue

A
  1. task enters top most queue
  2. if task yields voluntarily, stay
  3. if task uses entire time slice, push down level
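Step 3's demotion rule can be sketched as follows; the number of levels and the function name are illustrative assumptions:

```python
# Sketch of step 3's demotion rule. The number of levels and the
# function name are illustrative; level 0 is the topmost queue.
def next_level(level, used_full_slice, lowest_level=2):
    if used_full_slice:
        # burned the whole slice: likely CPU-bound, push down a level
        return min(level + 1, lowest_level)
    return level  # yielded voluntarily (e.g. for I/O): stay put
```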
27
Q

Linux O(1) Scheduler

A

Constant time to select/add a task

- preemptive, priority-based
- realtime priorities (0-99)
- timesharing priorities (100-139)

28
Q

Linux O(1) Scheduler Data Structures

A

Run queue == 2 arrays of tasks (active and expired)

29
Q

O(1) Active array

A
  • used to pick next task to run
  • tasks remain in queue until timeslice expires

30
Q

O(1) Expired array

A
  • inactive list
  • when no more tasks in active array, swap active and expired
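The two-array structure can be sketched with plain Python lists standing in for the per-priority queues (class and method names are illustrative):

```python
# Sketch: the O(1) scheduler's two-array run queue, with plain Python
# lists standing in for per-priority queues. Names are illustrative.
class O1RunQueue:
    def __init__(self, levels=140):
        self.active = [[] for _ in range(levels)]
        self.expired = [[] for _ in range(levels)]

    def pick_next(self):
        # scan at most 140 priority levels: a constant bound
        for level in self.active:
            if level:
                return level.pop(0)
        # active array drained: swap active and expired
        self.active, self.expired = self.expired, self.active
        for level in self.active:
            if level:
                return level.pop(0)
        return None

    def on_timeslice_expired(self, task, prio):
        self.expired[prio].append(task)   # waits there until the next swap
```

The real scheduler finds the first non-empty level via a priority bitmap rather than a scan; the fixed 140-level bound is what makes both constant-time.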

31
Q

O(1) Feedback

A

sleep time: waiting/idling

  • longer sleep: interactive, priority -5 (boost)
  • smaller sleep: compute intensive, priority +5 (lowered)
32
Q

Linux CFS Scheduler

A

Completely Fair Scheduler

33
Q

Why CFS vs O(1)

A

O(1) - performance of interactive tasks suffered, lacked fairness

34
Q

CFS Data structure

A

Run queue == Red-Black Tree
Ordered by virtual runtime (ns)
vruntime == time spent on CPU

35
Q

CFS Scheduling

A
  • always pick left-most node
  • periodically adjust (increase) the running task's vruntime, then compare it to the leftmost vruntime:
    • if smaller, continue running
    • if larger, pre-empt and place appropriately in the tree
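The pick-leftmost/reinsert cycle can be sketched with a sorted list standing in for the red-black tree (vruntime values are illustrative):

```python
import bisect

# Sketch: CFS pick/update cycle, with a sorted list standing in for the
# red-black tree. Entries are (vruntime, task); values are illustrative.
tree = [(5, "A"), (9, "B"), (12, "C")]

def run_current(tree, delta):
    vruntime, task = tree.pop(0)                   # leftmost = smallest vruntime
    bisect.insort(tree, (vruntime + delta, task))  # charge CPU time, reinsert
    return tree[0][1]                              # new leftmost runs next
```

After A runs for 10 units its vruntime (15) exceeds B's (9), so `run_current(tree, 10)` returns "B"; with a small enough delta, A would remain leftmost and keep running.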
36
Q

vruntime progress rate

A

depends on priority and niceness

  • rate faster for low priority
  • rate slower for high priority
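The rate difference comes from weighting elapsed CPU time by priority. The sketch below is modeled loosely on Linux's nice-to-weight table; the 1.25-per-nice-step factor approximates the kernel's table, not its exact values:

```python
NICE_0_LOAD = 1024

# Sketch: why vruntime advances at different rates. The 1.25-per-nice-step
# factor approximates Linux's nice-to-weight table (not its exact values).
def weight(nice):
    return int(NICE_0_LOAD / (1.25 ** nice))

def vruntime_delta(delta_exec, nice):
    # positive nice = lower priority: vruntime grows faster, so the task
    # moves toward the right of the tree sooner and runs less often
    return delta_exec * NICE_0_LOAD / weight(nice)
```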
37
Q

CFS Performance

A

O(1) for select

O(log N) for adding

38
Q

LLC

A

Last Level Cache

39
Q

SMP

A

Shared Memory Multiprocessor (Symmetric Multiprocessing)

Each CPU has its own LLC

40
Q

Multi-core Processor

A

Shared LLC

41
Q

NUMA

A

Non-Uniform Memory Access Platforms

  • multiple memory nodes
  • access to the local memory node is faster than access to a remote one
42
Q

HyperThreading

A

Multiple hardware-supported execution contexts

43
Q

Other names for HyperThreading

A
  • hardware multithreading
  • Chip Multithreading (CMT)
  • Simultaneous Multithreading (SMT)
44
Q

SMT Memory loading

A

~100 cycles to load from memory (vs. ~1 cycle to execute an instruction)

45
Q

SMT Optimal Mix

A

mix of CPU and memory-intensive threads

  • all component (CPU and memory) well utilized
  • avoid/limit contention on processor pipeline
46
Q

Mechanisms to track SMT resource usage (CPU- vs. memory-bound)

A
  • Hardware counters
    • L1, L2,… LLC misses
    • IPC (instructions retired)
    • Power and energy data
  • Software interface and tools
    • e.g. oprofile, Linux perf tool
47
Q

CPI

A

Cycles Per Instruction

- a mixed-CPI workload keeps all pipeline resources busy in experiments, but it is not realistic: real-world workloads do not vary enough in CPI for it to be a useful scheduling metric