P3L1: Scheduling Flashcards

1
Q

What decides how and when processes (and their threads) access shared CPUs?

A

CPU Scheduler

2
Q

What is the CPU Scheduler concerned with?

A

Schedules tasks running user-level processes/threads as well as kernel-level threads

3
Q

What events kick off the CPU Scheduler?

A
  • CPU becomes idle
  • New task becomes ready (checks for priority on new task and previous tasks to potentially interrupt current running task)
  • timeslice expires (when a running thread has used up its allotted time on the CPU)
4
Q

Describe the steps taken when the scheduler dispatches a thread on the CPU

A
  1. context switch
  2. enter user mode
  3. set program counter
  4. execute/run
5
Q

Choosing a task from the ready queue is known as?

A

Scheduling

6
Q

Name the scheduling type that, as soon as a task is assigned to the CPU, it will run until it finishes/completes

A

Run-to-Completion Scheduling

7
Q

Define the First-Come First-Served (FCFS) algorithm for scheduling

A
  • Schedules tasks in order of arrival
  • A runqueue is useful; the runqueue should be First-In First-Out (FIFO)
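
FCFS can be sketched with a FIFO runqueue (illustrative Python; the task names and times are made up):

```python
from collections import deque

def fcfs(tasks):
    """Run tasks in arrival order; return each task's completion time."""
    runqueue = deque(tasks)                   # FIFO: tasks already in arrival order
    clock, completions = 0, {}
    while runqueue:
        name, exec_time = runqueue.popleft()  # first-come, first-served
        clock += exec_time                    # run to completion, no preemption
        completions[name] = clock
    return completions

# Three tasks arriving at t=0 with execution times 1, 10, 1:
print(fcfs([("T1", 1), ("T2", 10), ("T3", 1)]))  # {'T1': 1, 'T2': 11, 'T3': 12}
```

Note how the long task T2 delays T3, even though T3 only needs 1 time unit.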
8
Q

Define the Shortest Job First (SJF) algorithm for scheduling

A
  • Schedules tasks in order of their execution time
  • Runqueue will be maintained as a queue ordered by task execution time, OR utilize a tree whose nodes are organized by execution time
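
An ordered runqueue for SJF can be sketched with a min-heap (illustrative Python; same made-up tasks as the FCFS example):

```python
import heapq

def sjf(tasks):
    """Schedule tasks in order of execution time, shortest first."""
    runqueue = [(exec_time, name) for name, exec_time in tasks]
    heapq.heapify(runqueue)                   # min-heap ordered by execution time
    clock, completions = 0, {}
    while runqueue:
        exec_time, name = heapq.heappop(runqueue)  # always the shortest job
        clock += exec_time
        completions[name] = clock
    return completions

print(sjf([("T1", 1), ("T2", 10), ("T3", 1)]))  # {'T1': 1, 'T3': 2, 'T2': 12}
```

The two short tasks now finish first, which improves average completion time versus FCFS.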
9
Q

Formula for Throughput

A

Throughput = (# of Tasks) / (Total Execution Time, or "Makespan")

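
The throughput formula in a quick sketch (illustrative Python; the numbers come from the three-task example above):

```python
def throughput(num_tasks, makespan):
    """Tasks completed per unit of total execution time (the makespan)."""
    return num_tasks / makespan

# 3 tasks with execution times 1, 10, 1 finish at t=12 on one CPU:
print(throughput(3, 12))  # 0.25 tasks per time unit
```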
10
Q

Formula for Avg. Completion Time

A

Avg. Completion Time = (Sum of Times to Complete Each Task) / (# of Tasks)

The order of tasks matters!

11
Q

Formula for Avg. Wait Time

A

Avg. Wait Time = (Total Wait Time to Start Each Tasks) / (# of Tasks)

The order of tasks matters!

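
Both averages, and why task order matters, in one sketch (illustrative Python; all tasks are assumed to arrive at t=0):

```python
def avg_completion_and_wait(exec_times):
    """Given execution times in schedule order, return
    (avg completion time, avg wait time)."""
    clock, completions, waits = 0, [], []
    for t in exec_times:
        waits.append(clock)        # time this task waited before starting
        clock += t
        completions.append(clock)  # time at which this task finished
    n = len(exec_times)
    return sum(completions) / n, sum(waits) / n

# Same three tasks, two different orders:
print(avg_completion_and_wait([1, 10, 1]))  # (8.0, 4.0)
print(avg_completion_and_wait([10, 1, 1]))  # (11.0, 7.0)
```

Running the long task first makes both averages worse, even though throughput is unchanged.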
12
Q

What does it mean to preempt (what is preemption)?

A

Interrupt a task to start another

13
Q

How do we determine execution time?

A

Heuristics based on history are used to estimate a job's running/execution time

14
Q

What is Windowed Average?

A

The average execution time of a task over the last n times

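
A windowed average estimator in a few lines (illustrative Python; the class name and sample values are made up):

```python
from collections import deque

class WindowedAverage:
    """Estimate a task's next execution time as the mean of its last n runs."""
    def __init__(self, n):
        self.history = deque(maxlen=n)   # samples older than n fall off automatically

    def record(self, exec_time):
        self.history.append(exec_time)

    def estimate(self):
        return sum(self.history) / len(self.history)

est = WindowedAverage(n=3)
for t in [8, 8, 2, 2, 2]:    # the task got faster recently
    est.record(t)
print(est.estimate())        # 2.0 -- only the last 3 runs count
```

A plain average over all 5 runs would predict 4.4; the window adapts to recent behavior.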
15
Q

What is Priority Scheduling?

A

When tasks have different priority levels that help indicate which tasks to schedule

  • highest priority task gets scheduled next or preempts the current running task
16
Q

How does Priority Scheduling determine priority?

A
  • Different runqueues for each priority level
  • Tree ordered based on priority level
17
Q

What is a drawback/danger of Priority Scheduling and how do we address it?

A
  • Low-priority tasks can get stuck in a runqueue (because higher-priority tasks are always present); “runqueue starvation”
  • “Priority Aging” - priority = f(actual priority, time spent in runqueue)
    We can change the priority of a task through some function, based on the original priority and the time it has spent in the current runqueue
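
One possible aging function, priority = f(actual priority, time spent in runqueue), as a sketch (illustrative Python; the boost interval and function name are hypothetical, and lower numbers mean higher priority):

```python
def aged_priority(base_priority, wait_time, boost_interval=10):
    """Hypothetical priority aging: every boost_interval time units spent
    waiting in the runqueue raises the task's priority by one level."""
    return max(0, base_priority - wait_time // boost_interval)

# A low-priority task (level 20) slowly climbs while it waits:
print(aged_priority(20, 0))    # 20
print(aged_priority(20, 50))   # 15
print(aged_priority(20, 500))  # 0  -- it will eventually be scheduled
```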
18
Q

Describe Priority Inversion and how we might combat it

A

Priority Inversion is when the order in which tasks complete does not follow their priorities, due to conflicts over mutex locks. We can combat this by temporarily boosting the priority of the current mutex owner until it no longer needs the mutex, ensuring the mutex gets released for higher-priority tasks. We lower the task's priority again once it releases the mutex.

19
Q

Define Round Robin Scheduling

A
  1. Pick up the first task from the queue (like FCFS)
  2. Task may yield, e.g. to wait on I/O (unlike FCFS)
  3. The next task is picked up while the first task waits on I/O to finish; the first task is placed at the end of the queue
  4. The second task runs to completion or until it has to wait; at that point it yields to the next task in the queue and is put at the back of the queue

and so on / this goes on until all tasks are complete
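
The cycling behavior (with timeslices, ignoring I/O waits for simplicity) can be sketched like this (illustrative Python; the tasks and timeslice are made up):

```python
from collections import deque

def round_robin(tasks, timeslice):
    """Cycle through tasks, giving each at most `timeslice` units per turn."""
    runqueue = deque(tasks)
    clock, completions = 0, {}
    while runqueue:
        name, remaining = runqueue.popleft()
        ran = min(timeslice, remaining)
        clock += ran
        if remaining - ran > 0:
            runqueue.append((name, remaining - ran))  # back of the queue
        else:
            completions[name] = clock                 # task finished
    return completions

print(round_robin([("T1", 1), ("T2", 10), ("T3", 1)], timeslice=1))
# {'T1': 1, 'T3': 3, 'T2': 12}
```

The short tasks finish almost immediately instead of queueing behind T2.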

20
Q

How does Round Robin Scheduling work with priorities?

A

Basic Round Robin scheduling, but with the inclusion of preemption based on priority level

21
Q

What is the maximum amount of uninterrupted time given to a task?

A

timeslice or “time quantum”

22
Q

Can a task run for less than the set timeslice time?

A

Yes - it could end early, or have to wait on I/O, synchronization, a mutex, etc.

The task will be placed back on the queue if interrupted.

Higher-priority tasks will also interrupt/preempt it.

23
Q

How do we minimize overheads for scheduling that uses timeslices?

A

The timeslice value should be significantly larger than the context switch time

24
Q

What are the benefits/downsides of adding timeslices to Round Robin scheduling?

A

Benefits
- short tasks finish sooner
- more responsive
- lengthy I/O operations initiated sooner

Downsides
- overheads: interrupt, schedule, context switches

25
Q

For CPU-bound tasks, is it better to have lower or higher timeslice values?

A

Higher; in fact, an infinite timeslice value yields the best throughput and avg. completion time (for CPU-bound tasks specifically)
26
Q

For I/O-bound tasks, is it better to have lower or higher timeslice values?

A

Lower; I/O-bound tasks get more chances to start running, issue their I/O requests, and respond to users, which keeps both the CPU and the I/O devices busy
27
Q

CPU Utilization Formula

A

CPU Utilization = [cpu_running_time / (cpu_running_time + context_switching_overheads)] * 100

Helpful steps to solve:
  1. Determine a consistent, recurring interval
  2. In the interval, each task should be given an opportunity to run
  3. During that interval, how much time is spent computing? This is the cpu_running_time
  4. During that interval, how much time is spent context switching? This is the context_switching_overheads
  5. Calculate!
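
The utilization formula in a quick check (illustrative Python; the 10ms timeslice and 1ms switch cost are made-up numbers):

```python
def cpu_utilization(cpu_running_time, context_switching_overheads):
    """Percentage of an interval spent doing useful work on the CPU."""
    return cpu_running_time / (cpu_running_time + context_switching_overheads) * 100

# Interval: two tasks each run a 10ms timeslice, with a 1ms switch after each.
# Computing: 20ms; context switching: 2ms.
print(round(cpu_utilization(20, 2), 1))  # 90.9
```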
28
Q

List Runqueue Data Structures

A
  • List
  • Multiple lists
  • Tree
  • etc.
29
Q

Given n tasks with three levels of I/O intensity, how might we configure the runqueue data structure, timeslice values, and priorities?

A

Separate the tasks by I/O intensity: the most I/O-intensive tasks get the shortest timeslice value and highest priority, while the least I/O-intensive tasks get the longest timeslice and lowest priority.
30
Q

How do we determine the I/O intensity of tasks?

A

History-based heuristics: treat the multiple runqueues as not totally separate. If a task uses its full timeslice in one queue, move it to the next queue with a larger timeslice value (or vice versa, moving a task up in priority if it yields before its timeslice is used) => Multi-Level Feedback Queue (MLFQ)
31
Q

Describe the Multi-Level Feedback Queue

A

Multiple runqueues with increasing timeslice values and decreasing priority (higher timeslice == lower priority); each runqueue also has a different policy associated with it. If a task uses its full timeslice in a high-priority queue, move it to the lower-priority queue with the larger timeslice value. If a task yields before its full timeslice is used, move it to the higher-priority queue with the smaller timeslice.
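
The MLFQ feedback rule in a toy form (illustrative Python; the three-level setup and function name are assumptions, with level 0 as the highest priority / shortest timeslice):

```python
def mlfq_move(level, used_full_timeslice, num_levels=3):
    """Hypothetical MLFQ feedback: demote CPU-hungry tasks to longer-timeslice
    queues, promote tasks that yield early back toward shorter-timeslice queues."""
    if used_full_timeslice:
        return min(num_levels - 1, level + 1)  # demote: lower priority, longer timeslice
    return max(0, level - 1)                   # promote: higher priority, shorter timeslice

print(mlfq_move(0, used_full_timeslice=True))   # 1 -- burned its whole timeslice
print(mlfq_move(1, used_full_timeslice=False))  # 0 -- yielded early (I/O-bound)
```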
32
Q

Describe the Linux O(1) Scheduler

A

Constant time to select/add a task, regardless of task count. Preemptive, priority-based:
- real-time (priority levels 0 to 99), 0 being highest
- timesharing (100 to 139) for user processes; default is 120, adjusted by a "nice value" (-20 to 19)

Timeslice value depends on priority:
- smallest for low priority
- highest for high priority

Feedback based on sleep time (waiting/idling):
- longer sleep => interactive, so boost the priority by subtracting 5
- smaller sleep => compute-intensive, so lower the priority by adding 5

Runqueue: 2 arrays of tasks
- Active: used to pick the next task to run; constant time to add/select; tasks remain in the active array until their timeslice expires
- Expired: inactive list; when there are no more tasks in the active array, swap the active and expired arrays
33
Q

What are some issues with the O(1) Linux Scheduler?

A
- performance of interactive tasks
- fairness not guaranteed
34
Q

Describe the Linux Completely Fair Scheduler (CFS)

A

Default since 2.6.23; schedules non-real-time tasks (real-time tasks are handled by a real-time scheduler).

Runqueue == Red-Black Tree
- dynamic tree structure: as nodes are added/removed, the tree self-balances so all paths from the root to the leaves are approximately the same length
- ordered by "vruntime" (time spent on the CPU)
- children on the left have spent less time on the CPU; children on the right have spent more

CFS always picks the leftmost node. It periodically adjusts the vruntime of the currently running task and compares it to the leftmost node's vruntime:
- if smaller, continue running
- if larger, preempt and place the task appropriately in the tree

vruntime progress rate depends on priority and niceness:
- rate is faster for low priority
- rate is slower for high priority

Same tree for all priorities!

Performance
- Select task => O(1)
- Add task => O(log N)
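
The pick-leftmost behavior can be sketched with a min-heap standing in for the red-black tree (illustrative Python, not kernel code; the task names and vruntimes are made up):

```python
import heapq

def cfs_pick_and_run(tasks, slice_time):
    """Toy CFS step: run the task with the smallest vruntime, then charge it
    for the CPU time it used. (Real CFS keeps tasks in a red-black tree; a
    min-heap gives the same smallest-first ordering.)"""
    heap = [(vruntime, name) for name, vruntime in tasks]
    heapq.heapify(heap)
    vruntime, name = heapq.heappop(heap)                 # "leftmost" == smallest vruntime
    heapq.heappush(heap, (vruntime + slice_time, name))  # charge the CPU time used
    return name, sorted(heap)

ran, state = cfs_pick_and_run([("A", 30), ("B", 10), ("C", 20)], slice_time=25)
print(ran)    # 'B' -- it has spent the least time on the CPU
print(state)  # [(20, 'C'), (30, 'A'), (35, 'B')] -- B moved to the "right"
```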
35
Q

Why is it important to schedule a task back onto the same CPU it previously ran on, and how do we do it?

A

Cache affinity: keep tasks on the same CPU as much as possible, since that CPU's cache is hot with the data the task needs. This is done with a hierarchical scheduler architecture: per-CPU schedulers are handed tasks by a load balancer, which balances load across CPUs based on queue length or when a CPU is idle.

With multiple memory nodes, we also want to run the task on a CPU close to the memory node holding its state => NUMA-aware (Non-Uniform Memory Access aware) scheduling
36
Q

Describe Non-Uniform Memory Access (NUMA)

A
- multiple memory nodes
- each memory node is closer to a "socket" of multiple processors
- access to the local memory node is faster than access to a remote memory node
37
Q

Define Hyperthreading

A
- multiple hardware-supported execution contexts
- still one CPU, but with very fast context switching

The CPU has multiple sets of registers, each holding its own thread/execution context; it is faster to context switch between these hardware contexts than between ordinary threads. Also known as "hardware multithreading," "chip multithreading (CMT)," or "simultaneous multithreading (SMT)."