1-5 Flashcards

1
Q

Virtualization:

(4)…

A

• The OS takes a physical resource and transforms it into a virtual form of itself

  1. Physical resource: Processor, Memory, Disk …
  2. The virtual form is more general, powerful, and easier to use
  3. Sometimes, we refer to the OS as a virtual machine
2
Q

A system call allows the user to __.

A

tell the OS what to do

3
Q

The OS provides some __.

A

interface (APIs, standard library)

4
Q

A typical OS exports a few hundred system calls:

(3)…

A
  • Run programs
  • Access memory
  • Access devices
5
Q

The OS manages resources such as (3)…

A

CPU, memory, and disk

6
Q

The OS allows:

(3)…

A
  • Many programs to run → Sharing the CPU
  • Many programs to concurrently access their own instructions and data → Sharing memory
  • Many programs to access devices → Sharing disks
7
Q

Virtualizing the CPU:
• The system has a very large number of virtual CPUs
- Turning a single CPU into __.
- Allowing many programs to seemingly run at once → __.

A

a seemingly infinite number of CPUs;

Virtualizing the CPU

8
Q

Virtualizing Memory:

The physical memory is an __.

A

array of bytes

9
Q
Virtualizing Memory: 
1. Read memory (load):
...
2. Write memory (store):
...
A
  1. Read memory (load):
    Specify an address to be able to access the data
  2. Write memory (store):
    Specify the data to be written to the given address
10
Q

Virtualizing Memory:

Each process accesses its own private __.

A

virtual address space

11
Q

Virtualizing Memory:
Each process accesses its own private virtual address space.

  • The OS maps __
  • A memory reference within one running program does not affect __
  • Physical memory is a __ resource, managed by the OS
A

address space onto the physical memory;
the address space of other processes;
shared

12
Q

How to Provide the Illusion of Many CPUs?

(4)…

A
  1. Goal: run N processes at once even though there are M CPUs
    • N ≫ M
  2. CPU virtualizing
    • The OS can promote the illusion that many virtual CPUs exist
    • One isolated machine for each program
  3. Time sharing
    • Running one program, then stopping it and running another
    • The potential cost is performance
  4. What are the benefits?
    • Ease of use for the programmer
    • Protection – program runs on a restricted machine
13
Q

A process is an __.

A

OS’s abstraction of a running program

14
Q

What constitutes a process?

(3)…

A
1. Memory (address space)
• Instructions
• Data
2. Registers (state of the processor)
• General purpose registers
• Program counter (PC)
• Stack pointer (SP)
3. I/O Information
• List of files process currently has open
15
Q

Process API:
These APIs are available on any modern OS:
(5)…

A
1. Create
• Create a new process to run a program
2. Destroy
• Halt a runaway process
3. Wait
• Wait for a process to stop running
4. Miscellaneous Control
• Suspend
• Resume
5. Status
• Get some status information about a process
• How long it has been running
• What state is it in
16
Q

Process Creation:

(5)…

A
  1. Load a program code into memory, the address space of the process
    • Programs reside on a disk in an executable format
    • OS performs the loading process lazily
    - Loads pieces of code or data only as they are needed during program execution (demand paging)
  2. The program’s run-time stack is allocated
    • Stack is used for local variables, function parameters, return address
    • Initialize the stack with arguments
    - argc and argv array of main() function
  3. The program’s heap is created
    • Used for explicitly requested dynamically allocated data
    • malloc(); free()
  4. The OS does some other initialization
    • I/O setup (stdin, stdout, stderr)
  5. Start the program running at the entry point main()
    • The OS transfers control of the CPU to the newly-created process
17
Q

A process can be in one of three states:

(3)…

A
  1. Running
    • A process is running on the CPU
  2. Ready
    • A process is ready to run but for some reason the OS has chosen not to run it at this given moment
  3. Blocked
    • A process has performed some kind of operation that it is waiting on
    • E.g., a disk request
18
Q

Process Data Structures:
• The OS has some key data structures that track various pieces of information:
(2)…

A
1. Process list
• Ready processes
• Blocked processes
• Current running process
2. Register context
• A copy of all the registers for a process
19
Q

The Process Control Block (PCB):

• A C-structure that __.

A

contains information about each process

20
Q
The xv6 Kernel Process Structures: 
// the registers xv6 will save and restore
// to stop and subsequently restart a process
struct context {
  int eip;
  int esp;
  int ebx;
  int ecx;
  int edx;
  int esi;
  int edi;
  int ebp;
};
// the different states a process can be in
enum procstate { (6)... };
A
// the registers xv6 will save and restore
// to stop and subsequently restart a process
struct context {
  int eip; // Instruction pointer register
  int esp; // Stack pointer register
  int ebx; // Called the base register
  int ecx; // Called the counter register
  int edx; // Called the data register
  int esi; // Source index register
  int edi; // Destination index register
  int ebp; // Stack base pointer register
};
// the different states a process can be in
enum procstate { UNUSED, EMBRYO, SLEEPING,
                 RUNNABLE, RUNNING, ZOMBIE };
21
Q
The xv6 Kernel Process Structures: 
// Per-process state
struct proc {
  uint sz;
  pde_t* pgdir;
  char *kstack;
  enum procstate state;
  int pid;
  struct proc *parent;
  struct trapframe *tf;
  struct context *context;
  void *chan;
  int killed;
  struct file *ofile[NOFILE];
  struct inode *cwd;
  char name[16];
};
A
// Per-process state
struct proc {
  uint sz;                    // Size of process memory (bytes)
  pde_t* pgdir;               // Page table
  char *kstack;               // Bottom of kernel stack for this process
  enum procstate state;       // Process state
  int pid;                    // Process ID
  struct proc *parent;        // Parent process
  struct trapframe *tf;       // Trap frame for current syscall
  struct context *context;    // swtch() here to run process
  void *chan;                 // If non-zero, sleeping on chan
  int killed;                 // If non-zero, have been killed
  struct file *ofile[NOFILE]; // Open files
  struct inode *cwd;          // Current directory
  char name[16];              // Process name (debugging)
};
22
Q

fork()

A

Makes a copy of the currently running process
• The newly-created process has its own copy of the address space, registers, and PC.

p1.c: 
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    printf("hello world (pid:%d)\n", (int) getpid());
    int rc = fork();
    if (rc < 0) { // fork failed; exit
        fprintf(stderr, "fork failed\n");
        exit(1);
    } else if (rc == 0) { // child (new process)
        printf("hello, I am child (pid:%d)\n", (int) getpid());
    } else { // parent goes down this path (main)
        printf("hello, I am parent of %d (pid:%d)\n",
               rc, (int) getpid());
    }
    return 0;
}
23
Q

exec()

A

Replaces a process with a different program
• Run a program that is different from the calling program

p3.c: 
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[]) {
    printf("hello world (pid:%d)\n", (int) getpid());
    int rc = fork();
    if (rc < 0) { // fork failed; exit
        fprintf(stderr, "fork failed\n");
        exit(1);
    } else if (rc == 0) { // child (new process)
        printf("hello, I am child (pid:%d)\n", (int) getpid());
        char *myargs[3];
        myargs[0] = strdup("wc");   // program: "wc" (word count)
        myargs[1] = strdup("p3.c"); // argument: file to count
        myargs[2] = NULL;           // marks end of array
        execvp(myargs[0], myargs);  // runs word count
        printf("this shouldn’t print out");
    } else { // parent goes down this path (main)
        int wc = wait(NULL);
        printf("hello, I am parent of %d (wc:%d) (pid:%d)\n",
               rc, wc, (int) getpid());
    }
    return 0;
}
24
Q

wait()

A

Wait for a child process to finish

• This system call won’t return until the child has run and exited
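
For illustration, a minimal sketch combining fork() and wait(), in the style of p1.c above (the file name p2.c and the exact messages are assumptions, not from this card):

p2.c:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[]) {
    int rc = fork();
    if (rc < 0) { // fork failed; exit
        fprintf(stderr, "fork failed\n");
        exit(1);
    } else if (rc == 0) { // child may run first or second, but...
        printf("hello, I am child (pid:%d)\n", (int) getpid());
    } else { // ...the parent always prints last, because wait() blocks
        int rc_wait = wait(NULL);
        printf("hello, I am parent of %d (rc_wait:%d) (pid:%d)\n",
               rc, rc_wait, (int) getpid());
    }
    return 0;
}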

25
Q

Why are fork() and exec() separate functions?

(3)…

A

Necessary for building a UNIX shell
• Lets the shell run code after the call to fork() but before the call to exec()
• Can alter the environment of the about-to-be-run program
• Can easily support things like redirection and pipes

26
Q

How to Efficiently Virtualize the CPU with Control?

• The OS needs to share the physical CPU by __.

A

time sharing

27
Q

How to Efficiently Virtualize the CPU with Control?:
Issues:
(2)…

A
  • Performance: How can we implement virtualization without adding excessive overhead to the system?
  • Control: How can we run processes efficiently while retaining control over the CPU?
28
Q

Direct Execution:

(10)…

A

OS:

  1. Create entry for process list
  2. Allocate memory for program
  3. Load program into memory
  4. Set up stack with argc / argv
  5. Clear registers
  6. Execute call main()

Program:

  7. Run main()
  8. Execute return from main()

OS:

  9. Free memory of process
  10. Remove from process list
29
Q

Without limits on running programs, the OS wouldn’t be in control of anything and thus would be “__.”

A

just a library

30
Q

What if a process wishes to perform some kind of restricted operation such as …
• Issuing an I/O request to a disk
• Gaining access to more system resources such as CPU or memory

Solution:

A
  • Solution: Using protected control transfer
  • User mode: Applications do not have full access to hardware resources
  • Kernel mode: The OS has access to the full resources of the machine
31
Q

System Call:

• Allow the kernel to carefully expose certain key pieces of functionality to user programs, such as (4)…

A
  • Accessing the file system
  • Creating and destroying processes
  • Communicating with other processes
  • Allocating more memory
32
Q

Trap instruction:

(2)…

A
  • Jump into the kernel
  • Raise the privilege level to kernel mode

33
Q

Return-from-trap instruction:

(2)…

A
  • Return into the calling user program
  • Reduce the privilege level back to user mode

34
Q

Limited Direct Execution Protocol @Boot

(2)…

A
  1. OS @ boot (kernel mode):
    initialize trap table
  2. Hardware:
    remember address of … syscall handler
35
Q

Limited Direct Execution Protocol @Run

(8)…

A
  1. OS @ run (kernel mode):
    Create entry for process list
    Allocate memory for program
    Load program into memory
    Set up user stack with argv
    Fill kernel stack with reg/PC
    return-from-trap
  2. Hardware:
    restore regs from kernel stack
    move to user mode
    jump to main
  3. Program (user mode):
    Run main()
    …
    Call system call
    trap into OS
  4. Hardware:
    save regs to kernel stack
    move to kernel mode
    jump to trap handler
  5. OS @ run (kernel mode):
    Handle trap
    Do work of syscall
    return-from-trap
  6. Hardware:
    restore regs from kernel stack
    move to user mode
    jump to PC after trap
  7. Program (user mode):
    return from main
    trap (via exit())
  8. OS @ run (kernel mode):
    Free memory of process
    Remove from process list
36
Q

How can the OS regain control of the CPU so that it can switch between processes?
(2)…

A
  • A Cooperative Approach: Wait for system calls
  • A Non-Cooperative Approach: The OS takes control

37
Q

Processes periodically give up the CPU by making __ such as yield.

A

system calls

38
Q

Processes periodically give up the CPU by making system calls such as yield:
(2)…

A

• The OS decides to run some other task
• Applications also transfer control to the OS when they do something illegal
- Divide by zero
- Try to access memory that it shouldn’t be able to access

39
Q

A process gets stuck in an infinite loop → __

A

The only recourse is to reboot the machine (as in early versions of the Macintosh OS and the old Xerox Alto system, which relied on cooperative scheduling)

40
Q

A Non-Cooperative Approach: OS Takes Control:
A timer interrupt:
(3)…

A
  1. During the boot sequence, the OS starts the timer
  2. The timer raises an interrupt every so many milliseconds
  3. When the interrupt is raised:
    • The currently running process is halted
    • Save enough of the state of the program
    • A pre-configured interrupt handler in the OS runs
41
Q

A __ gives OS the ability to run again on a CPU.

A

timer interrupt

42
Q

Saving and Restoring Context:
Scheduler makes a decision:
(2)…

A
  • Whether to continue running the current process, or switch to a different one
  • If the decision is made to switch, the OS executes context switch
43
Q

Context Switch:

(3)…

A

A low-level piece of assembly code
1. Save a few register values for the current process onto its kernel stack
• General purpose registers
• PC
• Kernel stack pointer
2. Restore a few register values for the soon-to-be-executing process from its kernel stack
3. Switch to the kernel stack for the soon-to-be-executing process

44
Q

Limited Direct Execution Protocol (Timer interrupt) @Boot

(4)…

A
  1. OS @ boot
    (kernel mode)
    initialize trap table
  2. Hardware
    remember address of …
    syscall handler
    timer handler
  3. OS @ boot
    (kernel mode)
    start interrupt timer
  4. Hardware
    start timer
    interrupt CPU in X ms
45
Q

Limited Direct Execution Protocol (Timer interrupt) @Run:
(5)…

A
  1. Program (user mode):
    Process A
  2. Hardware:
    timer interrupt
    save regs(A) to k-stack(A)
    move to kernel mode
    jump to trap handler for timer
  3. OS @ run (kernel mode):
    Handle the trap
    Call switch() routine
    save regs(A) to proc-struct(A)
    restore regs(B) from proc-struct(B)
    switch to k-stack(B)
    return-from-trap (into B)
  4. Hardware:
    restore regs(B) from k-stack(B)
    move to user mode
    jump to B’s PC
  5. Program (user mode):
    Process B
46
Q

What happens if, during interrupt or trap handling, another interrupt occurs?

OS handles these situations:
(2)…

A
  • Disable interrupts during interrupt processing
  • Use a number of sophisticated locking schemes to protect concurrent access to internal data structures

47
Q

Separating Policy and Mechanism:

  1. Design paradigm
    (1) …
  2. Mechanism
    (2) …
  3. Policy
    (2) …
A
  1. Design paradigm
    • Separate high-level policies from their low-level mechanisms
  2. Mechanism
    • Answers the “how” question about a system
    • How does the OS perform a context switch?
  3. Policy
    • Answers the “which” question about a system
    • Which process should the OS run right now?
48
Q

Initial workload assumptions (we’ll relax these assumptions later):
(5)…

A
  1. Each job runs for the same amount of time
  2. All jobs arrive at the same time
  3. Once started, each job runs to completion
  4. All jobs only use the CPU (i.e., they don’t perform any I/O)
  5. The run-time of each job is known
49
Q

T_turnaround =

A

T_completion − T_arrival

50
Q

Scheduling Metrics:

(2)…

A
  1. Performance metric: Turnaround time
    • The time at which the job completes minus the time at which the job arrived in the system
    T_turnaround = T_completion − T_arrival
  2. Another metric is fairness (e.g., Jain’s Fairness Index)
    • Maximum when all jobs receive the same share of CPU allocation

Performance and fairness are often at odds in scheduling.

51
Q

FIFO:

A

First In, First Out (FIFO):
1. First Come, First Served (FCFS)
• Very simple and easy to implement

  • Example:
    • A arrived just before B which arrived just before C
    • Each job runs for 10 seconds
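
A tiny computation of average turnaround for this example (an illustrative program, not from the deck; it assumes all three jobs arrive at t = 0):

#include <stdio.h>

int main(void) {
    double runtime[] = {10, 10, 10}; // A, B, C, in seconds
    double t = 0, total = 0;
    for (int i = 0; i < 3; i++) {
        t += runtime[i];   // completion time of job i under FIFO
        total += t;        // T_turnaround = T_completion - 0
    }
    printf("average turnaround = %.1f s\n", total / 3); // 20.0
    return 0;
}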
52
Q

Why FIFO is not that great? – Convoy effect

A
  1. Let’s relax assumption #1 (all jobs run for the same time)
    • Each job no longer runs for the same amount of time
  • Example:
    • A arrived just before B which arrived just before C
    • A runs for 100 seconds, B and C run for 10 each
53
Q

Shortest Job First (SJF):

A
  1. Run the shortest job first, then the next shortest, and so on
    • Non-preemptive scheduler (no interrupting a running job)
    - Example:
    • A arrived just before B which arrived just before C
    • A runs for 100 seconds, B and C run for 10 each
54
Q

SJF with Late Arrivals from B and C:
• Let’s relax assumption #2 (all jobs arrive at the same time)
- Jobs can now arrive at any time

A

Example:
• A arrives at t=0 and needs to run for 100 seconds
• B and C arrive at t=10 and each need to run for 10 seconds

55
Q

Shortest Time-to-Completion First (STCF):

(4)…

A
  1. Let’s relax assumption #3 (once started, each job runs to completion)
  2. Add preemption to SJF
    • Also known as Preemptive Shortest Job First (PSJF)
  3. When a new job enters the system:
    • Determine the time to complete the remaining jobs and new job
    • Schedule the job which has the least remaining time left
  4. Provably optimal with regard to minimizing turnaround time
56
Q

New scheduling metric: Response time

A

The time from when the job arrives to the first time it is scheduled
T_response = T_firstrun − T_arrival

• STCF and related disciplines are not particularly good for response time

57
Q
Round Robin (RR) Scheduling: 
...
A

Time-slicing scheduling
• Run a job for a time slice and then switch to the next job in the run queue; repeat until all jobs are finished
- The time slice is sometimes called a scheduling quantum
• The length of a time slice must be a multiple of the timer-interrupt period

RR is fair, but performs poorly on metrics such as turnaround time.

58
Q

The length of the time slice is critical:

  1. A shorter time slice
    (2) …
  2. A longer time slice
    (2) …
A
  1. A shorter time slice
    • Better response time
    • But the cost of context switching can dominate overall performance
  2. A longer time slice
    • Amortizes the cost of context switching
    • But worse response time
59
Q

Deciding on the length of the time slice presents a __ to a system designer.

A

trade-off

60
Q

Incorporating I/O in Scheduling:

(3)…

A
  1. Let’s relax assumption #4 (all jobs only use the CPU)
    • Jobs can now perform I/O
    • Example:
      - Jobs A and B need 50ms of CPU time each
      - A runs for 10ms and then issues an I/O request
      - I/Os each take 10ms
      - B simply uses the CPU for 50ms and performs no I/O
      - The scheduler runs A first, then B after
  2. When a job initiates an I/O request
    • The job is blocked waiting for I/O completion
    • The scheduler should schedule another job on the CPU
  3. When the I/O completes
    • An interrupt is raised
    • The OS moves the process from blocked back to the ready state
61
Q

Multi-Level Feedback Queue (MLFQ):

A

A scheduler that learns from the past to predict the future
• Objective:
- Optimize turnaround time → Run shorter jobs first
- Minimize response time without a priori knowledge of job length

62
Q

MLFQ: Basic Rules

(2)…

A
  1. MLFQ has a number of distinct queues
    • Each queue is assigned a different priority level
  2. A job that is ready to run is on a single queue
    • I.e., a job can be in only one queue at any given time
    • A job on the highest priority queue is chosen to run
    • Use round-robin scheduling among jobs in the same queue

Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t)
Rule 2: If Priority(A) = Priority(B), A & B run in RR

63
Q

MLFQ varies the priority of a job based on its __.

A

Observed behavior.

Example:
1. A job repeatedly relinquishes the CPU while waiting for I/O
• Keep its priority high
• When it runs, it doesn’t run for very long
2. A job uses the CPU intensively for long periods of time
• Reduce its priority

64
Q

MLFQ:

How to Change Priority

A

MLFQ priority adjustment algorithm:
• Rule 3: When a job enters the system, it is placed in the highest priority queue
• Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down a queue level)
• Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level

Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t)
Rule 2: If Priority(A) = Priority(B), A & B run in RR

65
Q

Problems with the Basic MLFQ:

(3)…

A
  1. Starvation
    • If there are “too many” interactive jobs in the system
    • Long-running jobs will never receive any CPU time
  2. Game the scheduler
    • After running 99% of a time slice, issue an I/O operation (e.g., read(0))
    • The job gains a higher percentage of CPU time
  3. A program may change its behavior over time
    • CPU bound process → I/O bound process
66
Q

MLFQ:

The Priority Boost

A

• Rule 5: After some time period S, move all the jobs in the system to the topmost queue
- Example:
A long-running job (A) with two short-running interactive jobs (B, C)

67
Q

MLFQ:
Better Accounting:
How to prevent gaming of our scheduler?

A

Solution:
• Rule 4 (Rewrite Rules 4a and 4b): Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down a queue)

68
Q
Tuning MLFQ And Other Issues: 
• The high-priority queues → __
- E.g., 10 or fewer milliseconds
• The low-priority queue → __
- E.g., 100 milliseconds
A

Short time slices;

Longer time slices

69
Q

The Solaris MLFQ implementation

A

For the Time-Sharing scheduling class (TS)
• 60 Queues
• Slowly increasing time-slice length
- The highest priority: 20 msec
- The lowest priority: A few hundred milliseconds
• Priorities boosted around every 1 second or so

70
Q

The refined set of MLFQ rules:

(5)…

A
  • Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t)
  • Rule 2: If Priority(A) = Priority(B), A & B run in RR
  • Rule 3: When a job enters the system, it is placed at the highest priority
  • Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down a queue)
  • Rule 5: After some time period S, move all the jobs in the system to the topmost queue
71
Q

Fair-share scheduler

(2)…

A
  • Guarantees that each job obtains a certain percentage of CPU time
  • Not optimized for turnaround or response time
72
Q

Tickets

(2)…

A
  • Represent the share of a resource that a process should receive
  • The percent of tickets represents its share of the system resource in question

Example:
There are two processes, A and B
- Process A has 75 tickets → receives 75% of the CPU
- Process B has 25 tickets → receives 25% of the CPU

73
Q

Lottery scheduling:

A

• The scheduler picks a winning ticket
- Switch to the winning process and run it

Example
• There are 100 tickets
- Process A has 75 tickets: 0 to 74
- Process B has 25 tickets: 75 to 99
Scheduler’s winning tickets: 63 85 70 39 76 17 29 41 36 39 10 99 68 83 63
Resulting scheduler: A B A A B A A A A A A B A B A

A runs 11/15 = 73.3%, B runs 4/15 = 26.7%

The longer these two jobs compete, the more likely they are to achieve the desired percentages
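
A minimal C sketch of the lottery decision (in the spirit of OSTEP’s example; the job_t layout and lottery_pick() are assumptions):

#include <stdio.h>

typedef struct job {
    int tickets;
    struct job *next;
} job_t;

// walk the job list, accumulating tickets, until the counter
// passes the randomly drawn winning ticket
job_t *lottery_pick(job_t *head, int winner) {
    int counter = 0;
    for (job_t *j = head; j != NULL; j = j->next) {
        counter += j->tickets;
        if (counter > winner)
            return j; // found the winner
    }
    return NULL; // unreachable if 0 <= winner < total tickets
}

int main(void) {
    job_t b = { 25, NULL }; // B: tickets 75–99
    job_t a = { 75, &b };   // A: tickets 0–74
    printf("%s\n", lottery_pick(&a, 63) == &a ? "A" : "B"); // A
    printf("%s\n", lottery_pick(&a, 85) == &b ? "B" : "A"); // B
    return 0;
}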

74
Q

Ticket Mechanisms:
Ticket currency:
(2)…

A
  • A user allocates tickets among their own jobs in whatever currency they would like
  • The system converts the currency into the correct global value

Example:

  • There are 200 tickets (Global currency)
  • Process A has 100 tickets
  • Process B has 100 tickets

User A → 500 (A’s currency) to A1 → 50 (global currency)
       → 500 (A’s currency) to A2 → 50 (global currency)
User B → 10 (B’s currency) to B1 → 100 (global currency)

75
Q

Ticket transfer

A

A process can temporarily hand off its tickets to another process

76
Q

Ticket inflation

(3)…

A
  • A process can temporarily raise or lower the number of tickets it owns
  • If any one process needs more CPU time, it can boost its tickets
  • Assumes processes cooperate and are friendly with each other
77
Q

U

A

• U: unfairness metric
- The time the first job completes divided by the time that the second job completes

Example:
• There are two jobs, each job has a runtime of 10
• First job finishes at time 10
• Second job finishes at time 20
• U = 10/20 = 0.5
• U will be close to 1 when both jobs finish at nearly the same time

78
Q

Random is easy to implement, but it may __.

A

not deliver the exact right proportions

79
Q

Stride Scheduling:

A

Stride of each process:
• (A large number) / (the number of tickets of the process)
• Example: A large number = 10,000
- Process A has 100 tickets → stride of A is 100
- Process B has 50 tickets → stride of B is 200

• When a process runs, increment its counter (the pass value) by its stride
- Pick the process to run that has the lowest pass value
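
A minimal C sketch of the stride decision loop (the sjob_t layout and the per-quantum accounting are assumptions):

#include <stdio.h>

typedef struct {
    int stride; // e.g., 10000 / tickets
    int pass;   // starts at 0, grows by stride each time the job runs
} sjob_t;

// pick the job with the lowest pass value, "run" it for one quantum,
// then advance its pass by its stride
int stride_step(sjob_t jobs[], int n) {
    int min = 0;
    for (int i = 1; i < n; i++)
        if (jobs[i].pass < jobs[min].pass)
            min = i;
    jobs[min].pass += jobs[min].stride;
    return min;
}

int main(void) {
    sjob_t jobs[] = { {100, 0}, {200, 0} }; // A: 100 tickets, B: 50
    for (int i = 0; i < 6; i++)
        printf("%c ", "AB"[stride_step(jobs, 2)]);
    printf("\n"); // A B A A B A — A runs exactly twice as often as B
    return 0;
}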

80
Q

Stride Scheduling:

If a new job enters with pass value 0, it will __!

A

monopolize the CPU

81
Q

CFS Basic Operation:

(4)…

A
  1. Fairly divides a CPU evenly among all competing (runnable) processes
    • Doesn’t use a fixed time slice
  2. Uses the virtual runtime (vruntime) of a process
    • Accumulates as the process runs
    • To schedule a process, pick the one with the lowest vruntime
  3. When to schedule?
    • Frequent switches increase fairness but have higher overhead
    • Fewer switches give better performance at the cost of fairness
  4. Controlled by sched_latency parameter
    • Maximum time a process can run before considering a switch (e.g., 20 ms)
    • Divided by the number of runnable processes to get a process time slice
    • CFS will be completely fair over this time period
82
Q

CFS Example:

A
  1. sched_latency = 48 ms
  2. Four processes that are runnable to start
    • Per process time slice of 12 ms (48/4)
    • vruntime starts at 0 for these jobs
  3. Pick job with the lowest vruntime (A, B, C, or D in this case)
  4. Run job A until it has used 12 ms of vruntime
    • Then make a scheduling decision
    • Run the job with the lowest vruntime
    - (B, C, or D)
  5. C and D complete after 96 ms
    • Time slice is adjusted to 24 ms (48/2)
83
Q

Per process time slice is…

A

sched_latency divided by the number of runnable processes
• A lot of runnable processes could lead to small time slices
• Lots of context switches and more overhead

84
Q

CFS min_granularity parameter:

(3)…

A

• Minimum time slice of a process (e.g., 6 ms)
• CFS will never set the time slice of a process to less than this value
• In this case, may not be perfectly fair over the target scheduling latency
- E.g., sched_latency = 48 ms with 10 runnable processes
- Time slice: 4.8 ms → rounded up to 6 ms
- So not all jobs will run within the 48 ms window
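
A small C sketch of this time-slice computation (the helper name is an assumption; the values follow the cards):

#include <stdio.h>

// CFS-style time slice in ms: sched_latency split among runnable
// jobs, but never below min_granularity
double time_slice_ms(double sched_latency, double min_granularity, int n) {
    double slice = sched_latency / n;
    return slice < min_granularity ? min_granularity : slice;
}

int main(void) {
    printf("%.1f\n", time_slice_ms(48.0, 6.0, 4));  // 12.0
    printf("%.1f\n", time_slice_ms(48.0, 6.0, 10)); // 6.0 (clamped from 4.8)
    return 0;
}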

85
Q

Timer interrupts:

(3)…

A
  • Time slices are variable, how to set the timer?
  • Timer goes off frequently (e.g., 1 ms)
  • Gives the CFS scheduler a chance to see if the current job has reached the end of its run
86
Q

Niceness Levels:

(3)…

A
  1. Gives the user control over process priority
    • Give some processes a higher (or lower) share of the CPU
  2. Not through tickets, but with a nice level of a process
    • A measure of how nice (to other processes) your job is
    • 19 (lowest priority)
    • -20 (highest priority)
  3. Nice levels are mapped to a weight used to compute an effective time slice for a process
87
Q

Niceness Weighting Example

(4)…

A
  1. Two processes, A and B
    • A’s niceness level is -5 (boost in priority)
    • B’s niceness level is 0 (default)
  2. Calculate the time slice for A and B
    • Weight A: 3121, weight B: 1024, total weight: 4145
    • Time slice A: 3121 / 4145 = 0.753 * sched_latency
    • Time slice B: 1024 / 4145 = 0.247 * sched_latency
  3. Assuming a 48 ms sched_latency:
    • Process A gets about 75% of the sched_latency (36 ms)
    • B gets about 25% of the sched_latency (12 ms)
  4. The weight table is constructed to preserve CPU proportionality ratios when the difference in nice values is constant
    • E.g., if process A had a nice value of 5 and B had a nice value of 10, they would be scheduled the same way as above
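
The same arithmetic as a small C program (the two weights are the Linux table entries for nice -5 and nice 0; everything else is illustrative):

#include <stdio.h>

int main(void) {
    double sched_latency = 48.0; // ms, as in the cards
    int wA = 3121;               // weight for nice -5
    int wB = 1024;               // weight for nice 0
    double total = wA + wB;      // 4145
    printf("slice A = %.1f ms\n", wA / total * sched_latency); // ~36.1
    printf("slice B = %.1f ms\n", wB / total * sched_latency); // ~11.9
    return 0;
}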
88
Q

Calculating vruntime:

(3)…

A
  1. Higher priority processes get a longer time slice
  2. But we pick the process with the lowest vruntime to run next
    • To handle priority properly, vruntime must scale inversely with priority
  3. For our example:
    • A’s vruntime will accumulate at about 1/3 the rate of B’s

vruntime_i = vruntime_i + (weight_0 / weight_i) × runtime_i
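
As a small C helper (names follow the formula above; weight_0 is the nice-0 weight, 1024 in Linux’s table):

// lower-weight (nicer) jobs accrue vruntime faster, so they are
// picked less often; higher-weight jobs accrue it more slowly
double vruntime_update(double vruntime_i, int weight_i, double runtime_i) {
    const double weight_0 = 1024;
    return vruntime_i + weight_0 / weight_i * runtime_i;
}
// e.g., weight_i = 3121 (nice -5): 12 ms of runtime adds only ~3.9 ms
// of vruntime, about 1/3 the rate of a nice-0 job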

89
Q

CFS efficiency:

(3)…

A
  1. How quickly can the scheduler find the next job to run
    • Lists don’t scale if you have 1000s of processes to search through every millisecond
  2. CFS keeps processes in a red-black tree
    • A type of balanced tree
    • Does a little extra work to maintain low depths
    • O(log n) for operations (search, insert, delete)
  3. CFS only keeps running and runnable processes in this structure
    • If a process is waiting on I/O, it is removed from the tree and tracked elsewhere
    • What to do when process wakes up?
    - vruntime will be behind the others and could monopolize the CPU
    - CFS sets the vruntime for the job to the minimum value found in the tree
    - Jobs that sleep for short periods often do not ever get their fair share of the CPU
90
Q

Summary:

We looked at three proportional share schedulers

A
  1. Lottery scheduling
    • Uses randomness to achieve proportional share
    • Can be unfair with short running jobs
    • Can be implemented with no shared state between processes
    • Ticket allocation can be difficult
  2. Stride scheduling
    • Achieves proportional share deterministically
  3. Linux Completely Fair Scheduler (CFS)
    • Most widely used fair-share scheduler in existence
    • A bit like a weighted round-robin with dynamic time slices
    • Built to scale and perform well under load
91
Q

Memory API: malloc()

A
#include <stdlib.h>
void *malloc(size_t size);

Allocate a memory region on the heap
1. Argument
• size_t size : size of the memory block (in bytes)
• size_t is an unsigned integer type capable of holding the size of any valid memory request
• Often used with sizeof(type) operator which returns the size of the argument
2. Return
• Success: a void type pointer to the memory block allocated by malloc
• Fail: a null pointer
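
A minimal usage sketch (not from the deck) showing the sizeof idiom and the null-pointer check:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *x = malloc(10 * sizeof(int)); // room for 10 ints
    if (x == NULL) {                   // malloc signals failure with NULL
        fprintf(stderr, "malloc failed\n");
        exit(1);
    }
    x[0] = 42;
    free(x); // return the region to the heap
    return 0;
}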

92
Q

Memory API: sizeof()
• Two types of results of sizeof with variables:
(2)…

A
  1. The actual size of ‘x’ is known only at run-time, so sizeof(x) returns the size of the pointer (8 on a 64-bit system)
    int *x = malloc(10 * sizeof(int));
    printf("%zu\n", sizeof(x));
    8
  2. The actual size of ‘x’ is known at compile-time, so sizeof(x) returns the size of the whole array
    int x[10];
    printf("%zu\n", sizeof(x));
    40

    It is good coding style to use sizeof in malloc calls instead of requesting the number of bytes directly
93
Q

Memory API: free()

A
#include <stdlib.h>
void free(void *ptr);

Free a memory region allocated by a call to malloc
1. Argument
• void *ptr : a pointer to a memory block allocated with malloc
2. Returns
• none

94
Q

Memory Leak

A

Forgetting to free memory; the program’s memory use keeps growing until it runs out of memory and eventually dies.

95
Q

Dangling Pointer

A

Freeing memory before it is finished being used

96
Q

Double Free

A

Free memory that was freed already

97
Q

Other Memory APIs: calloc()

A
#include <stdlib.h>
void *calloc(size_t nmemb, size_t size);

Allocate memory on the heap and zeroes it before returning
1. Arguments
• nmemb: number of elements to allocate
• size: size of one element
2. Returns
• Success: a void type pointer to the memory block allocated by calloc
• Fail: a null pointer

98
Q

Other Memory APIs: realloc()

A
#include <stdlib.h>
void *realloc(void *ptr, size_t size);

Change the size of memory block
• The pointer returned by realloc may be either the same as ptr or a new pointer (with the old contents copied over)
• Argument
- void *ptr: Pointer to memory block allocated with malloc, calloc, or realloc
- size_t size: New size for the memory block(in bytes)
• Return
- Success: Void type pointer to the memory block
- Fail: Null pointer
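
A minimal usage sketch (not from the deck): because realloc may move the block, assign through a temporary so the original pointer isn’t lost on failure:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *arr = malloc(10 * sizeof(int));
    if (arr == NULL) { fprintf(stderr, "malloc failed\n"); exit(1); }
    int *tmp = realloc(arr, 20 * sizeof(int)); // grow to 20 ints
    if (tmp == NULL) {                         // arr is still valid here
        free(arr);
        fprintf(stderr, "realloc failed\n");
        exit(1);
    }
    arr = tmp;
    free(arr);
    return 0;
}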

99
Q
System Calls: 
#include <unistd.h>
int brk(void *addr);
void *sbrk(intptr_t increment);
A

The malloc library call uses the brk system call
• brk is called to expand the program’s break
- break: the address of the end of the heap in the address space
• sbrk increases the break by increment
• Programmers should never directly call either brk or sbrk

100
Q

Splitting:

A

• Finding a free chunk of memory that can satisfy the request and splitting it into two
- Done when the request for memory allocation is smaller than the size of a given free chunk

101
Q

Coalescing:

(2)…

A
  • If a user requests memory bigger than any single free chunk, the list will not find such a chunk, even if the total free space is sufficient
  • Coalescing: merge a freed chunk with existing free chunks into a single larger chunk when their addresses are adjacent
102
Q

Tracking The Size of Allocated Regions:

(2)…

A
  1. The interface to free(void *ptr) does not take a size parameter
    • How does the library know the size of the memory region that will be put back into the free list?
  2. Most allocators store extra information in a header block
103
Q

The Header of Allocated Memory Chunk:

  1. The header minimally contains the size of the __
  2. The header may also contain:
    (2) …
A

allocated memory region;

  • Additional pointers to speed up deallocation
  • A magic number for integrity checking
104
Q

The Header of Allocated Memory Chunk:
The size for free region is __.
• If a user requests N bytes, the library will search for __.

A

the size of the header plus the size of the space allocated to the user;
a free chunk of size N plus the size of the header

105
Q

Simple pointer arithmetic to find the header pointer

A
typedef struct { int size; int magic; } header_t; // minimal header (fields as in a simple allocator)

void free(void *ptr) {
    header_t *hptr = (header_t *)((char *)ptr - sizeof(header_t));
    // hptr now points at the header sitting just before the user’s region
}
106
Q

Embedding a Free List

A

• The memory-allocation library initializes the heap and puts the first element of the free list in the free space
- The library can’t use malloc() to build a list within itself
• Description of a node in the list

typedef struct __node_t {
    int size;
    struct __node_t *next;
} node_t;

• Building the heap and creating a free list
- Assume the heap is built with mmap() system call

// mmap() returns a pointer to a chunk of free space
node_t *head = mmap(NULL, 4096, PROT_READ|PROT_WRITE,
                    MAP_ANON|MAP_PRIVATE, -1, 0);
head->size = 4096 - sizeof(node_t);
head->next = NULL;
107
Q
A Heap With One Free Chunk: 
// mmap() returns a pointer to a chunk of free space
A

node_t *head = mmap(NULL, 4096, PROT_READ|PROT_WRITE,
                    MAP_ANON|MAP_PRIVATE, -1, 0);
head->size = 4096 - sizeof(node_t);
head->next = NULL;

108
Q

Embedding A Free List: Allocation:

If a chunk of memory is requested, the library will __.

A

first find a chunk that is large enough to accommodate the request

109
Q

Embedding A Free List: Allocation:
If a chunk of memory is requested, the library will first find a chunk that is large enough to accommodate the request.

The library will
(2)…

A
  1. Split the large free chunk into two
    • One for the request and the remaining free chunk
  2. Shrink the size of the free chunk in the list
110
Q

Embedding A Free List: Allocation:
Example: a request for 100 bytes by ptr = malloc(100)
(2)…

A
  • Allocating 108 bytes (100 bytes plus an 8-byte header) out of the existing free chunk
  • Shrinking the one free chunk to 3980 (4088 minus 108)

111
Q

Free Space With free():

Example: free(sptr)…

A
  • The 100 byte chunk is put back into the free list
  • sptr->next == old head
  • It is now the head of the free list
  • The free list will start with the small chunk
112
Q

Free Space With Freed Chunks:
• Let’s assume that the last two in-use chunks are freed

A

• External fragmentation occurs
- Coalescing is needed in the list

113
Q

Growing The Heap:

A

Most allocators start with a small-sized heap and then request more memory from the OS when they run out
- e.g., sbrk(), brk() in most UNIX systems

114
Q

Managing Free Space: Basic Strategies:
Best Fit:
(2)…

A
  • Finding free chunks that are as big as or bigger than the request
  • Returning the smallest chunk in that group of candidates
115
Q

Managing Free Space: Basic Strategies:
Worst Fit:
(2)…

A
  • Finding the largest free chunk and allocating the requested amount from it
  • Keeping the remaining chunk on the free list
116
Q

Managing Free Space: Basic Strategies:
First Fit:
(2)…

A
  • Finding the first chunk that is big enough for the request
  • Returning the requested amount and keeping the rest of the chunk on the free list

117
Q

Managing Free Space: Basic Strategies:
Next Fit:
(2)…

A
  • Finding the first chunk that is big enough for the request
  • Starting the search where the previous search left off instead of at the beginning of the list

118
Q

Examples of Basic Strategies:
Allocation Request Size 15:
head -> 10 -> 30 -> 20 -> NULL

Result of Best-Fit:

Result of Worst-Fit:

A

Examples of Basic Strategies:
Allocation Request Size 15:
head -> 10 -> 30 -> 20 -> NULL

Result of Best-Fit:
head -> 10 -> 30 -> 5 -> NULL

Result of Worst-Fit:
head -> 10 -> 15 -> 20 -> NULL
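
A small C sketch of the best-fit scan over such a list (reusing the node_t shape from the embedded-free-list cards; splitting the chosen chunk is not shown):

typedef struct __node_t {
    int size;
    struct __node_t *next;
} node_t;

// return the smallest free chunk that still satisfies the request,
// or NULL if none is big enough (time to coalesce or grow the heap)
node_t *best_fit(node_t *head, int size) {
    node_t *best = NULL;
    for (node_t *cur = head; cur != NULL; cur = cur->next)
        if (cur->size >= size && (best == NULL || cur->size < best->size))
            best = cur;
    return best;
}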

119
Q

Segregated List:

(3)…

A

• Keeping free chunks of popular request sizes in separate, per-size lists
• New complication:
- How much memory should be dedicated to the pool that serves specialized requests of a given size?
• The slab allocator handles this issue

120
Q

Slab Allocator:

(2)…

A
  1. Allocate a number of object caches
    • The objects are likely to be requested frequently
    • e.g., locks, file-system inodes, etc.
    • Keeps freed objects in a particular list in their initialized state
  2. Request some memory from a more general memory allocator when a given cache is running low on free space
121
Q

Binary Buddy Allocation

A

The allocator divides free space by two until a block that is big enough to accommodate the request is found

122
Q

Binary Buddy Allocation:
The allocator divides free space by two until a block that is big enough to accommodate the request is found

• Buddy allocation can suffer from __.
• Buddy system makes __ simple
- …

A

internal fragmentation;
coalescing;
Coalescing two buddy blocks into a single block at the next level