OS Quiz 2 Flashcards
Boot Sequence Order
ROM BIOS → POST → Primary Master HD
BIOS tells us where the master hard drive is. We need to know where the master HD is because it holds the Boot Block/Sector = 512 bytes = Master Boot Record (MBR), which is loaded to continue the boot.
• Multi-Programming
o Purpose is to maximize CPU usage
o Ability to run multiple programs concurrently
o Must be efficient (most important factor)
o Fair – resources must be allocated equally across the system
process
• Process
o Running program / program in execution
o AKA Task
process address space
Process Address Space
• Address space for every process in memory
• bss & data: global variables
• Dynamic data = created at runtime / can grow and shrink as we create and release methods/objects
• Bottom-of-stack address is static
Layout (top to bottom):
• Stack – local variables (dynamic)
• Heap – "new" objects (dynamic)
• bss – uninitialized globals (int x)
• Data – initialized globals (int x = 5)
• Text – code
Process States / Transitions Diagram
• New State (request has been sent, OS checks stuff before executing)
• Admit Transition (process is admitted to the system / set up is done/ allocate memory)
o Address space and PCB allocated to the process
• Ready State
• Dispatch Transition (dispatched into the CPU)
• Running State (process can be running, CPU is actively running program)
• Exit Transition
• Terminated State (process still exists in system but is not actively running)
• Suspend Transition (always when going to suspend state)
• Resume Transition (when coming out of suspend into ready state)
See graph
Process control block
PID: Process ID
State: What is the current state of the process?
PC – Program Counter: What are we doing? / Point of execution?
CPU Registers: The values in the registers
Scheduling Info: priority and other scheduling information
Memory Management Info: Where is the process address space existing in the physical memory
I/O Information All resources involved in process (ex: if opening file, must know about it)
Accounting Resource accounting – maintenance information about the process. Memory, CPU cycles, usage
Degree of Multiprogramming
o How many programs can we execute at the same time?
o Simulating concurrency in multiprogramming with time sharing
Sharing time of CPU across system
• Context Switch
o Process 1 interrupts the CPU: "hey, I need to execute! Look at me"
o The interrupt gets sent to the OS
o OS saves the PCB (process control block) of Process 0
o OS loads the PCB of Process 1
o OS can then execute Process 1
o Later it saves Process 1's PCB, returns to Process 0, and loads its PCB
o Switching from Process 0's "saving PCB" to Process 1's "loading PCB" is a context switch
o Context switch is how we achieve Time Sharing
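The save/load dance above can be sketched as a toy model. The PCB fields and register names here are illustrative assumptions, not what a real OS stores:

```java
import java.util.HashMap;
import java.util.Map;

// Toy context switch: a hypothetical PCB holding only the state the notes
// mention (program counter, CPU registers). Field names are made up.
class PCB {
    final int pid;
    long programCounter;
    Map<String, Long> registers = new HashMap<>();
    PCB(int pid) { this.pid = pid; }
}

class ToyCPU {
    long pc;
    Map<String, Long> regs = new HashMap<>();

    // Save the running process's state into its PCB, then load the next one's.
    void contextSwitch(PCB current, PCB next) {
        current.programCounter = pc;              // save PCB of the old process
        current.registers = new HashMap<>(regs);
        pc = next.programCounter;                 // load PCB of the new process
        regs = new HashMap<>(next.registers);
    }
}

public class ContextSwitchDemo {
    public static void main(String[] args) {
        ToyCPU cpu = new ToyCPU();
        PCB p0 = new PCB(0), p1 = new PCB(1);
        p1.programCounter = 400;

        cpu.pc = 123;               // Process 0 is running at instruction 123
        cpu.contextSwitch(p0, p1);  // switch to Process 1

        System.out.println(p0.programCounter); // 123 — saved so P0 can resume later
        System.out.println(cpu.pc);            // 400 — P1 continues where it left off
    }
}
```

The point of the sketch: nothing is lost — Process 0's exact point of execution sits in its PCB until the next switch back.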
Process Queues
Ready Queue – waits on the CPU
Disk Queue – Waiting on hard drive
Disk Queue 2 – waiting on other hard drive
Waiting Queue
Ready Queue = essentially a list of the PCBs of all processes that are in the ready state
Process Schedulers
- Long-term / Admission Scheduler
a. Doesn't run often
b. Handles the admit transition (new to ready state)
c. Decides on the degree of multiprogramming
i. How many processes the system can handle at any given time
d. Windows/Linux do not have a long-term scheduler
i. They don't cap the degree of multiprogramming
ii. End users control this
- Short-term / CPU Scheduler
a. Runs very frequently
b. Responsible for choosing the process from the ready queue to be dispatched into the CPU
- Medium-term / Swapper
a. Responsible for the suspension and resumption of processes
Signals
• An event notification, often tied to I/O (example: I type a letter into Notepad — the "a" keypress gets sent to the process as a signal from the OS)
• Windows calls signals: Asynchronous Procedure Call (APC)
• 3 different ways to handle a signal:
o Default Handler – OS
Do what the OS tells you to do
o Ignore the Signal
o Custom handler / user – defined
User = programmer defined (listeners / executing methods)
Callbacks
Method that gets called when OS sends a signal
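The callback idea can be sketched in Java. The event source here is hypothetical — a stand-in for the OS delivering a signal — but the pattern (register a handler ahead of time, have it invoked on the event) is the same one listeners use:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the callback pattern: handlers are registered up front and
// invoked when an "event" arrives. deliver() stands in for the OS sending
// a signal; in a real program the OS/runtime would drive this.
public class CallbackDemo {
    static List<Consumer<String>> handlers = new ArrayList<>();

    // Custom / user-defined handler registration (like adding a key listener)
    static void onSignal(Consumer<String> handler) { handlers.add(handler); }

    // Simulated signal delivery: every registered callback gets called
    static void deliver(String signal) {
        for (Consumer<String> h : handlers) h.accept(signal);
    }

    public static void main(String[] args) {
        onSignal(sig -> System.out.println("handled: " + sig));
        deliver("KEY_A");   // prints: handled: KEY_A
    }
}
```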
Process Creation
• Embedded systems do not have this ability to dynamically create processes
• Double-clicking a program creates a process in the new state
o Allocate address space
o Create PCB
o Insert into ready queue
• Parent Process
o Children are any new processes it creates
o Desktop process (parent) opens Word (child of desktop) and Eclipse (child of desktop); Eclipse starts the JVM (child of Eclipse); the JVM runs helloworld.java (child process of the JVM)
• Process 0 (PID = 0)
o Very first process in the system
o Windows – System Idle Process
o Linux – System Process
Parent Responsibilities
• Validation: parent checks there are sufficient resources and privileges to run the child process
Where does the child get its resources?
1. Get resources from the OS directly
2. Share resources from the parent
What happens to parent when child runs?
1. Wait for child to terminate
a. Parent/Child has dependency
b. Ex: helloworld.java runs, JVM has to wait till helloworld completes, then JVM cleans up and shuts down.
2. Continue Execution
When creating the Child address space, we can create an exact copy of parent, same text/data/bss/heap/etc… as parent. Or a brand new one with nothing in it.
• Copy of parent (faster) (Linux – fork())
o Child is a copy (address space)
o Allows us to share data between parent and child (bss/data)
o Running exact same program (inherit the text/code)
• Brand new (Windows – CreateProcess())
o Does not inherit shared variables
o But we get new code / no shared text
• Parent keeps track of PID so that it can continue to be responsible for the child.
• Parent is responsible for clean up
o When child dies, it cleans up.
o While in the terminated state (dead, but not yet cleaned up by the parent), the child is called a zombie
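The "parent waits for child to terminate" case can be sketched with Java's `ProcessBuilder`, which does Windows-style creation (the child gets a brand-new address space). To stay self-contained, the child launched here is just the JVM printing its version — an arbitrary choice for the example:

```java
import java.io.IOException;

// Sketch: a parent process creates a child, keeps its PID, and blocks
// until the child terminates (the parent/child dependency case).
public class ParentChildDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Launch the current JVM with -version as a stand-in child process
        String java = System.getProperty("java.home") + "/bin/java";
        Process child = new ProcessBuilder(java, "-version")
                .inheritIO()
                .start();

        long childPid = child.pid();     // parent keeps track of the child's PID
        int exitCode = child.waitFor();  // parent waits for the child to terminate

        // After waitFor() returns, the parent can clean up its reference
        System.out.println("child pid known: " + (childPid > 0) + ", exited " + exitCode);
    }
}
```

`waitFor()` is exactly the dependency from the notes: the parent cannot continue past it until the child is done, after which it does its cleanup.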
Process terminates when:
• Last line of code completes (exit)
• Forced by OS
• Forced by User
o Other processes
Parent can kill child when:
- Exceeded Resources
- Child has done its job so we don’t need it anymore.
- Parent is terminating.
a. A dependency: cascading termination
i. Parent is terminating and tells each of its children to go “kill yourself”
b. No dependency (if we open notepad from command line…notepad can run without cmd open)
i. Kill the parent, the child becomes an orphan
ii. Orphan gets adopted by process 0 (no longer an orphan! Yaaaay)
process termination cleanup
Clean up
• deallocating address space
• release instance of the PCB
• The parent can then check that the child has completed deallocation and released its PCB; the parent then removes its reference to the child.
Difference between process and thread
Process – Single point of execution
• Program Counter
Threads – Multiple points of execution
• Program Counter per thread
• Lightweight process (see TCB); a process is "heavyweight" (PCB)
o Specific to amount of resources that a thread needs
What is TCB
Thread Control Block (TCB)
• System uses this to keep track of all the threads
• Control block is smaller than the PCB
• Fields: State, TID (thread ID), PC (program counter), CPU registers, scheduling info
Process vs Threads
• Thread
o Less resources
o Context switch: faster
o Less information to save, load and maintain
o Share data
Address space is shared between all threads
Chrome tabs: each one is its own process, websites can’t interfere with other tabs
o Share resources
Ex: Microsoft word, window is shared resource
• Processes
o Built in protection
Every process has own address space and can’t access another processes space
o Not parallel
We don't always need threads to multitask (example: autosave happening at the same time as spell-check)
Some methods/functions have to run sequentially anyway, so there's no advantage to multithreading them
o Threads share I/O
o Threads are hard
Types of Threads
- Kernel Threads – directly managed by the OS
- User Threads – managed by a thread library
Java Threads
• Hybrid
o Combination of user and kernel threads
o The JVM manages them (it is a thread library)
• When a thread requires the CPU
o It gets mapped to a kernel thread
o We get threads from the OS that back the threads within the JVM
Thread Pool
• Pre-create threads
• Avoids paying creation overhead on each task
o All the threads are created on start-up instead
• Uses more space
o e.g., ~10 KB per thread, and they may not all be used
o Space is a tradeoff for time
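This is what `java.util.concurrent` gives you out of the box. One caveat worth hedging: `Executors.newFixedThreadPool` actually creates its workers lazily as tasks arrive (up to the fixed size), not all at start-up, but the reuse/tradeoff idea is the same. The pool size of 4 is arbitrary:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a thread pool: a fixed set of worker threads is reused across
// many tasks, so we pay thread-creation overhead at most 4 times, not 10.
public class ThreadPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // at most 4 workers
        AtomicInteger done = new AtomicInteger();

        for (int i = 0; i < 10; i++) {
            pool.submit(done::incrementAndGet);  // tasks reuse existing threads
        }

        pool.shutdown();                          // no new tasks accepted
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(done.get());           // 10 — all tasks completed
    }
}
```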
Java Thread States
• Creation of a Java thread
o Extends Thread, or
o Implements Runnable
o Both ways must have a run() method
• "New" state
o Memory has not been allocated
o JVM is aware of the thread's "impending" existence
• "Start" transition
o Thread is created
o Memory allocated
o Thread now exists
• "Runnable" state
o Means both "ready" and "running"
• "run() is done" transition
o Last line of code has run
• Terminated state
o Thread is done
• Waiting state
• Timed waiting state
o Sets a timer while in a waiting state
o Tracks which thread needs to "wake up" and allows it to continue
• Blocked state
See diagram
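You can watch these states directly with `Thread.getState()`. The sleep durations below are arbitrary — just long enough that we catch the thread in TIMED_WAITING:

```java
// Sketch of the Java thread life cycle, observed via getState():
// NEW before start(), TIMED_WAITING while sleeping, TERMINATED after run().
public class ThreadStatesDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { }
        });

        System.out.println(t.getState()); // NEW — JVM knows of its impending existence
        t.start();                        // "start" transition: thread now exists
        Thread.sleep(50);                 // give it time to reach its sleep()
        System.out.println(t.getState()); // TIMED_WAITING — waiting with a timer set
        t.join();                         // wait until run()'s last line completes
        System.out.println(t.getState()); // TERMINATED — thread is done
    }
}
```

Note Java collapses "ready" and "running" into the single RUNNABLE state, which is why the demo never distinguishes them.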
• Multiprogramming is
ability to run multiple programs
Point of Multiprogramming
• We want CPU to do something all the time in multiprogramming
o Maximize CPU usage
CPU burst
• When a process is being executed the CPU has a “CPU Burst” (graph in task manager will go up, plateau a bit then go down again)
IO burst
• After a CPU burst, while the process waits on I/O, it is in an I/O burst
• CPU IO Burst Cycle:
mix of CPU Burst and I/O Burst
• CPU Bound Process
Can only execute as fast as the CPU
• IO Bound Process
Will only execute as fast as the IO events will allow
Pre-emption
• The ability to interrupt a process
o Bring a process from the running state to the ready state
• Non-Preemptive
o A process in the running state keeps the CPU until its CPU burst is done
2 components of Process Scheduling
• Policy - Scheduler
a. Algorithm
b. OS to choose which process to dispatch to the CPU
• Mechanism - Dispatcher
a. Performs the context switch (saving the state of a process and switching to another one)
i. Save / load PCBs
b. Very very fast
Process Scheduler goals
• Goals of scheduling (objectives)
o Fairness: fair to all processes and every process can execute at some point
o Maximize Throughput: maximize usage of cpu / how much work we can complete within a certain amount of time.
o Minimum Overhead: Do useful work
o Minimum response-time: how long must a process wait until dispatched to the CPU?
o Predictability: Do we know all 10 processes will execute in 10 seconds?
o Avoid Indefinite Postponement: certain algorithms where process always get selected first and other processes are waiting indefinitely or a really really long time.
CPU Utilization
a. Aiming for 40-90%
b. Should be same across all algorithms if given the same data set
c. Maximize
Throughput
a. Number of processes completed per unit time
b. Observe system for x amount of time and we’ll actively count # of processes that get dispatched to CPU and complete a CPU burst
c. Should be same across all algorithms if given the same data set
d. Maximize
turnaround time
a. How long a process takes from start to finish
b. Process enters the system to when it is terminated
c. Minimize
Waiting Time
a. Time that process has spent in the ready state / ready queue waiting to be dispatched
b. Minimize
Response Time
a. From receiving I/O or event to time to respond to the I/O or event.
b. Minimize
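Waiting and turnaround time are easy to compute by hand for a non-preemptive scheduler — this is the "deterministic modelling" evaluation from later in these notes. The workload below (three processes, bursts 24/3/3, all arriving at t=0) is a made-up example, scheduled FCFS:

```java
// Deterministic-modelling sketch: average waiting and turnaround time for
// FCFS on an assumed workload (bursts of P1, P2, P3; all arrive at t=0).
public class FcfsMetricsDemo {
    public static void main(String[] args) {
        int[] burst = {24, 3, 3};
        int time = 0, totalWait = 0, totalTurnaround = 0;

        for (int b : burst) {
            totalWait += time;       // time this process sat in the ready queue
            time += b;               // run it to completion (non-preemptive)
            totalTurnaround += time; // arrival (t=0) to completion
        }

        System.out.println("avg wait = " + totalWait / burst.length);             // (0+24+27)/3
        System.out.println("avg turnaround = " + totalTurnaround / burst.length); // (24+27+30)/3
    }
}
```

Reordering the same bursts as 3/3/24 drops the average wait from 17 to 3 — which is why the choice of policy matters even when total work is fixed.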
Asymmetric Multiprocessing
a. A master CPU maintains the RQ and dispatches to all other slave CPUs
Symmetric Multiprocessing (SMP)
a. No master; the OS schedules all CPUs. Two main approaches:
i. Centralized RQ
1. Locking: making sure that 2 CPUs don’t attempt to take same process at same time
ii. One RQ per CPU
1. CPUs may have full RQ: Busy
2. CPUs may have empty RQ: Idle
3. Can be uneven so we need a load balancing algorithm
a. Linux uses a “push and pull”
b. Push: if we notice an uneven work load, we can push some processes from 1 RQ to the other
c. Pull: an idle CPU can take processes from another CPU's RQ
Processor Affinity
• Affinity = preference for one CPU over the other
a. May prefer one core because each core has its own local cache, so staying put keeps the cache warm and runs faster
• Hard Affinity
a. Process cannot move at all. Must stay on the same CPU
• Soft Affinity
a. Process prefers not to move, but if you have to okaaaay
Real Time Scheduling
• Time Constraint / deadline
• Care about response time
• Not aiming to maximize cpu usage
• Soft RT
o Not a strict deadline
o General purpose systems support soft rt
• Hard RT
o Critical
o Example: needing to see a wall while flying a drone before you hit it
• Response time: Event occur to the time that the event is handled
Algorithm Evaluation
(least expensive to most expensive)
• Deterministic Modelling
• Queueing Models
a. Use sample data to calculate metrics
• Simulation
a. Care about the end result
• Synthetic Benchmark
a. Gives idealistic results
• Benchmarking
a. Most accurate
b. Time consuming / expensive to develop a real system and algorithms
What is unique to each process and what can be shared?
Code (text section) can be shared!
How can the system keep track of processes executing the same code?
Program counter!
One for each process
CPU loads value of PC into register as it executes each process
Dispatcher
Scheduling involves 2 separate components:
Scheduler: choose a process from the queue based on a selection method
The scheduling algorithm – POLICY!
Dispatcher: carries out the scheduler's decision
Context switch
Switch CPU to user mode
Load program counter so CPU can continue executing process at the correct instruction
The dispatcher module – MECHANISM!
Must be FAST!
Starvation
Priority algorithms suffer from starvation – where low priority processes may never get to execute!
Being “starved” out of the CPU
To solve this problem, we can use the aging technique
Increase priority of processes as they wait in the RQ
Implies priority is dynamic, not static and fixed for the duration of a process’s life
Can be adjusted internally by the OS or externally by users or applications
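The aging technique can be reduced to a few lines. The starting priorities and the "one priority point per tick" rate are assumptions for the sketch (higher number = higher priority):

```java
// Sketch of aging: a low-priority process waiting in the ready queue gains
// priority each tick, so it eventually outranks the process hogging the CPU.
public class AgingDemo {
    public static void main(String[] args) {
        int lowPriority = 1;            // the starving process, sitting in the RQ
        final int runningPriority = 10; // the high-priority process on the CPU
        int ticksWaited = 0;

        // Every tick spent waiting bumps the starving process's priority
        while (lowPriority <= runningPriority) {
            lowPriority++;
            ticksWaited++;
        }
        System.out.println(ticksWaited); // 10 — then the aged process gets dispatched
    }
}
```

Because the boost comes from waiting, priority here is dynamic: it rises in the RQ and would typically reset once the process runs.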
Choosing a Time Quantum
What is the effect of quantum size on RR scheduling?
A large quantum will degrade the algorithm to perform just like FCFS
A small quantum will lead to very high overhead with no significant processing work
Overhead from context switches!
All the switches add up, even if each is very fast!
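The quantum-size tradeoff shows up clearly if you just count context switches for the same workload at two quantum sizes. The three bursts of 20 are a made-up workload:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of quantum size vs. overhead in Round Robin: count how many
// context switches it takes to drain an assumed workload.
public class QuantumDemo {
    static int contextSwitches(int[] bursts, int quantum) {
        Deque<Integer> rq = new ArrayDeque<>();
        for (int b : bursts) rq.add(b);
        int switches = 0;
        while (!rq.isEmpty()) {
            int remaining = rq.poll() - quantum;   // run for one quantum
            if (remaining > 0) rq.add(remaining);  // not done: back of the RQ
            if (!rq.isEmpty()) switches++;         // switch to the next process
        }
        return switches;
    }

    public static void main(String[] args) {
        int[] bursts = {20, 20, 20};
        System.out.println(contextSwitches(bursts, 1));   // tiny quantum: 59 switches
        System.out.println(contextSwitches(bursts, 100)); // huge quantum: 2, i.e. FCFS
    }
}
```

With the huge quantum every process finishes its whole burst before switching — exactly the FCFS degradation the notes describe; with the tiny quantum nearly every time unit is followed by a switch.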
multi core vs multi processor
Multi-core is faster than multiprocessor
The extra speed exposed memory stalls – the CPU waiting for data during an instruction
The CPU can spend more than half of an instruction cycle idling
H/W threads (Hyper-threading)
When one thread stalls on memory, switch to the other one!
Appears as its own virtual (logical) processor
OS schedules each logical CPU separately