Exam 1 Review Flashcards
What is a Virtual Machine (VM)?
• A software system that allows an operating system to run as an
application in user-space
• Allows us to run multiple operating systems on a single computer
Why Do We Use Virtual Machines?
- Running Windows/Linux on the same machine
- Sandboxing for testing
- Cloud computing
Advantages So Far with Online IDEs
- Online IDEs are great for learning how to code
- Provide a standardized coding environment
- Little configuration / installation required – it just works!
Why VM-Based Development Now?
Development in a more realistic environment
• Running on your local computer
• Logging into a virtual machine that runs Linux, similar to how system administrators log into a server machine or cloud VM
• Build an application running on an actual operating system that you installed!
• Write C programs that interact with your operating system using system calls
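A minimal C sketch of that last point: instead of the C library's printf(), it calls the write() system call directly to ask the kernel to put bytes on the terminal.

/* Minimal sketch: print by calling the write() system call directly */
#include <string.h>   /* strlen() */
#include <unistd.h>   /* write(), STDOUT_FILENO */

int main(void) {
    const char *msg = "hello from a system call\n";
    write(STDOUT_FILENO, msg, strlen(msg));   /* traps into the kernel */
    return 0;
}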
What is an Operating System?
Operating systems provide an interface between hardware and user
programs, and make the hardware usable
OS Functions
Extended machine providing abstraction of the hardware
• Hides the messy details which must be performed
It is a resource manager
• Time on CPU is shared among multiple users/programs
• Space in memory and on disks is shared among multiple users/programs
What’s a Process?
A “program in execution”
• With associated (data and execution) context
• Process execution must progress in a sequential fashion
NOT the same as “program” or “application”
• A given program may be running 0, 1, or >1 times
Each instance of a program is a separate process
• With its own address space and context
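A small sketch of the program-vs-process distinction: compile this once, run it twice, and each run is a separate process that reports a different PID (the sleep just keeps both instances alive long enough to observe them, e.g., with ps).

/* Each run of the same program is a separate process with its own PID */
#include <stdio.h>
#include <unistd.h>   /* getpid(), sleep() */

int main(void) {
    printf("I am process %d\n", (int)getpid());
    sleep(30);   /* keep the process around so two running instances can be observed */
    return 0;
}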
Why Run Processes?
An operating system executes a variety of programs
• Batch systems – job
• Time-shared systems: user programs or tasks
Many processes can run concurrently
• Single-user system: several programs (word processor, browser, email)
running at the same time
• OS-internal activities (e.g., memory management)
Process Creation in a Nutshell
• Parent processes create child processes, which in turn create other
processes, forming a tree of processes
• Generally processes are identified and managed via a process
identifier (PID)
Resource sharing
• Parent and children have separate virtual address spaces
• Parent and child share some resources (e.g., open file descriptors, semaphores)
Process Hierarchy
Example tree: the init process sits at the root and spawns system processes and servers such as an SSH server; the SSH server spawns a user shell, which in turn launches applications such as Firefox or Emacs.
First System Call: fork()
Both parent and child run concurrently after fork()
Linux does copy-on-write to reduce fork() overhead
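A minimal fork() sketch: one process calls fork(), two processes return from it, and parent and child then run concurrently, each with its own (copy-on-write) address space.

/* fork(): parent and child run concurrently afterwards */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>      /* fork(), getpid(), getppid() */

int main(void) {
    pid_t pid = fork();                  /* one call, returns twice */
    if (pid < 0) {
        perror("fork");                  /* creation failed */
    } else if (pid == 0) {
        printf("child: pid=%d parent=%d\n", (int)getpid(), (int)getppid());
    } else {
        printf("parent: pid=%d child=%d\n", (int)getpid(), (int)pid);
    }
    return 0;
}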
Second System Call: exec()
- exec() loads in a new binary for execution
* PC, SP, and memory (stack, heap, data) are all reset to run the new program
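A minimal sketch of the fork()+exec() pattern; running ls is just an illustration, any binary works. execvp() only returns if loading the new program fails.

/* exec(): the child replaces its own image with a new program */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>      /* fork(), execvp() */

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        char *argv[] = { "ls", "-l", NULL };
        execvp("ls", argv);      /* resets PC, SP, stack, heap, data for ls */
        perror("execvp");        /* reached only if exec failed */
        return 1;
    }
    return 0;
}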
Third and Fourth System Calls: wait() and exit()
exit(status) – executed by a child process when it wants to terminate
• Makes status (an integer) available to the parent
• Zero exit status means the command exited without errors
• Non-zero exit status indicates errors
wait(&status) – executed by the parent; blocks until a child terminates and collects its exit status
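A minimal sketch tying the calls together: the child terminates with exit(), and the parent blocks in wait() and reads the child's status.

/* wait()/exit(): parent collects the child's exit status */
#include <stdio.h>
#include <stdlib.h>      /* exit() */
#include <sys/types.h>
#include <sys/wait.h>    /* wait(), WIFEXITED, WEXITSTATUS */
#include <unistd.h>      /* fork() */

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        exit(42);                        /* child terminates with status 42 */
    } else {
        int status;
        wait(&status);                   /* parent blocks until the child exits */
        if (WIFEXITED(status))
            printf("child %d exited with %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}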
Process Termination
Graceful termination via exit
Non-graceful termination when process is killed
• kill(pid, sig) – sends signal sig to the process with process ID pid (see the sketch after this list)
• Ex: SIGKILL signal to terminate the target process immediately
• Can be done from C code or from the shell
• A process can kill another process only if both processes belong to the
same user, or if the "killer" process is owned by the superuser
When a process terminates, resources are reclaimed by the system:
• PCB, memory, files
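A minimal sketch of non-graceful termination from C: the parent sends SIGKILL to its child with kill(), then reclaims it with wait().

/* kill(): terminate another process with a signal */
#include <stdio.h>
#include <signal.h>      /* kill(), SIGKILL */
#include <sys/types.h>
#include <sys/wait.h>    /* wait() */
#include <unistd.h>      /* fork(), pause(), sleep() */

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        pause();                 /* child just waits for a signal */
        return 0;
    }
    sleep(1);                    /* give the child a moment to start */
    kill(pid, SIGKILL);          /* non-graceful termination of the child */
    wait(NULL);                  /* reclaim the child's PCB and other resources */
    printf("killed child %d\n", (int)pid);
    return 0;
}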
What is a Process’s ”Context”?
Contents of the PCB
Execution Context
• Stack pointer
• Program counter
• CPU register values
• Segment register values
• Status (running, blocked, ready)
Memory Context
• Pointer to page table
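An illustrative (hypothetical) PCB written as a C struct; a real OS, e.g. Linux's task_struct, stores far more, but this captures the execution and memory context listed above.

/* Hypothetical, simplified PCB for illustration only */
#include <stdint.h>

enum proc_state { READY, RUNNING, BLOCKED };

struct pcb {
    int             pid;          /* process identifier */
    enum proc_state state;        /* ready, running, or blocked */
    /* execution context saved/restored on a context switch */
    uint64_t        pc;           /* program counter */
    uint64_t        sp;           /* stack pointer */
    uint64_t        regs[16];     /* CPU register values */
    /* memory context */
    void           *page_table;   /* pointer to this process's page table */
};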
Context Switch: P1 to P2 (running)
Steps:
- P1 stops running
- CPU switches to kernel mode
- OS starts running
- OS copies CPU register values into P1's PCB
- OS scheduler decides P2 should run next
- OS loads P2's PCB values into the CPU
- CPU switches back to user mode
- P2 starts running
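Illustrative pseudo-C of those steps, reusing the hypothetical struct pcb sketched earlier; save_registers() and load_registers() are placeholder names for the architecture-specific assembly a real kernel would use.

/* Hypothetical context switch: save P1's context, restore P2's */
struct pcb;                          /* see the PCB sketch above */
void save_registers(struct pcb *p);  /* placeholder: copy CPU registers into the PCB */
void load_registers(struct pcb *p);  /* placeholder: restore CPU registers from the PCB */

void context_switch(struct pcb *prev, struct pcb *next) {
    /* CPU has already trapped into kernel mode at this point */
    save_registers(prev);   /* OS copies register values into P1's PCB */
    load_registers(next);   /* OS loads P2's PCB values into the CPU */
    /* returning to user space switches the CPU back to user mode; P2 runs */
}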
Context Switching has Overhead
• Transition from user to kernel mode
• Copying contents to and from PCBs
• Potential memory and CPU cache misses when running another
process
• A well-designed OS should:
• Carefully balance fairness against context-switching overhead
• Minimize time spent in the OS by running efficiently
Life of a Process
State diagram: a newly created process enters the Ready state; the scheduler moves it to Running; a running process may become Blocked (e.g., after a blocking system call), return to Ready when preempted, or be Terminated.
When Does Context Switching Occur?
- (1) Currently running process makes a system call and is blocked
- Process is put on the blocked queue
- Scheduler picks up another process in ready queue to run
- (2) Currently running process terminates
- Scheduler picks up another process in ready queue to run
- (3) Hardware or software interrupt happens
- OS handles the interrupt and blocked processes may become ready
- Scheduler may choose to continue running the current process or pick another process to run
- (4) Current process used up its current “time slice”
- Scheduler picks up another process in ready queue to run
Typical Scheduler Heuristics
Each process runs for a fixed time slice (typically 100 msec)
• Response times vs throughput tradeoffs
Some processes have higher priority than others:
• Interactive applications that block frequently because of system calls
• User-defined priority (using “nice” command in Linux or task manager in
Windows)
• System daemons
• A user may have higher priority than other users sharing the computer
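A small sketch of user-defined priority from C using the POSIX nice() call (the shell equivalent is the nice command); a positive increment lowers the process's own scheduling priority.

/* nice(): lower this process's scheduling priority */
#include <stdio.h>
#include <unistd.h>   /* nice() */

int main(void) {
    int newval = nice(10);                   /* be "nicer": deprioritize ourselves */
    printf("new nice value: %d\n", newval);
    /* CPU-bound work here now yields more readily to higher-priority processes */
    return 0;
}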