Midterm Flashcards
What are the key roles of an operating system?
- Manage resources (e.g., controls use of CPU, memory, peripheral devices)
- Enforce policies (e.g., fair resource access, limits resource usage)
- Provide abstractions that minimize complexity (e.g., abstract hardware details with system calls)
- Provide isolation and protection
What is the distinction between OS abstractions, mechanisms, and policies?
An abstraction provides a simple interface that hides implementation details and hardware complexity. (e.g., process, file, memory page)
A mechanism is an implementation that acts on an abstraction. (e.g., create/schedule a thread, open a file, allocate memory, mutual exclusion)
A policy defines the behavioral rules that a mechanism should follow (e.g., least recently used cache expiration, first in first out queue)
What does the principle of separation of mechanism and policy mean?
The separation of mechanism and policy gives us the flexibility to combine them in various ways to solve many different problems: one mechanism can support several policies, and a policy can be swapped without rewriting the mechanism.
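As a sketch of the idea, here is a toy Python cache whose eviction mechanism is fixed but whose eviction policy (FIFO vs. LRU) is pluggable; all names are illustrative, not a real OS API:

```python
class BoundedCache:
    """Mechanism: a fixed-capacity store that evicts one entry when full.
    The eviction *policy* is injected as a function, illustrating the
    separation of mechanism and policy."""

    def __init__(self, capacity, policy):
        self.capacity = capacity
        self.policy = policy      # policy(cache) -> key to evict
        self.data = {}
        self.inserted = []        # keys, oldest insertion first
        self.accessed = []        # keys, least recently used first

    def get(self, key):
        self.accessed.remove(key)
        self.accessed.append(key)         # record the access
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) == self.capacity:
            victim = self.policy(self)    # mechanism asks the policy
            del self.data[victim]
            self.inserted.remove(victim)
            self.accessed.remove(victim)
        if key not in self.data:
            self.inserted.append(key)
            self.accessed.append(key)
        else:
            self.accessed.remove(key)
            self.accessed.append(key)
        self.data[key] = value

def fifo(cache):   # policy: evict the first key inserted
    return cache.inserted[0]

def lru(cache):    # policy: evict the least recently used key
    return cache.accessed[0]

cache = BoundedCache(2, lru)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                    # "a" is now most recently used
cache.put("c", 3)                 # LRU evicts "b"; FIFO would evict "a"
```

Swapping `lru` for `fifo` changes the behavior without touching the mechanism.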
What does the principle “optimize for the common case” mean?
We identify and design for the common case, and we never sacrifice the performance of the common case to handle an edge case.
What happens during a user-kernel mode crossing?
When a process running in user mode must take a privileged action, such as interacting with hardware, it uses a system call to do so. A privileged bit in the CPU distinguishes user mode from kernel mode, so the hardware can tell whether the code currently executing has permission to perform a privileged operation. If a user-mode process attempts one without permission, a trap is raised and control passes to the OS, which determines whether the program should be terminated or control given back to the user-level thread.
What are some of the reasons why user-kernel mode crossing happens?
- System calls (e.g., write to a file, allocate memory)
- Interprocess communication via message passing
What is a kernel trap?
Why does it happen?
What are the steps that take place during a kernel trap?
A trap is a mechanism by which a process is transitioned from user mode to kernel mode.
It happens due to:
- a software interrupt (e.g., system call)
- an access violation or fault (e.g., a program tries to execute an instruction that is only available in kernel mode, or divides by zero)
- a hardware interrupt (e.g., a timer or network device)
When a trap is initiated, the OS takes control and determines whether the request is allowed. If not, it may terminate the process; if so, it executes the request and returns control to the program.
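A toy Python sketch of that decision; the permission table and function names are made up purely for illustration:

```python
# Hypothetical permission table: requests the "kernel" will service.
ALLOWED = {"read_file", "write_file"}

def handle_trap(process, request):
    """Return the process's fate after the kernel handles its trap."""
    if request not in ALLOWED:
        return (process, "terminated")   # e.g., an access violation
    # ... kernel services the request in privileged mode ...
    return (process, "resumed")          # control returns to user mode

fate = handle_trap("pid 7", "halt_cpu")  # disallowed request
```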
What is a system call?
How does it happen? What are the steps that take place during a system call?
A system call is a way for a user program to ask the operating system to perform a privileged task on its behalf.
When a system call is executed, the privilege bit is set to kernel mode and control passes to a service routine in the OS. The kernel executes the request, resets the privilege bit to user mode, and returns control to the user program.
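Python's `os` module exposes thin wrappers over real system calls, so the crossings described above happen on every call; this sketch uses `pipe(2)`, `write(2)`, and `read(2)`:

```python
import os

# Each of these calls traps into the kernel, which performs the
# privileged I/O on the process's behalf and returns a result.
r, w = os.pipe()            # pipe(2): ask the kernel for a channel
n = os.write(w, b"hello")   # write(2): kernel copies bytes in
data = os.read(r, n)        # read(2): kernel copies bytes out
os.close(r); os.close(w)    # close(2): release the descriptors
print(data)                 # b'hello'
```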
Contrast the design decisions and performance tradeoffs among monolithic, modular and microkernel-based OS designs.
A monolithic OS has every type of service that any application or hardware could require.
Pros:
- everything included
- inlining, compile-time optimizations
Cons:
- customization, portability, manageability
- memory footprint
- performance
A modular OS has basic services and APIs, but everything can be customized because the OS specifies interfaces that modules must implement.
Pros:
- maintainability / upgradability
- smaller footprint
- less resource needs
Cons:
- indirection can impact performance
- maintenance can be an issue as modules come from different codebases and can introduce bugs
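The "OS specifies an interface, modules implement it" idea can be sketched in Python with an abstract base class standing in for the interface; the scheduler modules are illustrative, not a real kernel API:

```python
from abc import ABC, abstractmethod
from collections import deque

class Scheduler(ABC):
    """The interface the 'OS' specifies; any scheduler module
    must implement it to be loadable."""
    @abstractmethod
    def add(self, task): ...
    @abstractmethod
    def next_task(self): ...

class FifoScheduler(Scheduler):
    def __init__(self): self.q = deque()
    def add(self, task): self.q.append(task)
    def next_task(self): return self.q.popleft()

class PriorityScheduler(Scheduler):
    def __init__(self): self.tasks = []
    def add(self, task): self.tasks.append(task)   # task = (priority, name)
    def next_task(self):
        self.tasks.sort()                          # lowest priority number first
        return self.tasks.pop(0)

def dispatch(scheduler):
    # The rest of the "OS" depends only on the interface,
    # so the module can be swapped without changing this code.
    return scheduler.next_task()
```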
A microkernel OS keeps only the most basic primitives (e.g., address spaces, threads, IPC) at the privileged kernel level; other services, such as file systems and device drivers, run as user-level processes.
Pros:
- size
- verifiability
Cons:
- portability
- complexity of software development
- cost of user/kernel crossing.
What are the distinctions between a process and a thread?
What happens on a process vs. thread context switch?
A process is a program executing within a virtual address space. A thread is a subset of a process. It shares virtual address space with its process and other threads created by that process, but it has its own execution context (stack, registers, and program counter).
During a process context switch the OS has to swap in the new process's virtual-to-physical address mappings (e.g., load a different page table), which is expensive and pollutes hardware caches. During a thread context switch the virtual-to-physical address mappings remain the same, making the switch much less expensive.
Describe the states in a lifetime of a process?
- New: the process is being created
- Ready: The process is waiting to be assigned to a processor
- Running: Instructions are being executed
- Waiting: The process is waiting for some event to occur (e.g., I/O completion or signal)
- Terminated: The process has finished executing
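The five states form a small state machine; a sketch of the legal transitions, with the standard transition names in comments:

```python
TRANSITIONS = {
    "new":        {"ready"},                           # admitted
    "ready":      {"running"},                         # scheduler dispatch
    "running":    {"ready", "waiting", "terminated"},  # preempt / block / exit
    "waiting":    {"ready"},                           # awaited event occurs
    "terminated": set(),
}

def can_move(src, dst):
    return dst in TRANSITIONS[src]

# A waiting process cannot run directly; it must pass through ready.
print(can_move("waiting", "running"))   # False
```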
Describe the lifetime of a thread?
A thread is created (e.g., via a create call that takes a procedure to run and its arguments), becomes ready, and then alternates between running, being preempted back to ready, and waiting (blocked on I/O or synchronization). It terminates when its procedure returns or it exits; a parent thread may then join it to retrieve its result, unless it was detached.
Describe all the steps which take place for a process to transition form a waiting (blocked) state to a running (executing on the CPU) state.
Whatever the process is waiting on (e.g., an I/O request or a signal) must occur; the OS then moves the process to the ready queue. The CPU scheduler selects it from the ready queue, restores its execution context from its PCB (registers, program counter, stack pointer), and dispatches it on the CPU.
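Those steps can be mimicked with a thread blocking on an event, where `threading.Event` plays the role of the awaited I/O completion:

```python
import threading

events_log = []
io_done = threading.Event()

def worker():
    events_log.append("waiting")
    io_done.wait()                # blocked: waiting for the "I/O" event
    events_log.append("running")  # resumed after the event occurred

t = threading.Thread(target=worker)
t.start()
# ... the event the thread was waiting on occurs ...
io_done.set()                     # moves the thread from waiting to ready
t.join()                          # the scheduler then runs it to completion
print(events_log)                 # ['waiting', 'running']
```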
What are the pros-and-cons of message-based vs. shared-memory-based IPC
Message-based inter-process communication involves the OS setting up a shared communication channel.
Pros:
- Can leverage the OS to manage communication, which comes with protections.
- Processes don’t have to be on the same machine.
Cons:
- Every call to send/receive a message has to cross the user/kernel boundary, which is expensive.
Shared-memory-based inter-process communication involves the OS setting up a segment of shared memory and mapping it to each process’ address space.
Pros:
- Don’t need to cross the user-kernel boundary
Cons:
- It’s expensive to set up so only worth it if cost can be amortized across uses.
- OS is not involved, which means there’s a lack of protection (i.e., processes have to do their own orchestration, etc.)
What are benefits of multithreading?
When is it useful to add more threads, when does adding threads lead to pure overhead?
What are the possible sources of overhead associated with multithreading?
+ Parallelization: speed up the time to complete work if there are multiple CPUs.
+ Specialization: give higher priority to certain types of tasks, and improve performance by executing a smaller portion of code so that more of it will be in the processor cache (hotter cache)
+ More memory efficient: threads share an address space so context switches are less expensive and the application is more likely to fit into memory and not require as many swaps from disk.
+ Lower communication overhead: communicating between processes is more costly than communicating between threads.
It’s useful to add more threads when work can be parallelized or specialized, or when threads block often (e.g., on I/O) and the CPU would otherwise idle. Adding threads leads to pure overhead when there is no such work, e.g., more CPU-bound threads than cores, or tasks too short to amortize the cost of creating and coordinating threads.
Possible sources of overhead associated with multithreading include synchronization (locking, waiting), shared memory management, and thread creation and context switching…
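The synchronization overhead is visible in code: because threads share memory, an unprotected `counter += 1` can lose updates (the read-modify-write is not atomic), so each increment must take a lock, and acquiring that lock on every iteration is exactly the overhead referred to above:

```python
import threading

N_THREADS, N_INCR = 4, 25_000
counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(N_INCR):
        with lock:               # synchronization: needed for correctness,
            counter += 1         # but paid for on every single increment

threads = [threading.Thread(target=work) for _ in range(N_THREADS)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 100000: the lock made the updates safe
```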