Midterm Flashcards
What are the key roles of an operating system?
Hides hardware complexity
manages resources
provides isolation & protection between applications running on the OS
Can you make a distinction between OS abstractions, mechanisms, and policies?
Abstractions:
simplified representations of the complicated interactions between the OS and hardware, which the OS exposes to applications running on the system.
Examples include: process, thread, file, socket, memory page.
Mechanisms:
“verbs” that describe actions an operating system can perform on abstractions, like “create”, “schedule”, “open”, “write”, “allocate”.
Policies:
rules for deciding how and when mechanisms are applied.
For example, the operating system can have a policy about how long content can stay in memory instead of just being on disk. Others include Least Recently Used (LRU) and Earliest Deadline First (EDF).
What does the principle of separation of mechanism and policy mean?
Mechanisms in the operating system can support multiple policies. The mechanisms are flexible. The memory management mechanism might use different policies depending on the situation (LRU, LFU, random, etc.)
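The idea can be sketched in a few lines: a toy fixed-capacity cache (the mechanism) that accepts any eviction policy as a parameter. The names `Cache`, `evict_lru`, and `evict_random` are made up for illustration; a real OS does the same thing at the page-replacement level.

```python
import random
from collections import OrderedDict

def evict_lru(cache):
    # Policy: victim is the least recently used key (front of the OrderedDict).
    return next(iter(cache))

def evict_random(cache):
    # Policy: victim is a random key.
    return random.choice(list(cache))

class Cache:
    # Mechanism: a fixed-capacity cache. It can enforce *any* eviction
    # policy passed in; the mechanism itself never decides whom to evict.
    def __init__(self, capacity, policy):
        self.capacity = capacity
        self.policy = policy
        self.data = OrderedDict()

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)        # mark as most recently used
        elif len(self.data) >= self.capacity:
            victim = self.policy(self.data)   # same mechanism, swappable policy
            del self.data[victim]
        self.data[key] = value

c = Cache(2, evict_lru)
c.put("a", 1); c.put("b", 2)
c.put("a", 1)        # touch "a", so "b" is now least recently used
c.put("c", 3)        # over capacity: the LRU policy evicts "b"
assert set(c.data) == {"a", "c"}
```

Swapping `evict_lru` for `evict_random` changes the behavior without touching the `Cache` code, which is exactly what the principle asks for.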
What does the principle “optimize for the common case” mean?
designing the OS based on how it will be used, what the user will be executing, what are the workload requirements of the user.
important because it allows the OS to be as effective as possible.
entails the OS choosing specific mechanisms and policies that match its most common usage.
What happens during a user-kernel mode crossing?
When a user application/process needs an operation that requires kernel-level permissions, it makes a system call, and control crosses the user-kernel boundary.
The kernel performs the system call and then returns to the user process. While in privileged mode, a mode bit in the CPU allows privileged instructions to be performed; this bit is not set that way in user mode.
A trap can also occur when a user-level process attempts to perform a privileged operation directly; the operating system then checks whether the calling process should be allowed to do that action or not.
Crossings are slow: every system call affects the hardware cache by switching locality. The application loses part of its data in the cache in favor of whatever the OS needs to bring in to perform the system call.
(In a user-kernel mode crossing the mode bit is set in the CPU, and control is passed to the kernel.)
What are some of the reasons why user-kernel mode crossing happens?
when a user-level thread/process/application attempts a privileged action while the CPU is not operating in privileged mode, causing a trap
OR
when a user-level thread/process makes use of the OS level provided system calls which have the operating system perform said privileged actions.
OR
hardware interrupts; for example, the timer interrupt is what lets the kernel regain control to do scheduling.
OR
(I also think that signals are a user-kernel mode crossing, but in the other direction: from kernel to user)
What is a kernel trap? Why does it happen? What are the steps that take place during a kernel trap?
an alert to the operating system that an unprivileged user process has attempted to perform a privileged task or access privileged memory addresses. When this occurs the OS determines the source of the trap, determines whether it should be allowed or not, and then returns execution to the interrupted user process.
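A real hardware trap can't be portably triggered from a short snippet, but the allow-or-deny check described above is visible whenever the kernel refuses a request at the boundary instead of performing it. A small sketch: asking the kernel to read from a file descriptor the process has already closed.

```python
import os
import errno

r, w = os.pipe()
os.close(r)
os.close(w)

# The read(2) system call crosses into the kernel, which inspects the
# request, sees that the descriptor is no longer valid, and returns an
# error to the user process instead of performing the read.
try:
    os.read(r, 1)
    raise AssertionError("kernel should have rejected the read")
except OSError as e:
    assert e.errno == errno.EBADF   # "bad file descriptor"
```

The decision ("should this process be allowed to do this?") happens entirely on the kernel side of the boundary; user code only sees the result.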
What is a system call? How does it happen? What are the steps that take place during a system call?
an operation, from a set that the OS makes available to applications, which explicitly invokes a privileged mechanism in the kernel.
happens when a user-level application invokes one of these operations to ask the OS to perform the privileged action on its behalf.
Steps:
- User Process makes a system call
- Control is passed to the operating system, which sets the kernel mode bit to 0 (privileged access only). It jumps to the place in memory for the OS function to execute (along with the optional arguments from the user process).
- The system call completes execution and returns the result to the original user process which requires an execution context switch back to user-level privilege.
There are both synchronous and asynchronous versions of system calls.
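The round trip above can be traced with ordinary file descriptors; in CPython, `os.write` and `os.read` are thin wrappers around the `write(2)` and `read(2)` system calls (a minimal sketch):

```python
import os

r, w = os.pipe()               # pipe(2): the kernel creates the channel

# write(2): control transfers to the kernel with the fd, buffer, and
# length as arguments; the kernel copies the bytes, then execution
# returns to this user process with the result (bytes written).
n = os.write(w, b"hello")
assert n == 5

# read(2): another user -> kernel -> user round trip.
assert os.read(r, 5) == b"hello"
os.close(r)
os.close(w)
```

Each of these calls is synchronous: the user process is paused at the boundary until the kernel hands the result back.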
Monolithic OS
design is large, can be hard to manage, and may not be very portable, but it can be optimized at compile time since it includes everything a system will need.
downsides are customization, portability, and manageability due to the large codebase, which can be hard to code, debug, and maintain. The memory footprint can also be huge, which can impact performance.
Modular OS
design can be smaller than monolithic because it is interface-oriented: modules that are required based on the usage and workload of the operating system can be loaded when necessary and excluded when not.
less resource intensive and easier to maintain; however, performance can be impacted by the indirection through interfaces, and modules are often sources of bugs that are not directly the fault of the operating system loading them. Given that modules can come from non-kernel authors, this approach can be buggy.
Microkernel OS
design has a very small footprint and only supports very basic roles.
Memory management,
address space,
location for execution of user processes.
The user level runs the typical operating system components like file systems, disk drivers, etc. This requires many more inter-process communication interactions.
very small and easy to test/verify (useful for embedded devices).
often less portable (very specific for said devices) and could be slow because of number of user-kernel boundary crossings that are required.
Process vs. thread, describe the distinctions.
A process is any user application that is running on the operating system.
has its own memory space allocated to it, with both heap and stack allocations in virtual memory.
The process consists of its address space - this includes: the code (text), the data that's available when the process is initialized, the heap, and the stack. As the process executes, dynamic data is added to the heap. The stack is a LIFO (last in, first out) data structure that is useful for process execution when an application needs to jump to another location to execute something and later return to a prior position.
A thread is similar to a process except that, in the case of multiple instances, it has its own program counter, stack pointer, stack, and thread-specific registers, but it shares the same virtual address space with the other threads.
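The "shared address space, private stack" distinction can be seen directly: each thread below runs its own function activation (its own stack and program counter) but mutates the very same heap object. This is a minimal sketch; `counter` and `work` are names invented for the example.

```python
import threading

counter = {"value": 0}        # one heap object, visible to every thread
lock = threading.Lock()

def work():
    # Each thread runs on its own stack with its own program counter,
    # but reads and writes the same heap data as all the others.
    for _ in range(10_000):
        with lock:            # shared data requires synchronization
            counter["value"] += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter["value"] == 40_000   # all four threads updated one object
```

Separate processes could not share `counter` this way without explicit IPC, which is exactly the process/thread distinction the card describes.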
What happens on a process vs. thread context switch.
During a process context switch, all information about the preempted process that is tracked by the CPU is saved into that process's process control block. The CPU then has the information loaded for a different process, and the switch happens in reverse when control returns. It may also require data to be evicted from the processor cache to make room for the other process.
When a thread context switch occurs, the running thread stores its execution context (its program counter, stack pointer, and register values) in memory, and the new thread's execution context is loaded from memory onto the processor. This is faster than a context switch between processes, because threads don't need the costly virtual-to-physical address mappings to be swapped out or recreated. This also leads to hotter caches during switches, which is a performance benefit.
Describe the states in a lifetime of a process?
New - First state when a process is created
Ready - once the OS admits a process it gets a PCB and some initial resources; a process also returns to Ready when it is interrupted while Running (context switch)
Running - OS Scheduler gives CPU to a process
Waiting - an I/O event (or some other long-running event/operation) blocks the process; it transitions back to Ready after the I/O or event completes
Terminated - Process finishes all operations or encounters error
Describe the lifetime of a thread?
created when a parent process/thread calls a thread creation mechanism.
then run asynchronously (unless blocked/waiting).
can be signalled or broadcast to in order to check whether they need to continue blocking or continue executing. The parent process/thread can call a join mechanism to block itself until a child completes, after which the child's result can be returned to the parent.
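The whole lifetime (create, run asynchronously, block on a condition, get signalled, join) fits in a short sketch. `waiter`, `ready`, and `out` are illustrative names, not part of any standard API beyond `threading` itself.

```python
import threading

ready = False
cond = threading.Condition()
out = []

def waiter():
    with cond:
        while not ready:      # re-check the predicate on every wakeup
            cond.wait()       # child blocks here until signalled
        out.append("woke")

t = threading.Thread(target=waiter)   # parent creates the child thread
t.start()                             # child now runs asynchronously

with cond:
    ready = True
    cond.notify()             # signal: tell the blocked waiter to re-check

t.join()                      # parent blocks until the child completes
assert out == ["woke"]
```

`notify()` wakes one waiter while `notify_all()` broadcasts to all of them, matching the signal/broadcast distinction in the card.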
Describe all the steps which take place for a process to transition from a waiting (blocked) state to a running (executing on the CPU) state.
A waiting process will wait until the event or operation that caused the WAITING state finishes. It then transitions to the READY state. Once READY, the process can be scheduled by the scheduler onto a CPU, at which point it enters the RUNNING state.
What are the pros-and-cons of message-based vs. shared-memory-based IPC.
Message-based IPC uses an OS-provided communication channel to allow processes to send messages to each other.
good - the operating system maintains this communication channel for the processes so the API is more universally implemented.
bad - requires a lot of overhead. The processes have to copy information into and out of the communication channel in kernel memory through the OS (i.e., system calls).
Shared-memory-based IPC works by having the OS map a shared memory region into the address spaces of the participating processes.
Good - both/all processes can access this shared memory region as if it were their own. This gets the OS out of the way, which is good for performance.
Bad - the OS no longer manages that address space; it is up to the processes, which can be bug-prone, and the processes must agree on how to use the shared region.
What are benefits of multithreading?
allows parallelization to occur which helps achieve overall performance/speed increases and/or execution time decreases.
Multiple threads of an application can be at different points in execution handling more input at the same time (especially on multi-core systems).
Threads can also be assigned long execution and block tasks so the main application or other threads can continue processing information/input while other threads wait for slower devices like I/O.
requires less memory because threads share an address space, which could also result in fewer memory swaps.
When is it useful to add more threads, when does adding threads lead to pure overhead?
Depending on the input and tasks of an application it could be beneficial to add more threads.
For example, in a pipeline pattern it could make sense to match the number of threads to the number of stages in the pipeline, or perhaps several threads per stage (for the longer/more involved stages). In a boss-worker pattern it might be detrimental to add more threads dynamically if there isn't enough work for those threads to do. (I think it's useful to add more threads as long as there is idle CPU time. So instead of letting the CPU stay idle, another thread can be context-switched in and start doing useful work. But once there is no idle CPU time to utilize, adding more threads just adds overhead and slows down processing.)
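A boss-worker setup can be sketched with a shared task queue: the boss enqueues work and sentinels, the workers pull from the queue. With three workers and only five small tasks, most worker time is spent blocked on the queue, which hints at why extra threads beyond the available work are pure overhead. The names `worker`, `tasks`, and `results` are illustrative.

```python
import threading
import queue

def worker(tasks, results):
    while True:
        item = tasks.get()        # blocks: synchronization on the shared buffer
        if item is None:          # sentinel from the boss: time to exit
            tasks.task_done()
            return
        results.put(item * item)  # the "work": square the input
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(3)]
for t in workers:
    t.start()

for n in range(5):                # the boss hands out work
    tasks.put(n)
for _ in workers:                 # one sentinel per worker
    tasks.put(None)

tasks.join()                      # boss waits for all tasks to be processed
for t in workers:
    t.join()

assert sorted(results.queue) == [0, 1, 4, 9, 16]
```

Every `get()` and `put()` on the shared queue involves lock acquisition, which is exactly the synchronization overhead the next card lists.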
What are the possible sources of overhead associated with multithreading?
context switching & synchronization between threads
when a boss thread has to manage a pool of worker threads. It may not know exactly what each thread is doing or what it did last so it is difficult to know during execution time which threads may be more/less efficient at certain tasks.
overhead of keeping the shared buffer synchronized.