Midterm Review Questions Flashcards
What are 3 key roles of an operating system?
- hide hardware complexity
- manage underlying hardware resources
- provide isolation and protection
Give an example of how an OS hides hardware complexity
For example, the operating system provides __________________ as a representation of ______________________
For example, the operating system provides the file abstraction as a representation of logical components of information stored on disk.
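As a concrete illustration (a sketch, not course material; the filename is made up), the program below reads a file through the standard C library without ever knowing about disk blocks, sectors, or the filesystem layout:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* "notes.txt" is a hypothetical file; the program only ever sees
     * the file abstraction, never disk blocks or sectors. */
    FILE *f = fopen("notes.txt", "r");
    if (f == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    char buf[256];
    /* The OS translates this sequential read into whatever block I/O
     * the underlying storage hardware actually requires. */
    while (fgets(buf, sizeof buf, f) != NULL) {
        fputs(buf, stdout);
    }

    fclose(f);
    return EXIT_SUCCESS;
}
```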
Give 3 examples of how the OS manages or arbitrates underlying hardware resources
- allocates memory
- schedules applications to run on the CPU
- controls access to I/O
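For instance, here is a minimal Linux-oriented sketch of the memory-allocation case (the mmap flags and the 4096-byte page size are assumptions about the platform): the program asks for memory, and the kernel decides which physical frame will back it.

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    /* Ask the OS for one page of anonymous memory; the kernel decides
     * which physical frame (if any, until first touch) backs it. */
    size_t len = 4096;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    ((char *)p)[0] = 'x';   /* first touch: the OS maps a physical frame */
    printf("kernel mapped a page at %p\n", p);
    munmap(p, len);
    return 0;
}
```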
Give an example of how the OS provides isolation and protection.
With multiple applications running concurrently, the operating system must make sure that no one application interferes with the execution of another application.
For example, the operating system ensures that memory allocated to one application is neither read from nor written to by another application.
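A deliberately crashing sketch of that protection (the address is arbitrary and the program is expected to be killed): the hardware refuses the access, traps into the kernel, and the OS delivers SIGSEGV rather than letting one process touch memory it does not own.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* An arbitrary address this process has no mapping for (it could
     * correspond to another process's memory). The MMU check fails,
     * the hardware traps, and the OS delivers SIGSEGV instead of
     * letting the access go through. */
    volatile int *not_ours = (int *)(uintptr_t)0xDEADBEEF;
    printf("about to touch memory we don't own...\n");
    *not_ours = 42;              /* expected to crash with SIGSEGV */
    printf("this line should never print\n");
    return 0;
}
```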
Apart from simplicity, another benefit of abstractions is ________
Operating systems are free to _____ out their ___________ to suit different hardware resources, and as long as their API remains ______, programs will still run.
Apart from simplicity, another benefit of abstractions is their portability.
Operating systems are free to swap out their implementations to suit different hardware resources, and as long as their API remains constant, programs will still run.
Describe each of the following in one sentence:
- OS abstractions
- OS mechanisms
- OS policies
- Abstractions are entities that represent other entities (e.g. file)
- Mechanisms are the tools by which policies are implemented.
- Policies are the rules around how a system is maintained and resources are utilized.
A file is an example of …
an OS abstraction
processes and threads are examples of …
software abstractions
file, socket, memory page are examples of …
hardware abstractions
As an example of a policy, a common technique for swapping memory to disk is ________________
Explain this policy: _______________________
______________________________________
LRU (least recently used)
This policy states that the memory that was least recently accessed will be swapped to disk.
Policies help dictate the rules a system follows to manage and maintain a workable state. Note that policies can be different in different contexts or at different times in a system.
What is the mechanism for implementing LRU?
Mechanisms are the tools by which policies are implemented. For example, in order to enforce the LRU policy of memory management above, memory addresses/blocks may be moved to the front of a queue every time they are accessed.
When it comes time to swap memory to disk, the memory at the back of the queue can be swapped. In this example, the queue is the mechanism by which the LRU policy is implemented.
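A toy sketch of that mechanism (the frame count, page numbers, and access pattern are made up): an ordered array plays the role of the queue, with the most recently used page at the front and the eviction victim taken from the back.

```c
#include <stdio.h>
#include <string.h>

#define NFRAMES 4   /* hypothetical number of resident pages */

/* frames[0] is most recently used; frames[NFRAMES-1] is least recently used */
static int frames[NFRAMES] = {-1, -1, -1, -1};

/* Record an access to `page`, evicting the LRU page if needed.
 * The ordered array is the mechanism; "evict the least recently used
 * page" is the policy it implements. */
static void touch(int page)
{
    int i;
    for (i = 0; i < NFRAMES; i++)        /* already resident? */
        if (frames[i] == page)
            break;

    if (i == NFRAMES) {                  /* not resident: evict the back */
        if (frames[NFRAMES - 1] != -1)
            printf("evicting page %d (least recently used)\n",
                   frames[NFRAMES - 1]);
        i = NFRAMES - 1;
    }

    /* shift everything down one slot and put `page` at the front */
    memmove(&frames[1], &frames[0], i * sizeof frames[0]);
    frames[0] = page;
}

int main(void)
{
    int accesses[] = {1, 2, 3, 4, 1, 5};   /* accessing 5 evicts page 2 */
    for (size_t k = 0; k < sizeof accesses / sizeof accesses[0]; k++)
        touch(accesses[k]);
    return 0;
}
```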
What does the principle of separation of mechanism and policy mean?
how you enforce a policy shouldn’t be coupled to the policy itself.
That being said, certain policies may occur more frequently than others, so it may make sense to optimize our mechanisms a little bit in one direction or another while still maintaining their flexibility.
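One way to picture the separation (a sketch of mine, not anything from the lectures): the eviction mechanism below takes the policy as a function pointer, so the policy can be swapped without touching the mechanism.

```c
#include <stdio.h>

/* The mechanism: pick a victim frame using whatever policy it is handed.
 * Swapping the policy never requires changing this code. */
typedef int (*victim_policy)(const int ages[], int n);

static int evict(const int ages[], int n, victim_policy policy)
{
    return policy(ages, n);
}

/* One possible policy: evict the frame with the oldest access time (LRU-like). */
static int oldest_first(const int ages[], int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (ages[i] < ages[victim])
            victim = i;
    return victim;
}

int main(void)
{
    int last_access[] = {30, 5, 42, 17};   /* hypothetical access timestamps */
    printf("victim frame: %d\n", evict(last_access, 4, oldest_first));
    return 0;
}
```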
What does the principle "optimize for the common case" mean?
ensuring that the most frequent path of execution operates as performantly as possible.
Optimize for the common case - cite an example of this as discussed in the SunOS paper.
A great example of this is discussed in the SunOS paper, when talking about using threads for interrupt handling instead of changing interrupt masks before entering/after exiting a mutex:
The additional overhead in taking an interrupt is about 40 SPARC instructions. The savings in the mutex enter/exit path is about 12 instructions. However, mutex operations are much more frequent than interrupts, so there is a net gain in time cost, as long as interrupts don’t block too frequently. The work to convert an interrupt into a “real” thread is performed only when there is lock contention.
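A rough back-of-the-envelope check using those numbers (this calculation is mine, not the paper's): each interrupt now costs about 40 extra instructions while each mutex enter/exit saves about 12, so the change is a net win whenever mutex operations are more than roughly 40/12 ≈ 3.3 times as frequent as interrupts - and, as the paper notes, mutex operations are much more frequent than that.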
Give two reasons why a context switch from user to kernel mode is slow.
- _____________________________________________
- _____________________________________________
This context switch takes CPU cycles to perform which is real overhead on the system.
In addition, context switching will most likely invalidate the hardware cache (hot -> cold), meaning that memory accesses for the kernel context will initially come from main memory and not from cache, which is slow.
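A rough, Linux-only sketch for seeing the first cost (illustrative only; it measures the crossing itself, not the cache effects): time a batch of trivial system calls and divide.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Unscientific timing of repeated user->kernel crossings on Linux. */
int main(void)
{
    enum { N = 1000000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        syscall(SYS_getpid);          /* each call crosses into the kernel */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per user->kernel crossing\n", ns / N);
    return 0;
}
```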
What happens during a user-kernel mode crossing?
Distinguishing between user and kernel mode is supported directly in the hardware. For instance, when operating in kernel mode, a special bit is set on the CPU, and if that bit is set, any instruction that directly manipulates the hardware is allowed. When in user mode, the bit will not be set, and any attempt to perform such privileged operations will be forbidden.
Such forbidden attempts will actually cause a trap. The executing application will be interrupted, and the hardware will switch control back to the operating system at a specific location - the trap handler. At this point, the operating system will have a chance to determine what caused the trap and then decide if it should grant access or perhaps terminate the transgressive process.
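A tiny x86-64/GCC sketch of such a trap (the program is expected to be killed by the OS): executing a privileged instruction from user mode.

```c
#include <stdio.h>

int main(void)
{
    printf("attempting a privileged instruction in user mode...\n");
    /* `hlt` is only legal when the CPU's privilege bit says kernel mode.
     * In user mode the hardware raises a protection fault, control
     * transfers to the kernel's trap handler, and on Linux the kernel
     * terminates the process with SIGSEGV. */
    __asm__ volatile ("hlt");
    printf("this line should never print\n");
    return 0;
}
```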
This context switch takes CPU cycles to perform which is real overhead on the system. In addition, context switching will most likely invalidate the hardware cache (hot -> cold), meaning that memory accesses for the kernel context will initially come from main memory and not from cache, which is slow.