Midterm Flashcards
Describe boss-worker multi-threading.
boss worker multithreading
you have one big boss
that calls n worker threads.
the boss hands out the tasks,
the workers do the work, 'nuff said.
a shared request queue/buffer,
that's how we do:
the boss adds to the queue,
the workers consume.
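The shared-queue flow on this card can be sketched in Python. This is a minimal illustration, not course-provided code: the worker count, the integer tasks, and the "doubling" work are all made up for the example.

```python
import queue
import threading

# Boss/worker sketch: one boss thread enqueues tasks onto a shared
# request queue; NUM_WORKERS worker threads dequeue and process them.
NUM_WORKERS = 4
requests = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        task = requests.get()        # workers consume from the shared queue
        if task is None:             # sentinel: boss says we're done
            return
        with results_lock:
            results.append(task * 2) # "process" the task (illustrative work)

workers = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for w in workers:
    w.start()

# The boss only assigns work; it never processes tasks itself.
for task in range(10):
    requests.put(task)
for _ in workers:
    requests.put(None)               # one shutdown sentinel per worker
for w in workers:
    w.join()
```

Note the design choice: the boss touches the queue and nothing else, which keeps the boss's per-task cost low (the theme of the next card).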
How do you improve B/W MT throughput or response time?
improve throughput
or response time
i said throughput
or response time
Increase thread count
Increase size of pool
Make the boss do less
so they ain’t no fool
Describe the pipelined multithreading pattern.
PLMTP, so many stages:
split the work into stages,
each stage handled by its own thread(s),
results pass from stage to stage like an assembly line.
If you need to improve a performance metric like throughput or response time, what could you do in a pipelined model?
PLMTP
work split into stages,
get the timing info from the profiling pages,
allocate more threads to the bottleneck stage,
pass it down like sage
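A minimal sketch of the pattern, assuming two illustrative stages (double, then add one) with one thread and one input queue per stage; stage contents and the shutdown sentinel are made up for the example.

```python
import queue
import threading

# Pipeline sketch: each stage has its own thread and an input queue;
# a stage's output queue is the next stage's input queue.
stage1_in = queue.Queue()
stage2_in = queue.Queue()
done = queue.Queue()

def stage(inbox, outbox, fn):
    while True:
        item = inbox.get()
        if item is None:           # propagate shutdown to the next stage
            outbox.put(None)
            return
        outbox.put(fn(item))       # do this stage's work, pass it down

t1 = threading.Thread(target=stage, args=(stage1_in, stage2_in, lambda x: x * 2))
t2 = threading.Thread(target=stage, args=(stage2_in, done, lambda x: x + 1))
t1.start()
t2.start()

for item in range(5):
    stage1_in.put(item)            # work flows stage to stage
stage1_in.put(None)
t1.join()
t2.join()

out = []
while True:
    item = done.get()
    if item is None:
        break
    out.append(item)
```

With one thread per stage, ordering is preserved; giving the slowest stage more threads (the previous card's advice) trades that ordering for throughput.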
What are the limiting factors in improving pipelined MT performance with this pattern?
limiting factors
PLMTP
Bottlenecked by the stage
that takes the longest to complete.
Difficult to keep
the pipeline balanced over time, and unique.
What are the key roles of an operating system?
Key Roles
O S
Hide hardware complexity
Resource M-G-M-T
Isolation and protection
For you and me
Can you make distinction between OS abstractions, mechanisms, policies?
Abstractions simplify interacting with hardware and system state; for example, the file abstraction simplifies reading and manipulating physical storage.
Example of abstractions include:
process/thread (application abstractions)
file, socket, memory page (hardware abstractions)
Policies are the rules around how a system is maintained and resources are utilized. For example, a common technique for swapping memory to disk is the least recently used (LRU) policy. This policy states that the memory that was least recently accessed will be swapped to disk. Policies help dictate the rules a system follows to manage and maintain a workable state. Note that policies can be different in different contexts or at different times in a system.
Mechanisms are the verbs/tools that operate on abstractions and carry out policies, e.g. COWS: create (a process), open (a file), write (to a socket), swap (a memory page). They are the means by which the OS applies its policies to the underlying hardware.
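The LRU policy mentioned above can be sketched with a tiny cache. This is a toy illustration of the eviction rule, not how a kernel implements page replacement; the capacity and keys are made up.

```python
from collections import OrderedDict

# LRU eviction policy sketch: when capacity is exceeded, evict the
# entry that was accessed least recently.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # insertion order tracks recency

    def access(self, key, value):
        # Touching a key moves it to the "most recently used" end.
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            # Evict the least recently used entry (the front).
            self.entries.popitem(last=False)

cache = LRUCache(2)
cache.access("a", 1)
cache.access("b", 2)
cache.access("a", 1)   # "a" is now most recently used
cache.access("c", 3)   # evicts "b", the least recently used
```

The same policy could sit behind a different mechanism (hardware reference bits, clock hands), which is exactly the separation the next card describes.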
What does the principle of separation of mechanism and policy mean?
mechanism policy separation
every rhyme saves the nation
how you enforce a policy
shouldn't be tied to that policy's fate:
a policy is only valid in some contexts or states,
and a mechanism that only suits one policy is brittle.
make the mechanism support a variety of policies,
as any one of them may be in effect at a time.
optimize the mechanism like a dope rhyme:
lean a little in one direction, but maintain flexibility.
separate mechanism and policy, 1-2-3
What does the principle optimize for the common case mean?
Optimizing for the common case means ensuring that the most frequent path of execution operates as performantly as possible. This is valuable for two reasons:
It's simpler than trying to optimize across all possible cases.
It leads to the largest performance gains, since you are optimizing the flow that, by definition, is executed most often.
A great example of this is discussed in the SunOS paper, when talking about using threads for signal handling instead of changing masks before entering/after exiting a mutex:
The additional overhead in taking an interrupt is about 40 SPARC instructions. The savings in the mutex enter/exit path is about 12 instructions. However, mutex operations are much more frequent than interrupts, so there is a net gain in time cost, as long as interrupts don't block too frequently. The work to convert an interrupt into a "real" thread is performed only when there is lock contention.
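The quoted numbers make for a quick back-of-the-envelope check. The 40 and 12 instruction counts come from the passage above; the operation counts plugged in below are made up for illustration.

```python
# SunOS tradeoff from the quote: each interrupt costs ~40 extra SPARC
# instructions, while each mutex enter/exit saves ~12 instructions.
interrupt_overhead = 40   # extra instructions per interrupt
mutex_savings = 12        # instructions saved per mutex enter/exit

def net_saving(mutex_ops, interrupts):
    """Net instructions saved; positive means the optimization wins."""
    return mutex_ops * mutex_savings - interrupts * interrupt_overhead

# Break-even: one interrupt is paid for by 40/12, i.e. ~3.34 mutex ops,
# so whenever mutex operations dominate (the common case), the change wins.
print(net_saving(mutex_ops=1000, interrupts=10))  # 12000 - 400 = 11600
```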
What happens during a user-kernel mode crossing?
user kernel mode cross
flex and floss
app needs access to HW:
read/write disk,
listen on a socket, allocate memory,
syscall this
What are some of the reasons why user-kernel mode crossing happens?
user kernel cross
how you like that sauce?
user kernel cross: app needs
access to hardware,
read/write the disk,
listen on a socket, allocate memory,
syscall this
What is a kernel trap? Why does it happen? What are the steps that take place during a kernel trap?
Kernel Trap
bap boom bap boom bap
Unprivileged user performs
a privileged action;
hardware traps, and control switches to the kernel.
The kernel finds the source of the trap, determines what's allowed,
then returns execution to the interrupted
user process (or terminates it),
and that's what's up kid
What is a system call? How does it happen? What are the steps that take place during a system call?
what's a syscall
what's a syscall
what's a syscall
what's a syscall
User-level app asks for a privileged action:
the user process makes the call,
control passes to the OS (mode bit set to 0),
the syscall completes in its own execution context,
then control returns to the user-level app (context switch back to user).
Sync and async, that's how we drink.
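On a Unix-like system you can see these crossings directly, since Python's os module exposes thin wrappers over the underlying syscalls. A sketch; each call below traps into kernel mode and returns to user mode, and the temp file is only there to give the kernel something to do.

```python
import os
import tempfile

# Each os.* call is a thin wrapper over a system call; every one
# crosses into kernel mode (mode bit 0) and back to user mode (bit 1).
fd, path = tempfile.mkstemp()     # open(2) under the hood
os.write(fd, b"hello kernel")     # write(2): kernel drives the disk I/O
os.lseek(fd, 0, os.SEEK_SET)      # lseek(2): move the file offset back
data = os.read(fd, 12)            # read(2): kernel copies bytes to user space
os.close(fd)                      # close(2)
os.unlink(path)                   # unlink(2): remove the temp file
```

Running such a script under strace shows each of these crossings as one line of output, which is a handy way to study them.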
Contrast the design decisions and performance tradeoffs among monolithic, modular and microkernel-based OS designs.
Monolithic
Pros
Everything included
Inlining, compile-time optimizations
Cons
No customization
Not too portable/manageable
Large memory footprint (which can impact performance)
Modular
Pros
Maintainability
Smaller footprint
Less resource needs
Cons
All the modularity/indirection can reduce some opportunities for optimization (but eh, not really)
Maintenance can still be an issue as modules from different codebases can be slung together at runtime
Microkernel
Pros
Size
Verifiability (great for embedded devices)
Cons
Bad portability (often customized to underlying hardware)
Harder to find common OS components due to specialized use case
Expensive cost of frequent user/kernel crossing
Process vs. thread, describe the distinctions. What happens on a process vs. thread context switch.
process vs threads
process vs threads
get it in your head
process vs threads
virtual addy map, execution context, clap.
The addy map holds the code,
the init'd data, and the heap, mode.
The execution context
has the stack and the CPU
registers associated with the process's execution, messenger.
Diff processes
diff virtual address map
and diff execution contexts,
repped by the process control block bap
Diff threads exist within
the same process,
share the virtual addy map
of the process, but have diff execution contexts.
As a result, a multiprocess application has a larger memory footprint than a multithreaded app.
Greater memory needs mean that data must be swapped to disk more often, so multithreaded applications will tend to be more performant than multiprocess applications.
In addition, process-to-process communication via IPC is more resource intensive than thread-to-thread communication, which often just consists of reading/writing shared variables.
Since threads share more data than processes, less data needs to be swapped during a context switch. Because of this, thread context switching can be performed more quickly than process context switching. Process context switching also involves the indirect cost of going from a hot cache to a cold cache: when a new process is swapped in, most of the information it needs is still in main memory and must be brought into the hardware cache. Since threads share more information, i.e. have more locality with one another, a new thread may still be able to benefit from the cache that was used by an older thread.
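The shared-address-map distinction can be demonstrated directly. A Unix-only sketch (it uses os.fork); the counter dict is made up for the example.

```python
import os
import threading

counter = {"value": 0}

def bump():
    counter["value"] += 1

# A thread shares the parent's virtual address map: its write is visible.
t = threading.Thread(target=bump)
t.start()
t.join()
after_thread = counter["value"]    # now 1

# A forked child gets its own copy of the address space: its write
# happens in the copy, and the parent never sees it.
pid = os.fork()
if pid == 0:                       # child process
    bump()
    os._exit(0)
os.waitpid(pid, 0)                 # parent waits for the child
after_fork = counter["value"]      # still 1
```

The same contrast underlies the IPC point above: the thread communicated by writing a shared variable for free, while the processes would need an explicit IPC channel to share the update.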
Describe the states in the lifetime of a process.
lifetime
of the process
time of a lover
new new ready ready running running waited waited terminated
new state. At this point, the operating system initializes the PCB for the process, then admits it to the
ready state. In this state it is able to be executed, but it is not being executed. Once the process is scheduled and moved onto the CPU, it is in the
running state. If the process is then interrupted by the scheduler, it moves back to the ready state. If the process is running and then makes an I/O request, it will move onto the wait queue for that I/O device and be in
the waited state. After the request is serviced, the process will move back to the ready state. If a running process exits, it moves to
the terminated state.
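The lifecycle above can be written out as an explicit transition table. The state names come from the card; event names like "admit" and "io_request" are illustrative labels, not course terminology.

```python
# Process lifecycle as a state machine: (state, event) -> next state.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "schedule"): "running",
    ("running", "interrupt"): "ready",
    ("running", "io_request"): "waited",
    ("waited", "io_complete"): "ready",
    ("running", "exit"): "terminated",
}

def step(state, event):
    # Raises KeyError on an illegal transition,
    # e.g. trying to schedule a terminated process.
    return TRANSITIONS[(state, event)]

# Walk one possible lifetime:
# new -> ready -> running -> waited -> ready -> running -> terminated
state = "new"
for event in ["admit", "schedule", "io_request", "io_complete", "schedule", "exit"]:
    state = step(state, event)
```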