Midterm Flashcards

1
Q

What are the key roles of an operating system?

A
  • Manage resources (e.g., controls use of CPU, memory, peripheral devices)
  • Enforce policies (e.g., fair resource access, limits resource usage)
  • Provide abstractions that minimize complexity (e.g., abstract hardware details with system calls)
  • Provide isolation and protection
2
Q

What is the distinction among OS abstractions, mechanisms, and policies?

A

An abstraction provides a simple interface that hides implementation details and hardware complexity. (e.g., process, file, memory page)

A mechanism is an implementation that acts on an abstraction. (e.g., create/schedule a thread, open a file, allocate memory, mutual exclusion)

A policy defines the behavioral rules that a mechanism should follow (e.g., least recently used cache expiration, first in first out queue)

3
Q

What does the principle of separation of mechanism and policy mean?

A

The separation of mechanism and policy means a mechanism should not dictate a single policy; it should be flexible enough to support many. This gives us the freedom to combine mechanisms and policies in various ways to solve many problems.

4
Q

What does the principle “optimize for the common case” mean?

A

We identify the common case and design for it, and we never sacrifice the performance of the common case for an edge case.

5
Q

What happens during a user-kernel mode crossing?

A

When a process running in user mode must take privileged actions, such as interacting with hardware, it uses a system call to do so. A privileged (mode) bit in the CPU lets the hardware distinguish whether the currently executing code is privileged and therefore permitted to perform such actions. If a user-mode process attempts a privileged operation directly, a trap is raised and control passes to the OS, which determines whether the process should be terminated or control returned to the user-level thread.

6
Q

What are some of the reasons why user-kernel mode crossing happens?

A
  • System calls (e.g., write to a file, allocate memory)
  • Interprocess communication via message passing
7
Q

What is a kernel trap?

Why does it happen?

What are the steps that take place during a kernel trap?

A

A trap is a mechanism by which a process is transitioned from user mode to kernel mode.

It happens due to:

  1. a software interrupt (e.g., system call)
  2. an exception or access violation (e.g., a program tries to execute an instruction that is only available in kernel mode, or divides by zero)
  3. a hardware interrupt (e.g., a timer or network device)

When a trap is initiated, the OS takes control and determines whether the request is allowed. If not, it may terminate the process; if so, it executes the request and returns control to the program.

8
Q

What is a system call?

How does it happen? What are the steps that take place during a system call?

A

A system call is a way for a user program to ask the operating system to perform a privileged task on its behalf.

When a system call is executed, control passes to a service routine in the OS and the privilege (mode) bit is set to kernel mode. The kernel executes the request and returns control to the user program.
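
For example, a minimal sketch in C on a POSIX system: calling the libc write() wrapper issues the write system call and triggers exactly this crossing.

    #include <unistd.h>   /* write(), STDOUT_FILENO */

    int main(void) {
        /* write() issues the write system call: the CPU switches to
           kernel mode, the kernel's service routine does the work on
           our behalf, and control returns here in user mode. */
        write(STDOUT_FILENO, "hello\n", 6);
        return 0;
    }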

9
Q

Contrast the design decisions and performance tradeoffs among monolithic, modular and microkernel-based OS designs.

A

A monolithic OS has every type of service that any application or hardware could require.

Pros:

  • everything included
  • inlining, compile-time optimizations

Cons:

  • customization, portability, manageability
  • memory footprint
  • performance

A modular OS has basic services and APIs, and everything else can be customized, because the OS specifies interfaces that modules must implement (a minimal module sketch follows the lists below).

Pros:

  • maintainability / upgradability
  • smaller footprint
  • less resource needs

Cons:

  • indirection can impact performance
  • maintenance can be an issue as modules come from different codebases and can introduce bugs
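
As a concrete illustration (a minimal sketch, assuming a Linux system with kernel headers installed): a loadable kernel module implements the init/exit entry points the kernel specifies and can be loaded or unloaded at runtime with insmod/rmmod.

    /* hello.c -- minimal loadable kernel module (sketch) */
    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)   /* called on insmod */
    {
        pr_info("hello: module loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)  /* called on rmmod */
    {
        pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);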

A microkernel OS keeps only the most basic primitives (e.g., address spaces, threads) at the kernel level; all other services (file systems, device drivers, etc.) run outside the kernel at user level and communicate with it via IPC.

Pros:

  • size
  • verifiability

Cons:

  • portability
  • complexity of software development
  • cost of user/kernel crossing.
10
Q

What are the distinctions between a process and a thread?

What happens on a process vs. thread context switch?

A

A process is a program executing within a virtual address space. A thread is a subset of a process. It shares virtual address space with its process and other threads created by that process, but it has its own execution context (stack, registers, and program counter).

During a process context switch the OS has to swap in the new process’s virtual-to-physical address mappings (e.g., load a new page table), which is expensive and loses warm cache/TLB state. During a thread context switch the virtual-to-physical address mappings remain the same, making the switch much less expensive.

11
Q

Describe the states in the lifetime of a process.

A
  • New: the process is being created
  • Ready: The process is waiting to be assigned to a processor
  • Running: Instructions are being executed
  • Waiting: The process is waiting for some event to occur (e.g., I/O completion or signal)
  • Terminated: The process has finished executing
12
Q

Describe the lifetime of a thread.

A
  • Born: The thread was just created
  • Ready: The thread is waiting for the processor (CPU)
  • Running: The thread is executing
  • Blocked: The thread is waiting for an event to occur or waiting for an I/O device
  • Waiting: The thread is waiting to be notified by another thread
  • Sleeping: The thread will become ready after the sleep expires
  • Dead: The thread has finished executing
13
Q

Describe all the steps that take place for a process to transition from a waiting (blocked) state to a running (executing on the CPU) state.

A

The event the process is waiting on (e.g., I/O completion, a signal) must occur; the process is then placed on the ready queue. The CPU scheduler eventually selects it from the ready queue, restores the execution state saved in its PCB onto the CPU (registers, program counter, stack pointer), and runs it.

14
Q

What are the pros-and-cons of message-based vs. shared-memory-based IPC?

A

Message-based inter-process communication involves the OS setting up a shared communication channel that processes use to exchange messages.

Pros:

  • Can leverage the OS to manage communication, which comes with protections.
  • Processes don’t have to be on the same machine.

Cons:

  • Every call to send/receive a message has to cross the user/kernel boundary, which is expensive.

Shared-memory-based inter-process communication involves the OS setting up a segment of shared memory and mapping it into each process’s address space (see the sketch after the lists below).

Pros:

  • Don’t need to cross the user-kernel boundary

Cons:

  • It’s expensive to set up, so it’s only worth it if the cost can be amortized across many uses.
  • OS is not involved, which means there’s a lack of protection (i.e., processes have to do their own orchestration, etc.)
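
A minimal sketch of the shared-memory setup using POSIX APIs (the segment name "/demo" and the 4 KB size are arbitrary choices for illustration):

    #include <fcntl.h>     /* O_CREAT, O_RDWR */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>  /* shm_open, mmap */
    #include <unistd.h>    /* ftruncate */

    int main(void) {
        /* Ask the OS to create/open a named shared memory segment. */
        int fd = shm_open("/demo", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        ftruncate(fd, 4096);            /* size the segment */

        /* Map it into this process's address space; another process
           that maps "/demo" sees the same memory. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;

        strcpy(p, "hello via shared memory");
        printf("%s\n", p);

        munmap(p, 4096);
        shm_unlink("/demo");            /* remove the name */
        return 0;
    }
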
15
Q

What are benefits of multithreading?

When is it useful to add more threads, when does adding threads lead to pure overhead?

What are the possible sources of overhead associated with multithreading?

A

+ Parallelization: speed up the time to complete work if there are multiple CPUs.

+ Specialization: give higher priority to certain types of tasks, and improve performance by executing a smaller portion of code so more of it stays in the processor cache (hotter cache).

+ More memory efficient: threads share an address space so context switches are less expensive and the application is more likely to fit into memory and not require as many swaps from disk.

+ Lower communication overhead: communicating between processes is more costly than communicating between threads.

It’s useful to add more threads when work can be parallelized or specialized; adding threads beyond the available parallelism (e.g., more CPU-bound threads than cores, with no I/O to overlap) is pure overhead.

Possible sources of overhead associated with multithreading include synchronization, shared memory management…

16
Q

Describe the boss-worker multithreading pattern.

If you need to improve a performance metric like throughput or response time, what could you do in a boss-worker model?

What are the limiting factors in improving performance with this pattern?

A

In the boss-worker pattern, the main code acts as the boss and spawns worker threads. The boss accepts requests and puts the work into a queue; the workers pull work from the queue.

To improve throughput you need to make sure the boss does as little work per request as possible: throughput = 1 / boss_time_per_request

The limiting factor in improving performance is the time it takes the boss to process a request.
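
A minimal boss-worker sketch with POSIX threads (queue size, worker count, and request numbers are arbitrary): the boss only enqueues; the workers dequeue and do the real work.

    #include <pthread.h>
    #include <stdio.h>

    #define QUEUE_SIZE  16
    #define NUM_WORKERS 4

    static int queue[QUEUE_SIZE], head, tail, count;
    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

    /* Worker: pull a request off the queue and process it (forever
       in this sketch). */
    static void *worker(void *arg) {
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&not_empty, &lock);
            int req = queue[head];
            head = (head + 1) % QUEUE_SIZE;
            count--;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&lock);
            printf("worker %ld handled request %d\n", (long)arg, req);
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid[NUM_WORKERS];
        for (long i = 0; i < NUM_WORKERS; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);

        /* Boss: do as little per request as possible -- just enqueue. */
        for (int req = 0; req < 32; req++) {
            pthread_mutex_lock(&lock);
            while (count == QUEUE_SIZE)
                pthread_cond_wait(&not_full, &lock);
            queue[tail] = req;
            tail = (tail + 1) % QUEUE_SIZE;
            count++;
            pthread_cond_signal(&not_empty);
            pthread_mutex_unlock(&lock);
        }
        pthread_exit(NULL);  /* exit the boss; workers keep draining */
    }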

17
Q

Describe the pipelined multithreading pattern.

If you need to improve a performance metric like throughput or response time, what could you do in a pipelined model?

What are the limiting factors in improving performance with this pattern?

A

In the pipeline pattern a job to be done is broken up into stages that happen in succession. As requests come in they are placed into the head of the pipeline.

To improve a performance metric like throughput, we first try to make all stages take the same amount of time; barring that, we assign enough threads to each stage that a job never has to wait to enter it.

The limiting factor in improving performance with this pattern is the slowest stage.

18
Q

What’s a mutex?

A

A mutex is a construct for mutual exclusion and is used to make sure that only one thread has access to shared data at a time.
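
A minimal sketch with POSIX threads: without the mutex, two threads could interleave the read-modify-write on the counter and lose updates.

    #include <pthread.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static long shared_counter;

    void *increment(void *arg) {
        pthread_mutex_lock(&m);     /* enter the critical section */
        shared_counter++;           /* at most one thread is here */
        pthread_mutex_unlock(&m);   /* exit the critical section */
        return NULL;
    }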

19
Q

What’s a condition variable?

A

A condition variable is a synchronization construct that lets a thread atomically release a mutex and wait for an event (a signal from another thread); when woken, the thread re-acquires the mutex before proceeding.

20
Q

Quickly write the steps/code for entering/exiting a critical section for problems such as reader/writer, or reader/writer with selective priority (e.g., reader priority vs. writer priority).

A

TODO
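
One possible answer: a minimal readers-priority sketch with POSIX threads. (For writer priority, also track the number of waiting writers and make reader_enter wait while any writer is waiting.)

    #include <pthread.h>

    static pthread_mutex_t m        = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  read_ok  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  write_ok = PTHREAD_COND_INITIALIZER;
    static int readers;   /* readers currently in the critical section */
    static int writing;   /* 1 while a writer is in the critical section */

    void reader_enter(void) {
        pthread_mutex_lock(&m);
        while (writing)                  /* readers only wait for a writer */
            pthread_cond_wait(&read_ok, &m);
        readers++;
        pthread_mutex_unlock(&m);
    }

    void reader_exit(void) {
        pthread_mutex_lock(&m);
        if (--readers == 0)              /* last reader out lets a writer in */
            pthread_cond_signal(&write_ok);
        pthread_mutex_unlock(&m);
    }

    void writer_enter(void) {
        pthread_mutex_lock(&m);
        while (writing || readers > 0)   /* writers wait for everyone */
            pthread_cond_wait(&write_ok, &m);
        writing = 1;
        pthread_mutex_unlock(&m);
    }

    void writer_exit(void) {
        pthread_mutex_lock(&m);
        writing = 0;
        pthread_cond_broadcast(&read_ok);  /* wake all waiting readers */
        pthread_cond_signal(&write_ok);    /* ...or one waiting writer */
        pthread_mutex_unlock(&m);
    }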

21
Q

What are spurious wake-ups, how do you avoid them, and can you always avoid them?

A

A spurious wake-up is one where a thread is signaled (woken) when it’s not actually able to proceed (e.g., signal/broadcast is called before the mutex is unlocked, so the woken threads immediately block on the mutex).

They can be avoided by signaling threads only outside the critical section (after unlocking the mutex), or by calling signal instead of broadcast when only one waiter can proceed.

It’s not always possible to avoid a spurious wake-up, because sometimes a check of shared state is needed to determine whether threads should be signaled, and that check must be done inside the critical section.

22
Q

Why do you need a while() loop for the predicate check in the critical section entry code examples in the lessons?

A

Between the moment a thread is signaled and the moment it reacquires the lock, the condition may have changed (another thread may have run first), so it must be re-checked while holding the lock. An if statement would check the predicate only once, before waiting; a while loop forces the thread to loop back and re-check before proceeding into the critical section.
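
In code form, a minimal consumer-side sketch (the names m, not_empty, and count are illustrative; a producer is assumed to increment count and signal not_empty):

    #include <pthread.h>

    static pthread_mutex_t m         = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static int count;

    void consume_one(void) {
        pthread_mutex_lock(&m);
        while (count == 0)               /* NOT "if (count == 0)" */
            pthread_cond_wait(&not_empty, &m);
        /* Holding the mutex here, count > 0 is guaranteed. */
        count--;
        pthread_mutex_unlock(&m);
    }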

23
Q

What’s a simple way to prevent deadlocks? Why?

A

If you must obtain more than one mutex, always obtain them in the same order. That way it won’t be possible for one thread to hold one mutex while another thread holds the other, each waiting forever for the lock its peer holds.
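
A minimal sketch: every thread takes the locks in the same order (m_a before m_b), so a circular wait cannot form.

    #include <pthread.h>

    static pthread_mutex_t m_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t m_b = PTHREAD_MUTEX_INITIALIZER;

    void update_both(void) {
        pthread_mutex_lock(&m_a);    /* always m_a first... */
        pthread_mutex_lock(&m_b);    /* ...then m_b */
        /* ... touch data guarded by both locks ... */
        pthread_mutex_unlock(&m_b);
        pthread_mutex_unlock(&m_a);
    }

    /* Deadlock-prone variant: if another thread locked m_b then m_a,
       it could hold m_b while we hold m_a, each waiting forever. */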

24
Q

Explain the relationship between kernel-level and user-level threads. Think through a general m:n scenario (as described in the Solaris papers) and the current Linux model. What happens during scheduling, synchronization, and signaling in these cases?

A

TODO

25
Q

Can you explain why some of the mechanisms described in the Solaris papers (for configuring the degree of concurrency, for signaling, the use of LWPs…) are not used or necessary in the current threads model in Linux?

A

TODO

26
Q

What’s an interrupt? What’s a signal? What happens during interrupt or signal handling? How does the OS know what to execute in response to an interrupt or signal? Can each process configure its own signal handler? Can each thread have its own signal handler?

A

TODO
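
A minimal sketch for the signal-handler part of this question (in POSIX, a signal disposition installed with sigaction() is per-process and shared by all threads, while signal masks are per-thread via pthread_sigmask()):

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void on_sigint(int signo) {
        (void)signo;
        /* Only async-signal-safe calls belong here: write() is safe,
           printf() is not. */
        const char msg[] = "caught SIGINT\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;     /* per-process handler entry */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);  /* tell the OS what to run on SIGINT */

        pause();                       /* block until a signal arrives */
        return 0;
    }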

27
Q

What’s the potential issue if an interrupt or signal handler needs to lock a mutex? What’s the workaround described in the Solaris papers?

A

TODO

28
Q

Contrast the pros-and-cons of a multithreaded (MT) and multiprocess (MP) implementation of a webserver, as described in the Flash paper.

A

Multi-process (MP) Implementation

Pros:

  • Simpler to implement (no need to worry about synchronization)
  • More portable (doesn’t depend on the OS supporting threads)

Cons:

  • High cost of context switching
  • Larger memory footprint
  • Communication between processes is expensive

Multi-threaded (MT) Implementation

Pros:

  • Smaller memory footprint
  • Cheaper to context switch
  • Communication between threads (via the shared address space) is cheap

Cons:

  • Complexity from synchronization
  • Requires OS support for kernel threads (less portable)
29
Q

What are the benefits of the event-based model described in the Flash paper over MT and MP? What are the limitations? Would you convert the AMPED model into an AMTED (async multi-threaded event-driven) model? How do you think an AMTED version of Flash would compare to the AMPED version of Flash?

A

The application is implemented in a single address space. There is a single process and a single thread of control. It achieves concurrency by interleaving requests in a single execution context.

Benefits of the event-based model:

  • Single address space. Single flow of control.
  • Smaller memory requirement. No context switching.
  • No synchronization.

Benefits of helper threads/processes:

  • Resolves portability limitations of basic event-driven model
  • Smaller footprint than regular worker threads, because we only need as many helper threads/processes as there are concurrent blocking operations (helpers are spawned just for blocking work), as opposed to MP/MT, which spawn a worker per request.

Limitations:

  • Some loss of data locality due to switching between requests, but it pales in comparison to the cost of context switching.
  • A blocking request/handler will block the entire process (can be mitigated with asynchronous I/O operations or helpers)
  • Applicability to certain classes of applications
  • Event routing on multi-CPU systems
30
Q

There are several sets of experimental results from the Flash paper discussed in the lesson. Do you understand the purpose of each set of experiments (what was the question they wanted to answer)? Do you understand why each experiment was structured in a particular way (why they chose the variables to be varied, the workload parameters, the measured metrics…)?

A

TODO

31
Q

If you ran your server from the class project for two different traces: (i) many requests for a single file, and (ii) many random requests across a very large pool of very large files, what do you think would happen as you add more threads to your server? Can you sketch a hypothetical graph?

A

TODO