P2L4: Thread Design Considerations Flashcards

1
Q

Name the conceptual data structures that are needed in user and kernel space for a thread abstraction (they should not all be in one process-level PCB). Not specific to SunOS.

A
  • User thread structure (user-level thread ID, user-level stack pointer, user-level register values)
  • PCB with “hard” process state (virtual address mappings)
  • PCB with “light” process state (only relevant for the subset of user-level threads currently scheduled on ONE kernel-level thread: signal mask + syscall arguments)
  • Struct for the kernel-level thread (points to the hard-process-state PCB for the shared virtual address mappings; own stack + own register values such as the program counter)
  • CPU struct
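
A minimal sketch of how these structures could be laid out in C. All struct and field names are illustrative assumptions for a generic m:n threading design, not actual SunOS/Solaris definitions.

```c
/* Illustrative only: names and fields are assumptions, not real kernel code. */

/* User-level thread struct, managed entirely by the ULT library. */
typedef struct ult {
    int         tid;        /* user-level thread ID */
    void       *stack_ptr;  /* user-level stack pointer */
    void       *regs;       /* saved user-level register values */
    struct ult *next;       /* run-queue link inside the library */
} ult_t;

/* "Hard" process state: shared by all threads of the process. */
typedef struct hard_pcb {
    void *page_table;       /* virtual address mappings */
    /* ... other per-process state (credentials, open files, ...) */
} hard_pcb_t;

/* "Light" process state: only for the ULT(s) currently mapped onto one KLT. */
typedef struct light_pcb {
    unsigned long signal_mask;   /* signal mask of the currently running ULT */
    void         *syscall_args;  /* syscall arguments in flight */
} light_pcb_t;

/* Kernel-level thread: own stack/registers, shared address space. */
typedef struct klt {
    hard_pcb_t  *process;   /* points to the shared hard process state */
    light_pcb_t *light;     /* light state of the ULT it currently runs */
    void        *kstack;    /* own kernel stack */
    void        *regs;      /* own register values, e.g. program counter */
} klt_t;

/* Per-CPU struct. */
typedef struct cpu {
    klt_t *current;         /* kernel-level thread currently dispatched here */
} cpu_t;
```
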
2
Q

Why does a modern OS typically maintain the information about the execution context of processes and threads in the system in several data structures?

A

The data structures that make up the PCB of a process are split up:

  • Overhead: smaller memory footprint. Shared data can be referenced instead of copied.
  • Scalability: a single fat data structure would be large and would need to be copied for each newly created thread, even though threads share much of that information.
  • Performance: context switches are faster (less data to save and restore from memory; the virtual address space does not change between threads of the same process).
  • Flexibility: too many parallel writers if there were only one data structure
    • makes locking inefficient
  • The user-level thread library only needs to update a smaller portion (not the whole PCB) via a well-defined interface from user level
3
Q

Name examples of when the kernel and user space need to coordinate for thread management.

A
  • The kernel signals the thread library before blocking a thread => the ULT library can use a system call to create an additional kernel-level thread
  • The kernel notifies the thread library that it removed a kernel-level thread (e.g. because it was idling too much)
4
Q

What is the difference between CPU/Thread Pinning and bound threads?

A
  • Bound thread -> one ULT is assigned to exactly one kernel-level thread (or LWP in the Solaris case)
  • CPU/thread pinning -> a kernel thread is pinned to a particular CPU (e.g. via the cpuset cgroup controller on Linux); see the sketch below
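
A hedged sketch of both concepts on Linux with POSIX threads: PTHREAD_SCOPE_SYSTEM requests what Solaris calls a bound thread (backed 1:1 by a kernel entity), and sched_setaffinity pins the calling kernel thread to one CPU. Error handling is minimal and pinning to CPU 0 is an arbitrary example.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    /* CPU pinning: restrict the calling kernel thread to CPU 0. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0 /* 0 = calling thread */, sizeof(set), &set) != 0)
        perror("sched_setaffinity");
    puts("worker is pinned to CPU 0");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    /* "Bound thread": system contention scope, i.e. the user thread is
       backed 1:1 by a kernel-level thread / LWP rather than multiplexed. */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_t t;
    pthread_create(&t, &attr, worker, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```
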
5
Q

What is a disadvantage of having both user and kernel threads?

A

Lack of visibility between the kernel and the ULT library requires coordination via signals and special system calls.
The 1:1 model helps here.

6
Q

Name problems in multi-threaded OSes that stem from the lack of visibility between kernel-level and user-level threads.

A
  • The ULT library does not know when a kernel-level thread blocks due to I/O => it might have more user-level threads it could run concurrently (note: the ULT library does know that the UL thread's code performs blocking operations, but it is not told when the kernel-level thread actually blocks)
  • The ULT library does not know when the kernel decides to remove an idling kernel-level thread
  • The kernel preempts a kernel-level thread that runs a critical user-level thread => other user-level threads wait for the mutex to be released => the kernel is unaware of the user-level mutex data structures
7
Q

Name examples of when the ULT library scheduler is invoked/executed.

A
  • A ULT yields (the library scheduler needs to schedule another ULT to run on the kernel-level thread)
  • A timer set by the ULT library expires (the time slice is up for a ULT, it needs to be preempted)
  • ULTs call library functions (lock, unlock, …)
  • Blocked ULTs become runnable
  • Coordination between the kernel & the ULT library (the kernel sends signals)
8
Q

What is a problem in thread management (user & kernel space interaction) specific to multicore CPUs?

A

(See lecture video 13.)

CPU 1 needs to signal CPU 2 to invoke the user-level scheduler when a condition (such as a released lock) changes in a thread executing on CPU 1.

Why: CPU 2 might need to preempt the UL thread currently executing on its KLT and schedule another UL thread with higher priority.

9
Q

What are adaptive mutexes for and when does it make sense to use them?

A

An adaptive mutex behaves like a spin lock:
just burn CPU cycles waiting for the mutex to be freed instead of blocking.
It makes sense when:
- the mutex will soon be released / the critical section is short (you need some idea of what kind of critical section is associated with the mutex)
- there are multiple CPUs/cores, so owner and waiter can execute IN PARALLEL (see the sketch below)
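
A minimal spin-wait sketch in C11 to show what "burning cycles" means. This is a plain test-and-set spin lock, not the full Solaris adaptive logic, which would also check whether the lock owner is currently running on another CPU and fall back to blocking otherwise.

```c
#include <stdatomic.h>

/* Plain spin lock; initialize with: spinlock_t l = { ATOMIC_FLAG_INIT }; */
typedef struct { atomic_flag locked; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    /* Burn CPU cycles until the owner clears the flag.  Only sensible when
       the owner runs in parallel on another CPU and the critical section
       is short, so the wait costs less than blocking and rescheduling. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ; /* spin */
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```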

10
Q

Interrupt vs. Signal

A
  • An interrupt is generated by an external component (hardware device, timer, another CPU!)
  • A signal is triggered by the CPU itself and the software running on it
  • Which interrupts exist depends on the HARDWARE vs. which signals exist depends on the OS
  • Interrupts are always asynchronous vs. signals are either asynchronous or synchronous (e.g. a process accesses memory it should not -> SIGSEGV is sent right away)
  • Interrupts are delivered to a CPU vs. signals are delivered to A PROCESS
11
Q

Similarities Interrupt & Signal

A
  • Both have a unique ID
  • Both can be masked
  • Both have a handler routine
    • Interrupt handlers are set for the entire system by the OS
    • Signal handlers are set per process (by user code); see the sketch below
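
A small illustration of the signal side: sigaction installs a per-process handler from user code, and sigprocmask masks the signal so its delivery is deferred. SIGUSR1 is an arbitrary choice.

```c
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_usr1(int sig) {
    (void)sig;
    /* Handler routine, set per process by user code. */
    write(STDOUT_FILENO, "got SIGUSR1\n", 12);
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_usr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);          /* install the handler */

    sigset_t block;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    sigprocmask(SIG_BLOCK, &block, NULL);   /* masked: delivery is deferred */
    raise(SIGUSR1);                         /* now pending, handler not run */
    sigprocmask(SIG_UNBLOCK, &block, NULL); /* unmasked: handler runs now */
    return 0;
}
```
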
12
Q

What are asynchronous signals? Name two examples.

A

Asynchronous signals => result from some action outside the process.

SIGALRM (timer expired)
SIGINT (interrupt from the terminal, e.g. Ctrl-C)
SIGKILL (sent from outside the process, e.g. via kill)

13
Q

What are synchronous signals? Name two examples.

A

Synchronous signals result from an operation performed by the thread itself and are delivered in sync with that operation.

  • SIGFPE (divide by zero)
  • SIGSEGV
14
Q

Can an interrupt be masked, and if yes, does the kernel do that?

A

Yes, it can be masked. The kernel/OS is not the one suppressing it: if the interrupt is masked, the hardware interrupt routing simply will not deliver the interrupt to the CPU.

15
Q

What types/categories of signals exist?

A

Real-time signals (reliable)
  • n raised signals cause n handler executions

One-shot signals (unreliable)
  • n signals pending (masked by the process) == 1 signal pending
  • the handler routine has to be reinstalled by the handler itself
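
A hedged sketch of the "n raised signals cause n handler executions" property on a POSIX system: real-time signals sent with sigqueue are queued while blocked, so both instances are delivered after unblocking, whereas a standard signal pending twice would collapse into one delivery.

```c
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t count = 0;

static void handler(int sig) { (void)sig; count++; }

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGRTMIN, &sa, NULL);

    /* Block the real-time signal, queue it twice, then unblock. */
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &set, NULL);
    sigqueue(getpid(), SIGRTMIN, (union sigval){ .sival_int = 1 });
    sigqueue(getpid(), SIGRTMIN, (union sigval){ .sival_int = 2 });
    sigprocmask(SIG_UNBLOCK, &set, NULL);

    /* Real-time signals are queued, so the handler ran twice. */
    printf("handler ran %d times\n", (int)count);
    return 0;
}
```
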
16
Q

Name a caveat of one-shot signals.

A

The handler routine has to be reinstalled by the handler itself => subsequent signals may be lost (or trigger the default action) if the handler is not reinstalled quickly enough.
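
A sketch of the classic reinstall pattern with the old signal() interface, assuming System V one-shot semantics where the disposition resets to default on delivery. On modern systems signal() behaviour varies; sigaction with SA_RESETHAND reproduces one-shot semantics portably.

```c
#include <signal.h>
#include <unistd.h>

static void on_usr1(int sig) {
    /* One-shot semantics: the disposition was reset to SIG_DFL on delivery,
       so the handler must reinstall itself first.  A signal arriving in the
       window before this line hits the default action and is "lost". */
    signal(sig, on_usr1);
    write(STDOUT_FILENO, "SIGUSR1\n", 8);
}

int main(void) {
    signal(SIGUSR1, on_usr1);   /* initial installation */
    for (;;)
        pause();                /* wait for signals forever */
}
```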

17
Q

Name a situation when it makes sense to mask interrupts or signals.

A

To prevent a deadlock with mutexes.

This happens when the handler routine tries to lock a mutex that is already held by the interrupted thread (see the sketch below).
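
A hedged pthreads sketch of both the deadlock and its prevention: the handler locks a mutex that the interrupted thread may already hold, so the thread masks the signal (pthread_sigmask) for the duration of its critical section. SIGUSR1 and the function names are illustrative.

```c
#include <pthread.h>
#include <signal.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Dangerous handler: if it interrupts a thread that already holds m,
   this lock can never be acquired -> deadlock within a single thread. */
static void handler(int sig) {
    (void)sig;
    pthread_mutex_lock(&m);
    /* ... update shared state ... */
    pthread_mutex_unlock(&m);
}

static void critical_section(void) {
    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);

    /* Prevention: mask the signal while holding the mutex, so the handler
       cannot run in this thread in the middle of the critical section. */
    pthread_sigmask(SIG_BLOCK, &block, &old);
    pthread_mutex_lock(&m);
    /* ... critical section ... */
    pthread_mutex_unlock(&m);
    pthread_sigmask(SIG_SETMASK, &old, NULL);
}

int main(void) {
    signal(SIGUSR1, handler);   /* install the (potentially deadlocking) handler */
    critical_section();         /* safe: SIGUSR1 is masked while m is held */
    return 0;
}
```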

18
Q

How can a deadlock in signal/interrupt handlers with mutexes be prevented?

A

Executing handlers in dedicated threads.

19
Q

When is handling interrupts in dedicated threads useful?

A
  • To prevent deadlocks in the handler routine
    => allows handler code running in a thread to use any mutex and have arbitrary complexity (it can block, it can wait, etc.) without having to worry about deadlocks; see the sketch below
  • To optimize performance (SunOS paper)
    => the overhead of the extra instructions to decide whether to create a thread, plus creating it, is offset by not having to mask/unmask interrupts around mutex operations in the originally interrupted thread context (optimize for the common case)
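
A hedged sketch of the "handler in a dedicated thread" idea, using a POSIX signal in place of a hardware interrupt: every thread keeps SIGUSR1 blocked, and one dedicated thread receives it synchronously with sigwait, so the handling code is ordinary thread code that may lock mutexes and block.

```c
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Dedicated handler thread: receives the signal synchronously, so the
   "handler" may take mutexes or block without risking a deadlock with
   whatever code happened to be interrupted. */
static void *signal_thread(void *arg) {
    sigset_t *set = arg;
    int sig;
    sigwait(set, &sig);              /* wait until SIGUSR1 becomes pending */
    pthread_mutex_lock(&m);
    printf("handled signal %d in a dedicated thread\n", sig);
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    static sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    /* Block SIGUSR1 here; new threads inherit the mask, so only the
       dedicated thread (via sigwait) ever consumes the signal. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_t t;
    pthread_create(&t, NULL, signal_thread, &set);

    kill(getpid(), SIGUSR1);         /* simulate the asynchronous event */
    pthread_join(t, NULL);
    return 0;
}
```
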
20
Q

What are the characteristics of the top half in the interrupt handler routine?

A

Non-blocking, fast execution

21
Q

What are the characteristics of the bottom half in the interrupt handler routine?

A

Arbitrary complexity (can use mutexes, can block), executed in a thread (e.g. to avoid possible deadlocks). See the sketch below.
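
A hedged user-space sketch of the top-half/bottom-half split (real kernels use mechanisms such as softirqs, tasklets, or threaded IRQs): the top half, here a signal handler standing in for the interrupt, only records that work exists; the bottom half runs in a worker thread and is free to block and take mutexes.

```c
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
static int pending_events   = 0;             /* work handed to the bottom half */
static volatile sig_atomic_t top_half_count = 0;

/* Top half: fast and non-blocking; only records the event. */
static void top_half(int sig) {
    (void)sig;
    top_half_count++;                        /* async-signal-safe bookkeeping only */
}

/* Bottom half: ordinary thread with arbitrary complexity, may block. */
static void *bottom_half(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (pending_events == 0)
        pthread_cond_wait(&cv, &lock);       /* blocking is allowed here */
    pending_events--;
    pthread_mutex_unlock(&lock);
    printf("bottom half processed one event\n");
    return NULL;
}

int main(void) {
    signal(SIGUSR1, top_half);
    pthread_t t;
    pthread_create(&t, NULL, bottom_half, NULL);

    raise(SIGUSR1);                          /* simulate the interrupt */

    /* Hand the recorded events to the bottom half (done here in main,
       because the top half itself must stay lock-free). */
    pthread_mutex_lock(&lock);
    pending_events += top_half_count;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&lock);

    pthread_join(t, NULL);
    return 0;
}
```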

22
Q

Describe a problem with signal masks between UL threads and LWPs or KLTs

A

The kernel does not know about UL threads and therefore does not know which of them should receive a signal (a UL thread may have it masked in its user-level struct).

The per-process kernel-level signal handler table contains the start address of a generic library handler (instead of the actual per-thread handlers); that library handler does the multiplexing.

The ULT library's global handler avoids frequent signal mask updates in the kernel (each one is a system call) and instead operates on a user-level signal mask.
This optimizes for the common case: signal mask updates happen often (and are cheap on the UL mask, no system call needed), while signals themselves do not happen often. See the sketch below.
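
A heavily simplified, hypothetical sketch of such a library wrapper handler. ult_current, ul_sigmask, and handlers are invented names for illustration only; they are not a real threading library API.

```c
#include <signal.h>
#include <stddef.h>

#define ULT_MAXSIG 64

/* Hypothetical descriptor kept by the ULT library for each user thread. */
typedef struct ult {
    sigset_t ul_sigmask;                  /* user-level mask: changing it is no syscall */
    void   (*handlers[ULT_MAXSIG])(int);  /* per-ULT handlers registered via the library */
} ult_t;

static ult_t *ult_current;                /* ULT currently running on this KLT */

/* The one handler installed in the kernel's per-process handler table.
   It multiplexes: consult the user-level mask, then dispatch or defer. */
static void ult_library_wrapper(int sig) {
    if (sig <= 0 || sig >= ULT_MAXSIG || ult_current == NULL)
        return;
    if (sigismember(&ult_current->ul_sigmask, sig)) {
        /* Masked at user level: record it as pending in the library and
           deliver it later, when the ULT unmasks the signal. */
        /* ult_mark_pending(sig);  -- hypothetical bookkeeping */
        return;
    }
    if (ult_current->handlers[sig])
        ult_current->handlers[sig](sig);  /* run the current ULT's own handler */
}
```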

23
Q

What is the difference between a Trap and an Interrupt?

A

Unanswered: See: https://piazza.com/class/ksj1dzntsni27l?cid=824

24
Q

Under which circumstances should the m:1 model for threads be used?

A

TODO

25
Q

What are the advantages of the m:n model for threads?

A

TODO

26
Q

Name the advantages of the 1:1 threading model

A

Covered explicitly in the lecture: the 1:1 model is preferred today because it avoids the complexity of the m:n model.

  • increases visibility
    • the kernel cannot unknowingly block the only kernel-level thread that several UL threads depend on (it knows all threads and can easily create new KL threads for concurrency)
    • no need to signal the ULT library when a KLT is removed so that it can create a new one (the mapping is always 1:1)
  • reduces the need for kernel-to-user communication to deliver signals to the right user-level thread (the kernel otherwise does not know the user-level signal masks)
27
Q

What are the states in the lifetime of a thread in Solaris?

A

Active, Runnable, Sleeping, Stopped, Zombie

28
Q

What are the advantages of zombie threads & processes / why do they exist?

A
  • Minimize the cost of thread exit (freeing the stack) => deferred to later.
  • Speed up later thread creation: freed stacks are put in a cache of available stacks => clever! (See the sketch below.)
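
A small sketch of the stack-cache idea (locking omitted): an exiting thread parks its stack on a free list instead of freeing it, and thread creation pops from that list before falling back to a fresh allocation. Names and the stack size are illustrative.

```c
#include <stdlib.h>

#define STACK_SIZE (64 * 1024)        /* illustrative stack size */

/* Free list of stacks left behind by exited ("zombie") threads. */
typedef struct stack_node {
    struct stack_node *next;
} stack_node_t;

static stack_node_t *stack_cache = NULL;

/* Thread exit: do not free the stack, just park it in the cache (cheap). */
static void stack_release(void *stack) {
    stack_node_t *node = stack;       /* reuse the stack's first bytes as a link */
    node->next = stack_cache;
    stack_cache = node;
}

/* Thread creation: reuse a cached stack when possible (fast path). */
static void *stack_acquire(void) {
    if (stack_cache != NULL) {
        void *stack = stack_cache;
        stack_cache = stack_cache->next;
        return stack;
    }
    return malloc(STACK_SIZE);        /* slow path: allocate a fresh stack */
}
```
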
29
Q

What is important with respect to critical sections / synchronisation in signal handlers?

A
  • Solaris: signal-safe critical sections => ALL asynchronous signals should be masked during such a critical section (otherwise execution might jump into a signal handler in the middle of the critical section)
  • Interrupt handlers use separate threads to execute complicated code, including blocking on synchronisation variables.
30
Q

What is priority inversion? How is it prevented in Solaris?

A
  • In computer science, priority inversion is a scenario in scheduling in which a high-priority task is indirectly superseded by a lower-priority task, effectively inverting the assigned priorities of the tasks. This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks.
  • Prevented in Solaris by priority inheritance: the thread holding the lock is temporarily boosted to the priority of the highest-priority waiter, so it can finish its critical section and release the lock. (The dispatch queue being a priority queue does not by itself prevent inversion.)
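
Not Solaris-specific, but the same countermeasure is exposed by POSIX threads on systems that support the priority-inheritance protocol: a mutex created with PTHREAD_PRIO_INHERIT makes its owner inherit the priority of the highest-priority waiter. A minimal sketch:

```c
#include <pthread.h>

/* Create a mutex whose owner inherits the priority of its highest-priority
   waiter, preventing priority inversion across this particular lock. */
static int make_pi_mutex(pthread_mutex_t *m) {
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```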