P2 L2: Threads and Concurrency Flashcards

1
Q

What sections of the virtual address space do threads share?

A

All except for the stack

2
Q

What OS resources do threads share, and what does the PCB look like?

A

Share

  • open files
  • virtual address space: code, data, heap

Not shared:

  • virtual address space: stack
  • execution context: registers such as program counter
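The split above can be sketched as two structs — a minimal illustration with hypothetical struct names, not the real PCB layout of any OS:

```c
/* Illustrative sketch (hypothetical struct names): state that exists
   once per process and is shared by all its threads, vs. state that
   exists once per thread. */
struct per_process_state {      /* shared by all threads */
    int open_files[16];         /* open file descriptors */
    void *code, *data, *heap;   /* shared regions of the address space */
};

struct per_thread_state {       /* private to each thread */
    void *stack_pointer;        /* every thread gets its own stack */
    void *program_counter;      /* and its own execution context */
    long registers[16];
};
```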
3
Q

Name advantages of threads over processes

A
  • Speed: parallelize the same code
  • Specialization: a thread dedicated to one task ==> hot CPU cache
  • More memory efficient:
    • fewer allocations (less work): no need to allocate a separate address space + execution context
    • results in fewer swaps to disk
  • IPC is more efficient via shared variables
  • context switches between threads are more efficient
    • no need to switch to a different page table, as threads SHARE the virtual address space
4
Q

When does it make sense to use threads on a single-core CPU? Given: P1 is blocked on I/O.

A

1) If the CPU has some idle time

2) If t_idle(P1) > 2 * t_context_switch.
Otherwise the two context switches (away and back) cost more CPU time than P1 spends waiting for I/O.
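The break-even condition can be written as a one-liner — a sketch with hypothetical numbers (the function name and the 5 µs context-switch cost are illustrative, not measured values):

```c
#include <stdbool.h>

/* Threading on a single core only pays off when the idle time exceeds
   the cost of switching away AND back: two context switches. */
bool threading_pays_off(double t_idle_us, double t_ctx_switch_us) {
    return t_idle_us > 2.0 * t_ctx_switch_us;
}

/* e.g. with t_ctx_switch = 5 us:
   threading_pays_off(20.0, 5.0) -> true  (20 > 10)
   threading_pays_off(8.0, 5.0)  -> false (8 <= 10) */
```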

5
Q

Difference between context switch between processes and threads

A

Context switch between threads: no need to switch to a different page table (mapping virtual -> physical memory), as threads SHARE the virtual address space.

6
Q

What is the benefit of threading on a single-core CPU?

A

Hide the latency of I/O by making a cheap context switch to another thread.

7
Q

What is a problem in thread coordination?

A

Data races because threads share the virtual address space
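A minimal sketch of such a data race, assuming POSIX threads (the function names are illustrative): `counter++` is a read-modify-write, so without the mutex the two threads' updates interleave and some are lost.

```c
#include <pthread.h>

#define ITERS 100000

static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);   /* delete the lock/unlock pair  */
        counter++;                   /* to observe the data race:    */
        pthread_mutex_unlock(&lock); /* the count drops below 2*ITERS */
    }
    return NULL;
}

long run_counters(void) {
    counter = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter; /* 2 * ITERS with the mutex in place */
}
```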

8
Q

What are common mistakes when coordinating threads?

A
  • keep track of which mutex/lock variable is used with which resource
    • e.g. mutex_type m1; // mutex for file1
  • check that you always and correctly use lock and unlock - compilers can help, as they generate errors/warnings for this type of mistake
  • use a single mutex to access a single resource
  • check that you are signalling the correct condition
  • check that you are not using signal when broadcast is needed
    • signal: only one thread will proceed, the remaining threads keep waiting
  • relying on signals to condition variables for the order of thread execution
    • e.g. it is not guaranteed that the readers are woken up and acquire the lock before the writer, even if they are sent the signal earlier in the code
    • depends on internal implementation details of the condition variables
  • spurious wake-ups
  • deadlocks
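Two of the mistakes above can be shown in one sketch, assuming POSIX threads (function and variable names are illustrative): the predicate is re-checked in a while loop, which guards against spurious wake-ups, and broadcast is used because ALL waiters should proceed once the condition holds.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static bool ready = false;

void wait_until_ready(void) {
    pthread_mutex_lock(&m);
    while (!ready)                  /* while, never if: re-check after waking */
        pthread_cond_wait(&cv, &m); /* atomically releases m while sleeping   */
    pthread_mutex_unlock(&m);
}

void set_ready(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_broadcast(&cv);    /* every waiter may proceed -> broadcast  */
    pthread_mutex_unlock(&m);
}
```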
9
Q

Define a deadlock

A

Two or more threads wait on each other in a cyclic fashion (each holding a resource another one needs), so that none of them can ever complete.
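The classic instance is a lock-ordering deadlock: thread A takes m1 then m2 while thread B takes m2 then m1, so each can end up holding one lock and waiting forever for the other. A minimal sketch, assuming POSIX threads (names are illustrative), showing the standard fix of acquiring locks in the same order everywhere:

```c
#include <pthread.h>

static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;
static int shared = 0;

static void *safe_worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m1);  /* deadlock-prone variant: the other    */
    pthread_mutex_lock(&m2);  /* thread would lock m2 first, then m1  */
    shared++;
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return NULL;
}

int run_ordered(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, safe_worker, NULL);
    pthread_create(&b, NULL, safe_worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return shared; /* 2: both threads completed, no deadlock */
}
```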

10
Q

Describe the possible relationship between user and kernel level threads (User-level thread models)

A

1:1 (one-to-one): each user-level thread is backed by its own kernel-level thread
M:1 (many-to-one): all user-level threads of a process are multiplexed onto a single kernel-level thread
M:M (many-to-many): user-level threads are multiplexed onto a (typically smaller) pool of kernel-level threads

11
Q

Describe the differences between user-level (goroutines, green threads, …) and kernel-level threads.

A

User-level threads:

  • managed and scheduled by a library/runtime in user space; the kernel does not know about them
  • very cheap to create and switch (no system call needed)
  • in an M:1 model, one blocking system call blocks all threads of the process

Kernel-level threads:

  • known to and scheduled by the OS
  • can run truly in parallel on multiple cores
  • one thread blocking does not block the others
  • more expensive to create and switch (system calls, kernel data structures)

12
Q

Name two common multithreading patterns and explain them

A
  • Boss-worker pattern: one boss thread accepts or creates work items and hands them to a pool of worker threads, which process them independently
  • Pipeline pattern: the work is split into stages; each thread (or group of threads) handles one stage and passes its result on to the next stage
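The boss-worker pattern can be sketched in a few lines, assuming POSIX threads (names, task count, and the trivial square-the-index "work" are all illustrative); the shared "queue" is just a task array plus a next-task index that workers claim under a mutex:

```c
#include <pthread.h>

#define NTASKS 8
#define NWORKERS 3

static int results[NTASKS];
static int next_task = 0;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static void *pool_worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        int t = next_task < NTASKS ? next_task++ : -1; /* claim a task */
        pthread_mutex_unlock(&qlock);
        if (t < 0) return NULL;     /* no work left */
        results[t] = t * t;         /* "process" the task */
    }
}

int run_pool(void) {
    pthread_t w[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)   /* the boss hands out work by */
        pthread_create(&w[i], NULL, pool_worker, NULL); /* starting the pool */
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(w[i], NULL);
    int sum = 0;
    for (int i = 0; i < NTASKS; i++)
        sum += results[i];          /* 0+1+4+...+49 = 140 */
    return sum;
}
```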

13
Q

Name advantages of threads

A
  • Speed: parallelise the same code (e.g. map-reduce)
  • Specialisation: a thread dedicated to a specific task executes a small portion of instructions + data from RAM ==> hot CPU cache
  • Efficiency:
    • Memory efficiency: fewer allocations (less work), since there is no need to allocate a separate:
      • address space (a thread executes the same program code anyway, so why not share the virtual address space?)
      • execution context (the OS-internal PCB data structure that the OS needs to update)
        => more likely to fit in physical memory
        => fewer swaps to disk
  • IPC efficiency: IPC is more efficient via shared variables in the same address space; no need for memory mapped between address spaces or for message passing

Threads hide the latency of I/O operations!!
