P2 L2: Threads and Concurrency Flashcards
What sections of the virtual address space do threads share?
All sections except the stack (each thread gets its own private stack)
What OS resources do threads share (what does the PCB look like)?
Shared:
- open files
- virtual address space: code, data, heap
Not shared:
- virtual address space: stack
- execution context: registers such as program counter
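To see the sharing in practice, here is a minimal pthreads sketch (the function names are made up for illustration): one thread writes to a global (data segment) and to heap memory, both of which the joining thread can observe, while each thread's stack-local variables stay private.

```c
#include <pthread.h>
#include <stdlib.h>

int global_value = 0;          /* data segment: shared by all threads */

static void *writer(void *arg) {
    int local = 42;            /* stack: private to this thread */
    global_value = local;      /* data segment: visible to other threads */
    *(int *)arg = 7;           /* heap memory passed in: also shared */
    return NULL;
}

/* Returns global + heap value as seen by the joining thread. */
int shared_data_demo(void) {
    pthread_t t;
    int *heap_value = malloc(sizeof *heap_value);
    *heap_value = 0;
    pthread_create(&t, NULL, writer, heap_value);
    pthread_join(t, NULL);     /* after join, writer's updates are visible */
    int result = global_value + *heap_value;
    free(heap_value);
    return result;
}
```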
Name advantages of threads over processes
- Speed: parallelize the same code
- Specialization: thread executes a small portion of instructions + data ==> hot CPU cache
- Memory efficiency: fewer allocations (less work): no need to allocate a separate address space + execution context ==> fewer swaps to disk
- IPC more efficient via shared variables
- context switch between threads is more efficient
- no need to create new Page table as threads SHARE the virtual address space.
When does it make sense to use threads on a single-core CPU? Given: P1 is blocked on I/O.
1) If the CPU has some idle time
2) if t_idle (P1) > 2 * t_context_switch.
Otherwise the context switch uses more CPU time than P1 spends waiting for I/O
Difference between context switch between processes and threads
Context switch between threads: no need to create new Page table (mapping virtual -> physical memory) as threads SHARE the virtual address space.
Benefit of threading on a single-core CPU
Hide latency of I/O by making cheap context switch to other thread.
What is a problem in thread coordination?
Data races because threads share the virtual address space
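A minimal sketch of such a race (function names are illustrative): two threads increment a shared counter without a lock. Each `counter++` is a load, add, store sequence, so interleavings can lose updates and the final count may land anywhere below 2 * N.

```c
#include <pthread.h>

enum { N = 100000 };
static long counter = 0;       /* shared, unprotected */

static void *incr(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++)
        counter++;             /* load, add, store: not atomic */
    return NULL;
}

/* Returns the final counter; equals 2*N only if no update was lost. */
long race_demo(void) {
    counter = 0;
    pthread_t a, b;
    pthread_create(&a, NULL, incr, NULL);
    pthread_create(&b, NULL, incr, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Strictly speaking this is undefined behavior in C; it is shown only to make the lost-update interleaving concrete.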
What are common mistakes when coordinating threads?
- keep track of mutex/lock variable used with a resource
- e.g. mutex_type m1; // mutex for file1
- check that you are always and correctly using lock and unlock: compilers can help here, as they generate errors/warnings for this type of mistake
- Use a single mutex to access a single resource
- check that you are signalling correct condition
- check that you are not using signal when broadcast is needed
- signal: only 1 thread will proceed; the remaining threads keep waiting
- Relying on signals to condition variables to order thread execution
- See the example above where it is not guaranteed that the readers wake up and acquire the lock before the writer, even though in the code they are sent the signal earlier
- depends on internal impl. details of the condition variables
- Spurious wake ups
- Deadlocks
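Several of these pitfalls (spurious wakeups, signalling before the waiter is ready) are handled by the standard pattern of re-checking the predicate in a `while` loop while holding the mutex. A minimal sketch, with illustrative names:

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ready_cv = PTHREAD_COND_INITIALIZER;
static int ready = 0;          /* the predicate the cond var signals about */
static int shared_result = 0;

static void *producer(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);
    shared_result = 123;
    ready = 1;                 /* update the predicate before signalling */
    pthread_cond_signal(&ready_cv);
    pthread_mutex_unlock(&m);
    return NULL;
}

/* Blocks until the producer sets ready; returns the produced value. */
int consume_demo(void) {
    ready = 0;
    shared_result = 0;
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    pthread_mutex_lock(&m);
    while (!ready)             /* while, not if: guards spurious wakeups */
        pthread_cond_wait(&ready_cv, &m);
    int r = shared_result;
    pthread_mutex_unlock(&m);
    pthread_join(t, NULL);
    return r;
}
```

The `while` loop also makes the code correct if the signal arrives before the consumer ever waits: the predicate is already true, so the wait is skipped.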
Define a deadlock
Two or more threads depend on each other in a cyclic fashion so that they can never complete
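A minimal sketch of the cycle (mutex names are illustrative): thread A holds `m1` and waits for `m2` while thread B holds `m2` and waits for `m1`. The runnable code below shows the standard fix, a global lock order; the deadlocking variant is described in a comment rather than executed, since it would hang.

```c
#include <pthread.h>

static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;
static int done = 0;

/* Deadlock variant: if this thread took m2 then m1 while another
 * thread took m1 then m2, each could block forever waiting for the
 * lock the other holds. Fix: all threads acquire in the same order. */
static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m1);   /* global order: always m1 before m2 */
    pthread_mutex_lock(&m2);
    done++;
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return NULL;
}

/* Two threads, consistent lock order: always completes. */
int lock_order_demo(void) {
    done = 0;
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return done;
}
```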
Describe the possible relationship between user and kernel level threads (User-level thread models)
1:1 (one kernel-level thread per user-level thread)
M:1 (many user-level threads mapped onto one kernel-level thread)
M:M (many user-level threads mapped onto many kernel-level threads)
Describe the differences between user (goroutines, green threads, …) and kernel-level threads.
User-level threads are managed and scheduled by a runtime library in user space; the kernel does not see them, so creation and context switches are cheap (no system call), but a blocking system call can block every user-level thread mapped to the same kernel thread. Kernel-level threads are created and scheduled by the OS: more expensive to manage, but the kernel can run them on multiple cores and block them independently.
Name two common multithreading patterns and explain them
- Boss-worker pattern: one boss thread hands incoming tasks to a pool of worker threads (e.g. via a shared queue); the workers execute the tasks
- Pipeline pattern: the overall task is split into stages, each handled by its own thread (or pool of threads); work items flow from stage to stage
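A minimal boss-worker sketch (the queue size, task values, sentinel, and names are made up): the boss enqueues task numbers into a mutex-protected queue, workers dequeue and accumulate, and a sentinel value tells each worker to exit.

```c
#include <pthread.h>

enum { QSIZE = 64, NTASKS = 32, NWORKERS = 3, STOP = -1 };

static int queue[QSIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t qm = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static long total = 0;         /* sum of processed tasks */

static void enqueue(int v) {   /* boss side; queue is large enough to never fill */
    pthread_mutex_lock(&qm);
    queue[tail] = v;
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&qm);
}

static int dequeue(void) {     /* worker side; blocks while empty */
    pthread_mutex_lock(&qm);
    while (count == 0)
        pthread_cond_wait(&not_empty, &qm);
    int v = queue[head];
    head = (head + 1) % QSIZE;
    count--;
    pthread_mutex_unlock(&qm);
    return v;
}

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        int task = dequeue();
        if (task == STOP)      /* sentinel: one per worker */
            break;
        pthread_mutex_lock(&qm);   /* reuse qm to protect total */
        total += task;
        pthread_mutex_unlock(&qm);
    }
    return NULL;
}

/* Boss: hand out tasks 1..NTASKS, then one STOP per worker. */
long boss_worker_demo(void) {
    total = 0;
    head = tail = count = 0;
    pthread_t w[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&w[i], NULL, worker, NULL);
    for (int t = 1; t <= NTASKS; t++)
        enqueue(t);
    for (int i = 0; i < NWORKERS; i++)
        enqueue(STOP);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(w[i], NULL);
    return total;              /* 1 + 2 + ... + NTASKS */
}
```

The boss only produces work and never touches the results, which is the point of the pattern: throughput is limited by the boss's enqueue rate, and workers scale independently.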
Name advantages of threads
- Speed: parallelise same code (e.g. map-reduce)
- Specialisation: thread for a specific task, executing a small portion of instructions + data from RAM ==> hot CPU cache
- Memory efficiency: fewer allocations (less work). No need to allocate a separate:
  - address space (threads execute the same program code anyway, so why not share the virtual address space?)
  - execution context (the OS-internal PCB data structure that the OS needs to update)
  => more likely to fit in physical memory
  => fewer swaps to disk
- IPC efficiency: IPC is more efficient via shared variables in the same address space (no need for memory mapped between address spaces or message passing)
Threads hide the latency of I/O operations!!