Process Coordination and Communication Flashcards
What does the abbreviation RAG stand for?
Resource allocation graph
How is a futex different from a classic mutex? Which advantage does the futex have?
A futex first tries to enter the critical section with an atomic operation (a spinlock) in user space and only executes a mutex-like syscall to put the thread to sleep if the lock is already taken.
As a result, futexes cause significantly less syscall overhead when contention is low.
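As an illustration, a minimal sketch of this two-path scheme on Linux, loosely following the well-known three-state futex lock (error handling omitted, memory orders simplified; names are illustrative):

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static int futex_word = 0;   /* 0 = free, 1 = locked, 2 = locked with waiters */

static void futex_lock(void) {
    int expected = 0;
    /* Fast path: take the lock in user space with a single atomic CAS. */
    if (__atomic_compare_exchange_n(&futex_word, &expected, 1, 0,
                                    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
        return;                       /* no contention, no syscall needed */

    /* Slow path: mark the lock as contended and sleep in the kernel. */
    while (__atomic_exchange_n(&futex_word, 2, __ATOMIC_ACQUIRE) != 0)
        syscall(SYS_futex, &futex_word, FUTEX_WAIT, 2, NULL, NULL, 0);
}

static void futex_unlock(void) {
    /* Only enter the kernel if another thread might be sleeping. */
    if (__atomic_exchange_n(&futex_word, 0, __ATOMIC_RELEASE) == 2)
        syscall(SYS_futex, &futex_word, FUTEX_WAKE, 1, NULL, NULL, 0);
}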
The following code implements a simple barrier synchronization for N threads, which causes the threads to leave the function barrier() only when all N threads have called the function. Each thread executes barrier() only once in total.
Which erroneous behavior can be observed? Describe how the code has to be changed to prevent this error.
1 int counter = 0;
2
3 void barrier() {
4 counter = counter + 1;
5 while (counter != N) {
6 /* wait until all N threads have called the function */
7 }
8 }
The threads might never leave the barrier because an update to the counter variable can be lost: the increment is not atomic, so two threads may read the same old value and only one of their increments takes effect. The addition in line 4 has to be made atomic, e.g. by using an atomic instruction or a mutex.
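One possible fix, as a minimal sketch using C11 atomics (N is assumed to be defined elsewhere, as in the original snippet):

#include <stdatomic.h>

atomic_int counter = 0;

void barrier(void) {
    atomic_fetch_add(&counter, 1);        /* the increment from line 4, now atomic */
    while (atomic_load(&counter) != N) {
        /* wait until all N threads have called the function */
    }
}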
Describe how barrier synchronization can be implemented with direct asynchronous message passing. You can assume that every involved
process knows which other processes participate in the barrier synchronization and that each process can be addressed by a unique known number (process ID).
- Every process sends a message to each of the other processes.
- Each process then waits until N − 1 messages have been received before it continues (see the sketch below).
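A minimal sketch in C, where async_send(), recv_any(), my_id(), and BARRIER_MSG are hypothetical primitives (non-blocking send, blocking receive of any message, query of the own process ID in 0..N-1, and an arbitrary notification message):

void barrier(void) {
    /* Notify every other participating process (asynchronous, non-blocking). */
    for (int p = 0; p < N; p++)
        if (p != my_id())
            async_send(p, BARRIER_MSG);

    /* Wait until the notifications of all N - 1 other processes have arrived. */
    for (int received = 0; received < N - 1; received++)
        recv_any();
}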
Draw a resource allocation graph which contains a cycle but does not represent a deadlock.
One of the resources in the cycle must have multiple instances.
Explain the term priority donation.
If a process waits for another process with lower priority, priority donation temporarily raises the priority of the lower-priority process to match that of the waiting process.
Assume a scheduler that implements priority donation. Does this implementation alone solve the problem of priority inversion when using simple spinlocks?
No, because a simple spinlock does not yield control to the operating system, so the scheduler does not even know that the thread is waiting, or which other thread it is waiting for.
Which problem arises when using synchronous message passing between processes in a system with user-level threading?
Synchronous message passing is blocking, and in a system with user-level threading a blocking thread blocks all other threads as well.
The following code implements a spinlock. A thread that enters the critical section writes its unique thread ID (>0) into the spinlock variable using the atomic swap instruction and reads the previous value. If the variable was 0, the thread enters the critical section. Otherwise, the write access is reverted.
Explain for each of the three requirements for a synchronization primitive whether they are fulfilled or not.
int lock_var = 0;

void lock(void) {
    int x;
    while (1) {
        x = thread_id();
        swap(&x, &lock_var);      /* atomic */
        if (x == 0)
            break;                /* enter critical section */
        else
            swap(&x, &lock_var);  /* revert access */
    }
}

void unlock(void) {
    int x = 0;
    swap(&x, &lock_var);
}
Mutual exclusion: Yes. lock_var is only zero if no thread is in the critical section. A thread only enters when lock_var is zero, and the variable is atomically set to a non-zero value on entry.
Progress: No. For example, if one thread unlocks the lock, but only afterwards another thread reverts an unsuccessful lock attempt, the lock variable is left with a non-zero value even though no thread is in the critical section.
Bounded Waiting: No. Threads enter the critical section in random order, so an individual thread might wait arbitrarily long.
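For comparison, a conventional test-and-set spinlock avoids the revert step and with it the lost-unlock scenario described under Progress; a minimal sketch assuming GCC/Clang atomic builtins (mutual exclusion and progress hold, bounded waiting still does not):

int lock_var = 0;

void lock(void) {
    /* Atomically write 1 and inspect the previous value; 0 means the lock was free. */
    while (__atomic_exchange_n(&lock_var, 1, __ATOMIC_ACQUIRE) != 0)
        ;   /* spin until the lock was observed free */
}

void unlock(void) {
    __atomic_store_n(&lock_var, 0, __ATOMIC_RELEASE);
}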
Which of the requirements for synchronization primitives presented in the lecture is fulfilled by a strong semaphore, but not by a weak semaphore?
Bounded waiting
Which thread does a strong semaphore wake up to fulfill a requirement?
The thread which has been waiting for the longest time.
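A minimal sketch of how such FIFO wake-up can be enforced, using a ticket scheme on top of a POSIX mutex and condition variable (type and function names are illustrative; initialization and error handling omitted):

#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    int value;                  /* semaphore counter */
    unsigned long next_ticket;  /* next ticket to hand out */
    unsigned long serving;      /* ticket that may pass next */
} strong_sem_t;

void strong_sem_wait(strong_sem_t *s) {
    pthread_mutex_lock(&s->m);
    unsigned long my_ticket = s->next_ticket++;
    /* Wait until it is this thread's turn and the counter is positive,
       so the longest-waiting thread passes the semaphore first. */
    while (my_ticket != s->serving || s->value == 0)
        pthread_cond_wait(&s->cv, &s->m);
    s->serving++;
    s->value--;
    pthread_cond_broadcast(&s->cv);   /* let the next ticket holder re-check */
    pthread_mutex_unlock(&s->m);
}

void strong_sem_post(strong_sem_t *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_broadcast(&s->cv);   /* wake waiters; only the right ticket proceeds */
    pthread_mutex_unlock(&s->m);
}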
You want to use the message passing provided by the kernel between the user-level threads of a process. In this case, why does at least either sending or receiving have to be asynchronous?
If both sending and receiving are synchronous and blocking, the sender and the receiver thread have to execute a blocking system call at the same time. However, user-level threading maps both threads onto a single kernel thread, which can only execute one blocking system call at a time.
Why is it not possible to use kernel-based direct message passing to send messages specifically to individual user-level threads?
Direct message passing can only target individual processes (or potentially kernel-level threads). All user-level threads are, however, executed in the same process and kernel-level thread.
Name the four conditions for a deadlock.
- Mutual exclusion
- Hold and wait
- No preemption
- Circular wait
Assume a system where threads never access a spinlock if they already hold another spinlock with a higher virtual address. Which condition for deadlocks between the threads of a process is prevented by this scheme? Briefly explain why the condition is impossible. Assume that only spinlocks are used for synchronization and that physical memory is not mapped multiple times into the same address space.
The approach prevents circular wait. In any cycle, at least one thread would have to hold a spinlock with a higher virtual address while waiting for a spinlock with a lower address, because the addresses cannot increase all the way around the cycle. Exactly this situation is forbidden by the given scheme.
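A minimal sketch of the acquisition rule for two spinlocks, assuming a hypothetical spinlock_t type with a spin_lock() operation:

#include <stdint.h>

/* Acquire both spinlocks in ascending virtual-address order, so that no
   thread ever holds the higher-address lock while waiting for the lower one. */
void lock_both(spinlock_t *a, spinlock_t *b) {
    if ((uintptr_t)a > (uintptr_t)b) {
        spinlock_t *tmp = a; a = b; b = tmp;
    }
    spin_lock(a);   /* lower address first */
    spin_lock(b);   /* higher address second */
}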
Multiple processes are synchronized by spinlocks in shared memory regions. Why is the approach in some situations unable to prevent deadlocks between the processes?
The approach requires the spinlocks to have the same virtual-address order in all participating processes. Shared memory segments, however, do not have to be mapped at the same virtual addresses (or in the same relative order) in every process, so the order in which spinlocks may be acquired can differ from process to process, allowing cycles to develop.
Incoming messages shall be processed by a group of processes. Which type of message passing do you have to use?
Indirect message passing (or message passing with mailboxes)
Name the three classes of countermeasures against deadlocks.
- Deadlock prevention
- Deadlock avoidance
- Deadlock detection/recovery
Explain why a spinlock is not suitable for synchronization of accesses by two threads
which execute on the same CPU core.
The thread spinning can potentially preempt the thread holding the lock. At this point, no progress can be made until the thread holding the lock is scheduled again, yet the thread which is spinning wastes CPU resources.