Process Coordination and Communication Flashcards

1
Q

What does the abbreviation RAG stand for?

A

Resource allocation graph

2
Q

How is a futex different from a classic mutex? Which advantage does the futex have?

A

A futex first tries to use a spinlock in userspace to enter the critical section and only executes a mutex-like syscall to put the thread to sleep if the spinlock is busy.
As a result, futexes cause significantly less syscall overhead if congestion is low.

3
Q

The following code implements a simple barrier synchronization for N threads, which causes the threads to leave the function barrier() only when all N threads have called the function. Each thread executes barrier() only once in total.
Which erroneous behavior can be observed? Describe how the code has to be changed to prevent this error.
1 int counter = 0;
2
3 void barrier() {
4     counter = counter + 1;
5     while (counter != N) {
6         /* wait until all N threads have called the function */
7     }
8 }

A

The threads might never leave the barrier because an update to the counter variable is lost. The addition in line 4 has to be made atomic by using an atomic instruction or a mutex.

4
Q

Describe how barrier synchronization can be implemented with direct asynchronous message passing. You can assume that every involved
process knows which other processes participate in the barrier synchronization and that each process can be addressed by a unique known number (process ID).

A
  • All processes send messages to all other processes.
  • Each process then waits until N − 1 messages have been received before it continues.
5
Q

Draw a resource allocation graph which contains a cycle but does not represent a deadlock.

A

One of the resources in the cycle must have multiple instances.

6
Q

Explain the term priority donation.

A

If a process waits for another process with lower priority, priority donation temporarily raises the priority of the lower-priority process to match that of the waiting process.

7
Q

Assume a scheduler that implements priority donation. Does this implementation alone solve the problem of priority inversion when using simple spinlocks?

A

No, because a simple spinlock does not yield control to the operating system, so the scheduler does not know that a thread is waiting, let alone which thread it is waiting for.

8
Q

Which problem arises when using synchronous message passing between processes in a system with user-level threading?

A

Synchronous message passing is blocking, and in a system with user-level threading a blocking thread blocks all other threads as well.

9
Q

The following code implements a spinlock. A thread that enters the critical section writes its unique thread ID (>0) into the spinlock variable using the atomic swap instruction and reads the previous value. If the variable was 0, the thread enters the critical section. Otherwise, the write access is reverted.
Explain for each of the three requirements for a synchronization primitive whether they are fulfilled or not.
int lock_var = 0;

void lock(void) {
    int x;
    while (1) {
        x = thread_id();
        swap(&x, &lock_var);     /* atomic */
        if (x == 0)
            break;               /* enter section */
        else
            swap(&x, &lock_var); /* revert access */
    }
}

void unlock(void) {
    int x = 0;
    swap(&x, &lock_var);
}

A

Mutual exclusion: Yes. lock_var is only zero if no thread is in the critical section. A thread only enters when lock_var is zero, and the variable is atomically set to a non-zero value on entry.

Progress: No. For example, if one thread unlocks the lock, but only afterwards another thread reverts an unsuccessful lock attempt, the lock variable is left with a non-zero value even though no thread is in the critical section.

Bounded Waiting: No. Threads enter the critical section in random order, so an individual thread might wait arbitrarily long.

10
Q

Which of the requirements for synchronization primitives presented in the lecture is fulfilled by a strong semaphore, but not by a weak semaphore?

A

Bounded waiting

11
Q

Which thread does a strong semaphore wake up in order to fulfill this requirement?

A

The thread which has been waiting for the longest time.

12
Q

You want to use the message passing provided by the kernel between the user-level threads of a process. In this case, why does at least either sending or receiving have to be asynchronous?

A

If both sending and receiving are synchronous and blocking, both the receiver and the sender thread have to execute a system call at the same time. However, user-level threading maps both threads onto one kernel thread, so only one can execute a blocking system call at a time.

13
Q

Why is it not possible to use kernel-based direct message passing to send messages specifically to individual user-level threads?

A

Direct message passing can only target individual processes (or potentially kernel-level threads). All user-level threads are, however, executed in the same process and kernel-level thread.

14
Q

Name the four conditions for a deadlock.

A

mutual exclusion
hold and wait
no preemption
circular wait

15
Q

Assume a system where threads never access a spinlock if they already hold another spinlock with a higher virtual address. Which condition for deadlocks between the threads of a process is prevented by this scheme? Briefly explain why the condition is impossible. Assume that only spinlocks are used for synchronization and that physical memory is not mapped multiple times into the same address space.

A

The approach prevents circular wait. Any cycle would require one thread to hold a spinlock with a higher virtual address while waiting for a spinlock with a lower address, and another thread to hold a spinlock with a lower virtual address while waiting for one with a higher address. Such a scenario is not allowed by the given scheme.

16
Q

Multiple processes are synchronized by spinlocks in shared memory regions. Why is the approach in some situations unable to prevent deadlocks between the processes?

A

The approach requires spinlocks to have the same virtual address order in all participating processes. Shared memory segments, however, do not have to be mapped in the same order in all participating processes, so the order in which spinlocks can be acquired varies from process to process, allowing cycles to develop.

17
Q

Incoming messages shall be processed by a group of processes. Which type of message passing do you have to use?

A

Indirect message passing (or message passing with mailboxes)

18
Q

Name the three classes of countermeasures against deadlocks.

A
  • Deadlock prevention
  • Deadlock avoidance
  • Deadlock detection/recovery
19
Q

Explain why a spinlock is not suitable for synchronization of accesses by two threads
which execute on the same CPU core.

A

The thread spinning can potentially preempt the thread holding the lock. At this point, no progress can be made until the thread holding the lock is scheduled again, yet the thread which is spinning wastes CPU resources.

20
Q

Kernels often use spinlocks, whereas applications do not. Why is sensible use easier in the kernel than in applications?

A

The kernel can disable preemption while holding a spinlock by disabling interrupts.

21
Q

Explain how a two-phase lock works.

A

The lock first spins for a fixed amount of time. If it cannot be acquired within this time, the thread executes a syscall to sleep until the lock is available.

22
Q

Describe the impact of a fence instruction on the behavior of the processor. Why is this behavior necessary for the code to enter and exit a critical section?

A

The instruction restricts the reordering of other instructions around the fence instruction by the processor. Instructions must not be reordered across the boundaries of a critical section, or else parts of the code in the critical section may actually take effect while no lock is held.

23
Q

Sketch a method with which the operating system can record communication between two arbitrary processes which communicate via shared memory.

A

The shared memory region is unmapped from the processes. Each access then generates a page fault; the OS emulates and records the access.

24
Q

Explain the condition “No preemption” for deadlocks.

A

“No preemption” means that resources can only be returned by tasks voluntarily but cannot be taken away forcefully.

25
Q

What are techniques called that negate such a precondition for deadlocks?

A

Deadlock Prevention

26
Q

Explain the difference between direct and indirect message passing.

A

Direct message passing means that tasks are addressed directly, whereas indirect message passing operations target mailboxes.

27
Q

When and why can asynchronous receiving be implemented completely without a buffer?

A

When sending is synchronous. In that case, the sender can hold the data until it can be copied to the receiver.

28
Q

Why is the x86 instruction "add memory, 1", which increments the value at the address memory, not atomic?

A

Because the CPU first loads the value and then later stores the modified value.
Other concurrent accesses between those two accesses cause race conditions.

29
Q

An application protects all data that is used by multiple threads with a single mutex. Name an advantage and a disadvantage of this approach compared to many mutexes which each protect a small subset of the data.

A

Potential advantages:
* Easier to implement
* Less memory required to store the mutexes
* No deadlocks

Potential disadvantages:
* Threads often have to wait even though the data they want to access is not used by any other thread.

30
Q

How does the approach affect the performance of the program if lock does not protect any other variables besides x?

A

Performance is commonly reduced as any concurrent access to x causes the calculation to be performed multiple times.

31
Q

Explain what is meant by a race condition.

A

Concurrent access to shared data by different threads can lead to wrong results.

32
Q

In which circumstances may disabling interrupts work as a synchronization mechanism?

A

You may implement synchronization by disabling interrupts only on single processor systems. On a multi-processor system, disabling interrupts (even globally) will not keep other CPUs from entering a critical section and therefore cannot prevent race
conditions.

33
Q

Which of the requirements for a deadlock is negated by ordering all resources and always acquiring them in this order?

A

This breaks circular wait, because a cycle in the resource acquisition graph would require at least one process to acquire a resource with a lower order than the resource it already holds.

34
Q

Why does the reader-writer problem require the entry of writer threads into the critical section to be prioritized over the entry of reader threads if bounded waiting shall be fulfilled?

A

If reader threads are admitted into the critical section whenever they could enter it, a large number of reader threads can starve writer threads. If additional reader threads arrive at the section before the previous reader threads have left the section, there always is a reader thread in the section, so writer threads can never get in.

35
Q

Explain the term mailbox in the context of message passing.

A

A mailbox is an indirection mechanism: With mailboxes, communicating processes do not name the communication partner directly but rather deposit messages to a mailbox or receive them from the mailbox.

36
Q

Name an advantage of indirect message passing compared to direct message passing

A
  • Indirect message passing allows the communication partners to be replaced (e.g., after a software update).
  • It allows a message to be received by any process of a group of processes (e.g., for a multi-process server).
37
Q

For which combinations of asynchronous or synchronous send and receive is buffered message passing fundamentally required?

A

Buffers are necessary whenever sending is asynchronous. An asynchronous send operation returns immediately, and the sender may continue and overwrite the message, so the message has to be placed in a buffer. A synchronous send, in contrast, requires the sender to wait for the receiver, so rendezvous-style message passing can be implemented and the message copied directly to the receiver.

38
Q

Describe the producer-consumer problem. Name the cases in which one of the communication partners has to block.

A

The producer-consumer problem consists of senders (producers) and receivers (consumers) communicating via a shared bounded buffer. Producers put messages into the buffer and consumers remove messages from the buffer. A producer has to block while the buffer is full, and a consumer has to block while the buffer is empty.

39
Q

Name a synchronization primitive which is suitable to notify other threads

A

Semaphore or condition variable

40
Q

Explain the difference between deadlock prevention and deadlock avoidance

A

Deadlock prevention means that the system is restructured at design time so that arbitrary execution orders cannot lead to deadlocks, e.g., by negating one of the deadlock conditions.

Deadlock avoidance, instead, restricts the allowed resource requests at runtime so that the system stays in a safe state, on the basis of knowledge about potential future resource requests by all involved processes.

41
Q

Describe a method to detect deadlocks.

A

The system can be tested for deadlocks by creating the wait-for-graph of all resource acquisitions and by searching for cycles in that graph.

42
Q

Assume a single processor system with non-interruptible interrupt service routines.
A resource which is protected by a spinlock is used in an interrupt service routine as well as in interruptible kernel code.
In which situation does a deadlock occur? Identify the two resources involved in cyclic waiting.

A

Deadlocks occur if the spinlock is held by the interruptible kernel code, that code is interrupted and the interrupt handler tries to acquire the spinlock. The interrupt handler prevents the previous code from proceeding but cannot proceed
itself as the spinlock is locked. The second resource (besides the resource mentioned in the task) is the CPU (which is “locked” by the interrupt handler whereas the interruptible kernel code tries to acquire it).

43
Q

Explain the difference between synchronous and asynchronous IPC.

A

Whereas asynchronous IPC system calls return immediately even if the message has not been sent/received, synchronous IPC system calls block until a message is sent/received.

44
Q

Given an integer variable x, assume that x++ is executed by two threads in parallel.
Explain why overall the variable is sometimes only incremented by one.

A

The ++ operator first loads the old value, then stores the incremented value in the memory location of x, thus executing two distinct memory accesses. If the second thread loads the value before the first has stored the incremented value, the original value is incremented in both cases, so one update is lost.

45
Q

Explain how virtualization of resources can prevent deadlocks. Which of the requirements for the occurrence of deadlocks is negated?

A

Virtualization can prevent deadlocks if it implements multiplexing based on preemption. In that case, the no-preemption requirement does not hold anymore.

46
Q

Why are spinlocks not suitable for critical sections for which threads often have to wait for a long time when trying to enter (lock congestion)?

A

Spinlocks use busy waiting, which wastes CPU and blocks other threads from executing useful code.

47
Q

Assume a system which supports message passing with non-blocking send and blocking receive operations. Explain briefly how it is possible to implement message passing with blocking send and blocking receive on top of this system, without modifying the kernel.

A

After the non-blocking send, the sender executes a blocking receive. Upon receiving a message, the receiver sends an acknowledgment message. The sender blocks until that acknowledgment has been received.

48
Q

Why should user-level threads (many-to-one model) not use spinlocks for synchronization within the process?

A

Spinlocks perform busy waiting. In the many-to-one model, all user-level threads of a process share one kernel thread, so while one thread spins, the thread holding the lock cannot run to release it; at best the spinning wastes the time slice, and without preemption of the user-level thread it never ends.
