Synchronization Primitives Flashcards

1
Q

What is concurrency? Provide some real-world examples.

A

○ Concurrency refers to the execution of multiple tasks or processes seemingly at the same time. While it might appear that tasks are happening simultaneously, they are often interleaved or divided among available processing units.

○ Examples: millions of drivers sharing a highway at once, a student doing homework while watching Netflix, a professor eating lunch while grading papers.

2
Q

How has Moore’s Law influenced the shift towards concurrent programming?

A

○ While transistor count has continued to rise, CPU clock speed has plateaued.

○ This has led to a focus on multi-core CPUs. To fully utilize these cores, we need to write applications that can execute tasks concurrently.

3
Q

Explain the difference between a process and a thread. What are the advantages and disadvantages of each for concurrent programming?

A

○ Process: An executing instance of a program with its own address space, resources, and a single thread of control. Processes provide strong isolation—if one process crashes, others are unaffected. However, inter-process communication is typically slow and resource-intensive.

○ Thread: A lightweight unit of execution that shares the address space and resources of its parent process. Threads within a process can communicate quickly through shared memory, offering tighter integration for concurrent tasks. However, a crashing thread can bring down the entire process.

4
Q

What are the advantages and disadvantages of threads sharing data? What is a data race?

A

○ Advantages: Efficient communication and data access between threads.

○ Disadvantages: Prone to data races, where multiple threads try to access and modify shared data simultaneously.

○ Data Race: Occurs when the outcome of a program depends on the unpredictable interleaving of thread executions, potentially causing unexpected and incorrect results.

5
Q

Explain why a single line of code in a program might not be atomic. Use an example to illustrate.

A

○ Even a simple statement like a = a + 1; often translates into multiple machine instructions:
1. Load the value of a from memory into a register.
2. Increment the value in the register.
3. Store the updated value back into memory.

○ The scheduler can interrupt the execution of a thread between these instructions, allowing other threads to access and potentially modify shared data, leading to data races.

6
Q

Study the thread schedule examples in slides 10-21. Explain why the final value of ‘balance’ can differ based on the scheduling of the threads.

A

○ The thread schedules (slides 10-21) demonstrate that the final value of the shared variable ‘balance’ can be 101 (incorrect) or 102 (correct) depending on how the operating system schedules the threads.

○ This highlights the non-deterministic nature of concurrent programs; the output can vary based on the order in which threads are allowed to execute their instructions.

7
Q

What is meant by non-determinism in the context of concurrent programming? What challenges does it present?

A

○ Non-determinism means that the output of a concurrent program can differ even with the same input, depending on the unpredictable timing of thread execution.

○ This makes debugging and ensuring correctness difficult because you can’t always reproduce the same execution path.

8
Q

Define a critical section and explain the importance of mutual exclusion.

A

○ Critical Section: A section of code where shared resources (like shared memory) are accessed. It’s crucial to ensure that only one thread is executing within a critical section at any given time to prevent data races.

○ Mutual Exclusion: The property that guarantees that if one thread is executing inside a critical section, no other thread can enter that critical section at the same time. It’s fundamental to preventing concurrency bugs.

9
Q

How do locks help achieve mutual exclusion in concurrent programs?

A

○ Locks are synchronization mechanisms that enforce mutual exclusion by allowing only one thread to hold the lock at a time.

○ Before entering a critical section, a thread attempts to acquire the lock. If another thread holds the lock, the requesting thread blocks (waits) until the lock is released.

○ This ensures that only one thread can access shared resources within the critical section, preventing data races.

10
Q

What is the POSIX Threads (pthreads) library? What are some of its key functions?

A

○ The pthreads library provides a standard C/C++ API for thread management and synchronization.
○ Key functions:
■ pthread_create(): Create a new thread.
■ pthread_exit(): Terminate the calling thread.
■ pthread_join(): Wait for a specific thread to finish executing.
■ pthread_mutex_lock(): Acquire a lock (mutex).
■ pthread_mutex_unlock(): Release a lock.

11
Q

What is a deadlock? Describe a classic scenario that can lead to a deadlock.

A

○ Deadlock: A situation in concurrent programming where two or more threads are blocked indefinitely, each waiting for the other to release the resources that it needs.

○ Classic Example:
■ Thread A acquires lock 1 and then tries to acquire lock 2.
■ Thread B acquires lock 2 and then tries to acquire lock 1.
■ Both threads are now blocked, as each thread holds a lock the other one needs.

12
Q

Explain the purpose and use of condition variables in thread synchronization. When are they particularly useful?

A

○ Purpose: Condition variables are used to synchronize threads based on a particular condition becoming true.

○ Use Cases: Useful when a thread needs to wait for another thread to perform some action or change the state of shared data before it can proceed.

○ Example: A thread might use a condition variable to wait until a queue has data before trying to dequeue an item. This prevents the thread from busy-waiting.

13
Q

Why must condition variables always be used in conjunction with a mutex?

A

○ Condition variables themselves do not provide mutual exclusion. Without a mutex, race conditions can arise when accessing the shared data or the condition variable itself.

○ The mutex ensures that the condition is checked and the wait operation (if needed) is performed atomically. This prevents situations where a thread might miss a signal or where multiple threads wake up spuriously.

14
Q

Explain the issue of spurious wake-ups with condition variables. How do you protect against them?

A

○ Spurious Wake-up: A thread waiting on a condition variable might sometimes wake up even if the condition hasn’t been signaled. This can happen due to various reasons, including signal delivery mechanisms or thread scheduling.

○ Protection: Always retest the condition in a loop after waking up from pthread_cond_wait(). This ensures that the thread only proceeds when the condition is genuinely met, preventing unexpected behavior.

15
Q

What is a semaphore? Describe its two primary operations.

A

○ Semaphore: A synchronization primitive that maintains a non-negative integer counter. It’s used to control access to a shared resource by multiple threads.

○ Operations:
■ sem_wait() (or wait): Attempts to decrement the counter. If the counter is 0, the thread blocks until it’s greater than 0.
■ sem_post() (or signal): Increments the counter. If any threads are blocked on the semaphore, one is unblocked, allowing it to decrement the counter and proceed.

16
Q

Give examples of common use cases for semaphores in concurrent programming.

A

○ Mutual Exclusion: A semaphore initialized to 1 can act as a lock, ensuring that only one thread can access a critical section at a time.

○ Bounding Concurrency: You can use semaphores to limit the number of threads that can access a resource concurrently. For instance, you might use a semaphore to restrict the number of threads accessing a database connection pool.

○ Producer-Consumer Problems: Semaphores are often used in producer-consumer scenarios to signal between threads. A producer thread might signal a semaphore when new data is available, and a consumer thread might wait on that semaphore to be signaled before processing data.

17
Q

Why are the wait and post (or signal) operations on a semaphore considered atomic?

A

○ Atomicity is crucial for semaphores to function correctly.

○ The modification to the semaphore’s counter and the checks to see if a thread needs to block or be woken up must happen as a single, uninterruptible unit.

○ This prevents race conditions where multiple threads might try to modify the counter or the waiting thread queue simultaneously.

18
Q

Explain the challenges of implementing synchronization primitives like locks and semaphores when restricted to only atomic load and store operations. Use the “Too Much Milk” analogy to discuss different attempts and their shortcomings.

A

○ The Problem: Imagine roommates Alice and Bob who want to avoid buying extra milk. Atomic load (checking the fridge) and store (leaving a note) are the only actions they can perform without interruption.

○ Attempt 1 (Simple Note): They try leaving a note if they go out for milk. This fails because one roommate might see “no milk” and be interrupted before leaving a note, leading to the other roommate also going out for milk.

○ Attempt 2 (Named Notes): Using notes with their names introduces a new deadlock scenario where both might see the other’s note and wait indefinitely, resulting in no milk at all.

○ Attempt 3 (Busy-Waiting): One roommate waits for the other’s note to disappear before checking for milk. This avoids both previous problems but introduces “busy-waiting,” where a roommate continuously checks for a condition instead of doing other tasks, wasting CPU cycles (analogous to a thread repeatedly checking a variable instead of yielding to other threads).

19
Q

How can disabling interrupts be used to implement locks on a single-processor system? What are the advantages and potential problems?

A

○ The Idea: On a single-processor system, if interrupts are disabled, the currently running code cannot be preempted, effectively creating an atomic section.

○ Implementation: A lock can be implemented by disabling interrupts, checking the lock variable, and then re-enabling interrupts. If the lock is busy, the thread goes to sleep.

○ Advantages: Simple to implement and provides mutual exclusion as long as interrupts are disabled.
○ Problems:
■ Not Scalable: Disabling interrupts for too long is inefficient, especially on multi-processor systems where other cores remain idle.
■ Kernel-Only (Usually): Disabling interrupts for too long is dangerous in user mode; it’s typically a privileged operation restricted to the operating system kernel.
■ Deadlock Risk: A thread holding a lock and being interrupted before going to sleep can lead to a deadlock situation. This is because no other thread can acquire the lock until the interrupted thread runs again, but the interrupted thread might be waiting for an event that can only occur when another thread runs.
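The implementation idea can be sketched in pseudocode (a kernel-level sketch for a uniprocessor; the names FREE, BUSY, and the wait queue are illustrative):

```
acquire(lock):
    disable interrupts                      // no preemption from here on
    if lock is FREE:
        lock = BUSY
    else:
        add this thread to lock's wait queue
        mark this thread as sleeping; switch to the next ready thread
    enable interrupts

release(lock):
    disable interrupts
    if lock's wait queue is non-empty:
        move one waiting thread to the ready queue   // hand the lock off
    else:
        lock = FREE
    enable interrupts
```

Note that interrupts are disabled only for the few instructions that inspect and update the lock, not for the whole critical section, which limits (but does not eliminate) the problems listed above.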

20
Q

Why are hardware-supported atomic instructions like Test-and-Set or Compare-and-Swap essential for efficient synchronization on modern multi-core systems?

A

○ Out-of-Order Execution: Modern CPUs use complex techniques to improve performance, such as executing instructions out of order. While this is usually invisible to a single thread, it can lead to unexpected behavior in multithreaded programs, breaking the synchronization guarantees of simple load and store operations.

○ Hardware Atomicity: Instructions like Test-and-Set or Compare-and-Swap provide hardware-level atomicity. These operations are guaranteed to complete without interruption, even in the presence of out-of-order execution. This enables the creation of more complex and efficient synchronization primitives that work reliably on modern multi-core processors.