Concurrency Flashcards

1
Q

How do modules interact in the Shared Memory Model?

A

They read and write shared objects in memory.

2
Q

What are the risks in the Shared Memory Model?

A

Race conditions and incorrect states can occur.

3
Q

How do modules communicate in the Message Passing Model?

A

They pass messages through channels.

4
Q

What’s the benefit of the Message Passing Model?

A

It avoids shared mutable state, although race conditions around message ordering are still possible.

5
Q

Given Moore’s Law limitations, what should we consider?

A

Processor clock speeds have plateaued, so further performance gains must come from exploiting concurrency across multiple cores rather than from faster single cores.

6
Q

What is a Race Condition?

A

A race condition occurs when multiple threads or processes access shared resources concurrently, leading to unpredictable behavior that depends on the order of execution.
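
A minimal sketch of the problem (the class and numbers below are illustrative, not from the card): two threads increment a shared counter without synchronization, so updates are lost and the final value is usually below 200000.

public class RaceDemo {
    private static int counter = 0; // shared mutable state

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write: not atomic, so updates can be lost
            }
        };
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Expected 200000, got " + counter);
    }
}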

7
Q

How can we prevent Race Conditions?

A
  1. Use synchronization mechanisms such as locks and semaphores.
  2. Use atomic operations to ensure exclusive access to shared resources.
  3. Design thread-safe data structures.
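
For instance (an assumed example, not from the card), the lost-update problem shown earlier disappears if the counter is updated atomically:

import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    private final AtomicInteger counter = new AtomicInteger();

    public void increment() {
        counter.incrementAndGet(); // atomic read-modify-write, no lock needed
    }

    public int value() {
        return counter.get();
    }
}
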
8
Q

What is Deadlock?

A

Deadlock happens when two or more threads are blocked waiting for each other to release resources they hold. None can proceed, resulting in a standstill.

9
Q

How do we avoid Deadlocks?

A

Implement
* locking hierarchies,
* timeouts (for example, acquiring each lock with tryLock and a timeout, as sketched below), or
* resource allocation strategies
to prevent deadlocks from occurring.
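
A hedged sketch of the timeout approach (the class and method names are ours, not from the card): each lock is acquired with tryLock and a timeout, and the operation backs off instead of blocking indefinitely.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimeoutLocking {
    private final ReentrantLock lockA = new ReentrantLock();
    private final ReentrantLock lockB = new ReentrantLock();

    public boolean transfer() throws InterruptedException {
        if (lockA.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                if (lockB.tryLock(100, TimeUnit.MILLISECONDS)) {
                    try {
                        // ... critical section using both resources ...
                        return true;
                    } finally {
                        lockB.unlock();
                    }
                }
            } finally {
                lockA.unlock();
            }
        }
        return false; // caller can retry later, so no thread waits forever
    }
}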

10
Q

What is Thread Starvation?

A

Thread starvation occurs when a thread doesn’t get enough CPU time due to priority inversion or resource contention.

11
Q

What is Priority Inversion?

A

Priority inversion is a phenomenon in multi-threading where a higher-priority thread is forced to wait for a lower-priority thread to release a resource. This happens because the lower-priority thread holds a resource that the higher-priority thread needs, and an unrelated medium-priority thread preempts the lower-priority thread, causing the higher-priority thread to wait even longer.

12
Q

How can we address Priority Inversion?

A

Priority Inheritance Protocol (PIP): Temporarily raises the priority of the lower-priority thread holding the resource to match that of the highest-priority thread waiting for it.
Priority Ceiling Protocol (PCP): Assigns a priority ceiling to each resource, ensuring that a thread can only acquire the resource if its priority is higher than the ceiling.

Both of these approaches prevent preemption of the lower-priority thread (L) by a medium-priority thread (M), which would otherwise cause the higher-priority thread (H) to wait even longer.

13
Q

What are Locks and Mutexes?

A

Locks (or mutexes) provide mutual exclusion, allowing only one thread to access a critical section at a time.

14
Q

What is a Read-Write Lock?

A

A read-write lock allows multiple threads to read simultaneously but ensures exclusive access for writing.
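
A small illustrative sketch using Java’s ReentrantReadWriteLock (the class below is an assumed example): many readers may hold the read lock at once, but the write lock is exclusive.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Cache {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rwLock.readLock().lock();      // shared: concurrent readers allowed
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int newValue) {
        rwLock.writeLock().lock();     // exclusive: blocks readers and writers
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}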

15
Q

What are Condition Variables?

A

Condition variables allow threads to wait for specific conditions (e.g. a shared resource becoming available).

16
Q

What is the Producer-Consumer Problem?

A

It’s a classic synchronization problem where producers add items to a shared buffer and consumers remove them.

17
Q

How can we solve the Producer-Consumer Problem?

A
  1. Use semaphores or monitors to coordinate producers and consumers.
  2. Implement bounded buffers to prevent overflow or underflow.
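
A minimal sketch (assumed example, not from the card) using java.util.concurrent’s bounded BlockingQueue, where put() blocks when the buffer is full and take() blocks when it is empty:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10); // bounded buffer

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    buffer.put(i); // blocks when the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    System.out.println("consumed " + buffer.take()); // blocks when empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
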
18
Q

What are Thread Pools?

A

Thread pools manage a fixed set of worker threads, improving efficiency by reusing threads instead of creating new ones.
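
A small illustrative sketch (assumed example): a fixed pool of four workers is reused across twenty tasks.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 20; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown(); // stop accepting new tasks; submitted tasks still finish
    }
}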

19
Q

What is Parallelism vs. Concurrency?

A

Parallelism involves executing tasks simultaneously (e.g. on multiple cores), while concurrency is about structuring and managing multiple tasks whose executions overlap in time.

20
Q

What is the Actor Model?

A

The Actor Model represents concurrent systems as actors (independent entities) that communicate via messages.

21
Q

What are Fork-Join Frameworks?

A

Fork-Join frameworks (e.g. Java’s ForkJoinPool) divide tasks into smaller subtasks, execute them concurrently, and then combine the results.
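
A hedged sketch (the task and threshold below are illustrative, not from the card): summing an array by recursively forking subtasks.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= 1_000) {             // small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // combine the results
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        long total = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(total); // 1000000
    }
}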

22
Q

What is Amdahl’s Law?

A

Amdahl’s Law quantifies the speedup achievable by parallelizing a program based on the fraction of serial code.
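
In its standard form (not quoted from the card): with parallelizable fraction p and n processors,

Speedup(n) = 1 / ((1 - p) + p / n)

so the speedup is bounded by 1 / (1 - p) no matter how many processors are added; for example, if 10% of the program is serial (p = 0.9), the speedup can never exceed 10x.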

23
Q

How does the operating system achieve concurrency through time-slicing?

A

The operating system switches between threads frequently and unpredictably, giving each a share of CPU time so that they appear to execute concurrently.

24
Q

What are some synchronization primitives used for mutual exclusion?

A

Examples include
- locks
- monitors
- semaphores
- mutexes.

25
Q

Why is programming for mutual exclusion error-prone?

A

Ensuring exclusive access to shared state or memory is easy to get wrong: missing or misplaced synchronization causes correctness issues, while overly coarse locking creates performance bottlenecks.

26
Q

What is the impact of thread switching on overall throughput?

A

Frequent thread switching requires saving and restoring thread state (context switching), which is time-consuming and reduces throughput.

27
Q

How can we prevent the “hold-and-wait” condition in concurrent systems?

A

Processes should request all resources before execution begins to avoid holding resources while waiting.

28
Q

What does “no preemption” mean in the context of preventing deadlock?

A

To break this condition, a process releases the resources it already holds if a newly requested resource is unavailable, rather than keeping them while it waits.

29
Q

What distinguishes livelock from deadlock?

A

Livelock involves processes continuously changing state in response to each other without making progress, whereas in deadlock processes are blocked and do nothing.

30
Q

What is starvation in concurrent systems?

A

Starvation occurs when a process cannot access shared resources and cannot make progress.

31
Q

How can we prevent starvation?

A

Use a priority queue with aging to increase the priority of waiting processes over time,
or use the fairness policy of the Java Lock API (e.g. a fair ReentrantLock).

32
Q

What is the Actor Model in concurrent programming?

A

The Actor Model treats everything as an actor, allowing actors to pass messages to each other.

33
Q

Which programming languages/frameworks use Actor-Based Concurrency?

A

Akka (a toolkit that runs on the JVM and is commonly used from Scala and Java) is a well-known framework implementing the Actor Model.

34
Q

What is event-based concurrency in JavaScript?

A

Event-based concurrency in JavaScript addresses the costliness of spawning and operating native threads. It involves an event loop that works with an event provider and a set of event handlers. When an event arrives, the event loop dispatches it to the appropriate handler.

35
Q

What are non-blocking algorithms and how do they relate to concurrency?

A

Non-blocking algorithms utilize the compare-and-swap (CAS) atomic primitive provided by hardware. This allows for atomic updates without requiring explicit locks. Examples include Java’s atomic classes such as AtomicBoolean, AtomicInteger, AtomicLong, and AtomicReference.

36
Q

What are green threads and how do they impact concurrency?

A

Green threads are scheduled by the runtime library rather than natively by the operating system. While they don’t solve all thread-based concurrency issues, they can improve performance in certain cases. (Project Loom in Java)

37
Q

What are goroutines in Go and why are they lightweight?

A

Goroutines are lightweight threads in Go. They can run concurrently with other functions or methods and occupy only a small stack size. Goroutines are multiplexed onto a limited number of native threads and communicate via channels, avoiding shared memory access.

38
Q

What are fibers in Java and why were they proposed?

A

Fibers are a proposed concurrency abstraction in Java. Initially Java had green-thread support (on Solaris), but it was discontinued. Project Loom aims to introduce continuations and fibers, potentially changing how we write concurrent Java applications.

39
Q

How does Node.js handle concurrency at the web layer?

A

Node.js implements an event loop over a single thread. Callbacks handle blocking operations like I/O asynchronously.

40
Q

What approach does nginx take for concurrency?

A

nginx uses an asynchronous, event-driven approach. It operates with a master process and single-threaded worker processes, each handling many connections with an event loop.

41
Q

What is Akka and how does it handle concurrency in the application layer?

A

Akka is a toolkit for building highly concurrent and distributed applications on the JVM. It follows the actor model for handling concurrency.

42
Q

What is Project Reactor and how does it support non-blocking applications?

A

Project Reactor is a reactive library for building non-blocking applications on the JVM. It adheres to the Reactive Streams specification, focusing on efficient message passing and demand management. Popular frameworks like Spring WebFlux and RSocket build on Reactor.

43
Q

What are the characteristics of Cassandra in terms of concurrency?

A

Cassandra is a NoSQL distributed database known for
* high availability
* scalability and
* fault tolerance.

However, it doesn’t provide ACID transactions spanning multiple tables.

44
Q

What role does Kafka play in data layer concurrency?

A

Kafka is a distributed streaming platform that stores records in topics. It offers linear horizontal scalability for both producers and consumers while ensuring high reliability.

45
Q

What is the purpose of thread synchronization in concurrent programming?

A

Thread synchronization ensures that multiple threads coordinate their actions to avoid data races, maintain consistency, and prevent conflicts when accessing shared resources.

46
Q

How does the “happens-before” relationship impact memory consistency in Java?

A

The “happens-before” relationship defines the order in which memory operations occur. It ensures that changes made by one thread are visible to other threads, maintaining memory consistency.

47
Q

What are the advantages of using thread pools over creating threads dynamically?

A

Thread pools reuse existing threads, reducing the overhead of thread creation and destruction. They manage the thread lifecycle, limit resource usage, and improve performance.

48
Q

Explain the concept of thread-local storage (TLS) and its use cases?

A

TLS allows each thread to have its own private data storage. It’s useful for maintaining per-thread state such as thread-specific variables or caching.
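
A small illustrative sketch (assumed example): each thread gets its own SimpleDateFormat, so a non-thread-safe object can be reused without synchronization.

import java.text.SimpleDateFormat;
import java.util.Date;

public class TlsDemo {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String today() {
        return FORMAT.get().format(new Date()); // uses the calling thread's own instance
    }
}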

49
Q

What is a critical section and how do you protect it from concurrent access?

A

A critical section is a part of code that accesses shared resources. Protect it using locks (e.g. mutexes or semaphores) to ensure exclusive access by one thread at a time.

50
Q

Compare and contrast mutexes and semaphores in terms of usage and behavior?

A

Mutexes provide exclusive access (a 1:1 relationship), while semaphores allow multiple threads to access a resource (an n:1 relationship). Semaphores can also act as counting mechanisms.

51
Q

How does the “volatile” keyword affect memory visibility in Java?

A

The “volatile” keyword ensures that reads and writes to a variable go directly to main memory, preventing certain optimizations (caching, reordering) and ensuring visibility across threads.
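
A minimal sketch (assumed example): without volatile the worker might never observe the updated flag; declaring it volatile guarantees the write becomes visible.

public class VolatileFlag {
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // ... do work ...
            }
            System.out.println("worker stopped");
        });
        worker.start();
        Thread.sleep(100);
        running = false; // this write is visible to the worker thread
        worker.join();
    }
}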

52
Q

What is Amdahl’s Law and how does it relate to parallelization and speedup?

A

Amdahl’s Law quantifies the potential speedup from parallelization. It states that the overall speedup is limited by the fraction of serial (non-parallelizable) code.

53
Q

What are the challenges of parallelizing code for multi-core processors?

A

Challenges include
- load balancing
- data dependencies
- cache coherence and
- scalability bottlenecks.

54
Q

How does the “fork-join” framework work in Java for parallel tasks?

A

The fork-join framework recursively splits tasks into smaller subtasks, executes them in parallel, and combines the results. It’s useful for divide-and-conquer algorithms.

55
Q

What is a data race and how can you detect it during program execution?

A

A data race occurs when two threads access shared data concurrently without proper synchronization. Tools like static analyzers or dynamic race detectors can detect data races.

56
Q

Explain the concept of thread priority and its impact on scheduling?

A

Thread priority influences the order in which threads are scheduled. Higher-priority threads tend to get more CPU time, but relying solely on priorities can lead to priority inversion.

57
Q

What are the differences between asynchronous and synchronous programming models?

A

Asynchronous programming allows non-blocking execution, while synchronous (blocking) programming waits for results. Asynchronous code often uses callbacks, promises, or async/await.

58
Q

How does the Actor model handle concurrency and what are its key components?

A

The Actor model represents concurrency using independent actors that communicate via messages.

Each actor:
* owns its own state, and
* processes messages asynchronously.

59
Q

What is the purpose of the “yield” operation in cooperative multitasking?

A

The “yield” operation voluntarily gives up the CPU to allow other threads or tasks to run. It’s used in cooperative multitasking to prevent monopolization.

60
Q

Discuss the impact of cache coherence protocols on shared-memory systems?

A

Cache coherence protocols maintain consistency across caches in a multi-processor system. They ensure that all processors see a consistent view of memory.

61
Q

How can you use atomic operations to implement lock-free data structures?

A

Atomic operations (e.g. compare-and-swap) allow non-blocking updates to shared data. Lock-free data structures use these operations to avoid traditional locks.
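
A hedged sketch (assumed example): a lock-free counter built on compareAndSet; the loop retries until the CAS succeeds, so no thread ever blocks holding a lock.

import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) { // CAS: succeeds only if unchanged
                return next;
            }
            // another thread won the race; retry with the fresh value
        }
    }
}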

62
Q

What is a semaphore?

A

Here’s a simple analogy: Imagine a nightclub with a maximum capacity. The bouncer (semaphore) allows a fixed number of guests (threads) inside at once. As guests leave, new ones can enter, maintaining the limit. In code, you can create and use semaphores to manage concurrent access to resources like databases or other shared components.

63
Q

What is a mutex?

A

Essentially, a mutex ensures that only one thread can access a critical section of code or shared resource at a time, preventing data corruption or race conditions.

64
Q

Briefly explain the difference between preemptive and cooperative threading

A

Preemptive: threads do not decide when to run and are forced to share the CPU.

Cooperative: each thread, once running, decides for how long to keep the CPU, and (crucially) when it is time to give it up so that another thread can use it.

65
Q

Fully explain the difference between preemptive and cooperative threads

A

Preemptive

It means that threads are not in control of when and/or for how long they are going to use the CPU and run. It is the scheduler (a component of the OS) that decides at any moment which thread can run and which has to sleep. You have no strong guarantees about when a thread will next run, or for how long. It is completely up to the scheduler.

Cooperative

E.g. yield() in Java. In cooperative multitasking, the scheduler has no say in when a thread can run. Each thread decides for how long it keeps the CPU. If it decides not to share the CPU with any other thread, then no other thread will run, causing what is known as starvation.

Note that stopping one thread and starting another incurs a certain amount of overhead. It means that you spend time and resources not executing code from your tasks, but purely for the sake of sharing the CPU. In certain real-time, low-latency applications (like high-frequency trading), this can be quite unacceptable.

66
Q

What are Reentrant Locks?

A

A thread that has already acquired the Lock object by calling lock() can keep calling lock() to re-acquire it. In simple terms, there is a counter associated with the Lock object that counts the number of times the lock has been acquired by the thread that holds it; the lock is fully released only after a matching number of unlock() calls.
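
A small illustrative sketch (assumed example): the same thread re-acquires the lock inside a nested call, and the hold count goes from 1 to 2 and back down as each unlock() runs.

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();
        try {
            inner(); // re-enters the same lock without blocking
        } finally {
            lock.unlock();
        }
    }

    private void inner() {
        lock.lock(); // hold count becomes 2 for the owning thread
        try {
            System.out.println("hold count: " + lock.getHoldCount());
        } finally {
            lock.unlock();
        }
    }
}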

67
Q

What are condition variables?

A

Often a thread wants to perform certain operations only when a predicate/condition holds true. The thread cannot continue to perform meaningful computation until the condition becomes true. We can use condition variables in such cases.

68
Q

What are Lock objects?

A

A lock object can be used to synchronize access to certain lines of code. Only the thread that has obtained the lock object is allowed to execute those lines; other threads that try to execute them but fail to acquire the lock object are blocked.

69
Q

We can use semaphores to limit the __ of concurrent ___ accessing a specific ___.

__() – returns true if a permit is available immediately and acquires it, otherwise returns false (whereas acquire() blocks until a permit is available)
__() – releases a permit
__() – returns the number of permits currently available

A

We can use semaphores to limit the number of concurrent threads accessing a specific resource.

In the following example, we will implement a simple login queue to limit the number of users in the system:

class LoginQueueUsingSemaphore {

private Semaphore semaphore;

public LoginQueueUsingSemaphore(int slotLimit) {
    semaphore = new Semaphore(slotLimit);
}

boolean tryLogin() {
    return semaphore.tryAcquire();
}

void logout() {
    semaphore.release();
}

int availableSlots() {
    return semaphore.availablePermits();
}

}
Notice how we used the following methods:

tryAcquire() – returns true if a permit is available immediately and acquires it, otherwise returns false (whereas acquire() blocks until a permit is available)
release() – releases a permit
availablePermits() – returns the number of permits currently available

70
Q

Apache Commons TimedSemaphore. TimedSemaphore allows a number of permits like a simple Semaphore, but only within a given __; after this period the timer resets and all permits are __.

A

Apache Commons TimedSemaphore. TimedSemaphore allows a number of permits like a simple Semaphore, but only within a given period of time; after this period the timer resets and all permits are released.

71
Q

Simply put, a __ is a more flexible and sophisticated thread synchronization mechanism than the standard __ block.

There are a few differences between using a synchronized block and using the Lock API:

A synchronized block is fully contained within a __. With the Lock API, the lock() and unlock() operations can be placed in separate __.
A synchronized block doesn’t support __. Any thread can acquire the lock once released, and no preference can be specified. We can achieve fairness within the Lock API by specifying the __. It makes sure that the longest-waiting thread is given access to the lock.
A thread gets blocked if it can’t get access to the synchronized block. The Lock API provides the tryLock() method: the thread acquires the lock only if it’s available and not held by any other thread. This reduces __.
A thread that is waiting to acquire access to a synchronized block can’t be interrupted. The Lock API provides a method __() that can be used to interrupt the thread while it’s __.

A

Simply put, a lock is a more flexible and sophisticated thread synchronization mechanism than the standard synchronized block.

There are a few differences between using a synchronized block and using the Lock API:

A synchronized block is fully contained within a method. With the Lock API, the lock() and unlock() operations can be placed in separate methods.
A synchronized block doesn’t support fairness. Any thread can acquire the lock once released, and no preference can be specified. We can achieve fairness within the Lock API by specifying the fairness property. It makes sure that the longest-waiting thread is given access to the lock.
A thread gets blocked if it can’t get access to the synchronized block. The Lock API provides the tryLock() method: the thread acquires the lock only if it’s available and not held by any other thread. This reduces the blocking time of threads waiting for the lock.
A thread that is waiting to acquire access to a synchronized block can’t be interrupted. The Lock API provides a method lockInterruptibly() that can be used to interrupt the thread while it’s waiting for the lock.

72
Q

the Lock interface:

  • void __() – Acquire the lock if it’s available. If the lock isn’t available, a thread gets blocked until the lock is released.
  • void __() – This is similar to the lock(), but it allows the blocked thread to be interrupted and resume the execution through a thrown java.lang.InterruptedException.
  • boolean __() – This is a nonblocking version of the lock() method. It attempts to acquire the lock immediately, returning true if locking succeeds.
  • boolean __(long timeout, TimeUnit timeUnit) – This is similar to tryLock(), except it waits up to the given timeout before giving up trying to acquire the lock.
  • void __() unlocks the Lock instance.
A

the Lock interface:

  • void lock() – Acquire the lock if it’s available. If the lock isn’t available, a thread gets blocked until the lock is released.
  • void lockInterruptibly() – This is similar to the lock(), but it allows the blocked thread to be interrupted and resume the execution through a thrown java.lang.InterruptedException.
  • boolean tryLock() – This is a nonblocking version of the lock() method. It attempts to acquire the lock immediately, returning true if locking succeeds.
  • boolean tryLock(long timeout, TimeUnit timeUnit) – This is similar to tryLock(), except it waits up to the given timeout before giving up trying to acquire the lock.
  • void unlock() unlocks the Lock instance.
73
Q

Lock Implementations:

  • __ class implements the Lock interface. It offers the same concurrency and memory semantics as the implicit monitor lock accessed using synchronized methods and statements, with extended capabilities.
  • __ class implements the ReadWriteLock interface.
    Read Lock – If no thread acquired the write lock or requested for it, multiple threads can acquire the read lock.
    Write Lock – If no threads are reading or writing, only one thread can acquire the write lock.
  • __ was introduced in Java 8. It also supports both read and write locks.
A

Lock Implementations:

  • ReentrantLock class implements the Lock interface. It offers the same concurrency and memory semantics as the implicit monitor lock accessed using synchronized methods and statements, with extended capabilities.
  • ReentrantReadWriteLock class implements the ReadWriteLock interface.
    Read Lock – If no thread acquired the write lock or requested for it, multiple threads can acquire the read lock.
    Write Lock – If no threads are reading or writing, only one thread can acquire the write lock.
  • StampedLock was introduced in Java 8. It also supports both read and write locks.
74
Q

The __ interface provides the ability for a thread to wait for some condition to occur while executing the critical section.

This can occur when a thread acquires access to the critical section but doesn’t have the necessary condition to perform its operation. For example, a __

Traditionally Java provides wait(), notify() and notifyAll() methods for thread intercommunication.

Conditions have similar mechanisms, but we can also specify __.

A

The Condition interface provides the ability for a thread to wait for some condition to occur while executing the critical section.

This can occur when a thread acquires access to the critical section but doesn’t have the necessary condition to perform its operation. For example, a reader thread can get access to the lock of a shared queue that doesn’t yet have any data to consume.

Traditionally Java provides wait(), notify() and notifyAll() methods for thread intercommunication.

Conditions have similar mechanisms, but we can also specify multiple conditions on the same lock (see the sketch below).
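
A hedged sketch (assumed example, not from the card): a tiny bounded buffer where producers wait on a notFull condition and consumers wait on a notEmpty condition, both created from the same lock.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity = 10;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();          // wait until there is room
            }
            items.addLast(item);
            notEmpty.signal();            // wake up a waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();         // wait until there is data
            }
            T item = items.removeFirst();
            notFull.signal();             // wake up a waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}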

75
Q

List of implementations of fibers in Java
Fibers are instead a form of __ multitasking, meaning that a running thread will continue to run until it signals that it can yield to another.

4.1. __
It works by having a Java agent that needs to run alongside the application. The use of a Java agent means that there are no special build steps needed.

4.2. __
Using bytecode weaving instead of a Java agent. This means that it can work in more places, but it makes the build process more complicated.

4.3. __
Project Loom is an experiment by the OpenJDK project to add fibers to the JVM itself.

A

List of implementations of fibers in Java
Fibers are instead a form of cooperative multitasking, meaning that a running thread will continue to run until it signals that it can yield to another.

4.1. Quasar
It works by having a Java agent that needs to run alongside the application. The use of a Java agent means that there are no special build steps needed.

4.2. Kilim
Using bytecode weaving instead of a Java agent. This means that it can work in more places, but it makes the build process more complicated.

4.3. Project Loom
Project Loom is an experiment by the OpenJDK project to add fibers to the JVM itself.

76
Q

Fair locks, such as Java’s __ with the fairness parameter set to true, ensure that the __ thread gains access to resources, preventing __

A

Fair locks, such as Java’s ReentrantLock with the fairness parameter set to true, ensure that the longest waiting thread gains access to resources, preventing starvation.

77
Q

What are lock hierarchies?

A

Order the mutexes by logically assigning numbers to them.

This involves structuring your locks in a hierarchy, such as a tree, where each node represents a lock. When you need to lock a node, you acquire locks from the root down to the target node.
By following a strict order, you prevent circular wait conditions, which are a primary cause of deadlocks.
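
A minimal sketch (assumed example): every code path acquires the locks in the same fixed order (lockA before lockB), which rules out the circular wait needed for deadlock.

import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocks {
    private final ReentrantLock lockA = new ReentrantLock(); // level 1 in the hierarchy
    private final ReentrantLock lockB = new ReentrantLock(); // level 2 in the hierarchy

    public void doWork() {
        lockA.lock();          // always acquire the lower-numbered lock first
        try {
            lockB.lock();
            try {
                // ... critical section that needs both resources ...
            } finally {
                lockB.unlock();
            }
        } finally {
            lockA.unlock();
        }
    }
}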

78
Q

Complete the following async C# code

static __ __ Main(string[] args)
{
LongProcess();
}

static __ void LongProcess()
{

__ Task.Delay(4000); // hold execution for 4 seconds
}

A

static async Task Main(string[] args)
{
    LongProcess();

    ShortProcess();
}

static async void LongProcess()
{
    Console.WriteLine("LongProcess Started");

    await Task.Delay(4000); // hold execution for 4 seconds

    Console.WriteLine("LongProcess Completed");
}

static void ShortProcess()
{
    Console.WriteLine("ShortProcess Started");

    // do something here

    Console.WriteLine("ShortProcess Completed");
}
79
Q

What is a reentrant lock?

A

A reentrant lock is one where a process can claim the lock multiple times without blocking on itself. It’s useful in situations where it’s not easy to keep track of whether you’ve already grabbed a lock. If a lock is non-reentrant, you could grab the lock, then block when you go to grab it again, effectively deadlocking your own process.

80
Q

What is the StampedLock implementation in Java?

A

Lock acquisition methods return a stamp that is used to release the lock or to check whether the lock is still valid.
Another feature provided by StampedLock is optimistic locking: most of the time, read operations don’t need to wait for write operations to complete, so a full-fledged read lock isn’t required.