MULTITHREADING Flashcards

1
Q

What’s the difference between a thread and a process?

A

Processes and threads are related to each other but are fundamentally different.

A process can be thought of as an instance of a program in execution. A process is an independent entity to
which system resources (e.g., CPU time and memory) are allocated. Each process is executed in a separate
address space, and one process cannot access the variables and data structures of another process. If a
process wishes to access another process’ resources, inter-process communications have to be used. These
include pipes, files, sockets, and other forms.
A thread exists within a process and shares the process’ resources (including its heap space). Multiple
threads within the same process will share the same heap space. This is very different from processes, which
cannot directly access the memory of another process. Each thread still has its own registers and its own
stack, but other threads can read and write the heap memory.
A thread is a particular execution path of a process.

2
Q

Implement the Dining Philosophers problem:
In the famous dining philosophers problem, a group of philosophers sits around a circular table with one chopstick between each pair of them. A philosopher needs both
chopsticks to eat, and always picks up the left chopstick before the right one. A deadlock could
potentially occur if all the philosophers reached for their left chopstick at the same time. Using threads
and locks, implement a simulation of the dining philosophers problem that prevents deadlocks.

A

https://app.gitbook.com/s/LtOCx4PXhezd4jPeeJTa/
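A minimal sketch of one standard deadlock-free solution: instead of every philosopher taking the left chopstick first, each philosopher locks the lower-numbered chopstick first. This global lock ordering breaks the circular wait, so a deadlock cycle can never form. (Class and variable names are illustrative; another common approach is tryLock with back-off, or a semaphore admitting at most four philosophers at once.)

import java.util.concurrent.locks.ReentrantLock;

public class DiningPhilosophers {

    public static void main(String[] args) {
        int n = 5;
        ReentrantLock[] chopsticks = new ReentrantLock[n];
        for (int i = 0; i < n; i++) {
            chopsticks[i] = new ReentrantLock();
        }

        for (int i = 0; i < n; i++) {
            final int id = i;
            int left = id;
            int right = (id + 1) % n;
            // Always lock the lower-numbered chopstick first (resource
            // ordering), so a cycle of waiting philosophers cannot form.
            final ReentrantLock first = chopsticks[Math.min(left, right)];
            final ReentrantLock second = chopsticks[Math.max(left, right)];

            new Thread(() -> {
                for (int meal = 0; meal < 3; meal++) {
                    first.lock();
                    try {
                        second.lock();
                        try {
                            System.out.println("Philosopher " + id + " is eating");
                        } finally {
                            second.unlock();
                        }
                    } finally {
                        first.unlock();
                    }
                }
            }).start();
        }
    }
}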

3
Q

What is Concurrency model?

A

A concurrency model in Java defines how the threads in a system communicate and collaborate to execute large tasks, with proper synchronization to prevent partial reads or writes of shared values.

4
Q

List the concurrency models that you know.

jenkov.com/tutorials/java-concurrency/concurrency-models.html
stackoverflow.com/questions/31627441/java-support-for-three-different-concurrency-models

A

1) Parallel workers
The first concurrency model is what I call the parallel worker model. Incoming jobs are assigned to different workers.

2) Assembly line
The workers are organized like workers at an assembly line in a factory. Each worker only performs a part of the full job. When that part is finished the worker forwards the job to the next worker.
Each worker is running in its own thread, and shares no state with other workers. This is also sometimes referred to as a shared nothing concurrency model.

3) Functional Parallelism
The basic idea of functional parallelism is that you implement your program using function calls. Functions can be seen as “agents” or “actors” that send messages to each other, just like in the assembly line concurrency model (AKA reactive or event driven systems). When one function calls another, that is similar to sending a message.

5
Q

Describe Parallel Workers concurrency model
https://jenkov.com/tutorials/java-concurrency/concurrency-models.html

A

Incoming jobs are assigned to different workers.

In the parallel workers concurrency model a delegator distributes the incoming jobs to different workers. Each worker completes the full job. The workers work in parallel, running in different threads, and possibly on different CPUs.

If the parallel workers model was implemented in a car factory, each car would be produced by one worker. The worker would get the specification of the car to build, and would build everything from start to end.

The parallel workers concurrency model is the most commonly used concurrency model in Java applications (although that is changing). Many of the concurrency utilities in the java.util.concurrent Java package are designed for use with this model. You can also see traces of this model in the design of the Java Enterprise Edition application servers.

The parallel workers concurrency model can be designed to use either shared state or separate state, meaning the workers either have access to some shared state (shared objects or data), or they have no shared state.

6
Q

List the advantages of the parallel workers model

A

The advantage of the parallel workers concurrency model is that it is easy to understand and implement. To increase the parallelization level of the application you just add more workers.

For instance, if you were implementing a web crawler, you could crawl a certain number of pages with different numbers of workers and see which number gives the shortest total crawl time (meaning the highest performance). Since web crawling is an IO intensive job you will probably end up with a few threads per CPU / core in your computer. One thread per CPU would be too little, since it would be idle a lot of the time while waiting for data to download.

7
Q

What are the disadvantages of Parallel Workers?

A

[1] Shared state can get complex
If the workers need access to some kind of shared data, either in memory or in a shared database, managing correct concurrent access can get complex.
Some of this shared state is in communication mechanisms like job queues, but some of it is business data, data caches, connection pools to the database, etc.

As soon as shared state sneaks into the parallel workers concurrency model it starts getting complicated. The threads need to access the shared data in a way that makes sure that changes by one thread are visible to the others (pushed to main memory and not just stuck in the CPU cache of the CPU executing the thread). Threads need to avoid race conditions, deadlock and many other shared state concurrency problems.

Additionally, part of the parallelization is lost when threads wait for each other to access the shared data structures. Many concurrent data structures are blocking, meaning one or a limited set of threads can access them at any given time. This may lead to contention on these shared data structures. High contention will essentially lead to a degree of serialization of execution (eliminating parallelization) of the part of the code that accesses the shared data structures.

Modern non-blocking concurrency algorithms may decrease contention and increase performance, but non-blocking algorithms are hard to implement.

[2] Stateless workers
Shared state can be modified by other threads in the system. Therefore workers must re-read the state every time they need it, to make sure they are working on the latest copy. This is true whether the shared state is kept in memory or in an external database. A worker that does not keep state internally (but re-reads it every time it is needed) is called stateless.

Re-reading data every time you need it can get slow. Especially if the state is stored in an external database.

[3] Job Ordering is Nondeterministic
Another disadvantage of the parallel worker model is that the job execution order is nondeterministic. There is no way to guarantee which jobs are executed first or last. Job A may be given to a worker before job B, yet job B may be executed before job A.

The nondeterministic nature of the parallel worker model makes it hard to reason about the state of the system at any given point in time. It also makes it harder (if not impossible) to guarantee that one task finishes before another. This does not always cause problems, however. It depends on the needs of the system.

8
Q

Describe the "Assembly Line" (reactive / event-driven) concurrency model
jenkov.com/tutorials/java-concurrency/concurrency-models.html

A

The second concurrency model is what I call the assembly line concurrency model. I chose that name just to fit with the “parallel worker” metaphor from earlier. Other developers use other names (e.g. reactive systems, or event driven systems) depending on the platform / community.

The workers are organized like workers at an assembly line in a factory. Each worker only performs a part of the full job. When that part is finished the worker forwards the job to the next worker.

Systems using the assembly line concurrency model are usually designed to use non-blocking IO. Non-blocking IO means that when a worker starts an IO operation (e.g. reading a file or data from a network connection) the worker does not wait for the IO call to finish. IO operations are slow, so waiting for IO operations to complete is a waste of CPU time. The CPU could be doing something else in the meanwhile. When the IO operation finishes, the result of the IO operation (e.g. data read or status of data written) is passed on to another worker.

With non-blocking IO, the IO operations determine the boundary between workers. A worker does as much as it can until it has to start an IO operation. Then it gives up control over the job. When the IO operation finishes, the next worker in the assembly line continues working on the job, until that too has to start an IO operation etc.

In reality, the jobs may not flow along a single assembly line. Since most systems can perform more than one job, jobs flow from worker to worker depending on which part of the job needs to be executed next. In reality there could be multiple different virtual assembly lines running at the same time.

Jobs may even be forwarded to more than one worker for concurrent processing. For instance, a job may be forwarded to both a job executor and a job logger.

Systems using an assembly line concurrency model are also sometimes called reactive systems, or event driven systems. The system’s workers react to events occurring in the system, either received from the outside world or emitted by other workers. Examples of events could be an incoming HTTP request, or that a certain file finished loading into memory etc.

At the time of writing, there are a number of interesting reactive / event driven platforms available, and more will come in the future. Some of the more popular ones seem to be:

Vert.x
Akka
Node.JS (JavaScript)

ACTORS vs CHANNELS
Actors and channels are two similar examples of assembly line (or reactive / event driven) models.

In the actor model each worker is called an actor. Actors can send messages directly to each other. Messages are sent and processed asynchronously. Actors can be used to implement one or more job processing assembly lines, as described earlier.

In the channel model, workers do not communicate directly with each other. Instead they publish their messages (events) on different channels. Other workers can then listen for messages on these channels without the sender knowing who is listening.

9
Q

What are the advantages of the "Assembly line" (reactive / event-driven) concurrency model?

A

The assembly line concurrency model has several advantages compared to the parallel worker model:

[1] No Shared State
The fact that workers share no state with other workers means that they can be implemented without having to think about all the concurrency problems that may arise from concurrent access to shared state. This makes it much easier to implement workers. You implement a worker as if it was the only thread performing that work - essentially a singlethreaded implementation.

[2] Stateful workers
Since workers know that no other threads modify their data, the workers can be stateful. By stateful I mean that they can keep the data they need to operate in memory, only writing changes back to external storage systems eventually. A stateful worker can therefore often be faster than a stateless worker.

[3] Better Hardware Conformity
Singlethreaded code has the advantage that it often conforms better with how the underlying hardware works. First of all, you can usually create more optimized data structures and algorithms when you can assume the code is executed in single threaded mode.

Second, singlethreaded stateful workers can cache data in memory as mentioned above. When data is cached in memory there is also a higher probability that this data is also cached in the CPU cache of the CPU executing the thread. This makes accessing cached data even faster.

I refer to it as hardware conformity when code is written in a way that naturally benefits from how the underlying hardware works. Some developers call this mechanical sympathy. I prefer the term hardware conformity because computers have very few mechanical parts, and the word “sympathy” in this context is used as a metaphor for “matching better” which I believe the word “conform” conveys reasonably well. Anyways, this is nitpicking. Use whatever term you prefer.

[4] Job Ordering is possible
It is possible to implement a concurrent system according to the assembly line concurrency model in a way that guarantees job ordering. Job ordering makes it much easier to reason about the state of a system at any given point in time. Furthermore, you could write all incoming jobs to a log. This log could then be used to rebuild the state of the system from scratch in case any part of the system fails. The jobs are written to the log in a certain order, and this order becomes the guaranteed job order.
Implementing a guaranteed job order is not necessarily easy, but it is often possible. If you can, it greatly simplifies tasks like backup, restoring data, replicating data etc. as this can all be done via the log file(s).

10
Q

How Do Concurrent Modules Execute?

https://www.baeldung.com/concurrency-principles-patterns

A

It's been a while since Moore's Law hit a wall with respect to the clock speed of the processor. Instead, since performance must keep growing, we've started to pack multiple processors onto the same chip, often called multicore processors. But still, it's not common to hear about processors that have more than 32 cores.

Now, we know that a single core can execute only one thread, or set of instructions, at a time. However, the number of processes and threads can be in hundreds and thousands, respectively. So, how does it really work? This is where the operating system simulates concurrency for us. The operating system achieves this by time-slicing — which effectively means that the processor switches between threads frequently, unpredictably, and non-deterministically.

11
Q

What are the problems in Concurrent Programming?

A

For a very large part, our experience with concurrent programming involves using native threads with shared memory.

Common problems:
[1] Mutual Exclusion (Synchronization Primitives): Interleaving threads need to have exclusive access to shared state or memory to ensure the correctness of programs. The synchronization of shared resources is a popular method to achieve mutual exclusion. There are several synchronization primitives available to use — for example, a lock, monitor, semaphore, or mutex. However, programming for mutual exclusion is error-prone and can often lead to performance bottlenecks. There are several well-discussed issues related to this like deadlock and livelock.
[2] Context Switching (Heavyweight Threads): Every operating system has native, albeit varied, support for concurrent modules like process and thread. As discussed, one of the fundamental services that an operating system provides is scheduling threads to execute on a limited number of processors through time-slicing. Now, this effectively means that threads are frequently switched between different states. In the process, their current state needs to be saved and resumed. This is a time-consuming activity directly impacting the overall throughput.

12
Q

List design patterns used to achieve high concurrency

A

[1] Actor based concurrency
[2] Event based concurrency
[3] Non-blocking algorithms

13
Q

What is a “concurrency model”?

A

Concurrent systems can be implemented using different concurrency models. A concurrency model specifies how threads in the system collaborate to complete the tasks they are given. Different concurrency models split the tasks in different ways, and the threads may communicate and collaborate in different ways.

14
Q

What is the Java native concurrency model?

A

It is based on:
Threads
Semaphores
Locks
Synchronization (monitors)

15
Q

How can you create and start a Thread in Java?

A

[1] Thread subclass
Create a subclass of Thread and override the run() method. The run() method is what is executed by the thread after you call start().

public class MyThread extends Thread {

    @Override
    public void run() {
        System.out.println("MyThread running");
    }
}

MyThread myThread = new MyThread();
myThread.start();

[2] Runnable Interface Implementation
The second way to specify what code a thread should run is by creating a class that implements the java.lang.Runnable interface. A Java object that implements the Runnable interface can be executed by a Java Thread.
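A minimal sketch of the Runnable approach (names are illustrative):

public class MyRunnable implements Runnable {

    @Override
    public void run() {
        System.out.println("MyRunnable running");
    }
}

Thread thread = new Thread(new MyRunnable());
thread.start();

// Since Runnable is a functional interface, a lambda works too:
Thread lambdaThread = new Thread(() -> System.out.println("Lambda running"));
lambdaThread.start();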

16
Q

What is a race condition?

A

A race condition is a concurrency problem that may occur inside a critical section. A critical section is a section of code that is executed by multiple threads and where the sequence of execution for the threads makes a difference in the result of the concurrent execution of the critical section.
When the result of multiple threads executing a critical section may differ depending on the sequence in which the threads execute, the critical section is said to contain a race condition. The term race condition stems from the metaphor that the threads are racing through the critical section, and that the result of that race impacts the result of executing the critical section.

There are two types of race conditions:
1) Read-modify-write
2) Check-then-act

17
Q

Describe the two types of race conditions

A

Race conditions can occur when two or more threads read and write the same variable according to one of these two patterns:
- Read-modify-write
- Check-then-act

[1] Read-modify-write
The read-modify-write pattern means that two or more threads first read a given variable, then modify its value and write it back to the variable. For this to cause a problem, the new value must depend one way or another on the previous value. The problem occurs if two threads read the value (into CPU registers), then modify the value (in the CPU registers) and then write the values back, so that one of the updates is lost.

[2] Check-then-act
The check-then-act pattern means that two or more threads check a given condition, for instance whether a Map contains a given value, and then go on to act based on that information, e.g. taking the value from the Map. The problem may occur if two threads check the Map for a given value at the same time, see that the value is present, and then both threads try to take (remove) that value. However, only one of the threads can actually take the value. The other thread will get a null value back. This could also happen if a Queue was used instead of a Map.

18
Q

Give an example of a "read-modify-write" critical section and explain when a race condition can occur

A

As mentioned above, a read-modify-write critical section can lead to race conditions.

public class Counter {

    protected long count = 0;

    public void add(long value) {
        this.count = this.count + value;
    }
}

Imagine if two threads, A and B, are executing the add method on the same instance of the Counter class. There is no way to know when the operating system switches between the two threads. The code in the add() method is not executed as a single atomic instruction by the Java virtual machine. Rather it is executed as a set of smaller instructions, similar to this:

1) Read this.count from memory into register.
2) Add value to register.
3) Write register to memory.

The code in the add() method in the example earlier contains a critical section. When multiple threads execute this critical section, race conditions occur.

More formally, the situation where two threads compete for the same resource, where the sequence in which the resource is accessed is significant, is called a race condition. A code section that leads to race conditions is called a critical section.

19
Q

Give an example of a "check-then-act" critical section

A

As also mentioned above, a check-then-act critical section can also lead to race conditions. If two threads check the same condition, then act upon that condition in a way that changes it, race conditions can result. If two threads both check the condition at the same time, and then one thread goes ahead and changes the condition, the other thread may act incorrectly on that condition.

import java.util.Map;

public class CheckThenActExample {

    public void checkThenAct(Map<String, String> sharedMap) {
        if (sharedMap.containsKey("key")) {
            String val = sharedMap.remove("key");
            if (val == null) {
                System.out.println("Value for 'key' was null");
            }
        } else {
            sharedMap.put("key", "value");
        }
    }
}

If two or more threads call the checkThenAct() method on the same CheckThenActExample object, then two or more threads may execute the if-statement at the same time, evaluate sharedMap.containsKey("key") to true, and thus move into the body of the if-statement. There, multiple threads may then try to remove the key/value pair stored for the key "key", but only one of them will actually be able to do it. The rest will get a null value back, since another thread already removed the key/value pair.

20
Q

How can we prevent race conditions?

A

To prevent race conditions from occurring you must make sure that the critical section is executed as an atomic instruction. That means that once a single thread is executing it, no other threads can execute it until the first thread has left the critical section.

Race conditions can be avoided by proper thread synchronization in critical sections. Thread synchronization can be achieved using a synchronized block of Java code. Thread synchronization can also be achieved using other synchronization constructs like locks or atomic variables like java.util.concurrent.atomic.AtomicInteger.
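As an illustration, the Counter from the read-modify-write card could be made thread safe in either of these ways (class names are illustrative):

import java.util.concurrent.atomic.AtomicLong;

// Option 1: make the critical section atomic with synchronized.
class SynchronizedCounter {
    private long count = 0;

    public synchronized void add(long value) {
        this.count += value;
    }
}

// Option 2: use an atomic variable and avoid explicit locking.
class AtomicCounter {
    private final AtomicLong count = new AtomicLong(0);

    public void add(long value) {
        count.addAndGet(value);
    }
}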

21
Q

How can we increase Critical Section Throughput?

A

For smaller critical sections, making the whole critical section a synchronized block may work. But for larger critical sections it may be beneficial to break the critical section into smaller critical sections, to allow multiple threads to each execute a smaller critical section. This may decrease contention on the shared resource, and thus increase throughput of the total critical section.

Here is a very simplified Java code example to show what I mean:
public class TwoSums {

    private int sum1 = 0;
    private int sum2 = 0;

    public void add(int val1, int val2) {
        synchronized (this) {
            this.sum1 += val1;
            this.sum2 += val2;
        }
    }
}

Notice how the add() method adds values to two different sum member variables. To prevent race conditions the summing is executed inside a Java synchronized block. With this implementation only a single thread can ever execute the summing at the same time.

However, since the two sum variables are independent of each other, you could split their summing up into two separate synchronized blocks, like this:
public class TwoSums {

    private int sum1 = 0;
    private int sum2 = 0;

    // Dedicated lock objects. (The original example used new Integer(1)
    // and new Integer(2) as locks; the Integer constructor is deprecated,
    // and plain Objects are the idiomatic choice for lock fields.)
    private final Object sum1Lock = new Object();
    private final Object sum2Lock = new Object();

    public void add(int val1, int val2) {
        synchronized (this.sum1Lock) {
            this.sum1 += val1;
        }
        synchronized (this.sum2Lock) {
            this.sum2 += val2;
        }
    }
}

Now two threads can execute the add() method at the same time. One thread inside the first synchronized block, and another thread inside the second synchronized block. The two synchronized blocks are synchronized on different objects, so two different threads can execute the two blocks independently. This way threads will have to wait less for each other to execute the add() method.

22
Q

What is a critical section?

A

A critical section is a section of code that is executed by multiple threads and where the sequence of execution for the threads makes a difference in the result of the concurrent execution of the critical section

23
Q

What does 'thread safe' mean?

A

Code that is safe to call from multiple threads simultaneously is called thread safe. If a piece of code is thread safe, then it contains no race conditions. Race conditions only occur when multiple threads update shared resources. Therefore it is important to know what resources Java threads share when executing:

a) local variables
Local variables are stored in each thread’s own stack. That means that local variables are never shared between threads. That also means that all local primitive variables are thread safe.

b) local object references
Local references to objects are a bit different. The reference itself is not shared. The object referenced, however, is not stored in each thread's local stack. All objects are stored in the shared heap.

If an object created locally never escapes the method it was created in, it is thread safe. In fact you can also pass it on to other methods and objects as long as none of these methods or objects make the passed object available to other threads.

c) object member variables
Object member variables (fields) are stored on the heap along with the object. Therefore, if two threads call a method on the same object instance and this method updates object member variables, the method is not thread safe

24
Q

What is immutability and how does it prevent race conditions?

A

Race conditions occur only if multiple threads are accessing the same resource and one or more of the threads write to the resource. If multiple threads only read the same resource, race conditions do not occur.

We can make sure that objects shared between threads are never updated by any of the threads by making the shared objects immutable, and thereby thread safe.
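A minimal sketch of such an immutable class (the name is illustrative):

// Immutable: final class, final field, no setters.
// Threads can share instances freely; an "update" creates a new object.
public final class ImmutableValue {

    private final int value;

    public ImmutableValue(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    // Returns a new instance instead of mutating this one.
    public ImmutableValue add(int valueToAdd) {
        return new ImmutableValue(this.value + valueToAdd);
    }
}

Note that while the object itself is immutable, a shared reference pointing to it can still be reassigned by multiple threads, so the reference may still need synchronization.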

25
Q

Describe ‘The Java Memory Model’

A

The Java memory model used internally in the JVM divides memory between thread stacks and the heap.

Each thread running in the Java virtual machine has its own thread stack. The thread stack contains information about what methods the thread has called to reach the current point of execution. I will refer to this as the “call stack”. As the thread executes its code, the call stack changes.

The thread stack also contains all local variables for each method being executed (all methods on the call stack). A thread can only access its own thread stack. Local variables created by a thread are invisible to all threads other than the thread that created them. Even if two threads are executing the exact same code, the two threads will still create the local variables of that code in their own thread stacks. Thus, each thread has its own version of each local variable.

All local variables of primitive types (boolean, byte, short, char, int, long, float, double) are fully stored on the thread stack and are thus not visible to other threads. One thread may pass a copy of a primitive variable to another thread, but it cannot share the primitive local variable itself.

The heap contains all objects created in your Java application, regardless of what thread created the object. This includes the object versions of the primitive types (e.g. Byte, Integer, Long etc.). It does not matter if an object was created and assigned to a local variable, or created as a member variable of another object, the object is still stored on the heap.

A local variable may be of a primitive type, in which case it is totally kept on the thread stack.

A local variable may also be a reference to an object. In that case the reference (the local variable) is stored on the thread stack, but the object itself is stored on the heap.

An object may contain methods and these methods may contain local variables. These local variables are also stored on the thread stack, even if the object the method belongs to is stored on the heap.

An object’s member variables are stored on the heap along with the object itself. That is true both when the member variable is of a primitive type, and if it is a reference to an object.

Objects on the heap can be accessed by all threads that have a reference to the object. When a thread has access to an object, it can also get access to that object’s member variables. If two threads call a method on the same object at the same time, they will both have access to the object’s member variables, but each thread will have its own copy of the local variables.

26
Q

Describe Hardware Memory Architecture

https://jenkov.com/tutorials/java-concurrency/java-memory-model.html

A

Modern hardware memory architecture is somewhat different from the internal Java memory model. It is important to understand the hardware memory architecture too, to understand how the Java memory model works with it.

A modern computer often has 2 or more CPUs in it. Some of these CPUs may have multiple cores too. The point is, that on a modern computer with 2 or more CPUs it is possible to have more than one thread running simultaneously. Each CPU is capable of running one thread at any given time. That means that if your Java application is multithreaded, one thread per CPU may be running simultaneously (concurrently) inside your Java application.

Each CPU contains a set of registers which are essentially in-CPU memory. The CPU can perform operations much faster on these registers than it can perform on variables in main memory. That is because the CPU can access these registers much faster than it can access main memory.

Each CPU may also have a CPU cache memory layer. In fact, most modern CPUs have a cache memory layer of some size. The CPU can access its cache memory much faster than main memory, but typically not as fast as it can access its internal registers. So, the CPU cache memory is somewhere in between the speed of the internal registers and main memory. Some CPUs may have multiple cache layers (Level 1 and Level 2), but this is not so important to know to understand how the Java memory model interacts with memory. What matters is to know that CPUs can have a cache memory layer of some sort.

A computer also contains a main memory area (RAM). All CPUs can access the main memory. The main memory area is typically much bigger than the cache memories of the CPUs.

Typically, when a CPU needs to access main memory it will read part of main memory into its CPU cache. It may even read part of the cache into its internal registers and then perform operations on it. When the CPU needs to write the result back to main memory it will flush the value from its internal register to the cache memory, and at some point flush the value back to main memory.

The values stored in the cache memory are typically flushed back to main memory when the CPU needs to store something else in the cache memory. The CPU cache can have data written to part of its memory at a time, and flush part of its memory at a time. It does not have to read / write the full cache each time it is updated. Typically the cache is updated in smaller memory blocks called "cache lines". One or more cache lines may be read into the cache memory, and one or more cache lines may be flushed back to main memory again.

27
Q

What is Java synchronized block?

A

A Java synchronized block marks a method or a block of code as synchronized. A synchronized block in Java can only be executed by a single thread at a time (depending on how you use it). Java synchronized blocks can thus be used to avoid race conditions.

Synchronized blocks in Java are marked with the synchronized keyword. A synchronized block in Java is synchronized on some object. All synchronized blocks synchronized on the same object can only have one thread executing inside them at the same time. All other threads attempting to enter the synchronized block are blocked until the thread inside the synchronized block exits the block.

The synchronized keyword can be used to mark four different types of blocks, as shown in the sketch below:

  • Instance methods
  • Static methods
  • Code blocks inside instance methods
  • Code blocks inside static methods
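A sketch of the four forms and the monitor object each one locks on (the class name is illustrative):

public class SynchronizedForms {

    // 1) Instance method: synchronizes on 'this'.
    public synchronized void instanceMethod() { }

    // 2) Static method: synchronizes on SynchronizedForms.class.
    public static synchronized void staticMethod() { }

    // 3) Block inside an instance method: the monitor is chosen explicitly.
    public void blockInInstanceMethod() {
        synchronized (this) {
            // critical section
        }
    }

    // 4) Block inside a static method: synchronizes on the Class object.
    public static void blockInStaticMethod() {
        synchronized (SynchronizedForms.class) {
            // critical section
        }
    }
}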
28
Q

What is Semaphore? What is it used for?

A

A Semaphore is a thread synchronization construct that can be used either to send signals between threads to avoid missed signals, or to guard a critical section like you would with a lock. Java 5 comes with semaphore implementations in the java.util.concurrent package so you don’t have to implement your own semaphores.

A Semaphore is used to limit the number of threads that can access a shared resource concurrently. In other words, it is a non-negative shared variable known as a counter, which sets the limit on the threads. A thread waiting on a semaphore can be signaled by another thread.

If counter > 0, access to shared resources is provided.
If counter = 0, access to shared resources is denied.
In short, the counter keeps tracking the number of permissions it has given to a shared resource. Therefore, semaphore grants permission to threads to share a resource.

Semaphore controls access to the shared resource through a counter variable. The counter is a non-negative value.

If counter > 0, the thread gets permission to access the shared resource and the counter is decremented by 1.
If counter = 0, the thread is blocked until a permit can be acquired.
When a thread has finished with the resource, it releases it and the counter is incremented by 1; if another thread is waiting to acquire the resource, it will acquire a permit at that time.
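A sketch using java.util.concurrent.Semaphore (the permit count of 3 and the task body are illustrative):

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {

    // At most 3 threads may hold a permit at the same time.
    private static final Semaphore semaphore = new Semaphore(3);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    semaphore.acquire();       // blocks while counter == 0
                    try {
                        System.out.println("Thread " + id + " uses the resource");
                    } finally {
                        semaphore.release();   // counter is incremented again
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}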

29
Q

Compare Thread Synchronization Mechanisms in Java
Monitor, Lock and Semaphore

https://medium.com/swlh/comparing-thread-synchronization-mechanisms-in-java-53e66ea059be

A
30
Q

What is multithreading?

A

Multithreading is the ability to have multiple threads executing concurrently. While each thread shares the same process resources, they operate independently of each other.

31
Q

What is the difference between a thread and a process?

A

A process is a single application or program, whereas a thread is a subprocess within that application or program. Each process has its own address space in memory; threads share their process's address space.

32
Q

What is a thread pool?

(Asking about thread pools might be one way an interviewer determines if you know how to write performance-efficient code.)

A

A thread pool is a collection of worker threads created at start-up that can be assigned tasks as needed, then put back in the pool when complete. The main advantage of using a thread pool is having a supply of already-created threads when you need them, which improves application performance.
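A minimal sketch using the standard library executors (the pool size of 4 and the tasks are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolDemo {

    public static void main(String[] args) {
        // A fixed pool of 4 reusable worker threads.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println(
                    "Task " + taskId + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown(); // stop accepting tasks; workers finish queued work
    }
}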

33
Q

During a thread’s lifetime, what states can it have?

(Knowing a thread’s various states can show you know how threads operate and that you could debug a nonfunctioning thread.)

A

There are five states a thread can have: New, Runnable, Running, Waiting/Blocked and Dead/Terminated.

34
Q

What is context switching?

(An interviewer may ask you about context switching to see if you understand how multithreading works at the CPU level.)

A

Context switching is where the current state of a thread or process is stored so the execution of that thread can be resumed at a later time. This enables a single CPU to manage multiple threads or processes.

35
Q

Can you explain what the thread scheduler is and its relationship to thread priority?

(This is another question that addresses the way multithreading works at the CPU level.)

A

The thread scheduler is what allocates CPU time to threads and determines the order in which threads execute. Thread priority is one of the inputs the scheduler may use when making those decisions.

36
Q

What is time slicing?

(Your answer to this question can show the interviewer you understand how the thread scheduler works.)

A

Time slicing is the process used by the thread scheduler to divide the available CPU time between the active threads.

37
Q

What is thread starvation?

(Your awareness of thread starvation can be useful for code debugging.)

A

Thread starvation is when there is insufficient CPU capacity to execute a thread. This can happen with low-priority threads, or threads that are demoted in favor of other threads.

38
Q

Can you start a thread twice?

(This question shows you understand thread state and the lifecycle of a thread.)

A

Once a thread has been executed, it is considered dead. You cannot restart a dead thread.

39
Q

Can you describe a deadlock situation?

(Deadlock is a problem that can cause code to stall. Your ability to explain the problem indicates you may also know how to resolve it.)

A

A deadlock situation is where multiple threads are waiting on each other to release resources they need so they can run. This can happen when, for example, a single thread has exclusive priority but needs resources from a waiting thread, or all the threads are depending on one another to release needed resources.

40
Q

What happens when a livelock occurs?

(This question is a follow-up from the previous question and is designed to show your understanding of this potential problem with multithreading.)

A

A livelock is similar to a deadlock situation, except with a livelock, the state of the threads change without ever making progress. For example, if all the threads are caught in infinite loops.

41
Q

In Java, what is the difference between the wait() and sleep() methods?

(These two common Java methods appear to do the same thing, so it is important you can distinguish their functions.)

A

The wait() method releases the object's monitor and pauses the thread until another thread calls notify() or notifyAll() on the same object. The sleep() method pauses the current thread for a given period without releasing any locks, after which it resumes execution.

42
Q

What is a daemon thread?
(A daemon is a useful thread type that operates differently from normal threads, so you may be asked about it.)

A

A daemon thread is a low-priority thread. It may provide background services or support to the other threads. When all non-daemon threads die, the JVM exits and any daemon threads are automatically terminated.

43
Q

How might you achieve thread safety?

A

You can achieve thread safety through several techniques, including synchronization, using the volatile keyword or using atomic wrapper classes.

44
Q

What is the difference between synchronous and asynchronous programming?

(You may encounter this question about programming methods because they can affect your code’s performance.)

A

Synchronous programming is when a thread starts a task and must wait (block) until the task completes before moving on. Asynchronous programming is when a task is started and the calling thread continues with other work; the task completes later, often on another thread, delivering its result via a callback or a Future.

45
Q

What is the function of the join() method?

A

The join() method causes the current thread to stop running until the thread it is called on completes its task. It is a commonly used method that facilitates the execution of multiple threads in an organized manner.

46
Q

How do you detect deadlock situations in Java?

A

Deadlock situations can be detected by collecting a thread dump of the running JVM (for example, with the jstack tool). If a deadlock is present, the thread dump reports the deadlocked threads and the locks involved.

47
Q

Do individual threads have their respective stacks in Multithreaded programming?

A

Yes, individual threads have their own stacks in multithreaded programming. Each thread is independent of the others and maintains its own stack in memory.

48
Q

How is thread-safety achieved in multithreaded programming?

A

Thread safety is achieved when multiple threads can use a class's methods without causing race conditions. In multithreaded programming, thread safety can be achieved by:

Use of atomic wrapper classes
Use of the volatile keyword
Employing a lock-based mechanism
Synchronization

49
Q

How do you pause a thread from running in Java?

A

A thread that is currently running can be paused using the sleep() method.

50
Q

Describe the Different States of a Thread and When the State Transitions Occur.

A

The state of a Thread can be checked using the Thread.getState() method. Different states of a Thread are described in the Thread.State enum. They are:

NEW — a new Thread instance that was not yet started via Thread.start()
RUNNABLE — a running thread. It is called runnable because at any given time it could be either running or waiting for the next quantum of time from the thread scheduler. A NEW thread enters the RUNNABLE state when you call Thread.start() on it
BLOCKED — a running thread becomes blocked if it needs to enter a synchronized section but cannot do that due to another thread holding the monitor of this section
WAITING — a thread enters this state if it waits for another thread to perform a particular action. For instance, a thread enters this state upon calling the Object.wait() method on a monitor it holds, or the Thread.join() method on another thread
TIMED_WAITING — same as the above, but a thread enters this state after calling timed versions of Thread.sleep(), Object.wait(), Thread.join() and some other methods
TERMINATED — a thread has completed the execution of its Runnable.run() method and terminated

51
Q

What Is the Difference Between the Runnable and Callable Interfaces? How Are They Used?

A

The Runnable interface has a single run method. It represents a unit of computation that has to be run in a separate thread. The Runnable interface does not allow this method to return a value or to throw checked exceptions.

The Callable interface has a single call method and represents a task that has a value. That’s why the call method returns a value. It can also throw exceptions. Callable is generally used in ExecutorService instances to start an asynchronous task and then call the returned Future instance to get its value.
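A short sketch contrasting the two (names are illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RunnableVsCallable {

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Runnable: no result, cannot throw checked exceptions.
        Runnable runnable = () -> System.out.println("side effect only");
        executor.execute(runnable);

        // Callable: returns a value and may throw checked exceptions.
        Callable<Integer> callable = () -> 2 + 2;
        Future<Integer> future = executor.submit(callable);
        System.out.println("result = " + future.get()); // blocks until done

        executor.shutdown();
    }
}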

52
Q

What Is a Daemon Thread, What Are Its Use Cases? How Can You Create a Daemon Thread?

A

A daemon thread is a thread that does not prevent the JVM from exiting. When all non-daemon threads are terminated, the JVM simply abandons all remaining daemon threads. Daemon threads are usually used to carry out some supportive or service tasks for other threads, but you should take into account that they may be abandoned at any time.

To start a thread as a daemon, you should use the setDaemon() method before calling start():
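Thread daemon = new Thread(() -> {
    // background/service work would go here (illustrative)
});
daemon.setDaemon(true); // must be called before start(); calling it on a
                        // started thread throws IllegalThreadStateException
daemon.start();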

53
Q

What Are Executor and ExecutorService? What Are the Differences Between These Interfaces?

A

Executor and ExecutorService are two related interfaces of java.util.concurrent framework. Executor is a very simple interface with a single execute method accepting Runnable instances for execution. In most cases, this is the interface that your task-executing code should depend on.

ExecutorService extends the Executor interface with multiple methods for handling and checking the lifecycle of a concurrent task execution service (termination of tasks in case of shutdown) and methods for more complex asynchronous task handling including Futures.

For more info on using Executor and ExecutorService, see the article A Guide to Java ExecutorService.

54
Q

What Are the Available Implementations of ExecutorService in the Standard Library?

A

The ExecutorService interface has three standard implementations:

ThreadPoolExecutor — for executing tasks using a pool of threads. Once a thread is finished executing the task, it goes back into the pool. If all threads in the pool are busy, then the task has to wait for its turn.
ScheduledThreadPoolExecutor allows scheduling task execution instead of running it immediately when a thread is available. It can also schedule tasks at a fixed rate or with a fixed delay.
ForkJoinPool is a special ExecutorService for dealing with recursive algorithm tasks. If you use a regular ThreadPoolExecutor for a recursive algorithm, you will quickly find all your threads are busy waiting for the lower levels of recursion to finish. The ForkJoinPool implements the so-called work-stealing algorithm that allows it to use available threads more efficiently.

55
Q

What Is a Volatile Field and What Guarantees Does the JMM Hold for Such a Field?

A

A volatile field has special properties according to the Java Memory Model (see the Java Memory Model card). The reads and writes of a volatile variable are synchronization actions, meaning that they have a total ordering (all threads will observe a consistent order of these actions). A read of a volatile variable is guaranteed to observe the last write to this variable, according to this order.

If you have a field that is accessed from multiple threads, with at least one thread writing to it, then you should consider making it volatile, or else there is little guarantee about what a certain thread would read from this field.

Another guarantee for volatile is atomicity of writing and reading 64-bit values (long and double). Without a volatile modifier, a read of such field could observe a value partly written by another thread.

56
Q

Which of the Following Operations Are Atomic?

writing to a non-volatile int;
writing to a volatile int;
writing to a non-volatile long;
writing to a volatile long;
incrementing a volatile long?

A

A write to an int (32-bit) variable is guaranteed to be atomic, whether it is volatile or not. A long (64-bit) variable could be written in two separate steps, for example, on 32-bit architectures, so by default, there is no atomicity guarantee. However, if you specify the volatile modifier, a long variable is guaranteed to be accessed atomically.

The increment operation is usually done in multiple steps (retrieving a value, changing it and writing it back), so it is never guaranteed to be atomic, whether the variable is volatile or not. If you need to implement atomic increment of a value, you should use classes like AtomicInteger and AtomicLong.

57
Q

If Two Threads Call a Synchronized Method on Different Object Instances Simultaneously, Could One of These Threads Block? What If the Method Is Static?

A

If the method is an instance method, then the instance acts as a monitor for the method. Two threads calling the method on different instances acquire different monitors, so none of them gets blocked.

If the method is static, then the monitor is the Class object. For both threads, the monitor is the same, so one of them will probably block and wait for another to exit the synchronized method.

58
Q

What Is the Purpose of the wait, notify and notifyAll Methods of the Object Class?

A

A thread that owns the object's monitor (for instance, a thread that has entered a synchronized section guarded by the object) may call object.wait() to temporarily release the monitor and give other threads a chance to acquire it. This may be done, for instance, to wait for a certain condition to be fulfilled.

When another thread that acquired the monitor fulfills the condition, it may call object.notify() or object.notifyAll() and release the monitor. The notify method wakes a single thread in the waiting state, and the notifyAll method wakes all threads that wait for this monitor, and they all compete for re-acquiring the lock.

The following BlockingQueue implementation shows how multiple threads work together via the wait-notify pattern. If we put an element into an empty queue, all threads that were waiting in the take method wake up and try to receive the value. If we put an element into a full queue, the put method waits until an element is taken. The take method removes an element and notifies the threads waiting in the put method that the queue has room for a new item.
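A sketch of such an implementation using wait/notifyAll (a simplified stand-in for the production java.util.concurrent.BlockingQueue; the class name is illustrative):

import java.util.LinkedList;
import java.util.Queue;

public class SimpleBlockingQueue<T> {

    private final Queue<T> queue = new LinkedList<>();
    private final int capacity;

    public SimpleBlockingQueue(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(T element) throws InterruptedException {
        while (queue.size() == capacity) {
            wait();          // queue is full: release the monitor and wait
        }
        queue.add(element);
        notifyAll();         // wake up threads waiting in take()
    }

    public synchronized T take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();          // queue is empty: wait for a put()
        }
        T element = queue.remove();
        notifyAll();         // wake up threads waiting in put()
        return element;
    }
}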

59
Q

Describe the Conditions of Deadlock, Livelock, and Starvation. Describe the Possible Causes of These Conditions.

A

Deadlock is a condition within a group of threads that cannot make progress because every thread in the group has to acquire some resource that is already acquired by another thread in the group. The most simple case is when two threads need to lock both of two resources to progress, the first resource is already locked by one thread, and the second by another. These threads will never acquire a lock to both resources and thus will never progress.

Livelock is a case of multiple threads reacting to conditions, or events, generated by themselves. An event occurs in one thread and has to be processed by another thread. During this processing, a new event occurs which has to be processed in the first thread, and so on. Such threads are alive and not blocked, but still do not make any progress because they overwhelm each other with useless work.

Starvation is a case of a thread being unable to acquire a resource because another thread (or threads) occupies it for too long or has higher priority. The thread cannot make progress and thus is unable to do useful work.

60
Q

Describe the Purpose and Use-Cases of the Fork/Join Framework

A

The fork/join framework allows parallelizing recursive algorithms. The main problem with parallelizing recursion using something like ThreadPoolExecutor is that you may quickly run out of threads because each recursive step would require its own thread, while the threads up the stack would be idle and waiting.

The fork/join framework entry point is the ForkJoinPool class, which is an implementation of ExecutorService. It implements the work-stealing algorithm, where idle threads try to "steal" work from busy threads. This allows spreading the calculations between different threads and making progress while using fewer threads than a usual thread pool would require.

More information and code samples for the fork/join framework may be found in the article “Guide to the Fork/Join Framework in Java”.

61
Q

What do you mean by shutdown hook?

A

A shutdown hook is a thread that gets invoked before the JVM shuts down. It is one of the essential features of the JVM, as it offers the capacity for resource cleanup or saving application state before the JVM exits. (Note that Runtime.halt(int) terminates the JVM immediately, without running shutdown hooks.) A hook is registered via the addShutdownHook(Thread hook) method:

Runtime r = Runtime.getRuntime();
r.addShutdownHook(new MyThread());

62
Q

How can data be shared between threads?

A

Data can be shared between threads by using a shared object or a concurrent data structure like a BlockingQueue. This typically follows the producer-consumer pattern using the wait and notify methods, which involves sharing an object between the two threads.

63
Q

What is a monitor?

A

A monitor is a body of code that can be executed by only one thread at a time.
If any other thread attempts to get access at the same time, it will be suspended until the current thread releases the monitor.
In Java we use the synchronized keyword.

64
Q

What is the difference between Semaphore and Monitor? Give some examples of where you would use a Semaphore and where a Monitor.

A

Semaphore:

Using a counter or a flag to control access to shared resources in a concurrent system implies the use of a Semaphore.

Example:

A counter to allow only 50 passengers to acquire the 50 seats (shared resource) of any theatre/bus/train/fun ride/classroom, and to allow a new passenger only if someone vacates a seat.
A binary flag indicating the free/occupied status of any bathroom.
Traffic lights are a good example of flags. They control flow by regulating the passage of vehicles on roads (shared resource).
Flags only reveal the current state of the resource, with no count or other information about the objects waiting on or using the resource.

Monitor:

A Monitor synchronizes access to an Object by communicating with threads interested in the object, asking them to acquire access or wait for some condition to become true.

Example:

A father may act as a monitor for his daughter, allowing her to date only one person at a time.
A school teacher using a baton to allow only one child to speak in the class.
Lastly, a technical one: transactions (via threads) on an Account object are synchronized to maintain integrity.

65
Q

What are monitor, semaphore, and lock (mutex)? Compare them.

A

[1]
A semaphore is a signaling mechanism used to coordinate between threads. Example: One thread is downloading files from the internet and another thread is analyzing the files. This is a classic producer/consumer scenario. The producer calls signal() on the semaphore when a file is downloaded. The consumer calls wait() on the same semaphore in order to be blocked until the signal indicates a file is ready. If the semaphore is already signaled when the consumer calls wait, the call does not block. Multiple threads can wait on a semaphore, but each signal will only unblock a single thread.

A counting semaphore keeps track of the number of signals. E.g. if the producer signals three times in a row, wait() can be called three times without blocking. A binary semaphore does not count but just has the "waiting" and "signalled" states.

[2]
A mutex (mutual exclusion lock) is a lock which is owned by a single thread. Only the thread which has acquired the lock can release it again. Other threads which try to acquire the lock will be blocked until the current owner thread releases it. A mutex lock does not in itself lock anything - it is really just a flag. But code can check for ownership of a mutex lock to ensure that only one thread at a time can access some object or resource.

[3]
A monitor is a higher-level construct which uses an underlying mutex lock to ensure thread-safe access to some object. Unfortunately the word "monitor" is used with a few different meanings depending on platform and context, but in Java, for example, a monitor is a mutex lock which is implicitly associated with an object, and which can be invoked with the synchronized keyword. The synchronized keyword can be applied to a class, method or block and ensures only one thread can execute the code at a time.

66
Q

Describe challenges that you had to face in a project.

A
  1. On a project for the Ministry of Finance, the challenge for me was that the project was big. There were many developers and I was an inexperienced developer, so I had to quickly learn new things in order to be productive.
67
Q

What are some of the differences between a process and a thread?

A

Some of the differences between a process and a thread are:

a) A process can have many threads, whereas a thread can belong to only one process.

b) A thread is more lightweight than a process and uses fewer resources than a process.

c) A thread has some state private to itself, but the threads of a process can share the resources allocated to the process, including memory address space.

68
Q

Can you list some of the problems with using threads?

A

Threads, if used without thought, can sometimes lead to performance degradation for the following reasons:

Usually hard-to-find bugs, some of which may only rear their head in production environments (think race conditions)

Higher cost of code maintenance since the code inherently becomes harder to reason about

Increased utilization of system resources. Creation of each thread consumes additional memory and CPU cycles for book-keeping, and wastes time in context switches.

Too many threads can decrease program performance due to increased competition to acquire locks (lock contention).

69
Q

What is a deadlock?

A

Deadlocks happen when two or more threads aren't able to make any progress because the resource required by the first thread is held by the second and the resource required by the second thread is held by the first, and both threads wait on each other to release the required resource.

Below is a minimal sketch demonstrating deadlock between two threads (lock names and the sleep duration are illustrative; the sleep makes the deadlocking interleaving very likely, so the program usually hangs).
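public class DeadlockDemo {

    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 locks A then B; Thread 2 locks B then A. Each ends up
        // holding one lock and waiting forever for the other.
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                sleep(100);
                synchronized (lockB) {
                    System.out.println("t1 acquired both locks");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                sleep(100);
                synchronized (lockA) {
                    System.out.println("t2 acquired both locks");
                }
            }
        });
        t1.start();
        t2.start();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}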

70
Q

What is Liveness?

A

The ability of a program or an application to execute in a timely manner is called liveness. If a program experiences a deadlock, then it is not exhibiting liveness.

71
Q

What is starvation?

A

Other than a deadlock, an application thread can also experience starvation, where the thread never gets CPU time or access to shared resources because other “greedy” threads hog system resources.

72
Q

[1] What is a "Daemon thread"?
[2] How do you implement a "Daemon thread"?

A

[1] Daemon threads are low-priority threads which always run in the background. A daemon thread may be killed by the JVM before it runs to completion, because the JVM exits as soon as all non-daemon threads have finished.
[2] innerThread.setDaemon(true);

73
Q

[1] What is a Mutex?
[2] What is the simplest way to implement a mutex in Java?
https://www.baeldung.com/java-mutex

[3] Give an example of applying a mutex to the fragment of code below:

public class SequenceGenerator {

    private int currentValue = 0;

    public int getNextSequence() {
        currentValue = currentValue + 1;
        return currentValue;
    }
}
A

[1] In a multithreaded application, two or more threads may need to access a shared resource at the same time, resulting in unexpected behavior. Examples of such shared resources are data-structures, input-output devices, files, and network connections.

We call this scenario a race condition. And, the part of the program which accesses the shared resource is known as the critical section. So, to avoid a race condition, we need to synchronize access to the critical section.

A mutex (or mutual exclusion) is the simplest type of synchronizer – it ensures that only one thread can execute the critical section of a computer program at a time.

To access a critical section, a thread acquires the mutex, then accesses the critical section, and finally releases the mutex. In the meantime, all other threads block until the mutex is released. As soon as a thread exits the critical section, another thread can enter it.

Think of a mutex like a lone runway at a remote airport. Only a single jet can land on or take off from the runway at a given point in time. No other jet can use the runway simultaneously with the first aircraft.

[2] First, we’ll discuss the synchronized keyword, which is the simplest way to implement a mutex in Java.

Every object in Java has an intrinsic lock associated with it. The synchronized method and the synchronized block use this intrinsic lock to restrict the access of the critical section to only one thread at a time.

Therefore, when a thread invokes a synchronized method or enters a synchronized block, it automatically acquires the lock. The lock releases when the method or block completes or an exception is thrown from them.

[3] You can implement a mutex as a monitor using the synchronized keyword:

public class SequenceGeneratorUsingSynchronizedMethod extends SequenceGenerator {

    @Override
    public synchronized int getNextSequence() {
        return super.getNextSequence();
    }
}

OR, synchronizing on a dedicated lock object (this assumes a field such as private final Object mutex = new Object(); in the class):

@Override
public int getNextSequence() {
    synchronized (mutex) {
        return super.getNextSequence();
    }
}
74
Q

[1] What is a semaphore?

baeldung.com/cs/semaphore

A

[1] Semaphore allows a fixed number of threads to access a critical section. Therefore, we can also implement a mutex by setting the number of allowed threads in a Semaphore to one.
There are two types of semaphores:
- Binary semaphore
- Counting Semaphore

Think of a semaphore as analogous to a car rental service such as Hertz. Each outlet has a certain number of cars it can rent out to customers. It can rent several cars to several customers at the same time, but if all the cars are rented out, then any new customers need to be put on a waitlist till one of the rented cars is returned.
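A sketch applying this to the SequenceGenerator from the mutex card (illustrative):

import java.util.concurrent.Semaphore;

public class SequenceGeneratorUsingSemaphore extends SequenceGenerator {

    // A single permit makes this a binary semaphore acting as a mutex.
    private final Semaphore mutex = new Semaphore(1);

    @Override
    public int getNextSequence() {
        mutex.acquireUninterruptibly(); // blocks until the permit is free
        try {
            return super.getNextSequence();
        } finally {
            mutex.release();
        }
    }
}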

75
Q

What is the difference between Mutex and Semaphore?

https://afteracademy.com/blog/difference-between-mutex-and-semaphore-in-operating-system

A

A mutex is owned: only the thread that acquired it can release it, and it allows exactly one holder at a time. A semaphore has no notion of ownership (any thread can signal it), and a counting semaphore permits up to N concurrent holders.

Think of a semaphore as analogous to a car rental service such as Hertz. Each outlet has a certain number of cars it can rent out to customers. It can rent several cars to several customers at the same time, but if all the cars are rented out, then any new customers need to be put on a waitlist till one of the rented cars is returned. In contrast, think of a mutex like a lone runway at a remote airport. Only a single jet can land on or take off from the runway at a given point in time. No other jet can use the runway simultaneously with the first aircraft.