Chapter 12 - Concurrency essentials Flashcards

1
Q

Sequential Computing:

A

Sequential computing involves executing instructions or tasks one after another in a linear, step-by-step fashion. Each instruction or task is completed before the next one begins. In this approach, the processor handles one task at a time, and the execution follows a single path or sequence. This is the traditional form of computing.

Advantages:

Simplicity in programming and debugging.
Deterministic behavior, making it easier to reason about the program’s execution flow.
Absence of concurrency-related issues.
Disadvantages:

Potential inefficiencies, as the processor may remain idle during some parts of the computation.
Limited scalability and slower execution for complex or time-consuming tasks.

2
Q

Parallel Computing

A

Parallel computing involves executing multiple tasks or instructions simultaneously. It leverages multiple processing units (e.g., multiple cores or multiple computers) to perform computations concurrently. Parallelism can occur at different levels, including instruction level, task level, data level, or higher levels of granularity.

Advantages:

Faster processing and improved performance, especially for complex and computationally intensive tasks.
Enhanced scalability, allowing the system to handle larger workloads efficiently.
Effective utilization of resources, leading to higher throughput.
Disadvantages:

Complexity in programming and debugging due to the need to manage synchronization and coordination between concurrent tasks.
Potential for race conditions, deadlocks, and other concurrency-related issues that require careful design and management.
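
As a concrete (if simplified) illustration, the Java streams API can run the same computation sequentially or in parallel; this sketch assumes nothing beyond the standard library, and the range size is arbitrary:

```java
import java.util.stream.LongStream;

public class SumDemo {
    public static void main(String[] args) {
        long n = 100_000_000L;

        // Sequential: a single thread walks the range in order.
        long seq = LongStream.rangeClosed(1, n).sum();

        // Parallel: the range is split across the common fork/join pool,
        // partial sums are computed concurrently, then combined.
        long par = LongStream.rangeClosed(1, n).parallel().sum();

        System.out.println(seq == par); // true: same result, different execution strategy
    }
}
```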

3
Q

Parallel computing architectures

A

Shared Memory Architecture:

Description: In this architecture, multiple processors share a common memory space, allowing them to directly access and modify shared data.

Distributed Memory Architecture:

Description: Each processor has its own private memory, and communication between processors occurs explicitly via message passing.

SIMD (Single Instruction, Multiple Data):

Description: Executes the same instruction on multiple data points simultaneously using a single control unit.

MIMD (Multiple Instruction, Multiple Data):

Description: Allows different processors to execute different instructions on different data concurrently.

NUMA (Non-Uniform Memory Access):

Description: Multiple processors have access to a shared memory space, but access times can vary based on the distance of the memory from the processor.

4
Q

Shared Memory

A

Shared Memory:

Description: In a shared memory architecture, multiple processors share a common, global memory space. All processors can directly access any part of this shared memory to read or write data.

5
Q

Distributed Memory

A

Distributed Memory:

Description: In a distributed memory architecture, each processor has its own private memory, and there is no shared global memory. Processors communicate by explicitly sending and receiving messages to share data.

6
Q

Comparison of shared and distributed memory

A

Scalability:

Shared Memory: Limited scalability due to contention for shared memory.
Distributed Memory: Highly scalable due to dedicated memory for each processor.
Programming Model:

Shared Memory: Simpler programming model, as all processors can access shared data directly.
Distributed Memory: More complex programming model, as communication and synchronization require explicit message passing.
Communication:

Shared Memory: Implicit communication through shared memory.
Distributed Memory: Explicit communication via message passing.
Synchronization:

Shared Memory: Synchronization is typically easier due to shared memory constructs.
Distributed Memory: Requires careful synchronization through message passing and other synchronization mechanisms.

7
Q

SMP (Symmetric Multiprocessing)

A

SMP (Symmetric Multiprocessing) is a common and widely used parallel computing architecture in which multiple identical processors or cores share a common memory and can process tasks concurrently. In an SMP system, each processor has equal access to all the memory and I/O devices, and any processor can perform any task or job in the system.

8
Q

NUMA (Non-Uniform Memory Access)

A

Description: In a NUMA architecture, multiple processors (or nodes) are connected to a shared memory system, but the memory access time can vary depending on the proximity of the memory to the processor. Each processor has access to its local memory, but it can also access memory from other nodes, albeit with higher latency.

9
Q

UMA (Uniform Memory Access)

A

Description: In a UMA architecture, all processors have equal and uniform access time to the shared main memory. It implies that the memory access time is the same regardless of the location of the memory in the system.

10
Q

Difference between processes and threads

A

Processes and threads are fundamental concepts in operating systems and concurrent programming. They are both units of execution, but they have key differences in terms of resource allocation, memory space, and communication.

Communication and Synchronization:

Process: Communication between processes is typically more complex and involves inter-process communication (IPC) mechanisms such as pipes, sockets, message queues, or shared memory.
Thread: Communication between threads within the same process is simpler and can be achieved through shared variables and data structures. Threads can also use synchronization mechanisms like locks, semaphores, and barriers to coordinate their actions.

Context Switching:

Process: Context switching between processes is typically more time-consuming and resource-intensive due to the need to save and restore the entire process state.
Thread: Context switching between threads within the same process is faster and more efficient because threads share the same memory space and most of the process state.

Independence:

Process: Processes are independent and can execute independently of each other. Failure in one process does not affect others.
Thread: Threads within a process share the process’s resources and are dependent on each other. If one thread encounters an error, it can affect the entire process.

11
Q

Process

A

A process is a standalone program execution unit that has its own memory space, file handles, and system resources. It encapsulates the program code, data, and resources needed for the program to execute.

12
Q

Thread

A

A thread is a subset of a process, representing a single sequential flow of control within the process. Threads share the same memory space and resources within a process.

13
Q

Are threads part of processes?

A

Yes, threads are typically part of processes. A process is a unit of execution in an operating system that includes its own memory space, resources, and at least one thread of execution. When a program is run, the operating system creates a process for it, and within that process, at least one thread is created to execute the program’s instructions.
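
A tiny Java sketch of this: even a trivial program already runs inside one thread that the JVM created within the program's process (the printed name is conventionally "main", though that is JVM-dependent).

```java
public class MainThreadDemo {
    public static void main(String[] args) {
        // Even a "single-threaded" program executes on a thread that the
        // JVM created inside the program's process: the main thread.
        Thread current = Thread.currentThread();
        System.out.println(current.getName()); // typically prints "main"
    }
}
```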

14
Q

Difference between concurrency and parallelism

A

Key Differences:

Execution Model:

Concurrency: Concurrent tasks can overlap in time, and the system manages their execution to maximize overall throughput. It doesn’t guarantee simultaneous execution but ensures efficient progress on multiple tasks.
Parallelism: Parallel tasks or subtasks are executed simultaneously, taking advantage of multiple processing units (cores, processors) to achieve faster execution.
Resource Usage:

Concurrency: Concurrency can be achieved in a single-core system or a multi-core system. In a single-core system, tasks may time-share the available processing unit.
Parallelism: Requires a multi-core system or a distributed computing environment where tasks can execute in parallel across multiple processing units.
Achieving Efficiency:

Concurrency: Aimed at improving the responsiveness and efficiency of a system by overlapping tasks and managing their execution efficiently.
Parallelism: Aimed at improving performance and throughput by executing tasks simultaneously, which can lead to faster task completion.
Interdependency:

Concurrency: Tasks can be independent or dependent on each other, and concurrency handles the management of their overlapping execution.
Parallelism: Tasks are typically independent or loosely coupled to allow for efficient parallel execution.
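
A minimal Java sketch of the distinction (class and task names are illustrative): both tasks below are concurrent because they are in flight at the same time; whether they also run in parallel depends on how many CPU cores the scheduler gives the pool's threads.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrencyVsParallelism {
    static Runnable task(String name) {
        return () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " step " + i);
            }
        };
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(task("A")); // both tasks are concurrent from this point on
        pool.submit(task("B")); // parallel execution depends on available cores
        pool.shutdown();        // stop accepting tasks; let the submitted ones finish
    }
}
```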

15
Q

Execution scheduling

A

Thread execution scheduling is a crucial aspect of multithreading, where multiple threads within a program compete for execution time on the CPU. Effective scheduling can improve system performance, resource utilization, and overall responsiveness. Different operating systems and programming languages have their own mechanisms for thread scheduling, but the concepts below apply generally.

Thread Scheduling Concepts:
Preemptive vs. Non-preemptive Scheduling:

Preemptive scheduling: The operating system can interrupt a thread’s execution and allocate the CPU to another thread if a higher-priority thread becomes available.
Non-preemptive (cooperative) scheduling: Threads yield control voluntarily, and scheduling decisions are made by the threads themselves. This is generally less common in modern operating systems.
Scheduling Algorithms:

Round Robin: Threads are assigned a fixed time slice (quantum) during which they can execute. After this time slice, they are moved to the back of the queue, and the next thread in line gets a turn.
Priority-based Scheduling: Threads are assigned priorities, and the scheduler executes higher-priority threads first. This can be preemptive or non-preemptive.
First-Come-First-Serve (FCFS): Threads are scheduled in the order they request execution.
Shortest Job First (SJF): Threads with the shortest estimated execution time are scheduled first.
Multilevel Queue Scheduling: Threads are placed into different priority queues, and each queue has its own scheduling algorithm. Threads move between queues based on their behavior and priority.
Priority Levels:

Threads can be assigned different priority levels, usually based on their importance or criticality. Higher-priority threads are scheduled to run before lower-priority ones.
Thread States:

Running: The thread is currently executing on a CPU.
Ready: The thread is ready to run but is waiting for its turn to execute.
Blocked (or Waiting): The thread is unable to execute due to waiting for some event or resource.
Terminated (or Finished): The thread has completed its execution.
Thread Synchronization:

Synchronization mechanisms like locks, semaphores, and barriers are used to control access to shared resources and coordinate thread execution.
Thread Prioritization and Affinity:

Some operating systems allow setting thread priorities and specifying CPU affinity, which determines on which CPU core a thread should run.
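
In Java, priorities and yielding are only hints to the underlying OS scheduler, which is free to ignore them; a minimal sketch (the worker's job is a placeholder):

```java
public class SchedulingHints {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            // Thread.yield() merely hints that this thread is willing to
            // give up its current time slice; the scheduler may ignore it.
            Thread.yield();
            System.out.println("worker done");
        });

        // Priorities range from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10);
        // how much weight they carry is up to the operating system.
        worker.setPriority(Thread.MAX_PRIORITY);
        worker.start();
    }
}
```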

16
Q

Thread LifeCycle

A

New: The thread is in this state after it has been created using the new Thread() constructor, but the start() method has not yet been called.

Runnable: Once the start() method is invoked, the thread becomes runnable. It means the thread is ready to run and awaits its turn to be executed by the CPU.

Running: The thread is currently being executed by the CPU.

Blocked/Waiting: The thread is in this state when it’s waiting for a monitor lock to enter a synchronized block or waiting for some condition to be met. It might be waiting for user input or for some I/O operation to complete.

Timed Waiting: The thread is in this state when it’s waiting for a specified amount of time. This can happen due to calling methods like sleep() or join() with a specific timeout.

Terminated/Dead: The thread enters this state when it completes its execution or when the stop() method is called (though this method is deprecated and not recommended for use). Once a thread is in this state, it cannot be started again.
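
These states can be observed via Thread.getState(); a small sketch (the middle print is timing-dependent):

```java
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(100); // TIMED_WAITING while sleeping
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        System.out.println(t.getState()); // RUNNABLE, or TIMED_WAITING if it already slept
        t.join();                         // wait for the thread to finish
        System.out.println(t.getState()); // TERMINATED
    }
}
```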

17
Q

Daemon Thread

A

In Java, a daemon thread is a special type of thread that runs in the background, providing services to non-daemon threads. The primary distinction between a daemon thread and a user thread is that the JVM will only exit once all user threads have completed their execution. If there are only daemon threads running, the JVM will exit regardless of whether they have finished.

In the Java Virtual Machine (JVM), the garbage collector (GC) is typically implemented as a daemon thread. It is responsible for automatically reclaiming memory that is no longer in use by the application, helping to manage memory and prevent memory leaks.
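
A minimal sketch (the background loop stands in for real housekeeping work):

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread housekeeping = new Thread(() -> {
            while (true) {
                // background service work, e.g. periodic cleanup, would go here
            }
        });

        // Must be set before start(); because the JVM does not wait for
        // daemon threads, this infinite loop cannot keep the process alive.
        housekeeping.setDaemon(true);
        housekeeping.start();

        System.out.println("main (a user thread) exits, so the JVM shuts down");
    }
}
```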

18
Q

Thread Attributes

A

The Thread class provides several methods and attributes for managing and accessing information about threads.

Name:

getName(): Returns the name of the thread.
setName(String name): Sets the name of the thread.
Priority:

getPriority(): Returns the priority of the thread (a value between 1 and 10, with 1 being the lowest and 10 being the highest).
setPriority(int priority): Sets the priority of the thread.
Thread ID:

getId(): Returns the unique identifier (ID) of the thread.
Thread State:

getState(): Returns the current state of the thread (e.g., NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED).
Daemon Status:

isDaemon(): Checks if the thread is a daemon thread.
setDaemon(boolean on): Sets the thread as a daemon thread or not.
Thread Group:

getThreadGroup(): Returns the thread group to which the thread belongs.
Interrupted Status:

isInterrupted(): Checks if the thread has been interrupted.
interrupt(): Interrupts the thread.
Uncaught Exception Handler:

setUncaughtExceptionHandler(Thread.UncaughtExceptionHandler eh): Sets the uncaught exception handler for the thread.
Context Class Loader:

getContextClassLoader(): Returns the context class loader for the thread.
setContextClassLoader(ClassLoader cl): Sets the context class loader for the thread.
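
A short sketch exercising a few of these accessors (the thread name and values are illustrative):

```java
public class AttributesDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {}, "worker-1"); // empty task, named thread
        t.setPriority(Thread.NORM_PRIORITY);         // 5, midway between 1 and 10

        System.out.println(t.getName());     // worker-1
        System.out.println(t.getId());       // JVM-assigned unique id
        System.out.println(t.getPriority()); // 5
        System.out.println(t.getState());    // NEW (not started yet)
        System.out.println(t.isDaemon());    // false by default
    }
}
```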

19
Q

Runnable interface

A

In Java, the Runnable interface is a functional interface used to define a unit of work that can be executed in a separate thread. It’s a fundamental part of the Java concurrency model and is often used to create and manage threads more flexibly than directly extending the Thread class.

The Runnable interface declares a single abstract method called run(), which represents the task or job that a thread should execute when started. The run() method contains the code that defines the behavior of the thread. When implementing Runnable, you provide the implementation for the run() method, and this is the code that will be executed when the thread is started.
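
Because Runnable is a functional interface, a lambda can supply run(); a minimal sketch:

```java
public class RunnableDemo {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println(
                "running in " + Thread.currentThread().getName());

        new Thread(task).start(); // executes task.run() on a new thread
        task.run();               // a plain method call: runs on the current thread
    }
}
```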

20
Q

Difference between Runnable and Thread

A

Implementing Runnable Interface:

When implementing the Runnable interface, you create a separate class that implements the Runnable interface and provides an implementation for the run() method. The run() method represents the task or job to be executed by the thread. This approach promotes cleaner code organization and better separation of concerns.

By using the Runnable interface, the thread’s behavior is defined independently of its management. The Runnable instance represents the task to be performed, and this instance can be shared among multiple threads, allowing for efficient code reuse and flexibility in thread usage. Since Java supports single inheritance, this approach allows for extending other classes if needed.

Extending Thread Class:

On the other hand, when extending the Thread class, you create a class that directly extends Thread and overrides its run() method to define the behavior of the thread. The run() method encapsulates the task to be executed by the thread. While this approach can be simpler and more straightforward, it can limit flexibility and extensibility.

Extending the Thread class combines the behavior and management of the thread within the same class. This can make the code less flexible and harder to manage as the application grows. Additionally, since Java supports single inheritance only, this approach restricts the ability to extend other classes.
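
A side-by-side sketch of the two approaches (class names are illustrative):

```java
// Approach 1: implement Runnable - the task is separate from the thread
// that runs it, and the class is still free to extend something else.
class PrintTask implements Runnable {
    @Override
    public void run() {
        System.out.println("task run by " + Thread.currentThread().getName());
    }
}

// Approach 2: extend Thread - task and thread management are fused together.
class PrintThread extends Thread {
    @Override
    public void run() {
        System.out.println("thread " + getName() + " running");
    }
}

public class RunnableVsThread {
    public static void main(String[] args) {
        Runnable task = new PrintTask();
        new Thread(task).start();
        new Thread(task).start();  // the same Runnable can back several threads

        new PrintThread().start(); // one subclass instance per thread
    }
}
```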

21
Q

Data race

A

A data race occurs when two or more threads concurrently access a shared piece of data, at least one of the accesses is a write operation, and no proper synchronization mechanism is in place. This can lead to unpredictable and incorrect program behavior. To prevent data races, we can use synchronization mechanisms; one common approach is the synchronized keyword or synchronized blocks, which ensure that only one thread can execute a critical section of code at a time.
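
A minimal demonstration (the iteration count is arbitrary, and the exact final value varies from run to run):

```java
public class DataRaceDemo {
    static int counter = 0; // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write is not atomic, so updates can be lost
            }
        };
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Usually prints less than 200000 because increments interleave.
        System.out.println(counter);
    }
}
```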

22
Q

Mutual exclusion

A

Mutual exclusion in Java refers to the concept of allowing only one thread at a time to access a critical section of code or a shared resource. This is essential to prevent data corruption or inconsistent behavior when multiple threads are attempting to modify shared data simultaneously.
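
A sketch of mutual exclusion via a synchronized method (class name is illustrative): with the monitor lock in place, two threads incrementing concurrently can no longer corrupt the counter.

```java
public class MutexDemo {
    private int counter = 0;

    // synchronized provides mutual exclusion: only one thread at a time
    // may hold this object's monitor lock while running the method body.
    public synchronized void increment() {
        counter++;
    }

    public static void main(String[] args) throws InterruptedException {
        MutexDemo demo = new MutexDemo();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                demo.increment();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(demo.counter); // reliably 200000
    }
}
```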

23
Q

Deadlocks

A

In Java, a deadlock can occur when two or more threads are blocked forever, each waiting for the other to release a lock. This situation typically arises when multiple threads acquire locks on multiple resources in a specific order and then attempt to acquire additional locks in a different order.

Imagine you have two friends, Alice and Bob, who both want to borrow each other’s items (let’s say books and pens) to study. However, they have a rule: Alice will only lend her book if she gets Bob’s pen, and Bob will only lend his pen if he gets Alice’s book.

Here’s how a deadlock can happen:

Alice and Bob both want to exchange items.
Alice grabs her book and waits for Bob’s pen.
Bob grabs his pen and waits for Alice’s book.
Now, both Alice and Bob are waiting for each other to give up their items, but neither can proceed because they’re each holding onto an item the other person needs. They’re stuck, and this situation is a deadlock.
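
The Alice-and-Bob exchange translates almost directly into code; note that this sketch is expected to hang forever (the standard fix is to have every thread acquire the locks in the same order):

```java
public class DeadlockDemo {
    static final Object book = new Object();
    static final Object pen = new Object();

    public static void main(String[] args) {
        Thread alice = new Thread(() -> {
            synchronized (book) {        // Alice holds the book...
                pause();                 // ...gives Bob time to grab the pen...
                synchronized (pen) {     // ...then waits forever for the pen
                    System.out.println("Alice has both");
                }
            }
        });
        Thread bob = new Thread(() -> {
            synchronized (pen) {         // Bob holds the pen...
                pause();
                synchronized (book) {    // ...then waits forever for the book
                    System.out.println("Bob has both");
                }
            }
        });
        alice.start();
        bob.start();
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
    }
}
```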

24
Q

Abandoned locks

A

An abandoned lock occurs when a thread acquires a lock on a resource but fails to release it properly. As a result, other threads in the system are unable to acquire the lock, and the resource remains locked indefinitely, leading to potential performance issues or even system hangs. Unlike a deadlock, no cycle of waiting threads is involved; the lock is simply never released.

This situation is similar to a physical lock where someone locks a door but then leaves without unlocking it, preventing others from accessing the space behind the door.
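
The standard guard against abandoning an explicit lock is releasing it in a finally block; a sketch using java.util.concurrent.locks.ReentrantLock (doWork() is a placeholder):

```java
import java.util.concurrent.locks.ReentrantLock;

public class AbandonedLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static void risky() {
        lock.lock();
        try {
            // If doWork() threw and unlock() were not in a finally block,
            // the lock would never be released and later callers would block forever.
            doWork();
        } finally {
            lock.unlock(); // always release, even on exceptions
        }
    }

    static void doWork() { /* placeholder for work that may throw */ }

    public static void main(String[] args) {
        risky();
        System.out.println("lock released; other threads can proceed");
    }
}
```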

25
Q

Starvation

A

Imagine you have a bunch of people waiting in line to use a single bathroom. However, one person, let’s call them “Thread A,” always manages to get into the bathroom whenever it’s available because they are quick. Other people, like “Thread B” and “Thread C,” keep waiting, and sometimes they never get a chance to use the bathroom because Thread A keeps getting in there first.

In this analogy:

The bathroom is a shared resource (like a critical section of code that only one thread can access at a time).
Thread A represents a thread that’s very efficient or has a higher priority, always getting access to the resource.
Thread B and Thread C represent other threads that struggle to get access to the resource, i.e., they experience starvation.
Starvation in the context of Java threads refers to a situation where a thread is unable to gain regular access to shared resources or the CPU’s processing time, even though it’s active and ready to execute. Other threads or processes might be continuously accessing the resources, causing the starved thread to wait indefinitely.
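
One mitigation in Java is a fair lock; this sketch reuses the bathroom analogy (a fair ReentrantLock hands the lock to waiting threads roughly in arrival order, at some throughput cost):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // fair = true: waiting threads acquire the lock roughly in arrival
    // order, so a quick "Thread A" cannot keep barging in ahead of others.
    private static final ReentrantLock bathroom = new ReentrantLock(true);

    public static void main(String[] args) {
        Runnable person = () -> {
            for (int i = 0; i < 3; i++) {
                bathroom.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " uses the bathroom");
                } finally {
                    bathroom.unlock();
                }
            }
        };
        new Thread(person, "A").start();
        new Thread(person, "B").start();
        new Thread(person, "C").start();
    }
}
```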

26
Q

Livelock

A

Livelock is a situation in concurrent programming where two or more threads keep responding to each other’s actions in a way that none of them can make progress. It’s similar to a deadlock in that threads are stuck, but in a livelock, they are actively trying to resolve the situation, resulting in a loop of actions.

A common analogy is two people in a narrow hallway, trying to let the other pass first. Each person steps aside, but then the other person also steps aside, and they end up stuck, constantly trying to give way to each other.
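
A toy sketch of that polite back-and-forth (bounded so the demo terminates; a real livelock spins indefinitely, and the usual fix is to add randomness or back-off to break the symmetry):

```java
public class LivelockDemo {
    static volatile String turn = "A";

    // Each worker, on getting its turn, politely hands the turn back
    // instead of doing the job, so both threads stay busy but nothing
    // ever gets done.
    static Runnable polite(String me, String other) {
        return () -> {
            for (int i = 0; i < 5; i++) {
                if (turn.equals(me)) {
                    System.out.println(me + ": no no, you go first");
                    turn = other; // hand the turn straight back
                }
                Thread.yield();
            }
        };
    }

    public static void main(String[] args) {
        new Thread(polite("A", "B")).start();
        new Thread(polite("B", "A")).start();
    }
}
```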
